| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
Issue: Agent + GmailToolkit sends message AND rootId to the recipient address
Current Behaviour:
1. Instruct the Agent to send a message to the recipient
2. Agent emails recipient with a message and then sends a new message of just the rootId (e.g., `r25406384....`)
Example:
<img width="1181" alt="Screenshot 2023-09-10 at 10 48 29 AM" src="https://github.com/langchain-ai/langchain/assets/94654154/68258bee-985e-4844-9ae8-00b81248d166">
Desired Behaviour:
1. Instruct the Agent to send a message to the recipient
2. Agent emails recipient with only the message and NOT the rootId
My initial suspicion is that this has to do with the prompting of the agent and the multistep process of writing, drafting, and sending all in one go. Currently looking into this and will add any updates/findings here. All help and suggestions welcome!
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the agent notebook from the docs: https://python.langchain.com/docs/integrations/toolkits/gmail
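For reference, the setup in that notebook looks roughly like this (a minimal sketch; the recipient address is a placeholder and the exact agent type used in the docs may differ):
```
from langchain.agents.agent_toolkits import GmailToolkit
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, AgentType

toolkit = GmailToolkit()  # assumes Gmail credentials are already configured
llm = OpenAI(temperature=0)
agent = initialize_agent(
    tools=toolkit.get_tools(),
    llm=llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run("Send an email to recipient@example.com saying hello.")
```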
### Expected behavior
Desired Behaviour:
1. Instruct the Agent to send a message to the recipient
2. Agent emails recipient with only the message and NOT the rootId | Gmail Toolkit sends message and rootId of message | https://api.github.com/repos/langchain-ai/langchain/issues/10422/comments | 2 | 2023-09-10T15:54:38Z | 2023-12-18T23:46:37Z | https://github.com/langchain-ai/langchain/issues/10422 | 1,889,195,706 | 10,422 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
### Problem:
In the _**libs/langchain/langchain/memory/token_buffer.py**_ file:
```
@property
def buffer(self) -> Any:
"""String buffer of memory."""
return self.buffer_as_messages if self.return_messages else self.buffer_as_str
@property
def buffer_as_str(self) -> str:
"""Exposes the buffer as a string in case return_messages is True."""
return get_buffer_string(
self.chat_memory.messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
@property
def buffer_as_messages(self) -> List[BaseMessage]:
"""Exposes the buffer as a list of messages in case return_messages is False."""
return self.chat_memory.messages
```
The **True** and **False** words should be inverted in the **buffer_as_str** and **buffer_as_messages** methods' documentation.
See the logic in the **buffer** method's return:
> return self.buffer_as_messages if self.return_messages else self.buffer_as_str
### Correction:
Swap both words:
**True** :arrow_backward: :arrow_forward: **False**
To get that result:
```
@property
def buffer(self) -> Any:
"""String buffer of memory."""
return self.buffer_as_messages if self.return_messages else self.buffer_as_str
@property
def buffer_as_str(self) -> str:
"""Exposes the buffer as a string in case return_messages is False."""
return get_buffer_string(
self.chat_memory.messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
@property
def buffer_as_messages(self) -> List[BaseMessage]:
"""Exposes the buffer as a list of messages in case return_messages is True."""
return self.chat_memory.messages
```
### Idea or request for content:
_No response_ | DOC: Inversion of 'True' and 'False' in ConversationTokenBufferMemory Property Comments | https://api.github.com/repos/langchain-ai/langchain/issues/10420/comments | 1 | 2023-09-10T11:28:47Z | 2023-09-12T13:12:36Z | https://github.com/langchain-ai/langchain/issues/10420 | 1,889,109,167 | 10,420 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
# Combine the LLM with the tools to make a ReAct agent
Currently, the documentation says the agent can take only the user question. Is there an example of passing multiple parameters?
```
inputdata = {"input": COMPLEX_QUERY, "channel": "mychannel", "product": "myproduct"}
react_agent = initialize_agent(tools,
                               llm,
                               agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                               verbose=True)
react_agent.run(inputdata)
```
`channel` and `product` are custom parameters that need to be passed to the Tool.
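One pattern that seems to work today is binding the fixed parameters into the tool function itself, since `ZERO_SHOT_REACT_DESCRIPTION` agents expect a single `input` string. A minimal sketch (the `lookup_product_info` function and its parameters are hypothetical):
```
from functools import partial
from langchain.agents import Tool

def lookup_product_info(query: str, channel: str, product: str) -> str:
    # hypothetical lookup that uses the bound channel/product values
    return f"Results for {query!r} in channel={channel}, product={product}"

tools = [
    Tool.from_function(
        func=partial(lookup_product_info, channel="mychannel", product="myproduct"),
        name="product_lookup",
        description="Looks up product information for the configured channel and product.",
    )
]
```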
### Idea or request for content:
_No response_ | DOC: Passing parameters to tools from Agent | https://api.github.com/repos/langchain-ai/langchain/issues/10419/comments | 2 | 2023-09-10T10:59:44Z | 2023-12-18T23:46:42Z | https://github.com/langchain-ai/langchain/issues/10419 | 1,889,098,692 | 10,419 |
[
"langchain-ai",
"langchain"
] | ### Feature request
In the [SmartLLMChain](https://python.langchain.com/docs/use_cases/more/self_check/smart_llm), I would like to randomize the temperature of the `ideation_llm` . It could have a positive impact on its creativity then evaluation.
We could either specify explicit (number of LLMs, temperature) pairs or automate the temperature assignment with a classical distribution (see the sketch after this list):
- Gaussian
- Poisson
- Uniform
- Exponential
- Geometric
- Log-Normal
- ...
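A minimal sketch of the sampling side, using only the standard library (wiring the sampled values into `SmartLLMChain` is the part this request asks for):
```
import random

def sample_temperatures(n: int, dist: str = "gaussian") -> list:
    """Sample n temperatures, clamped to OpenAI's valid [0, 2] range."""
    if dist == "gaussian":
        temps = [random.gauss(0.7, 0.3) for _ in range(n)]
    elif dist == "uniform":
        temps = [random.uniform(0.2, 1.2) for _ in range(n)]
    elif dist == "exponential":
        temps = [random.expovariate(1 / 0.7) for _ in range(n)]
    else:
        raise ValueError(f"unknown distribution: {dist}")
    return [min(max(t, 0.0), 2.0) for t in temps]

print(sample_temperatures(3))  # e.g. [0.55, 0.91, 0.68]
```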
### Motivation
Potentially enhance overall chain performance.
### Your contribution
I could write a blog post about the benchmark. | :bulb: SmartLLMChain > Randomized temperatures for ideation_llm for better crowd diversity simulation | https://api.github.com/repos/langchain-ai/langchain/issues/10418/comments | 1 | 2023-09-10T07:55:16Z | 2023-12-18T23:46:47Z | https://github.com/langchain-ai/langchain/issues/10418 | 1,889,035,389 | 10,418 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Is there a way to save options or examples instead of history with `memory.save_context()` in `VectorStoreRetrieverMemory`?
For example, if A, B, and C are expected as the answer to a certain question, A, B, and C are returned as the predict results for that question. The current memory function takes into account the flow of history, so it is not possible to select options or define behavior like the switch statement in programming languages.
If I just don't know about this feature, I'd appreciate it if you could let me know. In a past issue, if someone wanted to have a rule-based conversation, the answerer just only said to use `VectorStoreRetrieverMemory`, but no examples were introduced. If you have any simple exmaples, please let me know.
### Motivation
This is to stably control the chatbot's behavior.
### Your contribution
I checked and found out that this feature does not currently exist. | Save options or examples instead of history with memory.save_context() in VectorStoreRetrieverMemory | https://api.github.com/repos/langchain-ai/langchain/issues/10417/comments | 4 | 2023-09-10T07:29:26Z | 2023-09-21T09:49:16Z | https://github.com/langchain-ai/langchain/issues/10417 | 1,889,027,787 | 10,417 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#web-server) provides server package which acts like a drop-in replacement for the OpenAI API.
Is there a specific LangChain LLM class that supports the above server, or do we need to point the existing `OpenAI` class at a different `openai_api_base`?
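For what it's worth, since the server mimics the OpenAI API, pointing the existing wrapper at it may already be enough (a sketch; the host, port, and dummy key are placeholders):
```
from langchain.llms import OpenAI

llm = OpenAI(
    openai_api_base="http://my-llm-host:8000/v1",  # llama-cpp-python server
    openai_api_key="not-needed",                   # the server ignores the key
    temperature=0.7,
)
print(llm("Hello, world"))
```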
### Motivation
I would like to have a dedicated machine or host which runs only the llama-cpp-python server wheres as the client which uses langchain should interact with just like we are doing with OpenAI.
### Your contribution
I would like to contribute but before that I need to check if there's any solution already available. | Support for llama-cpp-python server | https://api.github.com/repos/langchain-ai/langchain/issues/10415/comments | 9 | 2023-09-10T01:16:19Z | 2024-07-12T16:52:15Z | https://github.com/langchain-ai/langchain/issues/10415 | 1,888,924,696 | 10,415 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0285 - langchain.vectorstores.redis import Redis
Python 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Based on the [original documentation](https://python.langchain.com/docs/integrations/vectorstores/redis) the vectorstore is created using the Redis.from_documents() method
```
from langchain.vectorstores.redis import Redis
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.schema import Document

metadata = {"source": "test"}  # placeholder; the original value is not shown

documents_raw = [Document(page_content="This is Alice's phone number: 123-456-7890", metadata=metadata),
                 Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', metadata=metadata)]
embeddings = OpenAIEmbeddings()
llm = ChatOpenAI()
schema = {'text': [{'name': 'source'},
                   {'name': 'title'}, ],
          'numeric': [{'name': 'created_at'}], 'tag': []}
index_name = "index_name_123"
rds = Redis.from_documents(
documents=documents_raw, # a list of Document objects from loaders or created
embedding=embeddings, # an Embeddings object
redis_url="redis://localhost:6379",
index_name=index_name,
index_schema=schema,
keys=["a", "b"] # this is my addition. Passing my custom keys, breaks the code
)
```
### Expected behavior
**Objective**: be able to use custom keys in Redis
**Problem**:
The Redis.from_documents() method has --> **kwargs: Any
It calls the `from_texts()` method, which calls `from_texts_return_keys()`. This calls `add_texts()`, which contains the line --> `keys_or_ids = kwargs.get("keys", kwargs.get("ids"))`
Therefore if I understand correctly, I assume that both "keys" or "ids" would be valid keyword arguments as well from the from_documents() method. This would achieve storing documents using custom keys. However it raises:
```
File "C:\Users\user\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\connection.py", line 1066, in get_connection
connection = self._available_connections.pop()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
```
```
Traceback (most recent call last):
File "C:\Users\userabc\Music\project\pepe.py", line 98, in <module>
rds = Redis.from_documents(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\vectorstores\base.py", line 417, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\vectorstores\redis\base.py", line 488, in from_texts
instance, _ = cls.from_texts_return_keys(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\vectorstores\redis\base.py", line 405, in from_texts_return_keys
instance = cls(
^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\vectorstores\redis\base.py", line 274, in __init__
redis_client = get_client(redis_url=redis_url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\utilities\redis.py", line 127, in get_client
if _check_for_cluster(redis_client):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\utilities\redis.py", line 198, in _check_for_cluster
cluster_info = redis_client.info("cluster")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\commands\core.py", line 1004, in info
return self.execute_command("INFO", section, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\client.py", line 505, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\connection.py", line 1068, in get_connection
connection = self.make_connection()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\connection.py", line 1108, in make_connection
return self.connection_class(**self.connection_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\connection.py", line 571, in __init__
    super().__init__(**kwargs)
TypeError: AbstractConnection.__init__() got an unexpected keyword argument 'keys'
```
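A workaround that appears to sidestep the constructor leak: create the index first without `keys`, then pass the custom keys only to `add_texts()`, where (per the `keys_or_ids` line quoted above) they are actually consumed. A sketch, reusing the variables from the snippet above:
```
rds = Redis.from_documents(
    documents=documents_raw[:1],
    embedding=embeddings,
    redis_url="redis://localhost:6379",
    index_name=index_name,
    index_schema=schema,
)
rds.add_texts(
    texts=[d.page_content for d in documents_raw[1:]],
    metadatas=[d.metadata for d in documents_raw[1:]],
    keys=["b"],
)
```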
| Redis Vectorstore: cannot set custom keys - got an unexpected keyword argument | https://api.github.com/repos/langchain-ai/langchain/issues/10411/comments | 6 | 2023-09-09T20:35:55Z | 2023-09-12T22:29:42Z | https://github.com/langchain-ai/langchain/issues/10411 | 1,888,867,362 | 10,411 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.285
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
CharacterTextSplitter has options for `chunk_size` and `chunk_overlap` but doesn't make use of them in the splitting. Though this is not technically erroneous, it is misleading given that a lot of the documentation shows CharacterTextSplitter with these arguments specified, implying that the class is creating desirably sized chunks when in reality it is not. Here is an [example](https://python.langchain.com/docs/integrations/vectorstores/activeloop_deeplake) of such documentation implying this.
Below is a code sample reproducing the problem. RecursiveCharacterTextSplitter works to reorganize the texts into chunks of the specified `chunk_size`, with chunk overlap where appropriate. Meanwhile, CharacterTextSplitter doesn't do this. You can observe the difference in the overlap behavior by printing out `texts_c` and `texts_rc`.
```
from langchain.schema.document import Document
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
doc1 = Document(page_content="Just a test document to assess splitting/chunking")
doc2 = Document(page_content="Short doc")
docs = [doc1, doc2]
text_splitter_c = CharacterTextSplitter(chunk_size=30, chunk_overlap=10)
text_splitter_rc = RecursiveCharacterTextSplitter(chunk_size=30, chunk_overlap=10)
texts_c = text_splitter_c.split_documents(docs)
texts_rc = text_splitter_rc.split_documents(docs)
max_chunk_c = max(len(x.page_content) for x in texts_c)
max_chunk_rc = max(len(x.page_content) for x in texts_rc)
print(f"Max chunk in CharacterTextSplitter output is of length {max_chunk_c}")
print(f"Max chunk in RecursiveCharacterTextSplitter output is of length {max_chunk_rc}")
```
### Expected behavior
Either remove the arguments from CharacterTextSplitter to avoid ambiguity, use RecursiveCharacterTextSplitter (which performs the expected behavior of resizing into appropriately sized chunks), or add to CharacterTextSplitter a split_text implementation that performs the aforesaid expected behavior. | CharacterTextSplitter doesn't break down text into specified chunk sizes | https://api.github.com/repos/langchain-ai/langchain/issues/10410/comments | 8 | 2023-09-09T20:23:27Z | 2024-05-09T07:21:57Z | https://github.com/langchain-ai/langchain/issues/10410 | 1,888,864,581 | 10,410 |
[
"langchain-ai",
"langchain"
] | ### System Info
In the current Human tool, input from the user is taken via the command line (stdin). How can this be used in a web application?
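One direction that might work is the `input_func` hook on the Human tool, which replaces the stdin read with an arbitrary callable (a sketch; `get_input_from_web` and `web_reply_queue` are hypothetical pieces of the web app):
```
from langchain.agents import load_tools

def get_input_from_web() -> str:
    # hypothetical: block until the web frontend delivers the user's reply
    return web_reply_queue.get()

tools = load_tools(["human"], llm=llm, input_func=get_input_from_web)
```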
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can reproduce this by using the current code given for "Human as a tool".
### Expected behavior
It should not take human input from the command line, as that causes issues in web-based applications. | How to use human input as a tool in a web based application | https://api.github.com/repos/langchain-ai/langchain/issues/10406/comments | 4 | 2023-09-09T13:49:07Z | 2024-02-20T23:39:28Z | https://github.com/langchain-ai/langchain/issues/10406 | 1,888,750,607 | 10,406 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using langchain==0.0.283 and openai==0.28.0.
There seems to be no mention of packaging as a dependency, but when I run my system in a Docker image based on python:3.10.12-slim, packaging is missing.
So please add it explicitly as a dependency, such as packaging==21.3.
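Until then, pinning it alongside langchain works as a stopgap (a sketch of a requirements file; the packaging version is just an example):
```
langchain==0.0.283
openai==0.28.0
packaging==21.3
```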
Thanks.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install langchain as a dependency, such as langchain==0.0.283
2. run the code in a container environment such as: FROM python:3.10.12-slim
3. you will get runtime errors
### Expected behavior
runtime errors happening:
```
File "main/routes_bot.py", line 4, in init main.routes_bot
File "/usr/local/lib/python3.10/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/usr/local/lib/python3.10/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 14, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent_iterator.py", line 21, in <module>
from langchain.callbacks.manager import (
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/__init__.py", line 10, in <module>
from langchain.callbacks.aim_callback import AimCallbackHandler
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/aim_callback.py", line 5, in <module>
from langchain.schema import AgentAction, AgentFinish, LLMResult
File "/usr/local/lib/python3.10/site-packages/langchain/schema/__init__.py", line 28, in <module>
from langchain.schema.output_parser import (
File "/usr/local/lib/python3.10/site-packages/langchain/schema/output_parser.py", line 21, in <module>
from langchain.schema.runnable import Runnable, RunnableConfig
File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/__init__.py", line 1, in <module>
from langchain.schema.runnable._locals import GetLocalVar, PutLocalVar
File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/_locals.py", line 15, in <module>
from langchain.schema.runnable.base import Input, Output, Runnable
File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/base.py", line 58, in <module>
from langchain.utils.aiter import atee, py_anext
File "/usr/local/lib/python3.10/site-packages/langchain/utils/__init__.py", line 17, in <module>
from langchain.utils.utils import (
File "/usr/local/lib/python3.10/site-packages/langchain/utils/utils.py", line 9, in <module>
from packaging.version import parse
ModuleNotFoundError: No module named 'packaging'
```
| installer is not requesting packaging but the code requires it in practice | https://api.github.com/repos/langchain-ai/langchain/issues/10404/comments | 5 | 2023-09-09T13:02:55Z | 2024-05-22T16:07:12Z | https://github.com/langchain-ai/langchain/issues/10404 | 1,888,734,168 | 10,404 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am running Django and chromadb in Docker:
Django on port 8001
chromadb on port 8002
The snippet below lives inside the Django application. On running it, it creates a directory named `chroma` containing a `chroma.sqlite3` file and a randomly named subdirectory.
It never makes any call to chromadb's service `chroma` on port 8002.
```
from chromadb.config import Settings
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

docs = loader.load()  # `loader` is any document loader set up elsewhere
emb = OpenAIEmbeddings()
chroma_settings = Settings()
chroma_settings.is_persistent = True
chroma_settings.chroma_server_host = "chroma"
chroma_settings.chroma_server_http_port = "8002"
# chroma_settings.persist_directory = "chroma/"
Chroma.from_documents(
client_settings=chroma_settings,
collection_name="chroma_db",
documents=docs,
embedding=emb,
# persist_directory=os.path.join(settings.BASE_DIR, "chroma_db")
)
```
Running `HttpClient(host="chroma", port="8002").list_collections()` returns `[]`.
Opening `http://localhost:8002/api/v1/heartbeat` in a browser shows `{"nanosecond heartbeat":1694261199976223880}`.
versions info
```
langchain==0.0.285
openai==0.27.8
django-jazzmin==2.6.0
tiktoken==0.4.0
jq==1.4.1
chromadb==0.4.*
lark
```
docker-compose
```
version: "3.4"
x-common: &common
stdin_open: true
tty: true
restart: unless-stopped
networks:
- pharmogene
x-django-build: &django-build
build:
context: .
dockerfile: ./Dockerfile.dev
services:
django:
container_name: pharmogene-dc01
command:
- bash
- -c
- |
python manage.py collectstatic --no-input
python manage.py runserver 0.0.0.0:8000
ports:
- 8000:8000
env_file:
- config/env/dev/.django
volumes:
- ./:/code
- pharmogene_static_volume:/code/static
- pharmogene_media_volume:/code/media
depends_on:
- postgres
- redis
<<: [*common,*django-build]
chroma:
container_name: pharmogene-cdbc-01
# image: ghcr.io/chroma-core/chroma:latest
image: chromadb/chroma:0.4.10.dev2
command: uvicorn chromadb.app:app --reload --workers 1 --host 0.0.0.0 --port 8002 --log-config log_config.yml
volumes:
- ./:/code
# Default configuration for persist_directory in chromadb/config.py
# Currently it's located in "/chroma/chroma/"
environment:
- IS_PERSISTENT=TRUE
- PERSIST_DIRECTORY=${PERSIST_DIRECTORY:-/chroma/chroma}
ports:
- "8002:8002"
depends_on:
- redis
- postgres
- django
- celery
- celery_beat
<<: *common
networks:
pharmogene:
driver: bridge
volumes:
....
```
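For reference, a connection pattern that seems to make langchain talk to the remote server instead of creating a local store (a sketch, assuming the `client` parameter of `Chroma.from_documents`; the host and port match the compose file above):
```
import chromadb
from langchain.vectorstores import Chroma

client = chromadb.HttpClient(host="chroma", port=8002)
Chroma.from_documents(
    client=client,
    collection_name="chroma_db",
    documents=docs,
    embedding=emb,
)
```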
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install latest version of chromadb and langchain in separate container
2. run chroma in docker using docker hub image
3. try to create embeddings
### Expected behavior
The expected behaviour is that `HttpClient().list_collections()` returns the list of collections from the Chroma instance running inside the other container. | Chroma from_documents not making embeddings to remote chromadb server | https://api.github.com/repos/langchain-ai/langchain/issues/10403/comments | 2 | 2023-09-09T12:13:10Z | 2023-09-09T14:26:42Z | https://github.com/langchain-ai/langchain/issues/10403 | 1,888,719,111 | 10,403 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.285 on Windows. Reproduceable script is attached
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.openai_functions import create_openai_fn_chain
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
database = [
{"name": "Salami", "price": 9.99},
{"name": "Margherita", "price": 8.99},
{"name": "Pepperoni", "price": 10.99},
{"name": "Hawaiian", "price": 11.49},
{"name": "Veggie Supreme", "price": 10.49},
]
def get_pizza_info(pizza_name: str) -> dict:
"""Retrieve information about a specific pizza from the database.
Args:
pizza_name (str): Name of the pizza.
Returns:
dict: A dictionary containing the pizza's name and price or a message indicating the pizza wasn't found.
"""
for pizza in database:
if pizza["name"] == pizza_name:
return pizza
return {"message": f"No pizza found with the name {pizza_name}."}
def add_pizza(pizza_name: str, price: float) -> dict:
"""Add a new pizza to the database.
Args:
pizza_name (str): Name of the new pizza.
price (float): Price of the new pizza.
Returns:
dict: A message indicating the result of the addition.
"""
for pizza in database:
if pizza["name"] == pizza_name:
return {"message": f"Pizza {pizza_name} already exists in the database."}
database.append({"name": pizza_name, "price": price})
return {"message": f"Pizza {pizza_name} added successfully!"}
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
template = """You are an AI chatbot having a conversation with a human.
Human: {human_input}
AI: """
prompt = PromptTemplate(input_variables=["human_input"], template=template)
chain = create_openai_fn_chain(
[get_pizza_info, add_pizza], llm, prompt, verbose=True
)
result1 = chain.run("I want to add the pizza 'Jumbo' for 13.99")
print(result1)
result2 = chain.run("Who are the main characters of the A-Team?")  # <- this call fails
print(result2)
```
Traceback:
```
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\output_parsers\openai_functions.py", line 28, in
parse_result
func_call = copy.deepcopy(message.additional_kwargs["function_call"])
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 'function_call'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\User\Desktop\LangChain\07_OpenAI_Functions\pizza_store.py", line 63, in <module>
result1 = chain.run("Who are the main characters of the A-Team?")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 487, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 292, in __call__
raise e
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 286, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py", line 92, in _call
return self.create_outputs(response)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py", line 220, in create_outputs
result = [
^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py", line 223, in <listcomp>
self.output_key: self.output_parser.parse_result(generation),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\output_parsers\openai_functions.py", line 49, in
parse_result
function_call_info = super().parse_result(result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\output_parsers\openai_functions.py", line 30, in
parse_result
raise OutputParserException(f"Could not parse function call: {exc}")
langchain.schema.output_parser.OutputParserException: Could not parse function call: 'function_call'
```
### Expected behavior
I would expect behaviour similar to using the vanilla API.
```
def chat(query):
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo-0613",
messages=[{"role": "user", "content": query}],
functions=functions, # this is new
)
message = response["choices"][0]["message"]
return message
chat("What is the capital of france?")
```
If I run a query not related to the functions, the response may or may not include `"function_call"`. I can handle this as follows:
```
if message.get("function_call"):
pizza_name = json.loads(message["function_call"]["arguments"]).get("pizza_name")
print(pizza_name)
function_response = get_pizza_info(
pizza_name=pizza_name
)
print(function_response)
```
Is there a workaround, is this working as intended, or is it an unknown bug? I would normally expect it to just work without needing a workaround :)
| create_openai_fn_chain throws an error when providing input not related to a function | https://api.github.com/repos/langchain-ai/langchain/issues/10397/comments | 4 | 2023-09-09T09:36:42Z | 2023-12-18T23:47:03Z | https://github.com/langchain-ai/langchain/issues/10397 | 1,888,670,258 | 10,397 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Any possible ways to run a Q&A bot for my fine-tuned Llama2 model in Google Colab?
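For reference, one plausible route is loading the fine-tuned checkpoint as a `transformers` pipeline and wrapping it (a sketch; the model id is a placeholder, and Colab GPU memory permitting):
```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_id = "path/or/hub-id-of-your-finetuned-llama2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,
                max_new_tokens=256)
llm = HuggingFacePipeline(pipeline=pipe)
```
The resulting `llm` can then be dropped into e.g. `RetrievalQA.from_chain_type(llm, retriever=...)` for the Q&A part.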
### Motivation
Any possible ways to run a Q&A bot for my fine-tuned Llama2 model in Google Colab?
### Your contribution
Any possible ways to run a Q&A bot for my fine-tuned Llama2 model in Google Colab? | How to use my fine-tuned Llama2 model in Langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/10395/comments | 9 | 2023-09-09T07:20:40Z | 2023-09-26T02:15:12Z | https://github.com/langchain-ai/langchain/issues/10395 | 1,888,620,991 | 10,395 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.272
Python version: 3.10
Host System: Windows 11
I'm loading Few Shot Prompts from a fewshot_prompts.yaml file by using load_prompt() function. The fewshot_prompts.yaml file has a section with the title "examples:" to load the Few Shot Prompts from file example_prompts.yaml. The files fewshot_prompts.yaml and example_prompts.yaml both are in the same directory. But the _load_examples() function is not able to locate/load the example_prompts.yaml. There is no way to specify the path to this file.
Due to the above issue, the loading of the example_prompts.yaml file fails.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The
_type: few_shot
input_variables:
["bot_response"]
prefix:
The following are excerpts from conversations with an AI assistant. Given an input text and a set of rules,the assistant strictly follows the rules and provides an "yes" or "no" answer.Here are some examples
example_prompt:
_type: prompt
input_variables:
["bot_response","answer"]
template:
"bot_response: {bot_response}\nanswer: {answer}"
examples:
example_prompts.yaml
**************************************************************************
Unable to find the file example_prompts.yaml
### Expected behavior
1. Provide a way to specify the path to load the example_prompts.yaml file | Issue with loading xxx_prompts.yaml file specified under "examples" section in the .yaml file passed as parameter in load_prompt() | https://api.github.com/repos/langchain-ai/langchain/issues/10390/comments | 8 | 2023-09-09T04:39:51Z | 2023-12-18T23:47:08Z | https://github.com/langchain-ai/langchain/issues/10390 | 1,888,574,094 | 10,390 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
[AIMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html#langchain-prompts-chat-aimessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is not sent to the user."
[HumanMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html#langchain-prompts-chat-humanmessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is sent to the user."
Compare to the documentation for [AIMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.AIMessage.html#langchain-schema-messages-aimessage) and [HumanMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.HumanMessage.html#langchain-schema-messages-humanmessage), which correctly and clearly describe each message as "A message from an AI" and "A message from a human." respectively.
### Idea or request for content:
AIMessagePromptTemplate should be described as "AI message prompt template. This is a message that is sent to the user from the AI."
HumanMessagePromptTemplate should be described as "Human message prompt template. This is a message that is sent from the user to the AI."
These are clear, concise and consistent with documentation of the message schema.
I will submit a PR with revised docstrings for each class. This should, then, be reflected in the API reference documentation upon next build. | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/10378/comments | 2 | 2023-09-08T16:43:51Z | 2023-12-18T23:47:12Z | https://github.com/langchain-ai/langchain/issues/10378 | 1,888,011,222 | 10,378 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.8 running on mac mini in VS Code
### Who can help?
@asai95
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
attempt to import BaseMessageConverter following the custom storage guide here - https://python.langchain.com/docs/integrations/memory/sql_chat_message_history
Enter `from langchain.memory.chat_message_histories.sql import BaseMessageConverter`
Actual behavior - ImportError: cannot import name 'BaseMessageConverter' from 'langchain.memory.chat_message_histories.sql' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/memory/chat_message_histories/sql.py)
### Expected behavior
Being able to import the BaseMessageConverter | Couldn't import BaseMessageConverter from | https://api.github.com/repos/langchain-ai/langchain/issues/10377/comments | 5 | 2023-09-08T16:06:52Z | 2023-09-09T00:46:51Z | https://github.com/langchain-ai/langchain/issues/10377 | 1,887,955,487 | 10,377 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I would like to get clarification of proper setting of Function Agents as Tools for other Agents.
The [Agent Tools docs](https://python.langchain.com/docs/modules/agents/tools/) say that **tools can even be other agents, but there is a lack of examples**.
I have implemented the following code with two approaches: inheriting from the `BaseTool` interface to convert a Function Agent into a Tool object, and the `Tool.from_function()` approach. My goal is to pre-define custom business formulas to be executed precisely as required, and OpenAI Functions suit my needs well. For this purpose, I must use the Function Agent, and I have followed this [Custom functions with OpenAI Functions Agent guide](https://python.langchain.com/docs/modules/agents/how_to/custom-functions-with-openai-functions-agent). However, there is also a need to use other types of tools, which requires initializing different agent types to invoke them. I'm seeking a solution with a centralized agent capable of invoking other agents. Currently, the only alternative I can see is a classification-chain model, since the cases provided below don't work.
### Current Behavior
The current structure of the code exhibits the following issues:
- The `function_agent` executes FOO OpenAI Function and other tasks perfectly.
- The `main_agent` throws errors and only occasionally manages to execute the FOO OpenAI function. I'm not sure why it's able to run successfully once in a while.
### Desired Behavior
I would like the `functions_agent` to function seamlessly within the `main_agent` so that it performs the same well.
I believe there might be an issue with how the message query is being passed to the OpenAI Python functions in the two distinct approaches outlined below. Can you please offer guidance on the correct way to initialize the Function Agent as a tool for another agent, or suggest an alternative approach to achieve the desired behavior? Thank you.
```python
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType, Tool
from pydantic import BaseModel, Field
from langchain.tools import BaseTool
from typing import Type
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())
def foo(target):
"""Useful when you want to get current mean and std
"""
return {"TARGET": target, "number": 123}
class FooFuncInput(BaseModel):
"""Inputs parsed from LLM for 'foo' function"""
target: str = Field(description="Allows for manual interaction using either hands or legs. "
"allowable values: {`hands`, `legs`} ")
class FooFuncTool(BaseTool):
name = "foo"
description = f"""
Calculate {name} business function.
"""
args_schema: Type[BaseModel] = FooFuncInput
return_direct = True
verbose = False
def _run(self, target: str):
response = foo(target)
return response
def _arun(self):
raise NotImplementedError("foo does not support async")
def get_function_tools():
tools = [
FooFuncTool(),
]
return tools
functions_agent = initialize_agent(
tools=get_function_tools(),
llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True,
max_iterations=1,
early_stopping_method="generate"
)
class AGENT_FUNCTION_TOOL(BaseTool):
name = "AGENT_FUNCTION_TOOL"
description = """
Useful for when you need to calculate business functions
Names of allowable business functions: `ATT`, `DDT`, `FOO`
"""
return_direct = True
verbose = True
def _run(self, query: str):
response = functions_agent.run(query)
return response
def _arun(self):
raise NotImplementedError("AGENT_FUNCTION_TOOL does not support async")
tools = [
AGENT_FUNCTION_TOOL(),
# Tool.from_function(
# func=functions_agent.run,
# name="Defined Business functions agent",
# description="""
# Useful for when you need to calculate business functions
# Names of allowable business functions: `ATT`, `DDT`, `FOO`
# """),
]
assistant_target = 'business management'
agent_kwargs = \
    {'prefix': f'You are a friendly {assistant_target} assistant. You answer '
               f'questions related to {assistant_target}. You have access to the following tools:'}
main_agent = initialize_agent(
tools=tools,
llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
max_iterations=1,
early_stopping_method="generate",
agent_kwargs=agent_kwargs
)
message = 'calculate business function `FOO` and do it with bare hands'
main_agent.run(message)
```
I have also tried the `Tool.from_function()` method, as explained in the [reddit "can an agent be a tool" thread](https://www.reddit.com/r/LangChain/comments/13618qu/can_an_agent_be_a_tool/), but it looks like the `message` query is not properly passed to the OpenAI Python functions there either.
```python
Tool.from_function(
func=functions_agent.run,
name="Defined Business functions agent",
description="""
Useful for when you need to calculate business functions
Names of allowable business functions: `ATT`, `DDT`, `FOO`
"""),
```
---------
Executed script using `main_agent` with `AGENT_FUNCTION_TOOL`, class implemented from `BaseTool` langchain interface

---------
Executed script using `main_agent` with `Tool.from_function()`

---------
Executing script using `functions_agent` and desired behavior:

### Idea or request for content:
How to properly use Function Agents as Tools which consequently can be used by other Agents. | DOC: How to properly initialize Function Agent as a Tool for other Agent | https://api.github.com/repos/langchain-ai/langchain/issues/10375/comments | 5 | 2023-09-08T15:33:54Z | 2024-05-14T08:03:21Z | https://github.com/langchain-ai/langchain/issues/10375 | 1,887,908,236 | 10,375 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi there :wave: ,
I am wondering why Redis is now completely broken (most of the parameter names have been changed) and why this landed in a minor version change, which doesn't make any sense. I'd also like to know how to pass `k` to the retriever now, and how to read it back; previously I could just do `retriever.k`.
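For what it's worth, in recent versions this seems to go through `search_kwargs` (a sketch; `rds` is a `Redis` vector store instance):
```
retriever = rds.as_retriever(search_kwargs={"k": 4})
k = retriever.search_kwargs["k"]  # instead of the old retriever.k
```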
Thanks,
Fra
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just try to use redis now
### Expected behavior
I wouldn't expect BREAKING changes from a minor version change | Why did you break Redis completely in a minor version change | https://api.github.com/repos/langchain-ai/langchain/issues/10366/comments | 4 | 2023-09-08T13:30:27Z | 2023-09-10T06:59:18Z | https://github.com/langchain-ai/langchain/issues/10366 | 1,887,699,565 | 10,366 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Overall there are so many terms on top of each other and I do understand we are trading off growing fast to having clean, set terms but lets discuss about currently what is the common way to define an AGENT that uses openAI for reasoning engine, and answers questions using tools that we define.
What is the AgentExecutor if we can also run agent.run()
What is AgentExecutorIterator
We can define an agent like this:
```
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS,
agent_kwargs=agent_kwargs, verbose=True, return_intermediate_steps=True,)
```
But we can also define an agent like this ([from this documentation: Custom LLM Agent](https://python.langchain.com/docs/modules/agents/how_to/custom_llm_agent)):
```
llm = OpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
```
There are also `ChatAgent`, `OpenAIAgent`, and more [classes](https://js.langchain.com/docs/api/agents/classes/).
I would like a custom agent that uses the OpenAI API and that I can give custom tools, such as weather or finance functions. What is the cleanest way to do this currently? (8 September 2023)
### Idea or request for content:
_No response_ | DOC: What is the Difference between OpenAIAgent and agent=AgentType.OPENAI_FUNCTIONS | https://api.github.com/repos/langchain-ai/langchain/issues/10361/comments | 2 | 2023-09-08T10:16:55Z | 2023-12-18T23:47:22Z | https://github.com/langchain-ai/langchain/issues/10361 | 1,887,374,918 | 10,361 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The project [Basaran](https://github.com/hyperonym/basaran) lets you host an API that is similar to the OpenAI API, but with a self-hosted LLM. Support for such custom-API LLMs would be great.
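Since Basaran exposes an OpenAI-compatible endpoint, pointing the existing wrapper at it might already work (a sketch; the URL and dummy key are placeholders):
```
from langchain.llms import OpenAI

llm = OpenAI(openai_api_base="http://my-basaran-host/v1",
             openai_api_key="unused")
```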
### Motivation
Further democratizing LLMs.
### Your contribution
I could test it | Own Api LLM | https://api.github.com/repos/langchain-ai/langchain/issues/10359/comments | 2 | 2023-09-08T10:00:51Z | 2023-12-18T23:47:27Z | https://github.com/langchain-ai/langchain/issues/10359 | 1,887,349,628 | 10,359 |
[
"langchain-ai",
"langchain"
### The map-reduce chain doesn't provide the full response.
I'm currently using the map-reduce chain for news summarization, but the output doesn't contain the full response. Is there any way to fix this? I'm following the map-reduce chain example from the LangChain API guide (see the sketch below).
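If the summary is cut off mid-sentence, the usual suspect is the LLM's completion-token limit rather than the chain itself; raising `max_tokens` is worth a try (a sketch; the value is arbitrary):
```
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain

llm = ChatOpenAI(temperature=0, max_tokens=1024)
chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(docs)  # docs: the split news documents
```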
| Issue: The mapreduce chain doesn't generate full response. | https://api.github.com/repos/langchain-ai/langchain/issues/10357/comments | 2 | 2023-09-08T09:26:17Z | 2023-12-18T23:47:33Z | https://github.com/langchain-ai/langchain/issues/10357 | 1,887,295,643 | 10,357 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Please update dependencies to support installing `pydantic>=2.0`. Currently there's a conflict; for example, the pip resolver gives the following error:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
langchainplus-sdk 0.0.20 requires pydantic<2,>=1, but you have pydantic 2.3.0 which is incompatible.
langchain 0.0.228 requires pydantic<2,>=1, but you have pydantic 2.3.0 which is incompatible.
```
### Motivation
pydantic 1.* is outdated. Starting a new project with old syntax that will soon be deprecated is unpleasant.
### Your contribution
- | pydantic 2.0 support | https://api.github.com/repos/langchain-ai/langchain/issues/10355/comments | 1 | 2023-09-08T08:33:35Z | 2023-09-08T08:49:07Z | https://github.com/langchain-ai/langchain/issues/10355 | 1,887,216,037 | 10,355 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using `ConversationalRetrievalChain` for a document Q/A bot and updating `chat_history = []` with every message; however, I noticed this chat history string is never added to the final inference string. In the `_call` method of the `BaseConversationalRetrievalChain` class, even when the `if chat_history_str:` condition is true, `new_question` is never updated with `chat_history_str`.
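For context, the history feeds the question-generator step (which condenses the history plus the new question into a standalone question) rather than being concatenated into the final prompt. If different behavior is needed, that prompt can be customized (a sketch, assuming the standard `from_llm` signature; `llm` and `retriever` are set up elsewhere):
```
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain

condense_prompt = PromptTemplate.from_template(
    "Given the conversation:\n{chat_history}\n"
    "Rephrase the follow-up question as a standalone question: {question}"
)
chain = ConversationalRetrievalChain.from_llm(
    llm, retriever, condense_question_prompt=condense_prompt
)
```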
### Suggestion:
_No response_ | ConversationalRetrievalChain is not adding chat_history to new message | https://api.github.com/repos/langchain-ai/langchain/issues/10353/comments | 1 | 2023-09-08T07:36:09Z | 2023-12-18T23:47:37Z | https://github.com/langchain-ai/langchain/issues/10353 | 1,887,127,834 | 10,353 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Like https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/confluence.py, we need a loader for Quip documents; please follow https://github.com/quip/quip-api/tree/master/python
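The loader core might look roughly like this (a sketch, assuming the `QuipClient.get_thread` call from the linked client library and that a thread response exposes an `html` field; the token and thread id are placeholders):
```
import quip  # pip install quip-api
from langchain.docstore.document import Document

client = quip.QuipClient(access_token="YOUR_TOKEN")
thread = client.get_thread("THREAD_ID")
doc = Document(page_content=thread["html"],
               metadata={"source": "quip:THREAD_ID"})
```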
### Motivation
I have a lot of documents in Quip and want to load them.
### Your contribution
I'd like to contribute this if no one else takes it. | quip doc loader | https://api.github.com/repos/langchain-ai/langchain/issues/10352/comments | 5 | 2023-09-08T07:07:49Z | 2023-12-18T23:47:43Z | https://github.com/langchain-ai/langchain/issues/10352 | 1,887,085,921 | 10,352 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I was led to use GPT4All for building a personal chatbot, but it generated undesired output. More precisely, it generates the whole conversation after the first prompt is written. To solve this, I instantiated GPT4All with a non-empty list value for the `stop` attribute:
```
llm = GPT4All(
    model=local_path,
    callbacks=callbacks,
    verbose=True,
    streaming=True,
    stop=["System:"],
)
```
However, it continues to generate this undesired output.
### Suggestion:
Finally, I may have found a small fix for this problem for this type of LLM. I intend to create a PR very soon, if you agree. | Issue: GPT4All LLM continues to generate undesired tokens even if the stop attribute has been specified | https://api.github.com/repos/langchain-ai/langchain/issues/10345/comments | 6 | 2023-09-07T21:33:08Z | 2024-02-21T16:08:35Z | https://github.com/langchain-ai/langchain/issues/10345 | 1,886,588,480 | 10,345 |
[
"langchain-ai",
"langchain"
] | ### System Info
With streaming turned on, verbose-style output is printed even though verbose mode is off.
Am I doing something wrong?
Python 3.10.9
Name: langchain
Version: 0.0.284
```
llm2 = ChatOpenAI(
    temperature=0,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    model="gpt-4-0613",
)
agent = initialize_agent(
    tools,
    llm2,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=False,
)
```
```
Thought: Do I need to use a tool? Yes
Action: Search for Property Owners
Action Input: 5954A Bartonsville RoadDo I need to use a tool? No
AI: The house at 5954A Bartonsville Road is owned by Rainbown Johonson.The house at 5954A Bartonsville Road is owned by Rainbown Johonson.
Thought: Do I need to use a tool? Yes
Action: Search for Property Owners
Action Input: 1163 Annamarie WayDo I need to use a tool? Yes
Action: Search for People
Action Input: Randolph HillDo I need to use a tool? No
```
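What is being printed is most likely not verbose mode but `StreamingStdOutCallbackHandler`, which streams every LLM token, including the agent's intermediate Thought/Action text, to stdout. A variant that only streams the final answer might give the intended behavior (a sketch):
```
from langchain.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)

llm2 = ChatOpenAI(
    temperature=0,
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler()],
    model="gpt-4-0613",
)
```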
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm2 = ChatOpenAI(
    temperature=0,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    model="gpt-4-0613",
)
agent = initialize_agent(
    tools,
    llm2,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    # verbose=False,
)
print(
    agent.run(
        input="Who owns the house at the following address 5954A Bartonsville Road?"
    )
)
```
### Expected behavior
I do not expect verbose output.
| Streaming Turns on Verbose mode for AgentType.CONVERSATIONAL_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/10339/comments | 3 | 2023-09-07T17:56:38Z | 2023-12-14T16:04:47Z | https://github.com/langchain-ai/langchain/issues/10339 | 1,886,350,516 | 10,339 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hello.
I think there is little to no explanation of the differences between the `LLMSingleActionAgent` and `Agent` classes, and which is suitable for which scenario. Both classes inherit from `BaseSingleActionAgent`.
Thanks in advance.
### Idea or request for content:
_No response_ | DOC: LLMSingleActionAgent vs. Agent | https://api.github.com/repos/langchain-ai/langchain/issues/10338/comments | 2 | 2023-09-07T17:33:37Z | 2023-12-14T16:04:52Z | https://github.com/langchain-ai/langchain/issues/10338 | 1,886,322,923 | 10,338 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I'd like a way to silence TextGen's output in my terminal. For example, a parameter would be perfect.
Looking at TextGen's source code, I see this line `print(prompt + result)`.
If I remove this line, then I get the desired effect where nothing is printed to my terminal.
### Motivation
I'm working on an app with lots of logging and thousands of requests are sent via TextGen.
My output is very noisy with every prompt and result printed, and it's troublesome to scroll through it all. I only care about the result I receive.
### Your contribution
My suggestion is to add a parameter flag to TextGen such that I can control the print (on or off), instead of it being hardcoded in.
There are 2 areas where I'd like to see it changed:
1. When streaming is enabled `print(prompt + combined_text_output)`
2. When streaming is disabled `print(prompt + result)`
| TextGen parameter to silence the print in terminal | https://api.github.com/repos/langchain-ai/langchain/issues/10337/comments | 2 | 2023-09-07T17:16:09Z | 2023-12-07T16:14:14Z | https://github.com/langchain-ai/langchain/issues/10337 | 1,886,302,306 | 10,337 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have used the SemanticSimilarityExampleSelector and created a prompt. When I try to pass this to an agent, it fails with: **ValueError: Saving an example selector is not currently supported**

ValueError: Saving an example selector is not currently supported
to create prompt I have used https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the code provided above along with the prompt selector from the original documentation.
### Expected behavior
To produce the output for the question asked. | Issue with Few Shot prompt when passed to agent | https://api.github.com/repos/langchain-ai/langchain/issues/10336/comments | 37 | 2023-09-07T17:11:20Z | 2024-02-15T16:09:55Z | https://github.com/langchain-ai/langchain/issues/10336 | 1,886,296,572 | 10,336 |
[
"langchain-ai",
"langchain"
] | ### System Info
There are a few differences between the PineconeHybridSearchRetriever and base Pinecone retriever making it difficult to switch to the former.
@hw
You can pass `search_kwargs` to a Pinecone index via `index.as_retriever(search_kwargs={"filter": value})`, because it inherits from the base `VectorStore`, whose `as_retriever` method accepts `search_kwargs`.
But it is not clear from the documentation of PineconeHybridSearchRetriever, which inherits from BaseRetriever, how to pass those same arguments.
Also, the `add_texts` method in the `Pinecone` class returns the IDs, whereas in hybrid search it does not:
([API reference](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html#langchain.vectorstores.pinecone.Pinecone.add_texts))
```
def add_texts(
    self,
    texts: Iterable[str],
    metadatas: Optional[List[dict]] = None,
    ids: Optional[List[str]] = None,
    namespace: Optional[str] = None,
    batch_size: int = 32,
    embedding_chunk_size: int = 1000,
    **kwargs: Any,
...
...
    for i in range(0, len(texts), embedding_chunk_size):
        chunk_texts = texts[i : i + embedding_chunk_size]
        chunk_ids = ids[i : i + embedding_chunk_size]
        chunk_metadatas = metadatas[i : i + embedding_chunk_size]
        embeddings = self._embed_documents(chunk_texts)
        async_res = [
            self._index.upsert(
                vectors=batch,
                namespace=namespace,
                async_req=True,
                **kwargs,
            )
            for batch in batch_iterate(
                batch_size, zip(chunk_ids, embeddings, chunk_metadatas)
            )
        ]
        [res.get() for res in async_res]
    return ids
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a Pinecone index and use `as_retriever`, passing `search_kwargs={"filter": {key: value}}`.
Vs
Create a `PineconeHybridSearchRetriever`: there is no way to pass filters.
### Expected behavior
Allow passing `search_kwargs` in the class instantiation of `PineconeHybridSearchRetriever` so it can be used downstream in `ConversationalRetrievalChain`. | PineconeHybridSearchRetriever missing several args and not returning ids of vectors when adding texts | https://api.github.com/repos/langchain-ai/langchain/issues/10333/comments | 5 | 2023-09-07T16:19:08Z | 2024-05-10T13:39:00Z | https://github.com/langchain-ai/langchain/issues/10333 | 1,886,206,628 | 10,333 |
[
"langchain-ai",
"langchain"
] | ### System Info
Got no system info to share.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I just want to say that what you've developed is awesome, but I have a small bug to report: sometimes when I am chatting, the response will be in French/Spanish even though the source documentation and the question asked are in English.
How to reproduce:
Say hello to the bot; it will respond that the information is not in the given context.
Ask it some question in English about the docs it learned from; the answer will be in another language (French/Spanish/etc.).
Here is the code.
```
import streamlit as st
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("State your question"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    llm = ChatOpenAI(temperature=0.5, max_tokens=1000,
                     model_name="gpt-3.5-turbo")
    conversation = ConversationalRetrievalChain.from_llm(
        llm, vector_store.as_retriever())
    final = conversation({"question": prompt, "chat_history": [
        (message["role"], message["content"]) for message in st.session_state.messages]})
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        final_response = final["answer"]
        message_placeholder.markdown(final_response)
    st.session_state.messages.append(
        {"role": "assistant", "content": final_response})
```
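A possible workaround (a sketch, not a confirmed fix) is to pass a custom QA prompt that pins the answer language via the `combine_docs_chain_kwargs` hook:
```python
from langchain.prompts import PromptTemplate

qa_prompt = PromptTemplate.from_template(
    "Use the following context to answer the question. "
    "Always answer in English.\n\n{context}\n\nQuestion: {question}\nAnswer:"
)
conversation = ConversationalRetrievalChain.from_llm(
    llm,
    vector_store.as_retriever(),
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
```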
### Expected behavior
The answer should be in the same language the question was asked in. | Response is sometimes in different language | https://api.github.com/repos/langchain-ai/langchain/issues/10329/comments | 1 | 2023-09-07T14:13:38Z | 2023-12-14T16:05:03Z | https://github.com/langchain-ai/langchain/issues/10329 | 1,885,991,150 | 10,329
[
"langchain-ai",
"langchain"
] | ### System Info
Running SQLDatabaseChain with LangChain version 0.0.271 and SQLite returns records that do not match the query.
Using SQLDatabaseChain with verbose set to True, I am getting this in the console:
```sql
SQLQuery:SELECT id, sale_type, sold_date, property_type, address, city, state_or_province, zip_or_postal_code, price, beds, baths, location, square_feet, lot_size, year_built, day_on_market, usd_per_square_feet, hoa_per_month, url, latitude, longitude FROM properties WHERE city = 'New York' AND price <= 900000 AND property_type = 'House' AND beds >= 2 ORDER BY price DESC LIMIT 5;
```
```
SQLResult:
Answer:[{"id":3,"sale_type":"MLS Listing","sold_date":"None","property_type":"Condo/Co-op","address":"225 Fifth Ave Ph -H","city":"New York","state_or_province":"NY","zip_or_postal_code":10010,"price":3495000,"beds":3,"baths":3.0,"location":"NoMad","square_feet":"1987","lot_size":31106,"year_built":1907,"day_on_market":1,"usd_per_square_feet":"1987","hoa_per_month":"HOAMONTH","url":"https://www.redfin.com/NY/New-York/225-5th-Ave-10010/unit-H/home/174298359","latitude":40.7437447,"longitude":-73.9875513},{"id":2,"sale_type":"MLS Listing","sold_date":"None","property_type":"Condo/Co-op","address":"416 W 52nd St #520","city":"New York","state_or_province":"NY","zip_or_
```
The SQLResult does not match the SQLQuery. The SQLResult contains a property with a price of 3495000, which is higher than the 900000 filter in the SQLQuery. Also, the property_type "Condo/Co-op" does not match "House" from the query. Note that, for some reason, the result string is truncated at "zip_or_", around 770 characters.
When I copy/paste the SQL query from SQLQuery in the console and execute it directly in the database, it returns 0 results, which is what it should return. For info, the "price" column is an INTEGER.
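To double-check this against the database itself, a quick sketch (the database path is a placeholder) is:
```python
import sqlite3

conn = sqlite3.connect("properties.db")  # placeholder path
cur = conn.execute(
    "SELECT COUNT(*) FROM properties "
    "WHERE city = 'New York' AND price <= 900000 "
    "AND property_type = 'House' AND beds >= 2"
)
print(cur.fetchone()[0])  # prints 0, so the SQLResult above cannot come from this query
```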
This is the code:
```python
def ask_question():
data = request.get_json()
question = data['question']
question = QUERY.format(question=question)
text = db_chain.run(question)
```
Did I miss something?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run an SQLDatabaseChain with verbose to True
Compare the SQLQuery and the SQLResult from the console. The result is not what is expected given the query.
### Expected behavior
The SQL result from the console should match the SQL query. | Running SQLDatabaseChain return records that does not match the query | https://api.github.com/repos/langchain-ai/langchain/issues/10325/comments | 7 | 2023-09-07T11:33:14Z | 2024-02-29T10:19:39Z | https://github.com/langchain-ai/langchain/issues/10325 | 1,885,710,571 | 10,325 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a PDF with more than 2000 pages. The file was read successfully, but the vector DB was not created and I am getting this error. Please suggest a solution.
**Complete error:**
```
embeddings
results = cur.execute(sql, params).fetchall()
sqlite3.OperationalError: too many SQL variables
```
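This error usually means the vector store's SQLite backend hit its bound-parameter limit because too many chunks were inserted in a single call. A minimal workaround sketch, assuming a Chroma store (the batch size is arbitrary; `vectordb` and `docs` are assumed to exist):
```python
batch_size = 500  # keep each insert well under SQLite's variable limit

# vectordb is an existing Chroma instance; docs is the list of split documents
for i in range(0, len(docs), batch_size):
    vectordb.add_documents(docs[i : i + batch_size])
```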
### Suggestion:
_No response_ | sqlite3.OperationalError: too many SQL variables | https://api.github.com/repos/langchain-ai/langchain/issues/10321/comments | 6 | 2023-09-07T09:11:01Z | 2023-12-14T16:05:07Z | https://github.com/langchain-ai/langchain/issues/10321 | 1,885,452,004 | 10,321 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have created a Llama 7b model and used the model serving option in Databricks. When I test the model it works in Databricks, but when I use the same endpoint with LangChain I get this error.
This works against the Databricks model serving endpoint:
JSON input:
```json
{
  "dataframe_split": {
    "columns": [
      "prompt",
      "temperature",
      "max_tokens"
    ],
    "data": [
      [
        "what is ML?",
        0.5,
        100
      ]
    ]
  }
}
```
Code:
```python
from langchain.llms import Databricks

llm = Databricks(endpoint_name="databricks-llama-servingmodel", model_kwargs={"temperature": 0.1, "max_tokens": 100})
llm("who are you")
```
Error:
```
ValidationError: 1 validation error for Generation
text
  str type expected (type=type_error.str)
```
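The endpoint is returning a structured payload (the dataframe-style response shown above) rather than a plain string, so the `Generation.text` validation fails. A sketch of a possible workaround using the wrapper's `transform_output_fn` hook (the response shape below is an assumption; adjust the key/index to your endpoint's actual payload):
```python
from langchain.llms import Databricks

def extract_text(response):
    # Assumed payload shape; inspect your endpoint's raw response and adapt
    if isinstance(response, str):
        return response
    return str(response["predictions"][0])

llm = Databricks(
    endpoint_name="databricks-llama-servingmodel",
    model_kwargs={"temperature": 0.1, "max_tokens": 100},
    transform_output_fn=extract_text,
)
print(llm("who are you"))
```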
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Take any LLM and use the notebook below:
https://github.com/databricks/databricks-ml-examples/blob/master/llm-models/llamav2/llamav2-7b/02_mlflow_logging_inference.py (for model serving)
Then try using model serving in LangChain as shown in the code above and in the official LangChain documents:
```python
from langchain.llms import Databricks

llm = Databricks(endpoint_name="databricks-llama-servingmodel", model_kwargs={"temperature": 0.1, "max_tokens": 100})
llm("who are you")
```
### Expected behavior
llm("who are you")
expected output:- I am language model
| Langchain doesnt work with Databricks Model serving asking generate (str type expected (type=type_error.str) | https://api.github.com/repos/langchain-ai/langchain/issues/10318/comments | 12 | 2023-09-07T08:28:28Z | 2024-06-25T03:05:42Z | https://github.com/langchain-ai/langchain/issues/10318 | 1,885,373,063 | 10,318 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am writing a ConversationalRetrievalChain with a question_generator chain. When writing a prompt for condensing the chat_history, I found it doesn't work well with the whole {chat_history}. I suspect that if I pass only the questions asked by "human:", it will perform better, but I don't know how to do that.
The original prompt is:
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, if the follow up question is already a standalone question, just return the follow up question.
Chat History:
{chat_history}
Follow Up Question: {question}
Standalone question:
What I want may look like:
.........
Question asked:
{chat_history.human}
Follow Up Question: {question}
Standalone question:
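One way to achieve this (a sketch; the role check assumes the history is passed as a list of BaseMessage objects) is to give the chain a custom `get_chat_history` callable, which controls how the history is rendered into `{chat_history}`:
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.schema import HumanMessage

def questions_only(chat_history) -> str:
    # Keep only the human turns of the conversation
    questions = [m.content for m in chat_history if isinstance(m, HumanMessage)]
    return "\n".join(f"Question asked: {q}" for q in questions)

chain = ConversationalRetrievalChain.from_llm(
    llm,                                        # your LLM
    retriever,                                  # your retriever
    condense_question_prompt=condense_prompt,   # the custom prompt sketched above
    get_chat_history=questions_only,
)
```
The default `get_chat_history` stringifies every turn, so replacing it is the intended hook for this kind of customization.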
Thanks for your attention.
### Suggestion:
_No response_ | Issue: How to customize the question_generator_chain in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/10317/comments | 2 | 2023-09-07T08:25:40Z | 2023-12-08T04:55:32Z | https://github.com/langchain-ai/langchain/issues/10317 | 1,885,368,696 | 10,317 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi,
I am unable to stream the final answer from the LLM chain to the Chainlit UI.
langchain==0.0.218
Python 3.9.16
here are the details:
https://github.com/Chainlit/chainlit/issues/313
Is this implemented? https://github.com/langchain-ai/langchain/pull/1222/
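For reference, recent LangChain versions do ship a final-answer-only handler; a minimal stdout sketch (adapting it to the Chainlit UI would still require a custom handler along the same lines) looks like:
```python
from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)

llm = OpenAI(
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler()],
    temperature=0,
)
```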
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
The code in the linked Chainlit issue above reproduces the behaviour.
### Expected behavior
Expected behaviour: the final answer from the LLM chain is streamed properly to the Chainlit UI | Final answer streaming problem | https://api.github.com/repos/langchain-ai/langchain/issues/10316/comments | 21 | 2023-09-07T07:14:40Z | 2024-08-09T16:07:47Z | https://github.com/langchain-ai/langchain/issues/10316 | 1,885,260,192 | 10,316
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to create a callback handler that monitors all tokens across all intermediate steps and starts catching and returning the output once the response contains "AI:". How can I achieve that using the following custom callback handler?
```
import sys
from typing import Any, Dict, List, Optional
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
DEFAULT_ANSWER_PREFIX_TOKENS = ["AI", ":"]
class CustomCallBackHandler(StreamingStdOutCallbackHandler):
"""Callback handler for streaming in agents.
Only works with agents using LLMs that support streaming.
Only the final output of the agent will be streamed.
"""
def append_to_last_tokens(self, token: str) -> None:
self.last_tokens.append(token)
self.last_tokens_stripped.append(token.strip())
if len(self.last_tokens) > len(self.answer_prefix_tokens):
self.last_tokens.pop(0)
self.last_tokens_stripped.pop(0)
def check_if_answer_reached(self) -> bool:
if self.strip_tokens:
return self.last_tokens_stripped == self.answer_prefix_tokens_stripped
else:
return self.last_tokens == self.answer_prefix_tokens
def __init__(
self,
*,
answer_prefix_tokens: Optional[List[str]] = None,
strip_tokens: bool = True,
stream_prefix: bool = False
) -> None:
        """Instantiate CustomCallBackHandler.

        Args:
            answer_prefix_tokens: Token sequence that prefixes the answer.
                Default is ["AI", ":"]
            strip_tokens: Ignore white spaces and new lines when comparing
                answer_prefix_tokens to last tokens? (to determine if answer has been
                reached)
            stream_prefix: Should answer prefix itself also be streamed?
        """
        self.collected_tokens = []
super().__init__()
if answer_prefix_tokens is None:
self.answer_prefix_tokens = DEFAULT_ANSWER_PREFIX_TOKENS
else:
self.answer_prefix_tokens = answer_prefix_tokens
if strip_tokens:
self.answer_prefix_tokens_stripped = [
token.strip() for token in self.answer_prefix_tokens
]
else:
self.answer_prefix_tokens_stripped = self.answer_prefix_tokens
self.last_tokens = [""] * len(self.answer_prefix_tokens)
self.last_tokens_stripped = [""] * len(self.answer_prefix_tokens)
self.strip_tokens = strip_tokens
self.stream_prefix = stream_prefix
self.answer_reached = False
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
self.answer_reached = False
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Run on new LLM token. Only available when streaming is enabled."""
# Remember the last n tokens, where n = len(answer_prefix_tokens)
self.append_to_last_tokens(token)
# Check if the last n tokens match the answer_prefix_tokens list ...
if self.check_if_answer_reached():
self.answer_reached = True
if self.stream_prefix:
for t in self.last_tokens:
self.collected_tokens.append(t)
return
# ... if yes, then collect tokens from now on
if self.answer_reached:
self.collected_tokens.append(token)
def get_collected_tokens(self) -> str:
"""Return the collected tokens as a single string."""
return ''.join(self.collected_tokens)
```
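For what it's worth, a minimal usage sketch (assumptions: a streaming-capable chat model, and a chain/agent whose prompt actually emits the "AI:" prefix in its output) would be:
```python
from langchain.chat_models import ChatOpenAI

handler = CustomCallBackHandler(answer_prefix_tokens=["AI", ":"])
llm = ChatOpenAI(streaming=True, callbacks=[handler], temperature=0)

llm.predict("Say hello")                # tokens flow through on_llm_new_token
print(handler.get_collected_tokens())   # everything captured after "AI:" appeared
```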
### Suggestion:
_No response_ | Issue: CustomCallBackHandlers for catch all intermediate steps | https://api.github.com/repos/langchain-ai/langchain/issues/10315/comments | 2 | 2023-09-07T06:57:57Z | 2023-12-14T16:05:12Z | https://github.com/langchain-ai/langchain/issues/10315 | 1,885,236,282 | 10,315 |
[
"langchain-ai",
"langchain"
] | ### System Info
Apple Macbook M1 Pro
python: 3.11.2
langchain: 0.0.283
pydantic: 2.3.0
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip3.11 install langchain
2. run python3.11 in terminal
3. execute `from langchain.llms import OpenAI`, which produces the following error for me
```
Python 3.11.2 (main, Feb 16 2023, 02:55:59) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from langchain.llms import OpenAI
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/opt/homebrew/lib/python3.11/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/opt/homebrew/lib/python3.11/site-packages/langchain/agents/agent.py", line 14, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/opt/homebrew/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 30, in <module>
from langchain.tools import BaseTool
File "/opt/homebrew/lib/python3.11/site-packages/langchain/tools/__init__.py", line 25, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "/opt/homebrew/lib/python3.11/site-packages/langchain/tools/arxiv/tool.py", line 8, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "/opt/homebrew/lib/python3.11/site-packages/langchain/utilities/__init__.py", line 7, in <module>
from langchain.utilities.apify import ApifyWrapper
File "/opt/homebrew/lib/python3.11/site-packages/langchain/utilities/apify.py", line 3, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "/opt/homebrew/lib/python3.11/site-packages/langchain/document_loaders/__init__.py", line 76, in <module>
from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader
File "/opt/homebrew/lib/python3.11/site-packages/langchain/document_loaders/embaas.py", line 54, in <module>
class BaseEmbaasLoader(BaseModel):
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 539, in prepare
self.populate_validators()
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 801, in populate_validators
*(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/validators.py", line 696, in find_validators
yield make_typeddict_validator(type_, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/validators.py", line 585, in make_typeddict_validator
TypedDictModel = create_model_from_typeddict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/annotated_types.py", line 35, in create_model_from_typeddict
return create_model(typeddict_cls.__name__, **kwargs, **field_definitions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/main.py", line 972, in create_model
return type(__model_name, __base__, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 534, in prepare
self._type_analysis()
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 638, in _type_analysis
elif issubclass(origin, Tuple): # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/typing.py", line 1551, in __subclasscheck__
return issubclass(cls, self.__origin__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 1 must be a class
```
### Expected behavior
To be able to use langchain | Langchain Quickstart is not working for me | https://api.github.com/repos/langchain-ai/langchain/issues/10314/comments | 2 | 2023-09-07T06:38:07Z | 2023-09-09T21:29:36Z | https://github.com/langchain-ai/langchain/issues/10314 | 1,885,210,992 | 10,314 |
[
"langchain-ai",
"langchain"
] | ### System Info
jupyter notebook, RTX 3090
### Who can help?
@agola11 @hwchase17 @ey
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings import SentenceTransformerEmbeddings
embedding=lambda x: x['combined_info'].apply(lambda text: embeddings.embed_documents(text))
```
This does not work.
Are there any workarounds for it?
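In the snippet above, `embeddings` is never instantiated, and `embed_documents` expects a list of strings (returning one vector per string) rather than a single string per row. A minimal working sketch, assuming a dataframe with a `combined_info` text column (the model name is just an example):
```python
import pandas as pd
from langchain.embeddings import SentenceTransformerEmbeddings

embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

df = pd.DataFrame({"combined_info": ["first text", "second text"]})  # placeholder data
df["embedding"] = embeddings.embed_documents(df["combined_info"].tolist())
```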
### Expected behavior
outputs embeddings | apply embeddings to pandas dataframe | https://api.github.com/repos/langchain-ai/langchain/issues/10313/comments | 7 | 2023-09-07T06:02:33Z | 2023-12-14T16:05:18Z | https://github.com/langchain-ai/langchain/issues/10313 | 1,885,171,818 | 10,313 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
When I am using a tool with `return_direct = True` , the observation step will provide the answer. Is there a way to stream that observation from the intermediate steps using callbacks. I found no information regarding this. It is much appreciated if we can add something regarding that as well.
### Idea or request for content:
_No response_ | DOC: <Observation Streaming Using Call Back> | https://api.github.com/repos/langchain-ai/langchain/issues/10312/comments | 4 | 2023-09-07T04:57:35Z | 2023-12-30T18:21:43Z | https://github.com/langchain-ai/langchain/issues/10312 | 1,885,116,454 | 10,312 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When working with the `CONVERSATIONAL_REACT_DESCRIPTION`, I found Observation was capped out when passing to the action input. For example I have retrieved a JsonArray by using a structured tool but when the agent passes it to the final tool jsonArray was capped out. Can I handle it?
### Suggestion:
_No response_ | Issue: Obeservation is capping out when passing to Action Input [CONVERSATIONAL_REACT_DESCRIPTION] | https://api.github.com/repos/langchain-ai/langchain/issues/10311/comments | 2 | 2023-09-07T04:34:10Z | 2023-12-14T16:05:27Z | https://github.com/langchain-ai/langchain/issues/10311 | 1,885,097,882 | 10,311 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I wasn't able to find any information related to the customization of Action Input and Final Answer customization. It is much-appreciated if the documentation can provide some info regarding that as well.
### Idea or request for content:
_No response_ | DOC: Action Input and Final Answer | https://api.github.com/repos/langchain-ai/langchain/issues/10310/comments | 2 | 2023-09-07T04:03:13Z | 2023-09-15T03:15:07Z | https://github.com/langchain-ai/langchain/issues/10310 | 1,885,075,993 | 10,310 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My test code:
```
import unittest
from langchain.chat_models import ErnieBotChat
from langchain.schema import (
HumanMessage
)
class TestErnieBotCase(unittest.TestCase):
def test_ernie_bot(self):
chat_llm = ErnieBotChat(
ernie_client_id="xxx",
ernie_client_secret="xxx"
)
result = chat_llm.generate(messages=[HumanMessage(content="请列出清朝的所有皇帝的姓名和年号")])
print(result)
if __name__ == '__main__':
unittest.main()
```
It returns this error:
```
Error
Traceback (most recent call last):
File "/Users/keigo/Workspace/study/langchain/tests/test_erniebot.py", line 14, in test_ernie_bot
result = chat_llm.generate(messages=[HumanMessage(content="请列出清朝的所有皇帝的姓名和年号")])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 309, in generate
raise e
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 299, in generate
self._generate_with_cache(
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 446, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/ernie.py", line 157, in _generate
"messages": [_convert_message_to_dict(m) for m in messages],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/ernie.py", line 157, in <listcomp>
"messages": [_convert_message_to_dict(m) for m in messages],
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/ernie.py", line 31, in _convert_message_to_dict
raise ValueError(f"Got unknown type {message}")
ValueError: Got unknown type ('content', '请列出清朝的所有皇帝的姓名和年号')
```
In the ernie.py file, inside the function `_convert_message_to_dict`, the variable `message` is of type tuple.
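The tuple actually comes from the call site rather than from ernie.py: `BaseChatModel.generate` expects a list of message lists (`List[List[BaseMessage]]`, one inner list per prompt), so passing a flat list makes the chat model iterate over the `HumanMessage` itself, which yields `(field, value)` tuples. A sketch of the corrected calls:
```python
# generate takes List[List[BaseMessage]]: one inner list per conversation
result = chat_llm.generate(messages=[[HumanMessage(content="请列出清朝的所有皇帝的姓名和年号")]])

# or invoke the chat model directly with a single list of messages
reply = chat_llm([HumanMessage(content="请列出清朝的所有皇帝的姓名和年号")])
```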
### Suggestion:
_No response_ | Errors about ErnieBotChat | https://api.github.com/repos/langchain-ai/langchain/issues/10309/comments | 2 | 2023-09-07T03:46:18Z | 2023-09-07T04:09:47Z | https://github.com/langchain-ai/langchain/issues/10309 | 1,885,064,287 | 10,309 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
LangChain's pandas agent (create_pandas_dataframe_agent) is hard to get working with Llama models (the same scripts work well with gpt-3.5).
I am trying to use the local model Vicuna 13b v1.5 (LLaMA 2 based) to create a local question-and-answer system.
It works well for document QA,
but it doesn't work well with pandas data (calling create_pandas_dataframe_agent). Any suggestions for me?
My calling code:
```
import pandas as pd
from langchain.agents import create_pandas_dataframe_agent, AgentType
from langchain.chat_models import ChatOpenAI

df = pd.read_excel(CSV_NAME)
pd_agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0, model_name=set_model),
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_executor_kwargs={"handle_parsing_errors": True},
)
pd_agent.run(query)
```
Example: when I ask "How many events occurred after May 20, 2023?",
the final answer would be:
````From the data provided, we can find the GUID of the event, and then filter out the data after May 20, 2023 based on the date. The following is the Python code implementation:
```python
import pandas as pd
#Read data
data = pd.read_csv('your_data_file.csv')
# Filter the data in the date range
filtered_data = data[data['date'] >= '2023-05-20']
# Extract the GUID of the event
guids = filtered_data[filtered_data['element'] == 'events']['id'].unique()
# Output the GUID of the event
print(guids)
```
Please replace `your_data_file.csv` with your data file name. This code will output the GUID of the event after May 20, 2023.
````
It seems the chain did not proceed to the last step. Any suggestions for me?
1. Other than create_pandas_dataframe_agent, is there another chain or agent I can try?
2. If I need to override some methods, which method should I edit?
The only similar example I found was written by kvnsng: https://github.com/langchain-ai/langchain/issues/7709#issuecomment-1653833036
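One likely culprit (an assumption, since local Vicuna servers are usually OpenAI-compatible only for plain chat): `AgentType.OPENAI_FUNCTIONS` relies on OpenAI's function-calling API, which Llama-family endpoints generally don't implement, so the agent never executes its tool and just returns the generated code as text. A sketch of switching to the plain-text ReAct agent type:
```
pd_agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0, model_name=set_model),
    df,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # text-based tool protocol
    agent_executor_kwargs={"handle_parsing_errors": True},
)
```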
### Suggestion:
_No response_ | Issue: create_pandas_dataframe_agent is hard to work with llama models | https://api.github.com/repos/langchain-ai/langchain/issues/10308/comments | 16 | 2023-09-07T03:36:15Z | 2024-06-28T16:05:13Z | https://github.com/langchain-ai/langchain/issues/10308 | 1,885,057,548 | 10,308 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.281
boto3==1.28.41
```python
s3 = boto3.resource(
"s3",
region_name=self.region_name,
api_version=self.api_version,
use_ssl=self.use_ssl,
verify=self.verify,
endpoint_url=self.endpoint_url,
aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
aws_session_token=self.aws_session_token,
config=self.boto_config,
)
```
line 117 should be `config=self.boto_config` not `boto_config=self.boto_config`.
```python
for obj in bucket.objects.filter(Prefix=self.prefix):
if obj.get()["ContentLength"] == 0:
continue
loader = S3FileLoader(
self.bucket,
obj.key,
region_name=self.region_name,
api_version=self.api_version,
use_ssl=self.use_ssl,
verify=self.verify,
endpoint_url=self.endpoint_url,
aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
aws_session_token=self.aws_session_token,
config=self.boto_config,
)
docs.extend(loader.load())
```
Before line 122, a condition should be added to skip objects that are directories, and line 135 has the same `boto_config` issue again.
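A sketch of the directory guard (S3 has no real directories; "folder" placeholder objects created via the console have keys ending in a trailing slash, which is the usual check):
```python
for obj in bucket.objects.filter(Prefix=self.prefix):
    if obj.key.endswith("/"):  # skip directory placeholder objects
        continue
    if obj.get()["ContentLength"] == 0:
        continue
    ...
```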
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
unnecessary
### Expected behavior
unnecessary | `langchain/document_loaders/s3_directory.py` S3DirectoryLoader has 3 bugs | https://api.github.com/repos/langchain-ai/langchain/issues/10294/comments | 2 | 2023-09-06T16:20:11Z | 2023-12-14T16:05:33Z | https://github.com/langchain-ai/langchain/issues/10294 | 1,884,353,219 | 10,294 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/10068
<div type='discussions-op-text'>
<sup>Originally posted by **aMahanna** August 31, 2023</sup>
Hi!
Python version used: `3.10`
Virtual Environment Tool used: `venv`
I am trying to understand why `pip install langchain[all]` is installing LangChain `0.0.74`, as opposed to the latest version of LangChain.
Based on the installation output, I can see the installation of external modules, and a series of `Using cached langchain-X-py3-none-any.whl` logs, where `X` descends from `0.0.278` all the way to `0.0.74`
Perhaps I missed the documentation that speaks to this behaviour? I have been relying on the [installation.mdx](https://github.com/langchain-ai/langchain/blob/master/docs/snippets/get_started/installation.mdx) guide for this.
<details>
<summary>Output: `pip install "langchain[all]"`</summary>
```
❯ pip install "langchain[all]"
Collecting langchain[all]
Using cached langchain-0.0.278-py3-none-any.whl (1.6 MB)
Collecting async-timeout<5.0.0,>=4.0.0
Using cached async_timeout-4.0.3-py3-none-any.whl (5.7 kB)
Collecting langsmith<0.1.0,>=0.0.21
Using cached langsmith-0.0.30-py3-none-any.whl (35 kB)
Collecting numpy<2,>=1
Using cached numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
Collecting tenacity<9.0.0,>=8.1.0
Using cached tenacity-8.2.3-py3-none-any.whl (24 kB)
Collecting numexpr<3.0.0,>=2.8.4
Using cached numexpr-2.8.5-cp310-cp310-macosx_11_0_arm64.whl (90 kB)
Collecting SQLAlchemy<3,>=1.4
Using cached SQLAlchemy-2.0.20-cp310-cp310-macosx_11_0_arm64.whl (2.0 MB)
Collecting dataclasses-json<0.6.0,>=0.5.7
Using cached dataclasses_json-0.5.14-py3-none-any.whl (26 kB)
Collecting aiohttp<4.0.0,>=3.8.3
Using cached aiohttp-3.8.5-cp310-cp310-macosx_11_0_arm64.whl (343 kB)
Collecting pydantic<3,>=1
Using cached pydantic-2.3.0-py3-none-any.whl (374 kB)
Collecting requests<3,>=2
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting PyYAML>=5.3
Using cached PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl (169 kB)
Collecting lark<2.0.0,>=1.1.5
Using cached lark-1.1.7-py3-none-any.whl (108 kB)
Collecting arxiv<2.0,>=1.4
Using cached arxiv-1.4.8-py3-none-any.whl (12 kB)
Collecting html2text<2021.0.0,>=2020.1.16
Using cached html2text-2020.1.16-py3-none-any.whl (32 kB)
Collecting jq<2.0.0,>=1.4.1
Using cached jq-1.5.0-cp310-cp310-macosx_11_0_arm64.whl (370 kB)
Collecting azure-identity<2.0.0,>=1.12.0
Using cached azure_identity-1.14.0-py3-none-any.whl (160 kB)
Collecting deeplake<4.0.0,>=3.6.8
Using cached deeplake-3.6.22.tar.gz (538 kB)
Preparing metadata (setup.py) ... done
Collecting torch<3,>=1
Using cached torch-2.0.1-cp310-none-macosx_11_0_arm64.whl (55.8 MB)
Collecting huggingface_hub<1,>=0
Using cached huggingface_hub-0.16.4-py3-none-any.whl (268 kB)
Collecting tiktoken<0.4.0,>=0.3.2
Using cached tiktoken-0.3.3-cp310-cp310-macosx_11_0_arm64.whl (706 kB)
Collecting transformers<5,>=4
Using cached transformers-4.32.1-py3-none-any.whl (7.5 MB)
Collecting openai<1,>=0
Using cached openai-0.27.10-py3-none-any.whl (76 kB)
Collecting pinecone-client<3,>=2
Using cached pinecone_client-2.2.2-py3-none-any.whl (179 kB)
Collecting aleph-alpha-client<3.0.0,>=2.15.0
Using cached aleph_alpha_client-2.17.0-py3-none-any.whl (41 kB)
Collecting azure-ai-formrecognizer<4.0.0,>=3.2.1
Using cached azure_ai_formrecognizer-3.3.0-py3-none-any.whl (297 kB)
Collecting wikipedia<2,>=1
Using cached wikipedia-1.4.0.tar.gz (27 kB)
Preparing metadata (setup.py) ... done
Collecting manifest-ml<0.0.2,>=0.0.1
Using cached manifest_ml-0.0.1-py2.py3-none-any.whl (42 kB)
Collecting momento<2.0.0,>=1.5.0
Using cached momento-1.9.1-py3-none-any.whl (134 kB)
Collecting pexpect<5.0.0,>=4.8.0
Using cached pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting azure-cognitiveservices-speech<2.0.0,>=1.28.0
Using cached azure_cognitiveservices_speech-1.31.0-py3-none-macosx_11_0_arm64.whl (6.4 MB)
Collecting gptcache>=0.1.7
Using cached gptcache-0.1.40-py3-none-any.whl (124 kB)
Collecting azure-cosmos<5.0.0,>=4.4.0b1
Using cached azure_cosmos-4.5.0-py3-none-any.whl (226 kB)
Collecting requests-toolbelt<2.0.0,>=1.0.0
Using cached requests_toolbelt-1.0.0-py2.py3-none-any.whl (54 kB)
Collecting pyowm<4.0.0,>=3.3.0
Using cached pyowm-3.3.0-py3-none-any.whl (4.5 MB)
Collecting amadeus>=8.1.0
Using cached amadeus-8.1.0.tar.gz (39 kB)
Preparing metadata (setup.py) ... done
Collecting networkx<3.0.0,>=2.6.3
Using cached networkx-2.8.8-py3-none-any.whl (2.0 MB)
Collecting qdrant-client<2.0.0,>=1.3.1
Using cached qdrant_client-1.4.0-py3-none-any.whl (132 kB)
Collecting langkit<0.1.0,>=0.0.6
Using cached langkit-0.0.17-py3-none-any.whl (754 kB)
Collecting jinja2<4,>=3
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting openlm<0.0.6,>=0.0.5
Using cached openlm-0.0.5-py3-none-any.whl (10 kB)
Collecting faiss-cpu<2,>=1
Using cached faiss_cpu-1.7.4-cp310-cp310-macosx_11_0_arm64.whl (2.7 MB)
Collecting psycopg2-binary<3.0.0,>=2.9.5
Using cached psycopg2_binary-2.9.7-cp310-cp310-macosx_11_0_arm64.whl (2.5 MB)
Collecting pinecone-text<0.5.0,>=0.4.2
Using cached pinecone_text-0.4.2-py3-none-any.whl (17 kB)
Collecting opensearch-py<3.0.0,>=2.0.0
Using cached opensearch_py-2.3.1-py2.py3-none-any.whl (327 kB)
Collecting weaviate-client<4,>=3
Using cached weaviate_client-3.23.2-py3-none-any.whl (108 kB)
Collecting sentence-transformers<3,>=2
Using cached sentence_transformers-2.2.2-py3-none-any.whl
Collecting google-search-results<3,>=2
Using cached google_search_results-2.4.2.tar.gz (18 kB)
Preparing metadata (setup.py) ... done
Collecting pytesseract<0.4.0,>=0.3.10
Using cached pytesseract-0.3.10-py3-none-any.whl (14 kB)
Collecting singlestoredb<0.8.0,>=0.7.1
Using cached singlestoredb-0.7.1-cp36-abi3-macosx_10_9_universal2.whl (196 kB)
Collecting marqo<2.0.0,>=1.2.4
Using cached marqo-1.2.4-py3-none-any.whl (32 kB)
Collecting nomic<2.0.0,>=1.0.43
Using cached nomic-1.1.14.tar.gz (31 kB)
Preparing metadata (setup.py) ... done
Collecting pypdf<4.0.0,>=3.4.0
Using cached pypdf-3.15.4-py3-none-any.whl (272 kB)
Collecting langchain[all]
Using cached langchain-0.0.277-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.276-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.275-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.274-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.273-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.272-py3-none-any.whl (1.6 MB)
Collecting google-api-core<3.0.0,>=2.11.1
Using cached google_api_core-2.11.1-py3-none-any.whl (120 kB)
Collecting langchain[all]
Using cached langchain-0.0.271-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.270-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.269-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.268-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.267-py3-none-any.whl (1.5 MB)
Collecting openapi-schema-pydantic<2.0,>=1.2
Using cached openapi_schema_pydantic-1.2.4-py3-none-any.whl (90 kB)
Collecting langchain[all]
Using cached langchain-0.0.266-py3-none-any.whl (1.5 MB)
Collecting pydantic<2,>=1
Using cached pydantic-1.10.12-cp310-cp310-macosx_11_0_arm64.whl (2.5 MB)
Collecting langchain[all]
Using cached langchain-0.0.265-py3-none-any.whl (1.5 MB)
Collecting anthropic<0.4,>=0.3
Using cached anthropic-0.3.11-py3-none-any.whl (796 kB)
Collecting xinference<0.0.7,>=0.0.6
Using cached xinference-0.0.6-py3-none-any.whl (65 kB)
Collecting spacy<4,>=3
Using cached spacy-3.6.1-cp310-cp310-macosx_11_0_arm64.whl (6.6 MB)
Collecting langchain[all]
Using cached langchain-0.0.264-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.263-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.262-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.261-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.260-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.259-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.258-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.257-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.256-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.255-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.254-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.253-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.252-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.251-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.250-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.249-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.248-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.247-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.246-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.245-py3-none-any.whl (1.4 MB)
Collecting awadb<0.4.0,>=0.3.3
Using cached awadb-0.3.10-cp310-cp310-macosx_13_0_arm64.whl (1.6 MB)
Collecting langchain[all]
Using cached langchain-0.0.244-py3-none-any.whl (1.4 MB)
Collecting cohere<4,>=3
Using cached cohere-3.10.0.tar.gz (15 kB)
Preparing metadata (setup.py) ... done
Collecting langchain[all]
Using cached langchain-0.0.243-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.242-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.240-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.239-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.238-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.237-py3-none-any.whl (1.3 MB)
Collecting langsmith<0.0.11,>=0.0.10
Using cached langsmith-0.0.10-py3-none-any.whl (27 kB)
Collecting langchain[all]
Using cached langchain-0.0.236-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.235-py3-none-any.whl (1.3 MB)
Collecting langsmith<0.0.8,>=0.0.7
Using cached langsmith-0.0.7-py3-none-any.whl (26 kB)
Collecting langchain[all]
Using cached langchain-0.0.234-py3-none-any.whl (1.3 MB)
Collecting langsmith<0.0.6,>=0.0.5
Using cached langsmith-0.0.5-py3-none-any.whl (25 kB)
Collecting langchain[all]
Using cached langchain-0.0.233-py3-none-any.whl (1.3 MB)
Collecting beautifulsoup4<5,>=4
Using cached beautifulsoup4-4.12.2-py3-none-any.whl (142 kB)
Collecting langchain[all]
Using cached langchain-0.0.232-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.231-py3-none-any.whl (1.3 MB)
Collecting langchainplus-sdk<0.0.21,>=0.0.20
Using cached langchainplus_sdk-0.0.20-py3-none-any.whl (25 kB)
Collecting langchain[all]
Using cached langchain-0.0.230-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.229-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.228-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.227-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.226-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.225-py3-none-any.whl (1.2 MB)
Collecting marqo<0.10.0,>=0.9.1
Using cached marqo-0.9.6-py3-none-any.whl (26 kB)
Collecting clarifai==9.1.0
Using cached clarifai-9.1.0-py3-none-any.whl (57 kB)
Collecting langchain[all]
Using cached langchain-0.0.224-py3-none-any.whl (1.2 MB)
Collecting anthropic<0.3.0,>=0.2.6
Using cached anthropic-0.2.10-py3-none-any.whl (6.3 kB)
Collecting langchain[all]
Using cached langchain-0.0.223-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.222-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.221-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.220-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.219-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.218-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.217-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.216-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.215-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.214-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.213-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.212-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.211-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.210-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.209-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.208-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.207-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.206-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.205-py3-none-any.whl (1.1 MB)
Collecting singlestoredb<0.7.0,>=0.6.1
Using cached singlestoredb-0.6.1-cp36-abi3-macosx_10_9_universal2.whl (193 kB)
Collecting langchain[all]
Using cached langchain-0.0.204-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.203-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.202-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.201-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.200-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.199-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.198-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.197-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.196-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.195-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.194-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.193-py3-none-any.whl (989 kB)
Collecting langchainplus-sdk<0.0.5,>=0.0.4
Using cached langchainplus_sdk-0.0.4-py3-none-any.whl (21 kB)
Collecting langchain[all]
Using cached langchain-0.0.192-py3-none-any.whl (989 kB)
Using cached langchain-0.0.191-py3-none-any.whl (993 kB)
Using cached langchain-0.0.190-py3-none-any.whl (983 kB)
Using cached langchain-0.0.189-py3-none-any.whl (975 kB)
Using cached langchain-0.0.188-py3-none-any.whl (969 kB)
Using cached langchain-0.0.187-py3-none-any.whl (960 kB)
Using cached langchain-0.0.186-py3-none-any.whl (949 kB)
Using cached langchain-0.0.185-py3-none-any.whl (949 kB)
Using cached langchain-0.0.184-py3-none-any.whl (939 kB)
Using cached langchain-0.0.183-py3-none-any.whl (938 kB)
Using cached langchain-0.0.182-py3-none-any.whl (938 kB)
Using cached langchain-0.0.181-py3-none-any.whl (934 kB)
Using cached langchain-0.0.180-py3-none-any.whl (922 kB)
Using cached langchain-0.0.179-py3-none-any.whl (907 kB)
Using cached langchain-0.0.178-py3-none-any.whl (892 kB)
Using cached langchain-0.0.177-py3-none-any.whl (877 kB)
Collecting docarray<0.32.0,>=0.31.0
Using cached docarray-0.31.1-py3-none-any.whl (210 kB)
Collecting hnswlib<0.8.0,>=0.7.0
Using cached hnswlib-0.7.0.tar.gz (33 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting protobuf==3.19.6
Using cached protobuf-3.19.6-py2.py3-none-any.whl (162 kB)
Collecting langchain[all]
Using cached langchain-0.0.176-py3-none-any.whl (873 kB)
Using cached langchain-0.0.175-py3-none-any.whl (872 kB)
Using cached langchain-0.0.174-py3-none-any.whl (869 kB)
Collecting gql<4.0.0,>=3.4.1
Using cached gql-3.4.1-py2.py3-none-any.whl (65 kB)
Collecting langchain[all]
Using cached langchain-0.0.173-py3-none-any.whl (858 kB)
Using cached langchain-0.0.172-py3-none-any.whl (849 kB)
Using cached langchain-0.0.171-py3-none-any.whl (846 kB)
Using cached langchain-0.0.170-py3-none-any.whl (834 kB)
Using cached langchain-0.0.169-py3-none-any.whl (823 kB)
Using cached langchain-0.0.168-py3-none-any.whl (817 kB)
Using cached langchain-0.0.167-py3-none-any.whl (809 kB)
Collecting tqdm>=4.48.0
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Collecting langchain[all]
Using cached langchain-0.0.166-py3-none-any.whl (803 kB)
Using cached langchain-0.0.165-py3-none-any.whl (789 kB)
Using cached langchain-0.0.164-py3-none-any.whl (788 kB)
Using cached langchain-0.0.163-py3-none-any.whl (781 kB)
Using cached langchain-0.0.162-py3-none-any.whl (770 kB)
Using cached langchain-0.0.161-py3-none-any.whl (758 kB)
Using cached langchain-0.0.160-py3-none-any.whl (756 kB)
Using cached langchain-0.0.159-py3-none-any.whl (747 kB)
Using cached langchain-0.0.158-py3-none-any.whl (745 kB)
Using cached langchain-0.0.157-py3-none-any.whl (727 kB)
Using cached langchain-0.0.156-py3-none-any.whl (727 kB)
Using cached langchain-0.0.155-py3-none-any.whl (727 kB)
Using cached langchain-0.0.154-py3-none-any.whl (709 kB)
Using cached langchain-0.0.153-py3-none-any.whl (696 kB)
Using cached langchain-0.0.152-py3-none-any.whl (666 kB)
Using cached langchain-0.0.151-py3-none-any.whl (665 kB)
Using cached langchain-0.0.150-py3-none-any.whl (648 kB)
Using cached langchain-0.0.149-py3-none-any.whl (645 kB)
Using cached langchain-0.0.148-py3-none-any.whl (636 kB)
Collecting SQLAlchemy<2,>=1
Using cached SQLAlchemy-1.4.49.tar.gz (8.5 MB)
Preparing metadata (setup.py) ... done
Collecting langchain[all]
Using cached langchain-0.0.147-py3-none-any.whl (626 kB)
Using cached langchain-0.0.146-py3-none-any.whl (600 kB)
Using cached langchain-0.0.145-py3-none-any.whl (590 kB)
Using cached langchain-0.0.144-py3-none-any.whl (578 kB)
Using cached langchain-0.0.143-py3-none-any.whl (566 kB)
Using cached langchain-0.0.142-py3-none-any.whl (548 kB)
Using cached langchain-0.0.141-py3-none-any.whl (540 kB)
Using cached langchain-0.0.140-py3-none-any.whl (539 kB)
Using cached langchain-0.0.139-py3-none-any.whl (530 kB)
Using cached langchain-0.0.138-py3-none-any.whl (520 kB)
Using cached langchain-0.0.137-py3-none-any.whl (518 kB)
Using cached langchain-0.0.136-py3-none-any.whl (515 kB)
Using cached langchain-0.0.135-py3-none-any.whl (511 kB)
Using cached langchain-0.0.134-py3-none-any.whl (510 kB)
Using cached langchain-0.0.133-py3-none-any.whl (500 kB)
Using cached langchain-0.0.132-py3-none-any.whl (489 kB)
Using cached langchain-0.0.131-py3-none-any.whl (477 kB)
Using cached langchain-0.0.130-py3-none-any.whl (472 kB)
Using cached langchain-0.0.129-py3-none-any.whl (467 kB)
Using cached langchain-0.0.128-py3-none-any.whl (465 kB)
Using cached langchain-0.0.127-py3-none-any.whl (462 kB)
Using cached langchain-0.0.126-py3-none-any.whl (450 kB)
Collecting boto3<2.0.0,>=1.26.96
Using cached boto3-1.28.38-py3-none-any.whl (135 kB)
Collecting langchain[all]
Using cached langchain-0.0.125-py3-none-any.whl (443 kB)
Using cached langchain-0.0.124-py3-none-any.whl (439 kB)
Using cached langchain-0.0.123-py3-none-any.whl (426 kB)
Using cached langchain-0.0.122-py3-none-any.whl (425 kB)
Using cached langchain-0.0.121-py3-none-any.whl (424 kB)
Using cached langchain-0.0.120-py3-none-any.whl (424 kB)
Using cached langchain-0.0.119-py3-none-any.whl (420 kB)
Using cached langchain-0.0.118-py3-none-any.whl (415 kB)
Using cached langchain-0.0.117-py3-none-any.whl (414 kB)
Using cached langchain-0.0.116-py3-none-any.whl (408 kB)
Using cached langchain-0.0.115-py3-none-any.whl (404 kB)
Using cached langchain-0.0.114-py3-none-any.whl (404 kB)
Using cached langchain-0.0.113-py3-none-any.whl (396 kB)
Using cached langchain-0.0.112-py3-none-any.whl (381 kB)
Using cached langchain-0.0.111-py3-none-any.whl (379 kB)
Using cached langchain-0.0.110-py3-none-any.whl (379 kB)
Using cached langchain-0.0.109-py3-none-any.whl (376 kB)
Using cached langchain-0.0.108-py3-none-any.whl (374 kB)
Using cached langchain-0.0.107-py3-none-any.whl (371 kB)
Using cached langchain-0.0.106-py3-none-any.whl (367 kB)
Using cached langchain-0.0.105-py3-none-any.whl (360 kB)
Using cached langchain-0.0.104-py3-none-any.whl (360 kB)
Using cached langchain-0.0.103-py3-none-any.whl (358 kB)
Using cached langchain-0.0.102-py3-none-any.whl (350 kB)
Using cached langchain-0.0.101-py3-none-any.whl (344 kB)
Using cached langchain-0.0.100-py3-none-any.whl (343 kB)
Using cached langchain-0.0.99-py3-none-any.whl (342 kB)
Using cached langchain-0.0.98-py3-none-any.whl (337 kB)
Using cached langchain-0.0.97-py3-none-any.whl (337 kB)
Using cached langchain-0.0.96-py3-none-any.whl (315 kB)
Using cached langchain-0.0.95-py3-none-any.whl (312 kB)
Using cached langchain-0.0.94-py3-none-any.whl (304 kB)
Using cached langchain-0.0.93-py3-none-any.whl (294 kB)
Using cached langchain-0.0.92-py3-none-any.whl (288 kB)
Using cached langchain-0.0.91-py3-none-any.whl (282 kB)
Using cached langchain-0.0.90-py3-none-any.whl (281 kB)
Using cached langchain-0.0.89-py3-none-any.whl (268 kB)
Using cached langchain-0.0.88-py3-none-any.whl (260 kB)
Using cached langchain-0.0.87-py3-none-any.whl (253 kB)
Using cached langchain-0.0.86-py3-none-any.whl (250 kB)
Using cached langchain-0.0.85-py3-none-any.whl (241 kB)
Using cached langchain-0.0.84-py3-none-any.whl (230 kB)
Using cached langchain-0.0.83-py3-none-any.whl (230 kB)
Using cached langchain-0.0.82-py3-none-any.whl (228 kB)
Using cached langchain-0.0.81-py3-none-any.whl (225 kB)
Collecting qdrant-client<0.12.0,>=0.11.7
Using cached qdrant_client-0.11.10-py3-none-any.whl (91 kB)
Collecting google-api-python-client==2.70.0
Using cached google_api_python_client-2.70.0-py2.py3-none-any.whl (10.7 MB)
Collecting wolframalpha==5.0.0
Using cached wolframalpha-5.0.0-py3-none-any.whl (7.5 kB)
Collecting nltk<4,>=3
Using cached nltk-3.8.1-py3-none-any.whl (1.5 MB)
Collecting elasticsearch<9,>=8
Using cached elasticsearch-8.9.0-py3-none-any.whl (395 kB)
Collecting langchain[all]
Using cached langchain-0.0.80-py3-none-any.whl (222 kB)
Using cached langchain-0.0.79-py3-none-any.whl (216 kB)
Using cached langchain-0.0.78-py3-none-any.whl (203 kB)
Using cached langchain-0.0.77-py3-none-any.whl (198 kB)
Using cached langchain-0.0.76-py3-none-any.whl (193 kB)
Using cached langchain-0.0.75-py3-none-any.whl (191 kB)
Using cached langchain-0.0.74-py3-none-any.whl (189 kB)
Collecting redis<5,>=4
Using cached redis-4.6.0-py3-none-any.whl (241 kB)
Collecting tiktoken<1,>=0
Using cached tiktoken-0.4.0-cp310-cp310-macosx_11_0_arm64.whl (761 kB)
Collecting torch<2,>=1
Using cached torch-1.13.1-cp310-none-macosx_11_0_arm64.whl (53.2 MB)
Collecting httplib2<1dev,>=0.15.0
Using cached httplib2-0.22.0-py3-none-any.whl (96 kB)
Collecting google-auth<3.0.0dev,>=1.19.0
Using cached google_auth-2.22.0-py2.py3-none-any.whl (181 kB)
Collecting google-auth-httplib2>=0.1.0
Using cached google_auth_httplib2-0.1.0-py2.py3-none-any.whl (9.3 kB)
Collecting uritemplate<5,>=3.0.1
Using cached uritemplate-4.1.1-py2.py3-none-any.whl (10 kB)
Collecting xmltodict
Using cached xmltodict-0.13.0-py2.py3-none-any.whl (10.0 kB)
Collecting more-itertools
Using cached more_itertools-10.1.0-py3-none-any.whl (55 kB)
Collecting jaraco.context
Using cached jaraco.context-4.3.0-py3-none-any.whl (5.3 kB)
Collecting soupsieve>1.2
Using cached soupsieve-2.4.1-py3-none-any.whl (36 kB)
Collecting marshmallow<4.0.0,>=3.18.0
Using cached marshmallow-3.20.1-py3-none-any.whl (49 kB)
Collecting typing-inspect<1,>=0.4.0
Using cached typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)
Collecting elastic-transport<9,>=8
Using cached elastic_transport-8.4.0-py3-none-any.whl (59 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl (17 kB)
Collecting dill>=0.3.5
Using cached dill-0.3.7-py3-none-any.whl (115 kB)
Collecting sqlitedict>=2.0.0
Using cached sqlitedict-2.1.0.tar.gz (21 kB)
Preparing metadata (setup.py) ... done
Collecting joblib
Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Collecting click
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Collecting regex>=2021.8.3
Using cached regex-2023.8.8-cp310-cp310-macosx_11_0_arm64.whl (289 kB)
Collecting python-dateutil>=2.5.3
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting typing-extensions>=3.7.4
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting urllib3>=1.21.1
Using cached urllib3-2.0.4-py3-none-any.whl (123 kB)
Collecting dnspython>=2.0.0
Using cached dnspython-2.4.2-py3-none-any.whl (300 kB)
Collecting loguru>=0.5.0
Using cached loguru-0.7.0-py3-none-any.whl (59 kB)
Collecting grpcio-tools>=1.41.0
Using cached grpcio_tools-1.57.0-cp310-cp310-macosx_12_0_universal2.whl (4.6 MB)
Collecting httpx[http2]>=0.14.0
Using cached httpx-0.24.1-py3-none-any.whl (75 kB)
Collecting grpcio>=1.41.0
Using cached grpcio-1.57.0-cp310-cp310-macosx_12_0_universal2.whl (9.0 MB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl (124 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Collecting idna<4,>=2.5
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting wasabi<1.2.0,>=0.9.1
Using cached wasabi-1.1.2-py3-none-any.whl (27 kB)
Collecting preshed<3.1.0,>=3.0.2
Using cached preshed-3.0.8-cp310-cp310-macosx_11_0_arm64.whl (101 kB)
Requirement already satisfied: setuptools in ./.venv/lib/python3.10/site-packages (from spacy<4,>=3->langchain[all]) (67.6.1)
Collecting srsly<3.0.0,>=2.4.3
Using cached srsly-2.4.7-cp310-cp310-macosx_11_0_arm64.whl (491 kB)
Collecting cymem<2.1.0,>=2.0.2
Using cached cymem-2.0.7-cp310-cp310-macosx_11_0_arm64.whl (30 kB)
Collecting typer<0.10.0,>=0.3.0
Using cached typer-0.9.0-py3-none-any.whl (45 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0
Using cached spacy_loggers-1.0.4-py3-none-any.whl (11 kB)
Collecting murmurhash<1.1.0,>=0.28.0
Using cached murmurhash-1.0.9-cp310-cp310-macosx_11_0_arm64.whl (19 kB)
Collecting thinc<8.2.0,>=8.1.8
Using cached thinc-8.1.12-cp310-cp310-macosx_11_0_arm64.whl (784 kB)
Collecting packaging>=20.0
Using cached packaging-23.1-py3-none-any.whl (48 kB)
Collecting catalogue<2.1.0,>=2.0.6
Using cached catalogue-2.0.9-py3-none-any.whl (17 kB)
Collecting pathy>=0.10.0
Using cached pathy-0.10.2-py3-none-any.whl (48 kB)
Collecting spacy-legacy<3.1.0,>=3.0.11
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Collecting smart-open<7.0.0,>=5.2.1
Using cached smart_open-6.3.0-py3-none-any.whl (56 kB)
Collecting langcodes<4.0.0,>=3.2.0
Using cached langcodes-3.3.0-py3-none-any.whl (181 kB)
Collecting filelock
Using cached filelock-3.12.3-py3-none-any.whl (11 kB)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1
Using cached tokenizers-0.13.3-cp310-cp310-macosx_12_0_arm64.whl (3.9 MB)
Collecting safetensors>=0.3.1
Using cached safetensors-0.3.3-cp310-cp310-macosx_13_0_arm64.whl (406 kB)
Collecting validators<=0.21.0,>=0.18.2
Using cached validators-0.21.0-py3-none-any.whl (27 kB)
Collecting authlib>=1.1.0
Using cached Authlib-1.2.1-py2.py3-none-any.whl (215 kB)
Collecting cryptography>=3.2
Using cached cryptography-41.0.3-cp37-abi3-macosx_10_12_universal2.whl (5.3 MB)
Collecting urllib3>=1.21.1
Using cached urllib3-1.26.16-py2.py3-none-any.whl (143 kB)
Collecting googleapis-common-protos<2.0.dev0,>=1.56.2
Using cached googleapis_common_protos-1.60.0-py2.py3-none-any.whl (227 kB)
Collecting protobuf!=3.20.0,!=3.20.1,!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0.dev0,>=3.19.5
Using cached protobuf-4.24.2-cp37-abi3-macosx_10_9_universal2.whl (409 kB)
Collecting pyasn1-modules>=0.2.1
Using cached pyasn1_modules-0.3.0-py2.py3-none-any.whl (181 kB)
Collecting six>=1.9.0
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting rsa<5,>=3.1.4
Using cached rsa-4.9-py3-none-any.whl (34 kB)
Collecting cachetools<6.0,>=2.0.0
Using cached cachetools-5.3.1-py3-none-any.whl (9.3 kB)
Collecting pyparsing!=3.0.0,!=3.0.1,!=3.0.2,!=3.0.3,<4,>=2.4.2
Using cached pyparsing-3.1.1-py3-none-any.whl (103 kB)
Collecting httpcore<0.18.0,>=0.15.0
Using cached httpcore-0.17.3-py3-none-any.whl (74 kB)
Collecting sniffio
Using cached sniffio-1.3.0-py3-none-any.whl (10 kB)
Collecting h2<5,>=3
Using cached h2-4.1.0-py3-none-any.whl (57 kB)
Collecting fsspec
Using cached fsspec-2023.6.0-py3-none-any.whl (163 kB)
Collecting blis<0.8.0,>=0.7.8
Using cached blis-0.7.10-cp310-cp310-macosx_11_0_arm64.whl (1.1 MB)
Collecting confection<1.0.0,>=0.0.1
Using cached confection-0.1.1-py3-none-any.whl (34 kB)
Collecting mypy-extensions>=0.3.0
Using cached mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Collecting cffi>=1.12
Using cached cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl (174 kB)
Collecting hyperframe<7,>=6.0
Using cached hyperframe-6.0.1-py3-none-any.whl (12 kB)
Collecting hpack<5,>=4.0
Using cached hpack-4.0.0-py3-none-any.whl (32 kB)
Collecting anyio<5.0,>=3.0
Using cached anyio-4.0.0-py3-none-any.whl (83 kB)
Collecting h11<0.15,>=0.13
Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Collecting pyasn1<0.6.0,>=0.4.6
Using cached pyasn1-0.5.0-py2.py3-none-any.whl (83 kB)
Collecting exceptiongroup>=1.0.2
Using cached exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
Collecting pycparser
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Installing collected packages: tokenizers, sqlitedict, safetensors, faiss-cpu, cymem, xmltodict, wasabi, validators, urllib3, uritemplate, typing-extensions, tqdm, SQLAlchemy, spacy-loggers, spacy-legacy, soupsieve, sniffio, smart-open, six, regex, PyYAML, pyparsing, pycparser, pyasn1, protobuf, packaging, numpy, mypy-extensions, murmurhash, more-itertools, MarkupSafe, loguru, langcodes, joblib, jaraco.context, idna, hyperframe, hpack, h11, grpcio, fsspec, exceptiongroup, dnspython, dill, click, charset-normalizer, certifi, catalogue, cachetools, async-timeout, wolframalpha, typing-inspect, typer, torch, srsly, rsa, requests, redis, python-dateutil, pydantic, pyasn1-modules, preshed, nltk, marshmallow, jinja2, httplib2, h2, grpcio-tools, googleapis-common-protos, filelock, elastic-transport, cffi, blis, beautifulsoup4, anyio, wikipedia, tiktoken, pinecone-client, pathy, manifest-ml, huggingface_hub, httpcore, google-auth, elasticsearch, dataclasses-json, cryptography, confection, transformers, thinc, langchain, httpx, google-auth-httplib2, google-api-core, authlib, weaviate-client, spacy, google-api-python-client, qdrant-client
DEPRECATION: sqlitedict is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for sqlitedict ... done
DEPRECATION: SQLAlchemy is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for SQLAlchemy ... done
DEPRECATION: wikipedia is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for wikipedia ... done
Successfully installed MarkupSafe-2.1.3 PyYAML-6.0.1 SQLAlchemy-1.4.49 anyio-4.0.0 async-timeout-4.0.3 authlib-1.2.1 beautifulsoup4-4.12.2 blis-0.7.10 cachetools-5.3.1 catalogue-2.0.9 certifi-2023.7.22 cffi-1.15.1 charset-normalizer-3.2.0 click-8.1.7 confection-0.1.1 cryptography-41.0.3 cymem-2.0.7 dataclasses-json-0.5.14 dill-0.3.7 dnspython-2.4.2 elastic-transport-8.4.0 elasticsearch-8.9.0 exceptiongroup-1.1.3 faiss-cpu-1.7.4 filelock-3.12.3 fsspec-2023.6.0 google-api-core-2.11.1 google-api-python-client-2.70.0 google-auth-2.22.0 google-auth-httplib2-0.1.0 googleapis-common-protos-1.60.0 grpcio-1.57.0 grpcio-tools-1.57.0 h11-0.14.0 h2-4.1.0 hpack-4.0.0 httpcore-0.17.3 httplib2-0.22.0 httpx-0.24.1 huggingface_hub-0.16.4 hyperframe-6.0.1 idna-3.4 jaraco.context-4.3.0 jinja2-3.1.2 joblib-1.3.2 langchain-0.0.74 langcodes-3.3.0 loguru-0.7.0 manifest-ml-0.0.1 marshmallow-3.20.1 more-itertools-10.1.0 murmurhash-1.0.9 mypy-extensions-1.0.0 nltk-3.8.1 numpy-1.25.2 packaging-23.1 pathy-0.10.2 pinecone-client-2.2.2 preshed-3.0.8 protobuf-4.24.2 pyasn1-0.5.0 pyasn1-modules-0.3.0 pycparser-2.21 pydantic-1.10.12 pyparsing-3.1.1 python-dateutil-2.8.2 qdrant-client-0.11.10 redis-4.6.0 regex-2023.8.8 requests-2.31.0 rsa-4.9 safetensors-0.3.3 six-1.16.0 smart-open-6.3.0 sniffio-1.3.0 soupsieve-2.4.1 spacy-3.6.1 spacy-legacy-3.0.12 spacy-loggers-1.0.4 sqlitedict-2.1.0 srsly-2.4.7 thinc-8.1.12 tiktoken-0.4.0 tokenizers-0.13.3 torch-1.13.1 tqdm-4.66.1 transformers-4.32.1 typer-0.9.0 typing-extensions-4.7.1 typing-inspect-0.9.0 uritemplate-4.1.1 urllib3-1.26.16 validators-0.21.0 wasabi-1.1.2 weaviate-client-3.23.2 wikipedia-1.4.0 wolframalpha-5.0.0 xmltodict-0.13.0
[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: pip install --upgrade pip
```
</details>
<details>
<summary>Output: `pip show langchain`</summary>
```
❯ pip show langchain
Name: langchain
Version: 0.0.74
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: /Users/amahanna/Desktop/temp/.venv/lib/python3.10/site-packages
Requires: dataclasses-json, numpy, pydantic, PyYAML, requests, SQLAlchemy
Required-by:
```
</details>
This happens in a fresh `python -m venv` environment.
</div> | using `pip install langchain[all]` in a `venv` installs langchain 0.0.74 | https://api.github.com/repos/langchain-ai/langchain/issues/10285/comments | 1 | 2023-09-06T12:53:05Z | 2023-09-26T19:17:13Z | https://github.com/langchain-ai/langchain/issues/10285 | 1,883,953,869 | 10,285 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently there isn't any feature we can use to actually manipulate/change the dataframe and save it.
**Examples:**
If I say "rename columns to Column1, Column2, Column3", or
"make a new column named [Column4] containing the data from Column1 and Column2 concatenated",
the agent should actually perform those operations and change the dataframe. And once the user is done prompting, there should be a function or a specific prompt they can use to save the manipulated dataframe as a CSV.
```python
agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    df,
    return_intermediate_steps=True,
    verbose=True,
)
```
Using the above code, the agent returns good analysis and the actual code that would perform the operations, but it doesn't actually apply the manipulations the way it is told to. Plus, we can't save the dataframe.
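A workaround sketch I'd assume could work in the meantime (an assumption, not an existing LangChain feature): the pandas agent hands the original `df` object to its Python tool, so instructions that mutate the frame in place should be visible to the caller, who can then persist it:

```python
# Hypothetical workaround; assumes `df` is the same object passed to
# create_pandas_dataframe_agent and the instruction mutates it in place.
agent.run("Rename the columns to Column1, Column2, Column3 in place.")
df.to_csv("manipulated.csv", index=False)  # persist the (possibly) mutated frame
```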
### Motivation
One could use this feature to actually manipulate data based on prompts, which would aid the data science industry.
### Your contribution
I looked into the source code but couldn't manage to locate the implementation of the agent itself. | [create_csv_agent] Change dataframe as according to the prompt, and also save when required | https://api.github.com/repos/langchain-ai/langchain/issues/10281/comments | 2 | 2023-09-06T10:51:41Z | 2024-02-08T16:25:01Z | https://github.com/langchain-ai/langchain/issues/10281 | 1,883,753,627 | 10,281 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am running this code and getting the error below.
Code:
```python
from langchain.agents import load_tools, tool, Tool, AgentType, initialize_agent
from langchain.llms import AzureOpenAI
from langchain import LLMMathChain

llm = AzureOpenAI(deployment_name=deployment_name, model_name=model_name, temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    try:
        return len(word)
    except:
        return 20

math_tool = Tool(
    name="Calculator",
    func=llm_math_chain.run,
    description="useful for when you need to answer questions about math")

tools = [get_word_length, math_tool]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Tell me length of word - hippotamus + 5")
```
Error:
```
File ~\rhino\venv\lib\site-packages\langchain\agents\mrkl\output_parser.py:42, in MRKLOutputParser.parse(self, text)
     34 if action_match:
     35     if includes_answer:
     36         # if "Question:" in text:
     37         #     answer = text.split('Question:')[0].strip()
        (...)
     40         #     )
     41         # else:
---> 42         raise OutputParserException(
     43             f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
     44         )
     45 action = action_match.group(1).strip()
     46 action_input = action_match.group(2)

OutputParserException: Parsing LLM output produced both a final answer and a parse-able action:: I now know the final answer
Final Answer: 15

Question: What is 765 * 23?
Thought: I need to multiply 765 by 23
Action: Calculator
Action Input: 765 * 23
```
Reason: the LLM output contains "Final Answer:" and an additional (hallucinated) question, "Question: What is 765 * 23?", which causes this exception.
Possible fix: at line 32 of output_parser.py, modify as below:
```python
if action_match:
    if includes_answer:
        if "Question:" in text:
            answer = text.split('Question:')[0].strip()
            return AgentFinish(
                {"output": answer.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
            )
        else:
            raise OutputParserException(
                f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
            )
```
It fixes my reported issue. Can we add this solution so that the LLM hallucination problem no longer prevents returning the final answer?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am running this code and getting the error below.
Code:
```python
from langchain.agents import load_tools, tool, Tool, AgentType, initialize_agent
from langchain.llms import AzureOpenAI
from langchain import LLMMathChain

llm = AzureOpenAI(deployment_name=deployment_name, model_name=model_name, temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    try:
        return len(word)
    except:
        return 20

math_tool = Tool(
    name="Calculator",
    func=llm_math_chain.run,
    description="useful for when you need to answer questions about math")

tools = [get_word_length, math_tool]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Tell me length of word - hippotamus + 5")
```
### Expected behavior
Error:
```
File ~\rhino\venv\lib\site-packages\langchain\agents\mrkl\output_parser.py:42, in MRKLOutputParser.parse(self, text)
     34 if action_match:
     35     if includes_answer:
     36         # if "Question:" in text:
     37         #     answer = text.split('Question:')[0].strip()
        (...)
     40         #     )
     41         # else:
---> 42         raise OutputParserException(
     43             f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
     44         )
     45 action = action_match.group(1).strip()
     46 action_input = action_match.group(2)

OutputParserException: Parsing LLM output produced both a final answer and a parse-able action:: I now know the final answer
Final Answer: 15

Question: What is 765 * 23?
Thought: I need to multiply 765 by 23
Action: Calculator
Action Input: 765 * 23
```
Expected Answer : 15
Reason for failure: the LLM output contains "Final Answer:" and an additional (hallucinated) question, "Question: What is 765 * 23?", which causes this exception.
Possible fix: at line 32 of output_parser.py, modify as below:
```python
if action_match:
    if includes_answer:
        if "Question:" in text:
            answer = text.split('Question:')[0].strip()
            return AgentFinish(
                {"output": answer.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
            )
        else:
            raise OutputParserException(
                f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
            )
```
It fixes my reported issue. Can we add this solution so that the LLM hallucination problem returns the final answer instead of going into a loop? | Unable to Parse Final Answer through mrkl.output_parser | https://api.github.com/repos/langchain-ai/langchain/issues/10278/comments | 1 | 2023-09-06T08:37:38Z | 2023-12-13T16:05:33Z | https://github.com/langchain-ai/langchain/issues/10278 | 1,883,529,921 | 10,278 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Given a question in natural language, it has to be converted to a SQL query, run against the SQL database, and then the answer returned.
Here I want to capture both the generated SQL query and the response to the question.
Earlier I did this using SQLDatabaseChain, but now I can't find it in your documentation.
So what is supported here for my use case?
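A minimal sketch of the pattern in question (assumption: SQLDatabaseChain was moved to the `langchain_experimental` package in recent releases; `llm` and `db` are preexisting objects):

```python
# Sketch assuming the chain now lives in langchain_experimental:
from langchain_experimental.sql import SQLDatabaseChain

chain = SQLDatabaseChain.from_llm(llm, db, return_intermediate_steps=True)
result = chain("How many employees are there?")
answer = result["result"]             # the natural-language answer
steps = result["intermediate_steps"]  # includes the generated SQL query
```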
### Suggestion:
_No response_ | query database using natural language | https://api.github.com/repos/langchain-ai/langchain/issues/10277/comments | 21 | 2023-09-06T08:16:51Z | 2023-12-13T16:05:39Z | https://github.com/langchain-ai/langchain/issues/10277 | 1,883,494,749 | 10,277 |
[
"langchain-ai",
"langchain"
] | GPU usage doesn't change and my Local Llama model only runs under CPU
I'm running a local Llama 2 7B model (a Hugging Face TheBloke model) on my local machine. Everything works fine, except that only the CPU is used and GPU usage remains at 0 throughout. I've included n_gpu_layers and other such options, but the GPU just doesn't get used for some reason.
Device RAM = 16 GB, VRAM = 6 GB.
I can see 100% usage of my CPU, but nothing changes with respect to GPU usage.
This is my code in python.
```python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from transformers import AutoModelForCausalLM, AutoTokenizer

# MODEL_PATH = "D:\yarn-llama-2-7b-128k.Q2_K.gguf"
MODEL_PATH = "D:\llama-2-7b-chat.Q3_K_M.gguf"

def load_model():
    """Loads Llama model"""
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    Llama_model = LlamaCpp(
        model_path=MODEL_PATH,
        temperature=0.5,
        max_tokens=2000,
        top_p=1,
        callback_manager=callback_manager,
        verbose=True,
        n_gpu_layers=100,
        n_batch=512,
    )
    return Llama_model

llm = load_model()

model_prompt = """
a discussion between hitler and buddha
"""
response = llm(model_prompt)
print(response)
```
Does anybody have any idea how to get the GPU used?
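One commonly cited prerequisite (an assumption here, since the build configuration isn't shown): `n_gpu_layers` only takes effect if `llama-cpp-python` itself was compiled with GPU support. A hedged sketch of a cuBLAS reinstall:

```bash
# Rebuild llama-cpp-python with cuBLAS so layers can be offloaded to the GPU
# (assumes a CUDA toolchain; on Windows set the variables via `set`/`$env:`):
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-cache-dir
```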
### Suggestion:
_No response_ | GPU usage 0 even with n_gpu_layers. | https://api.github.com/repos/langchain-ai/langchain/issues/10276/comments | 4 | 2023-09-06T07:56:48Z | 2024-03-23T16:05:21Z | https://github.com/langchain-ai/langchain/issues/10276 | 1,883,463,052 | 10,276 |
[
"langchain-ai",
"langchain"
] | ### System Info
npm --version
8.19.4
The langchain version it's trying to install is 0.0.144.
I think it's because chromadb recently released 1.5.7 and 1.5.8 which added a (conflicting) dep on cohere-ai. Pinning chromadb 1.5.6 works, but that's hardly satisfying.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
mkdir testproject && cd testproject && npm init -y && npm add langchain
### Expected behavior
Expected it to install properly. Instead I get:
```
npm add langchain
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: testproject@1.0.0
npm ERR! Found: cohere-ai@6.2.2
npm ERR! node_modules/cohere-ai
npm ERR! peerOptional cohere-ai@"^6.0.0" from chromadb@1.5.8
npm ERR! node_modules/chromadb
npm ERR! chromadb@"^1.5.6" from the root project
npm ERR! peerOptional chromadb@"^1.5.3" from langchain@0.0.144
npm ERR! node_modules/langchain
npm ERR! langchain@"*" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peerOptional cohere-ai@"^5.0.2" from langchain@0.0.144
npm ERR! node_modules/langchain
npm ERR! langchain@"*" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
``` | langchain cannot be installed in a new project | https://api.github.com/repos/langchain-ai/langchain/issues/10274/comments | 3 | 2023-09-06T06:53:40Z | 2023-09-25T14:34:05Z | https://github.com/langchain-ai/langchain/issues/10274 | 1,883,364,481 | 10,274 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
[This](https://python.langchain.com/docs/modules/chains/how_to/call_methods#:~:text=See%20an%20example%20here.) link is not working.
<img width="1228" alt="image" src="https://github.com/langchain-ai/langchain/assets/84584929/19a6f523-8584-4338-80e0-ffe78c51714a">
### Idea or request for content:
_No response_ | DOC: One link in Chain->Diff cll methods is not working | https://api.github.com/repos/langchain-ai/langchain/issues/10272/comments | 2 | 2023-09-06T04:25:53Z | 2023-12-13T16:05:43Z | https://github.com/langchain-ai/langchain/issues/10272 | 1,883,176,962 | 10,272 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Given a question in natural language, it has to be converted to a SQL query, run against the SQL database, and then the answer returned.
Earlier I did this using **SQLDatabaseChain**, but now I can't find it in your documentation.
So what is supported here for my use case?
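One hedged pointer (an assumption based on the package split, not official migration notes): the chain appears to have moved out of the core package rather than been removed:

```python
# Assumed post-split import path:
from langchain_experimental.sql import SQLDatabaseChain
```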
### Suggestion:
_No response_ | Query data base using natural language | https://api.github.com/repos/langchain-ai/langchain/issues/10270/comments | 1 | 2023-09-06T03:55:29Z | 2023-09-06T08:44:16Z | https://github.com/langchain-ai/langchain/issues/10270 | 1,883,141,864 | 10,270 |
[
"langchain-ai",
"langchain"
] | ### System Info
Our startup uses this line all across our scripts.
`HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en")`
Today someone updated langchain and everything has stopped. Please help!!
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
We checked whether it is still in the docs:
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
### Expected behavior
Should import and work. | Our whole business is down. please help!! HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en") | https://api.github.com/repos/langchain-ai/langchain/issues/10268/comments | 5 | 2023-09-06T03:31:13Z | 2024-01-30T00:48:34Z | https://github.com/langchain-ai/langchain/issues/10268 | 1,883,115,087 | 10,268 |
[
"langchain-ai",
"langchain"
] | ### System Info
Since we updated to the latest langchain we are also getting this issue. We are a startup that just launched; we updated yesterday and now none of our apps work. Our first customer is annoyed.
**Please, please help!**
Name: langchain
Version: 0.0.281
```
data_state_nsw_legisation_index_instance = FAISS.load_local("data_indexes/federal/federal_legislativeinstruments_inforce_index", embeddings)
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 472, in load_local
    docstore, index_to_docstore_id = pickle.load(f)
ModuleNotFoundError: No module named 'langchain.schema.document'; 'langchain.schema' is not a package
```
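A hedged reading of the traceback (not a verified fix): `FAISS.load_local` unpickles the saved docstore, and that pickle embeds the `langchain.schema.document` module path from the version that wrote the index; when the installed release lays out `langchain.schema` differently, the unpickle fails. The usual mitigations are aligning the langchain version with the one that built the index, or rebuilding the index under the current version:

```bash
# The version number is an assumption -- pin whichever release originally
# built the FAISS index, or re-run the indexing job on the new release.
pip install "langchain==0.0.279"
```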
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
data_state_nsw_legisation_runner = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=openai_api_key_value),
    chain_type="stuff",
    retriever=data_state_nsw_legisation_index_instance.as_retriever(),
)
```
### Expected behavior
Yesterday, before we updated, it was working perfectly; we updated to the latest version and it all stopped :(
**Please help!** the team is scrambling | ModuleNotFoundError: No module named 'langchain.schema.document'; 'langchain.schema' is not a package | https://api.github.com/repos/langchain-ai/langchain/issues/10266/comments | 4 | 2023-09-06T02:53:08Z | 2023-12-14T16:05:38Z | https://github.com/langchain-ai/langchain/issues/10266 | 1,883,078,790 | 10,266 |
[
"langchain-ai",
"langchain"
] | ### System Info
Yesterday it worked; then someone accidentally updated langchain and now the whole platform is down.
We built the whole platform using this code all over the place. Now nothing works.
We have around 50 models. All our models are built like this, and we just went live as a startup.
**We are scrambling here, guys. Please help us.**
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en")

news_instance = FAISS.load_local("federal_legislativeinstruments_inforce_index", embeddings)
data_state_nsw_legisation_index_instance = FAISS.load_local("data_indexes/federal/federal_legislativeinstruments_inforce_index", embeddings)

data_state_nsw_legisation_runner = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=openai_api_key_value),
    chain_type="stuff",
    retriever=data_state_nsw_legisation_index_instance.as_retriever(),
)
```
Please, please help. How do we refactor this so it works? The team is going crazy trying to get it live again; our very first customers are ringing us to complain. Please help.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en")

news_instance = FAISS.load_local("federal_legislativeinstruments_inforce_index", embeddings)
data_state_nsw_legisation_index_instance = FAISS.load_local("data_indexes/federal/federal_legislativeinstruments_inforce_index", embeddings)

data_state_nsw_legisation_runner = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=openai_api_key_value),
    chain_type="stuff",
    retriever=data_state_nsw_legisation_index_instance.as_retriever(),
)
```
### Expected behavior
How do we load the embeddings like yesterday and all the times before? | HuggingFaceBgeEmbeddings Error, Please help! | https://api.github.com/repos/langchain-ai/langchain/issues/10263/comments | 6 | 2023-09-06T02:42:31Z | 2023-12-18T23:47:49Z | https://github.com/langchain-ai/langchain/issues/10263 | 1,883,067,486 | 10,263 |
[
"langchain-ai",
"langchain"
] | ### System Info
For some reason SystemMessage does not work for me (agent ignores it). Here is my code:
```python
system_message = SystemMessage(content="write response in uppercase")
agent_kwargs = {
    "system_message": system_message,
}
agent_func = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
)
```
I tried passing system_message directly, but the agent still ignores the SystemMessage:
```python
system_message = SystemMessage(content="write response in uppercase")
agent_func = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    system_message=system_message,
)
```
Also, I tried using `system_message.context` instead of `system_message`, but still no luck.
Langchain version is 0.0.281
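A hedged observation (drawn from the function's signature rather than documentation): `create_pandas_dataframe_agent` builds its own prompt and exposes `prefix`/`suffix` arguments, which may be the intended way to inject system-style instructions:

```python
# Sketch: steer the agent via the prompt prefix instead of a SystemMessage.
agent_func = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    prefix="write response in uppercase",  # prepended to the generated prompt
)
```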
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
let me know if additional info is needed
### Expected behavior
create_pandas_dataframe_agent should work with SystemMessage | SystemMessage in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/10256/comments | 5 | 2023-09-05T22:30:56Z | 2024-02-13T16:12:47Z | https://github.com/langchain-ai/langchain/issues/10256 | 1,882,809,906 | 10,256 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Azure Cognitive Search already has [hybrid search functionality](https://python.langchain.com/docs/integrations/vectorstores/azuresearch#perform-a-hybrid-search), so it makes sense to add SelfQueryRetriever support as well.
### Motivation
Azure Cognitive Search is a production-ready solution.
### Your contribution
I can help with testing of this feature | Add Support of Azure Cognitive Search for SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/10254/comments | 4 | 2023-09-05T21:30:47Z | 2024-06-25T06:25:08Z | https://github.com/langchain-ai/langchain/issues/10254 | 1,882,750,331 | 10,254 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.281, pydantic 1.10.9, gpt4all 1.0.9, Linux Garuda (Arch), Python 3.11.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is my code:
```python
from langchain.embeddings import GPT4AllEmbeddings
gpt4all_embd = GPT4AllEmbeddings()
```
I get this error:
```
Found model file at /home/chigoma333/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin
Invalid model file
Traceback (most recent call last):
  File "/home/chigoma333/Desktop/Program/test.py", line 3, in <module>
    gpt4all_embd = GPT4AllEmbeddings()
                   ^^^^^^^^^^^^^^^^^^^
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4AllEmbeddings
__root__
  Unable to instantiate model (type=value_error)
```
### Expected behavior
I expected the GPT4AllEmbeddings instance to be created successfully without errors. | Error when Instantiating GPT4AllEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/10251/comments | 2 | 2023-09-05T19:44:51Z | 2023-11-19T20:42:12Z | https://github.com/langchain-ai/langchain/issues/10251 | 1,882,619,158 | 10,251 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
PyYAML has [known issues](https://github.com/yaml/pyyaml/issues/724) with version 5.4.1 and several other libraries are rolling back requirements to allow compatibility.
### Suggestion:
Lower requirement to allow PyYAML 5.3.1 | Issue: Lower requirements for PyYAML to 5.3.1 due to Cython bug | https://api.github.com/repos/langchain-ai/langchain/issues/10243/comments | 0 | 2023-09-05T17:33:43Z | 2023-09-18T15:13:05Z | https://github.com/langchain-ai/langchain/issues/10243 | 1,882,449,770 | 10,243 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
langchain[qdrant]==0.0.281
qdrant-client==1.4.0
```
Qdrant server v1.4.1
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I started seeing this issue after using `force_recreate=True`, so for example this is the method I am using to populate my collection:
```python
qdrant.from_documents(
    docs,
    url=self.settings.QDRANT_API_URL,
    collection_name=collection,
    embedding=self.embeddings,
    force_recreate=True,
    replication_factor=3,
    timeout=60,
)
```
And I keep getting the following error messages in my logs:
```
[2023-09-05T16:33:45.308Z WARN storage::content_manager::consensus_manager] Failed to apply collection meta operation entry with user error: Wrong input: Replica 6838680705292431 of shard 4 has state Some(Active), but expected Some(Initializing)
```
I found where this error is being raised on Qdrant: https://github.com/qdrant/qdrant/blob/383fecf64b6d97e4718deb2bf0f46422060e7e52/lib/collection/src/collection.rs#L339
I understand the issue might be in the Qdrant client or the Qdrant server itself.
### Expected behavior
This message doesn't make sense, and the data is being stored without any issues. | Error message when using force_recreate on Qdrant | https://api.github.com/repos/langchain-ai/langchain/issues/10241/comments | 2 | 2023-09-05T17:12:31Z | 2023-12-13T16:05:58Z | https://github.com/langchain-ai/langchain/issues/10241 | 1,882,418,290 | 10,241 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using OpenSearch as the vector database in langchain to store documents, and I am using tools to integrate it with an agent so that it looks up the relevant information for whatever query we pass and uses a GPT model to produce an enhanced response.
I am facing a "tool not valid" issue while running the code. Any suggestions to fix the error?
Error: `Observation: [faq] is not a valid tool, try one of [faq].`
Code:
```python
import re
from langchain import OpenAI, PromptTemplate, VectorDBQA, LLMChain
from langchain.agents import Tool, initialize_agent, AgentExecutor, AgentOutputParser, LLMSingleActionAgent
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import ConversationChain, RetrievalQA
from langchain.chains.conversation.memory import ConversationBufferMemory, ConversationSummaryBufferMemory, CombinedMemory
import pypdf
from langchain.prompts import StringPromptTemplate
from langchain.schema import HumanMessage, AgentAction, AgentFinish
from langchain.text_splitter import CharacterTextSplitter
from langchain.tools import BaseTool
from langchain.memory import ConversationBufferWindowMemory
import os
from typing import List, Union, Optional
from langchain.memory import ConversationSummaryBufferMemory
from langchain.vectorstores import Chroma, OpenSearchVectorSearch

os.environ['OPENAI_API_KEY'] = "api_key"

embeddings = OpenAIEmbeddings()
llm = OpenAI(temperature=0.7)

docsearch = OpenSearchVectorSearch(
    index_name="customer_data",
    embedding_function=embeddings,
    opensearch_url="host_url",
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
)

tools = [
    Tool(
        name="faq",
        func=qa.run(),
        description="Useful when you have to answer FAQ's"
    )]

template = """Your are a support representative, Customers reaches you for queries. refer the tools and answer.
You have access to the following tools to answer the question.

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, can be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question.

Begin!

Previous conversation history:
{history}

New Question: {input}
{agent_scratchpad}"""


# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)


prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps", "history"]
)


# Custom Output Parser. The output parser is responsible for parsing the LLM output into AgentAction and AgentFinish. This usually depends heavily on the prompt used. This is where you can change the parsing to do retries, handle whitespace, etc
class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        print(match)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)


output_parser = CustomOutputParser()

# Set up LLM !
llm = OpenAI(temperature=0, model_name='gpt-3.5-turbo')

# Define the stop sequence. This is important because it tells the LLM when to stop generation. This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an Observation (otherwise, the LLM may hallucinate an observation for you).

# Set up the Agent
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)

tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)

# Agent Executors take an agent and tools and use the agent to decide which tools to call and in what order.
convo_memory = ConversationBufferMemory(
    memory_key="chat_history_lines",
    input_key="input"
)

summary_memory = ConversationSummaryBufferMemory(llm=llm, memory_key="chat_history", input_key="input")
memory = CombinedMemory(memories=[convo_memory, summary_memory], memory_key="story")

agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memory
)

si = input("Human: ")
agent_executor.run({'history': memory, 'input': si})
```
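One thing that stands out in the snippet (an observation, not a verified root cause): the tool is built with `func=qa.run()`, which calls the chain immediately and stores its return value instead of handing the agent a callable. A hedged correction sketch:

```python
# Pass the bound method itself, not the result of calling it:
Tool(
    name="faq",
    func=qa.run,  # note: no parentheses
    description="Useful when you have to answer FAQ's",
)
```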
### Suggestion:
_No response_ | [tool_name] is not a valid tool, try one of [tool_name] | https://api.github.com/repos/langchain-ai/langchain/issues/10240/comments | 6 | 2023-09-05T16:58:01Z | 2024-07-10T16:05:16Z | https://github.com/langchain-ai/langchain/issues/10240 | 1,882,397,741 | 10,240 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello team,
I'm currently facing an issue while trying to use VespaRetriever with langchain '0.0.281'.
Vespa has been successfully deployed in my local environment, and queries are functioning correctly.
```python
from langchain.retrievers.vespa_retriever import VespaRetriever

vespa_query_body = {
    'yql': 'select * from sources * where userQuery()',
    'query': 'what keeps planes in the air',
    'ranking': 'native_rank',
    'type': 'all',
    'hits': 10
}
vespa_content_field = "body"
retriever = VespaRetriever(app=app, body=query, content_field=vespa_content_field)
```
````
---------------------------------------------------------------------------
ConfigError Traceback (most recent call last)
Cell In[34], line 11
3 vespa_query_body = {
4 'yql': 'select * from sources * where userQuery()',
5 'query': 'what keeps planes in the air',
(...)
8 'hits': 10
9 }
10 vespa_content_field = "body"
---> 11 retriever = VespaRetriever(app=app, body=query, content_field=vespa_content_field)
File ~/miniconda3/lib/python3.10/site-packages/langchain/load/serializable.py:75, in Serializable.__init__(self, **kwargs)
74 def __init__(self, **kwargs: Any) -> None:
---> 75 super().__init__(**kwargs)
76 self._lc_kwargs = kwargs
File ~/miniconda3/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/fields.py:860, in pydantic.fields.ModelField.validate()
ConfigError: field "app" not yet prepared so type is still a ForwardRef, you might need to call VespaRetriever.update_forward_refs().
````
```python
from vespa.application import Vespa

VespaRetriever.update_forward_refs()
```
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[37], line 3
1 from vespa.application import Vespa
----> 3 VespaRetriever.update_forward_refs()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/main.py:815, in pydantic.main.BaseModel.update_forward_refs()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/typing.py:554, in pydantic.typing.update_model_forward_refs()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/typing.py:520, in pydantic.typing.update_field_forward_refs()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/typing.py:66, in pydantic.typing.evaluate_forwardref()
File ~/miniconda3/lib/python3.10/typing.py:694, in ForwardRef._evaluate(self, globalns, localns, recursive_guard)
689 if self.__forward_module__ is not None:
690 globalns = getattr(
691 sys.modules.get(self.__forward_module__, None), '__dict__', globalns
692 )
693 type_ = _type_check(
--> 694 eval(self.__forward_code__, globalns, localns),
695 "Forward references must evaluate to types.",
696 is_argument=self.__forward_is_argument__,
697 allow_special_forms=self.__forward_is_class__,
698 )
699 self.__forward_value__ = _eval_type(
700 type_, globalns, localns, recursive_guard | {self.__forward_arg__}
701 )
702 self.__forward_evaluated__ = True
File <string>:1
NameError: name 'Vespa' is not defined
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The same example as here: https://python.langchain.com/docs/integrations/retrievers/vespa, with a local URL that works correctly.
### Expected behavior
Retrieve the indexed documents from Vespa. | ConfigError: Field 'app' Not Yet Prepared with ForwardRef Error When Initializing VespaRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/10235/comments | 3 | 2023-09-05T15:07:29Z | 2023-09-14T07:18:23Z | https://github.com/langchain-ai/langchain/issues/10235 | 1,882,171,470 | 10,235 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add WatsonX (IBM) connector for LLM
### Motivation
Working at IBM, I think it would be great to have it integrated so that langchain can easily be used with WatsonX.
### Your contribution
I have implemented and tested a small connector for watsonX | WatsonX LLM support | https://api.github.com/repos/langchain-ai/langchain/issues/10232/comments | 1 | 2023-09-05T14:07:32Z | 2023-09-05T15:48:37Z | https://github.com/langchain-ai/langchain/issues/10232 | 1,882,060,479 | 10,232 |
[
"langchain-ai",
"langchain"
] | ### System Info
All latest versions
### Who can help?
@agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The parameter `language="es"` in OpenAI() is not working anymore, and I have to present this tomorrow at the university! I can't find any solution for this anywhere.
Pretty simple code that was working perfectly and now isn't:

```python
model = OpenAI(temperature=0, language="es")
```

Now I'm getting this warning:

```
C:\Users\zaesa\anaconda3\Lib\site-packages\langchain\utils\utils.py:155: UserWarning: WARNING! language is not default parameter.
                language was transferred to model_kwargs.
                Please confirm that language is what you intended.
  warnings.warn(
```
How to solve it?
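For context, a hedged observation: the OpenAI completion API itself has no `language` parameter, which is presumably why langchain now shunts it into `model_kwargs` with a warning. One alternative sketch is steering the language through the prompt:

```python
# Hypothetical alternative: control the output language in the prompt itself.
model = OpenAI(temperature=0)
respuesta = model("Responde siempre en español: ¿cuál es la capital de Francia?")
```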
### Expected behavior
It should just search Spanish pages and answer in Spanish. It was pretty simple; now it just doesn't work anymore. Any help will be really appreciated. | Language parameter in OpoenAI() is not working anymore! | https://api.github.com/repos/langchain-ai/langchain/issues/10230/comments | 6 | 2023-09-05T13:56:32Z | 2023-12-18T23:47:53Z | https://github.com/langchain-ai/langchain/issues/10230 | 1,882,039,161 | 10,230 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Trying to load the Llama 2 7B model from my D drive, but I'm constantly getting errors.
This is my code:
```python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

MODEL_PATH = "D:\model.safetensors"

def load_model():
    """Loads Llama model"""
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    Llama_model = LlamaCpp(
        model_path=MODEL_PATH,
        temperature=0.5,
        max_tokens=2000,
        top_p=1,
        callback_manager=callback_manager,
        verbose=True
    )
    return Llama_model

llm = load_model()

model_prompt = """
Question:What is the largest planet discovered so far?
"""
response = llm(model_prompt)
print(response)
```
This is the error:
```
PS D:\Python Projects\python> python learn.py
gguf_init_from_file: invalid magic number 00029880
error loading model: llama_model_loader: failed to load model from D:\model.safetensors
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "D:\Python Projects\python\learn.py", line 24, in <module>
    llm = load_model()
  File "D:\Python Projects\python\learn.py", line 12, in load_model
    Llama_model = LlamaCpp(
  File "C:\Users\krish\anaconda3\envs\newlang\lib\site-packages\langchain\load\serializable.py", line 75, in __init__
    super().__init__(**kwargs)
  File "C:\Users\krish\anaconda3\envs\newlang\lib\site-packages\pydantic\v1\main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
  Could not load Llama model from path: D:\model.safetensors. Received error (type=value_error)
```
Please help.
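A hedged reading of the traceback (not a confirmed fix): `gguf_init_from_file: invalid magic number` suggests llama.cpp is being pointed at a `.safetensors` file, while `LlamaCpp` expects a llama.cpp-format model (GGUF, or GGML in older builds):

```python
# Assumption: a GGUF conversion/quantization of the model is available locally.
MODEL_PATH = "D:\llama-2-7b-chat.Q4_K_M.gguf"  # hypothetical file name
```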
### Suggestion:
_No response_ | pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp __root__ Could not load Llama model from path: D:\model.safetensors. Received error (type=value_error) | https://api.github.com/repos/langchain-ai/langchain/issues/10226/comments | 7 | 2023-09-05T11:56:03Z | 2024-07-26T16:05:33Z | https://github.com/langchain-ai/langchain/issues/10226 | 1,881,825,625 | 10,226 |
[
"langchain-ai",
"langchain"
] | ### System Info
OS: Ubuntu 22.04.3 LTS
SQLAlchemy==1.4.49
langchain==0.0.281
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import VertexAI
from langchain.sql_database import SQLDatabase

project = "AAAA"
dataset = "HHHH"
sqlalchemy_url = f'bigquery://{project}/{dataset}'

db = SQLDatabase.from_uri(sqlalchemy_url)
llm = VertexAI(
    model_name="text-bison@001",
    max_output_tokens=256,
    temperature=0.1,
    top_p=0.8,
    top_k=40,
    verbose=True,
)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    max_execution_time=60,
)
question = "What is the total <numbers> in 2023?"  # replace <numbers> with the target metric
response = agent_executor.run(question)
```
The agent returns: `(Background on this error at: https://sqlalche.me/e/14/4xp6) Error: (google.cloud.bigquery.dbapi.exceptions.DatabaseError) 400 Syntax error: Expected end of input but got identifier "Thought" at [4:1]`
And when I checked the project history on GCP, I found the agent had included its thought and action inside the query, so the executed query looked like this:
```
SELECT SUM(AAA) AS total_AAA, SUM(BBB) AS total_BBB
FROM YOU_CANT_SEE_ME
WHERE Year = 2023
Thought: I should check the query before executing it.
Action: sql_db_query_checker
Action Input:
SELECT SUM(AAA) AS total_AAA, SUM(BBB) AS total_BBB
FROM YOU_CANT_SEE_ME
WHERE Year = 2023
```
It was stunning. Or was there anything I did wrong?
### Expected behavior
It should phrase a correct query, execute it, and return a legitimate result. It works fine when I do not specify the year. | create_sql_agent has incorrect behaviour when it queries agaisnt Google BigQuery | https://api.github.com/repos/langchain-ai/langchain/issues/10225/comments | 8 | 2023-09-05T11:35:05Z | 2023-12-19T00:48:43Z | https://github.com/langchain-ai/langchain/issues/10225 | 1,881,790,415 | 10,225 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is there a feature in langchain through which we can load multiple CSVs with different headers?
Right now, with CSVLoader, we can load only a single CSV.
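A possible workaround sketch (an assumption, not a dedicated feature): instantiate one `CSVLoader` per file and concatenate the documents; differing headers are fine because each loader reads its own header row:

```python
# Hypothetical `csv_paths` list; each file may have different columns.
from langchain.document_loaders import CSVLoader

docs = []
for path in csv_paths:
    docs.extend(CSVLoader(file_path=path).load())
```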
### Suggestion:
_No response_ | Issue: How can we load multiple CSVs | https://api.github.com/repos/langchain-ai/langchain/issues/10224/comments | 2 | 2023-09-05T11:32:25Z | 2023-12-13T16:06:13Z | https://github.com/langchain-ai/langchain/issues/10224 | 1,881,785,169 | 10,224 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
What is the difference between the Pandas DataFrame agent, the CSV agent, and the SQL agent?
Can you briefly describe each and say when to use it?
### Suggestion:
_No response_ | What is the difference between Pandas Data frame agent, CSV agent and SQL Agent? | https://api.github.com/repos/langchain-ai/langchain/issues/10223/comments | 5 | 2023-09-05T11:30:53Z | 2024-03-14T08:10:36Z | https://github.com/langchain-ai/langchain/issues/10223 | 1,881,782,758 | 10,223 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to run the local Llama 2 7B version. I've installed llama-cpp-python and all the other requirements. When I run the code I constantly get the error below.
Here is my code:
```python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from pydantic import *

MODEL_PATH = "/D:/llama2-7b.bin"

def load_model():
    """Loads Llama model"""
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    Llama_model = LlamaCpp(
        model_path=MODEL_PATH,
        temperature=0.5,
        max_tokens=2000,
        top_p=1,
        callback_manager=callback_manager,
        verbose=True
    )
    return Llama_model

llm = load_model()

model_prompt = """
Question:What is the largest planet discovered so far?
"""
response = llm(model_prompt)
print(response)
```
This is the error:
```
PS D:\Python Projects\python> python learn.py
Traceback (most recent call last):
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\pydantic_v1\__init__.py", line 15, in <module>
    from pydantic.v1 import * # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\pydantic\__init__.py", line 3, in <module>
    import pydantic_core
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\pydantic_core\__init__.py", line 6, in <module>
    from ._pydantic_core import (
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Python Projects\python\learn.py", line 1, in <module>
    from langchain.llms import LlamaCpp
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\__init__.py", line 6, in <module>
    from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\agents\__init__.py", line 31, in <module>
    from langchain.agents.agent import (
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\agents\agent.py", line 14, in <module>
    from langchain.agents.agent_iterator import AgentExecutorIterator
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\agents\agent_iterator.py", line 21, in <module>
    from langchain.callbacks.manager import (
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\callbacks\__init__.py", line 10, in <module>
    from langchain.callbacks.aim_callback import AimCallbackHandler
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\callbacks\aim_callback.py", line 5, in <module>
    from langchain.schema import AgentAction, AgentFinish, LLMResult
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\schema\__init__.py", line 3, in <module>
    from langchain.schema.cache import BaseCache
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\schema\cache.py", line 6, in <module>
    from langchain.schema.output import Generation
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\schema\output.py", line 7, in <module>
    from langchain.load.serializable import Serializable
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\load\serializable.py", line 4, in <module>
    from langchain.pydantic_v1 import BaseModel, PrivateAttr
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\pydantic_v1\__init__.py", line 17, in <module>
    from pydantic import * # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\pydantic\__init__.py", line 3, in <module>
    import pydantic_core
  File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\pydantic_core\__init__.py", line 6, in <module>
    from ._pydantic_core import (
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'
PS D:\Python Projects\python>
```
I tried reinstalling pydantic, but to no avail. Please help.
Python version = 3.11.4
pip version = 23.2.1
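A hedged reading of the traceback (an assumption, not a verified fix): the installed pydantic v2 is missing its compiled `pydantic_core` extension, so langchain's compatibility shim fails at import time; a clean force-reinstall is the usual first step:

```bash
# Reinstall pydantic so its compiled pydantic_core wheel is restored:
pip install --force-reinstall --no-cache-dir pydantic
```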
### Suggestion:
_No response_ | from ._pydantic_core import ( ModuleNotFoundError: No module named 'pydantic_core._pydantic_core' | https://api.github.com/repos/langchain-ai/langchain/issues/10222/comments | 1 | 2023-09-05T11:26:34Z | 2023-09-05T11:54:29Z | https://github.com/langchain-ai/langchain/issues/10222 | 1,881,774,642 | 10,222 |
[
"langchain-ai",
"langchain"
] | ### System Info
windows
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import UnstructuredURLLoader
from langchain.chains import LLMRequestsChain
from langchain.chains.llm_requests import DEFAULT_HEADERS
from langchain.requests import TextRequestsWrapper
from bs4 import BeautifulSoup
# loader = UnstructuredURLLoader(
# urls=["https://baijiahao.baidu.com/s?id=1776165472932985664"],
# show_progress_bar=True,
# )
# data = loader.load()
# print(data)
url = "https://baijiahao.baidu.com/s?id=1776165472932985664"
a = TextRequestsWrapper(headers=DEFAULT_HEADERS)
res = a.get(url)
# extract the text from the html
soup = BeautifulSoup(res, "html.parser")
res = soup.get_text()
print(res)
```
I cannot get the content from this URL, while all other URLs are OK.
### Expected behavior
Please take a look at this url | Please add support URL parse for BaijiaHao | https://api.github.com/repos/langchain-ai/langchain/issues/10219/comments | 3 | 2023-09-05T09:04:40Z | 2023-12-13T16:06:23Z | https://github.com/langchain-ai/langchain/issues/10219 | 1,881,536,484 | 10,219 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python : v3.10.10
Langchain : v0.0.281
Elasticsearch : v8.9.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was following this documentation https://python.langchain.com/docs/integrations/vectorstores/elasticsearch
my script was
```
# GENERATE INDEXING
loader = TextLoader("models/state_of_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

db = ElasticsearchStore.from_documents(
    docs,
    embeddings,
    es_url="http://localhost:9200",
    index_name="test-basic",
    es_user=os.environ.get("ELASTIC_USERNAME"),
    es_password=os.environ.get("ELASTIC_PASSWORD"),
)
```
but it raises an error when indexing the documents:
```
Created a chunk of size 132, which is longer than the specified 100
Created a chunk of size 107, which is longer than the specified 100
Created a chunk of size 103, which is longer than the specified 100
Created a chunk of size 104, which is longer than the specified 100
Error adding texts: 336 document(s) failed to index.
First error reason: failed to parse
Traceback (most recent call last):
File "D:\Project\elastic-langchain\main.py", line 31, in <module>
db = ElasticsearchStore.from_documents(
File "D:\Project\elastic-langchain\.venv\lib\site-packages\langchain\vectorstores\elasticsearch.py", line 1027, in from_documents
elasticsearchStore.add_documents(documents)
File "D:\Project\elastic-langchain\.venv\lib\site-packages\langchain\vectorstores\base.py", line 101, in add_documents
return self.add_texts(texts, metadatas, **kwargs)
File "D:\Project\elastic-langchain\.venv\lib\site-packages\langchain\vectorstores\elasticsearch.py", line 881, in add_texts
raise e
File "D:\Project\elastic-langchain\.venv\lib\site-packages\langchain\vectorstores\elasticsearch.py", line 868, in add_texts
success, failed = bulk(
File "D:\Project\elastic-langchain\.venv\lib\site-packages\elasticsearch\helpers\actions.py", line 521, in bulk
for ok, item in streaming_bulk(
File "D:\Project\elastic-langchain\.venv\lib\site-packages\elasticsearch\helpers\actions.py", line 436, in streaming_bulk
for data, (ok, info) in zip(
File "D:\Project\elastic-langchain\.venv\lib\site-packages\elasticsearch\helpers\actions.py", line 355, in _process_bulk_chunk
yield from gen
File "D:\Project\elastic-langchain\.venv\lib\site-packages\elasticsearch\helpers\actions.py", line 274, in _process_bulk_chunk_success
raise BulkIndexError(f"{len(errors)} document(s) failed to index.", errors)
elasticsearch.helpers.BulkIndexError: 336 document(s) failed to index.
```
### Expected behavior
It should index the documents without raising any errors. | Error failed to index when using ElasticsearchStore.from_documents | https://api.github.com/repos/langchain-ai/langchain/issues/10218/comments | 19 | 2023-09-05T07:59:38Z | 2024-06-21T03:08:10Z | https://github.com/langchain-ai/langchain/issues/10218 | 1,881,424,933 | 10,218 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
class AzureMLEndpointClient(object):
    """AzureML Managed Endpoint client."""

    def __init__(
        self, endpoint_url: str, endpoint_api_key: str, deployment_name: str = ""
    ) -> None:
        """Initialize the class."""
        if not endpoint_api_key or not endpoint_url:
            raise ValueError(
                """A key/token and REST endpoint should
                be provided to invoke the endpoint"""
            )
        self.endpoint_url = endpoint_url
        self.endpoint_api_key = endpoint_api_key
        self.deployment_name = deployment_name

    def call(self, body: bytes, **kwargs: Any) -> bytes:
        """call."""
        # The azureml-model-deployment header will force the request to go to a
        # specific deployment. Remove this header to have the request observe the
        # endpoint traffic rules.
        headers = {
            "Content-Type": "application/json",
            "Authorization": ("Bearer " + self.endpoint_api_key),
        }
        if self.deployment_name != "":
            headers["azureml-model-deployment"] = self.deployment_name

        req = urllib.request.Request(self.endpoint_url, body, headers)
        response = urllib.request.urlopen(req, timeout=kwargs.get("timeout", 50))
        result = response.read()
        return result
```
I am using this class to call an AzureML endpoint, and I am not able to pass the timeout as a parameter anywhere in the call chain.
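For reference, the timeout is reachable when the client is called directly (this follows from the `kwargs.get("timeout", 50)` line above); the problem is that the higher-level wrapper apparently never forwards it. A sketch of the direct call, with placeholder endpoint values:

```python
# Direct client-level call; endpoint values are placeholders.
client = AzureMLEndpointClient(endpoint_url, endpoint_api_key)
result = client.call(body, timeout=120)  # honored via kwargs.get("timeout", 50)
```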
### Suggestion:
_No response_ | Issue: Timeout parameter in the AzureMLEndpointClient cannot be modified | https://api.github.com/repos/langchain-ai/langchain/issues/10217/comments | 2 | 2023-09-05T07:49:27Z | 2023-12-13T16:06:28Z | https://github.com/langchain-ai/langchain/issues/10217 | 1,881,407,285 | 10,217 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version:0.0271
python version:3.9
transformers:4.30.2
linux
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
model = ErnieBotChat(model="ERNIE-Bot")
tools = load_tools(["llm-math", "wikipedia"], llm=model)
agent = initialize_agent(
    tools,
    model,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)
result = agent("300的25%是多少?")  # i.e. "What is 25% of 300?"
print(result)
```
### Expected behavior
Error:
ValueError: Got unknown type content='Answer the following questions as best you can. You have access to the following tools:\n\nCalculator: Useful for when you need to answer questions about math.\nWikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.\n\nThe way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: Calculator, Wikipedia\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin! Reminder to always use the exact characters `Final Answer` when responding.' additional_kwargs={}
When running ErnieBotChat with an agent, the error above occurred. How can this be solved? Thank you! | Errors about ErnieBotChat using agent | https://api.github.com/repos/langchain-ai/langchain/issues/10215/comments | 2 | 2023-09-05T07:28:55Z | 2023-12-13T16:06:33Z | https://github.com/langchain-ai/langchain/issues/10215 | 1,881,374,483 | 10,215
[
"langchain-ai",
"langchain"
] | ### System Info
Kaggle notebook
### Who can help?
@agola11 @hw
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I installed:
```
pip install -qq -U langchain tiktoken pypdf chromadb faiss-gpu unstructured openai
pip install -qq -U transformers InstructorEmbedding sentence_transformers pydantic==1.9.0
pip uninstall pydantic-settings
pip uninstall inflect
pip install pydantic-settings
pip install inflect
```
but getting error:
```
PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.3/migration/#basesettings-has-moved-to-pydantic-settings for more details.
```
Even though i have chromadb installed:
```
ImportError: Could not import chromadb python package. Please install it with `pip install chromadb`.
```
```
from langchain.document_loaders import PyPDFLoader, DirectoryLoader, PDFMinerLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.vectorstores import Chroma, FAISS
import os

persist_directory = "db"

def main():
    for root, dirs, files in os.walk("docs"):
        for file in files:
            if file.endswith(".pdf"):
                print(file)
                loader = PyPDFLoader(os.path.join(root, file))
                documents = loader.load()

    print("splitting into chunks")
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
    texts = text_splitter.split_documents(documents)

    # create embeddings here
    print("Loading sentence transformers model")
    embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

    # create vector store here
    print(f"Creating embeddings. May take some minutes...")
    db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
    db.persist()
    db = None

    print(f"Ingestion complete! You can now run privateGPT.py to query your documents")

if __name__ == "__main__":
    main()
```
### Expected behavior
shows results | Pydantic issue | https://api.github.com/repos/langchain-ai/langchain/issues/10210/comments | 3 | 2023-09-05T05:21:26Z | 2023-09-05T06:14:19Z | https://github.com/langchain-ai/langchain/issues/10210 | 1,881,211,959 | 10,210 |
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.274
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
def _load(self) -> None:
    """Load the collection if available."""
    from pymilvus import Collection

    if isinstance(self.col, Collection) and self._get_index() is not None:
        self.col.load(timeout=5)
```
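A hedged sketch of the requested change, with the timeout exposed as a parameter instead of hard-coded (the parameter name and default are assumptions):

```
from typing import Optional  # belongs at module top

def _load(self, timeout: Optional[float] = None) -> None:
    """Load the collection if available."""
    from pymilvus import Collection

    if isinstance(self.col, Collection) and self._get_index() is not None:
        self.col.load(timeout=timeout)
```

Callers could then invoke e.g. `self._load(timeout=30)` depending on their deployment.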
### Expected behavior
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/milvus.py#L387C3-L387C3
<img width="563" alt="ab96a8450969eaeae4eb91cf582828cf" src="https://github.com/langchain-ai/langchain/assets/16218592/2976b0de-fc41-46aa-88eb-fd807e3b6b57">
<img width="350" alt="26f720d9fa837900f91c8ff59d6fd815" src="https://github.com/langchain-ai/langchain/assets/16218592/740e0f7d-da60-4903-8d46-30e5e6c3faa2">
<img width="576" alt="02eee59006291866c52eda92921c9281" src="https://github.com/langchain-ai/langchain/assets/16218592/4de8f6dd-b154-4407-9153-b860a1f9a332">
| langchain/vectorstores/milvus.py _load function need timeout parameter | https://api.github.com/repos/langchain-ai/langchain/issues/10207/comments | 4 | 2023-09-05T03:46:17Z | 2024-02-08T04:20:22Z | https://github.com/langchain-ai/langchain/issues/10207 | 1,881,133,733 | 10,207 |
[
"langchain-ai",
"langchain"
] | I have the following code (see below). I have two prompts. One works ok (p1), and the other (p2) throws following error (complete error below):
`OutputParserException: Could not parse LLM output: I don't know how to answer the question because I don't have access to the casos_perfilados_P2 table.`
`p1 : Seleccionar los campos linea, nivel y genero, que contengan el valor F en genero y el valor D2 en el campo nse. Limitar el número de registros a 3. Armar un dataframe de pandas con el resultado anterior. `
`p2 : Seleccionar los campos linea y fecha, donde el campo fecha es mayor o igual a 2023-02-18 y menor o igual a 2023-02-24. Agrupar y sumar los valores del campo linea por el campo fecha. Armar un dataframe de pandas con el resultado anterior. `
...Why p2 has "access" issues while p1 doesn't...any clues ?
```python
# google
import vertexai
# Alchemy
from sqlalchemy import *
from sqlalchemy.schema import *
# Langchain
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.callbacks import StreamlitCallbackHandler
from langchain.llms import VertexAI
from langchain.agents.agent_types import AgentType
from langchain.agents import initialize_agent
from langchain.tools.python.tool import PythonREPLTool
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools import Tool
# Streamlit
import streamlit as st
# Settings
PROJECT_ID = "xxxxxxx"
REGION = "xxxxxxx"
dataset = "xxxxxxx"
# Initialize Vertex AI SDK
vertexai.init(project=PROJECT_ID, location=REGION)
# BQ db
sqlalchemy_url = f'bigquery://{PROJECT_ID}/{dataset}'
db = SQLDatabase.from_uri(sqlalchemy_url)
# llm
llm = VertexAI(
model_name="text-bison@001",
max_output_tokens=1024,
temperature=0,
top_p=0.8,
top_k=40,
verbose=True
)
# SQL Agent
sql_agent = create_sql_agent(
llm=llm,
toolkit=SQLDatabaseToolkit(db=db, llm=llm),
verbose=True,
top_k=1000,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
# Python Agent
python_agent = create_python_agent(
llm,
tool=PythonREPLTool(),
verbose=True
)
# Main Agent
agent = initialize_agent(
tools=[
Tool(
name="SQLAgent",
func=sql_agent.run,
description="""Useful to execute sql commands""",
),
Tool(
name="PythonAgent",
func=python_agent.run,
description="""Useful to run python commands""",
),
],
llm=llm,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
verbose=True,
)
if prompt := st.chat_input():
st.chat_message("user").write(prompt)
with st.chat_message("assistant"):
st_callback = StreamlitCallbackHandler(st.container())
response = agent.run(prompt)
st.write(response)
```
```bash
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input:
Observation: casos_perfilados_P2
Thought:2023-09-04 19:23:34.050 Uncaught app exception
Traceback (most recent call last):
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/flow/Documentos/genai/chat1.py", line 91, in <module>
response = agent.run(prompt)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 475, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
raise e
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 1036, in _call
next_step_output = self._take_next_step(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 891, in _take_next_step
observation = tool.run(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/tools/base.py", line 351, in run
raise e
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/tools/base.py", line 323, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/tools/base.py", line 493, in _run
self.func(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 475, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
raise e
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 1036, in _call
next_step_output = self._take_next_step(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 844, in _take_next_step
raise e
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 833, in _take_next_step
output = self.agent.plan(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 457, in plan
return self.output_parser.parse(full_output)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/mrkl/output_parser.py", line 52, in parse
raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: `I don't know how to answer the question because I don't have access to the casos_perfilados_P2 table.`
```
| Issue: SQLDatabaseToolkit inconsistency | https://api.github.com/repos/langchain-ai/langchain/issues/10205/comments | 4 | 2023-09-05T02:15:33Z | 2023-12-07T02:53:55Z | https://github.com/langchain-ai/langchain/issues/10205 | 1,881,063,737 | 10,205 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.281
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use these lines of code:
```
from langchain.chains import create_extraction_chain
from langchain.llms import Replicate

schema = {
    "properties": {
        "visit": {"type": "string"},
        "date": {"type": "string"},
        "gender": {"type": "string"},
        "age": {"type": "integer"},
    }
}

inp = """This 23-year-old white female presents with complaint of allergies.
She used to have allergies when she lived in Seattle but she thinks they are worse here.
In the past, she has tried Claritin, and Zyrtec. Both worked for short time but then seemed to lose effectiveness. """

llm = Replicate(
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    input={"temperature": 0.75, "max_length": 500, "top_p": 1},
)

chain = create_extraction_chain(schema, llm)
chain.run(inp)
```
---------------------------------------------------------------------------
```
OutputParserException Traceback (most recent call last)
[<ipython-input-9-5e77f11609b2>](https://localhost:8080/#) in <cell line: 72>()
70 )
71 chain = create_extraction_chain(schema, llm)
---> 72 chain.run(inp)["data"]
8 frames
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/openai_functions.py](https://localhost:8080/#) in parse_result(self, result)
21 generation = result[0]
22 if not isinstance(generation, ChatGeneration):
---> 23 raise OutputParserException(
24 "This output parser can only be used with a chat generation."
25 )
OutputParserException: This output parser can only be used with a chat generation.
```
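For comparison, a hedged sketch that is expected to work: `create_extraction_chain` builds on OpenAI function calling, so it needs a chat model rather than a plain completion LLM. The model choice below is an assumption:

```
from langchain.chains import create_extraction_chain
from langchain.chat_models import ChatOpenAI

# A function-calling chat model; plain completion LLMs like Replicate
# return generations the chain's output parser cannot handle.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = create_extraction_chain(schema, llm)
print(chain.run(inp))
```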
### Expected behavior
Structured JSON based on the schema. | create_extraction_chain does not work with other LLMs? Replicate models fails to load | https://api.github.com/repos/langchain-ai/langchain/issues/10201/comments | 24 | 2023-09-05T00:08:26Z | 2024-05-06T07:34:01Z | https://github.com/langchain-ai/langchain/issues/10201 | 1,880,981,812 | 10,201
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.274
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import VertexAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
import os

PROMPT = """
You are a customer service assistant.
Your task is to respond to user questions.
Start by greeting the user and introducing yourself as BotAssistant, and end with a polite closing. If you don't know the answer, suggest contacting customer service at 3232. Always conclude with: "I hope I've answered your request."

Query: """

llm_model = VertexAI(
    model_name="text-bison@001",
    max_output_tokens=1024,
    temperature=0.1,
    top_p=0.8,
    top_k=40,
    verbose=True,
)

conversation = ConversationChain(
    llm=llm_model,
    verbose=True,
    memory=ConversationBufferMemory(),
)

qst = "I have an issue with my order; I received the wrong item. What should I do?"
conversation.predict(input=PROMPT + qst)
```
### Expected behavior
I want to develop an LLM that acts as a customer assistant (for e-commerce), responding to user queries using a JSON dataset containing question-answer pairs, possibly incorporating PDF documents of terms and conditions and offer descriptions. How can I effectively use Retrieval Augmented Generation to address this challenge? Is fine-tuning a recommended approach?
Sometimes the answers to queries are indirect, and they may include links to previously provided answers on the same topic. Do you think a graph representation of the question-answer pairs dataset is relevant?
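For reference, a minimal RAG sketch over such a dataset might look like the following; the file name, record shape, and embedding choice are assumptions:

```
# Hedged sketch: index the Q&A pairs and answer with a retrieval chain.
import json

from langchain.chains import RetrievalQA
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import FAISS

with open("qa_pairs.json") as f:  # assumed file of {"question": ..., "answer": ...} records
    records = json.load(f)

texts = [f"Q: {r['question']}\nA: {r['answer']}" for r in records]
vectorstore = FAISS.from_texts(texts, VertexAIEmbeddings())

qa = RetrievalQA.from_chain_type(
    llm=llm_model,  # the VertexAI model defined above
    retriever=vectorstore.as_retriever(),
)
print(qa.run("I received the wrong item. What should I do?"))
```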
Any help or links would be appreciated. | How to develop an efficient question answering solution acting as a customer assistant based on RAG or Fine tuning? | https://api.github.com/repos/langchain-ai/langchain/issues/10188/comments | 4 | 2023-09-04T15:44:40Z | 2024-02-10T16:18:02Z | https://github.com/langchain-ai/langchain/issues/10188 | 1,880,533,324 | 10,188
[
"langchain-ai",
"langchain"
] | ### Feature request
Adopt [Classy-fire](https://github.com/microsoft/classy-fire)
### Motivation
See above link on benefits of the approach
### Your contribution
Can adapt classy-fire for easier integration as requested | Adopt Microsoft's Classy-fire classification approach | https://api.github.com/repos/langchain-ai/langchain/issues/10187/comments | 3 | 2023-09-04T14:28:14Z | 2023-12-13T16:06:43Z | https://github.com/langchain-ai/langchain/issues/10187 | 1,880,407,689 | 10,187 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
After PR [#8612](https://github.com/langchain-ai/langchain/pull/8612), access to [RedisVectorStoreRetriever](https://github.com/langchain-ai/langchain/blob/27944cb611ee8face34fbe764c83e37841f96eb7/libs/langchain/langchain/vectorstores/redis/base.py#L1293) has been removed
### Suggestion:
Include **RedisVectorStoreRetriever** import in [redis/__init__.py](https://github.com/langchain-ai/langchain/blob/27944cb611ee8face34fbe764c83e37841f96eb7/libs/langchain/langchain/vectorstores/redis/__init__.py) on line 1
current: `from .base import Redis`
suggested update: `from .base import Redis, RedisVectorStoreRetriever`
| Issue: RedisVectorStoreRetriever not accessible | https://api.github.com/repos/langchain-ai/langchain/issues/10186/comments | 4 | 2023-09-04T14:21:34Z | 2023-09-12T22:29:54Z | https://github.com/langchain-ai/langchain/issues/10186 | 1,880,395,414 | 10,186 |
[
"langchain-ai",
"langchain"
I think it'd be better if there were a flag in ConversationalRetrievalQAChain() that lets us skip the question-rephrasing step before generation. Can this be treated as an issue and addressed accordingly?
_Originally posted by @AshminJayson in https://github.com/langchain-ai/langchain/issues/4076#issuecomment-1705339045_
| Add option to disable question augmentation in ConversationalRetrievalQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/10185/comments | 1 | 2023-09-04T14:11:55Z | 2023-12-11T16:04:43Z | https://github.com/langchain-ai/langchain/issues/10185 | 1,880,378,119 | 10,185 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
#1. Open a terminal and install the Bedrock-specific boto3 SDK and LangChain:

```
curl -sS https://d2eo22ngex1n9g.cloudfront.net/Documentation/SDK/bedrock-python-sdk.zip > sdk.zip
sudo yum install unzip -y
unzip sdk.zip -d sdk
pip install --no-build-isolation --force-reinstall ./sdk/awscli-*-py3-none-any.whl ./sdk/boto3-*-py3-none-any.whl ./sdk/botocore-*-py3-none-any.whl
pip install --quiet langchain==0.0.249
# pip install 'jupyter-ai>=1.0,<2.0'  # If you use JupyterLab 3
# pip install jupyter-ai              # If you use JupyterLab 4
```

#2. Change the default token count to 1024:

```
vi ~/anaconda3/lib/python3.11/site-packages/langchain/llms/sagemaker_endpoint.py
```

Insert the lines below after `body = self.content_handler.transform_input(prompt, _model_kwargs)`:

```
parameters = {"max_new_tokens": 1024, "top_p": 0.9, "temperature": 0.6, "return_full_text": True}
t = json.loads(body)
t["parameters"] = parameters
body = json.dumps(t)
```

Insert the line `CustomAttributes='accept_eula=true',` between `Accept=accepts,` and `**_endpoint_kwargs,`.

#3. Run `aws configure` for the default profile; make sure the AK/SK (access key / secret key) has enough permissions (SageMakerFullAccess):

```
aws configure
```

#4. Run `%%ai` in a *.ipynb file on EC2 instead of a SageMaker notebook instance / SageMaker Studio (it also works in VS Code), after making sure your Amazon SageMaker endpoint is healthy:

```
%load_ext jupyter_ai
```

```
%%ai sagemaker-endpoint:jumpstart-dft-meta-textgeneration-llama-2-7b --region-name=us-east-1 --request-schema={"inputs":"<prompt>"} --response-path=[0]['generation']
Write something on humor
```
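Optionally, the same endpoint can be called from LangChain instead of the `%%ai` magic. A hedged sketch follows; the content handler assumes the JumpStart Llama-2 request/response schema from step 4:

```
import json

from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Mirrors --response-path=[0]['generation'] from step 4.
        return json.loads(output.read().decode("utf-8"))[0]["generation"]

llm = SagemakerEndpoint(
    endpoint_name="jumpstart-dft-meta-textgeneration-llama-2-7b",
    region_name="us-east-1",
    content_handler=ContentHandler(),
)
print(llm("Write something on humor"))
```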
### Suggestion:
_No response_ | Issue: How to configure Amazon SageMaker endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/10184/comments | 3 | 2023-09-04T14:11:27Z | 2023-12-25T16:08:45Z | https://github.com/langchain-ai/langchain/issues/10184 | 1,880,377,259 | 10,184 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How to configure Amazon Bedrock, step by step:

Answer:

#1. Open a terminal and install the Bedrock-specific boto3 SDK and LangChain:

```
curl -sS https://d2eo22ngex1n9g.cloudfront.net/Documentation/SDK/bedrock-python-sdk.zip > sdk.zip
sudo yum install unzip -y
unzip sdk.zip -d sdk
pip install --no-build-isolation --force-reinstall ./sdk/awscli-*-py3-none-any.whl ./sdk/boto3-*-py3-none-any.whl ./sdk/botocore-*-py3-none-any.whl
pip install --quiet langchain==0.0.249
# pip install 'jupyter-ai>=1.0,<2.0'  # If you use JupyterLab 3
# pip install jupyter-ai              # If you use JupyterLab 4
```

#2. Change the default token count to 2048:

```
vi ~/anaconda3/lib/python3.11/site-packages/langchain/llms/bedrock.py
```

Change this line to: `input_body["max_tokens_to_sample"] = 2048`

#3. Run `aws configure` for the default profile; make sure the AK/SK (access key / secret key) has enough permissions (BedrockFullAccess):

```
aws configure
```

#4. Run `%%ai` in a *.ipynb file on EC2 or a local machine (it also works in VS Code) instead of a SageMaker notebook instance / SageMaker Studio:

```
%load_ext jupyter_ai
```

```
%%ai bedrock:anthropic.claude-v2
Write something about Amazon
```
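Optionally, a hedged LangChain usage sketch for the same setup (the model id follows step 4; other parameters are left at their defaults):

```
from langchain.llms import Bedrock

# Uses the default AWS profile configured in step 3.
llm = Bedrock(model_id="anthropic.claude-v2")
print(llm("Write something about Amazon"))
```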
### Suggestion:
_No response_ | Issue: how to configure Amazon Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/10182/comments | 3 | 2023-09-04T14:09:32Z | 2023-12-13T16:06:47Z | https://github.com/langchain-ai/langchain/issues/10182 | 1,880,373,791 | 10,182 |
[
"langchain-ai",
"langchain"
] | ### System Info
torch = "2.0.1"
transformers = "4.31.0"
langchain= "0.0.251"
kor= "0.13.0"
openai= "0.27.8"
pydantic= "1.10.8"
python_version = "3.9"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from db.tools import DbTools
from langchain.chat_models import ChatOpenAI
from kor.nodes import Object, Text, Number
from kor import create_extraction_chain, Object, Text

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    max_tokens=2000,
    frequency_penalty=0,
    presence_penalty=0,
    openai_api_key="",
    top_p=1.0,
)
```
### Expected behavior
Hello, I encountered the following error when trying to import the `langchain.schema` module:
Traceback (most recent call last):
File "/home/alpha/platform/shared/libpython/computation/fine_instruction_GPT.py", line 2, in <module>
from langchain.chat_models import ChatOpenAI
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/agents/agent_iterator.py", line 21, in <module>
from langchain.callbacks.manager import (
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/callbacks/__init__.py", line 10, in <module>
from langchain.callbacks.aim_callback import AimCallbackHandler
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/callbacks/aim_callback.py", line 5, in <module>
from langchain.schema import AgentAction, AgentFinish, LLMResult
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/schema/__init__.py", line 4, in <module>
from langchain.schema.memory import BaseChatMessageHistory, BaseMemory
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/schema/memory.py", line 7, in <module>
from langchain.schema.messages import AIMessage, BaseMessage, HumanMessage
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/schema/messages.py", line 147, in <module>
class HumanMessageChunk(HumanMessage, BaseMessageChunk):
File "pydantic/main.py", line 367, in pydantic.main.ModelMetaclass.__new__
File "/usr/lib/python3.9/abc.py", line 85, in __new__
cls = super().__new__(mcls, name, bases, namespace, **kwargs)
TypeError: multiple bases have instance lay-out conflict | Multiple bases have instance lay-out conflict | https://api.github.com/repos/langchain-ai/langchain/issues/10179/comments | 7 | 2023-09-04T12:03:49Z | 2023-12-06T08:21:24Z | https://github.com/langchain-ai/langchain/issues/10179 | 1,880,134,640 | 10,179 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.279
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder

# docsearch_db, pretty_print_docs, tokenizers, local_chain,
# GoogleSerper_search, llm and handler are defined elsewhere in the script.

def ask_local_vector_db(question):
    # old docsearch_db
    docs = docsearch_db.similarity_search(question, k=10)
    pretty_print_docs(docs)
    print("**************************************************")

    cleaned_matches = []
    total_tokens = 0
    # print(docs)
    for context in docs:
        cleaned_context = context.page_content.replace('\n', ' ').strip()
        cleaned_context = f"{cleaned_context}"
        tokens = tokenizers.encode(cleaned_context, add_special_tokens=False)
        if total_tokens + len(tokens) <= (1536 * 8):
            cleaned_matches.append(cleaned_context)
            total_tokens += len(tokens)
        else:
            break

    # Combine the cleaned matches into a single string
    combined_text = " ".join(cleaned_matches)
    answer = local_chain.predict(combined_text=combined_text, human_input=question)
    return answer

# Create the list of tools
tools = [
    Tool(
        name="Google_Search",
        func=GoogleSerper_search.run,
        description="""
        If the local vector database Q&A says it cannot find an answer, you can use
        the internet search engine tool to look up information and try to find the
        answer directly. Note that you need to ask very targeted, precise questions.
        """,
    ),
    Tool(
        name="Local_Search",
        func=ask_local_vector_db,
        description="""
        You can first try to find the answer through the local vector knowledge base.
        Note that you need to ask very targeted, precise questions.
        """,
    ),
]

agent_kwargs = {
    "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}
memory = ConversationBufferMemory(memory_key="memory", return_messages=True)

# Initialize the agent
agent_open_functions = initialize_agent(
    tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    agent_kwargs=agent_kwargs,
    memory=memory,
    max_iterations=10,
    early_stopping_method="generate",
    handle_parsing_errors=True,  # initialize the agent and handle parsing errors
    callbacks=[handler],
)
```
### Expected behavior
Please enter your question (plain text); enter n on a new line to finish:
Local search gpt
n
===============Thinking===================

> Entering new AgentExecutor chain...

Invoking: `Local_Search` with `gpt`

An error occurred: not enough values to unpack (expected 2, got 1)
Please enter your question (plain text); enter n on a new line to finish:
| An error occurred: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/langchain-ai/langchain/issues/10178/comments | 2 | 2023-09-04T11:59:08Z | 2023-12-11T16:04:53Z | https://github.com/langchain-ai/langchain/issues/10178 | 1,880,127,009 | 10,178 |
[
"langchain-ai",
"langchain"
] | ### System Info
- LangChain: 0.0.279
- Python: 3.11.4
- Platform: Linux and MacOS
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema.messages import ChatMessageChunk
message = ChatMessageChunk(role="User", content="I am") + ChatMessageChunk(role="User", content=" indeed.")
```
Here is the error info:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[2], line 1
----> 1 message = ChatMessageChunk(role="User", content="I am") + ChatMessageChunk(role="User", content=" indeed.")
File ~/pyenv/venv/lib/python3.11/site-packages/langchain/schema/messages.py:120, in BaseMessageChunk.__add__(self, other)
115 def __add__(self, other: Any) -> BaseMessageChunk: # type: ignore
116 if isinstance(other, BaseMessageChunk):
117 # If both are (subclasses of) BaseMessageChunk,
118 # concat into a single BaseMessageChunk
--> 120 return self.__class__(
121 content=self.content + other.content,
122 additional_kwargs=self._merge_kwargs_dict(
123 self.additional_kwargs, other.additional_kwargs
124 ),
125 )
126 else:
127 raise TypeError(
128 'unsupported operand type(s) for +: "'
129 f"{self.__class__.__name__}"
130 f'" and "{other.__class__.__name__}"'
131 )
File ~/pyenv/venv/lib/python3.11/site-packages/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File ~/pyenv/venv/lib/python3.11/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for ChatMessageChunk
role
field required (type=value_error.missing
```
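For what it's worth, a hedged sketch of a possible fix: an `__add__` override on `ChatMessageChunk` that carries the role forward. This is an assumption about a fix, not the library's current code:

```
# Hypothetical override; the role check and fallback are assumptions.
class ChatMessageChunk(ChatMessage, BaseMessageChunk):
    def __add__(self, other: Any) -> BaseMessageChunk:
        if isinstance(other, ChatMessageChunk) and self.role == other.role:
            return self.__class__(
                role=self.role,
                content=self.content + other.content,
                additional_kwargs=self._merge_kwargs_dict(
                    self.additional_kwargs, other.additional_kwargs
                ),
            )
        return super().__add__(other)
```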
### Expected behavior
Expected output:
```python
ChatMessageChunk(content='I am indeed.', additional_kwargs={}, role='User')
``` | ChatMessageChunk concat error | https://api.github.com/repos/langchain-ai/langchain/issues/10173/comments | 0 | 2023-09-04T08:22:02Z | 2023-10-27T02:15:23Z | https://github.com/langchain-ai/langchain/issues/10173 | 1,879,764,637 | 10,173 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
from bs4 import BeautifulSoup as Soup
from langchain.document_loaders import RecursiveUrlLoader

url = "https://www.wsj.com"
loader = RecursiveUrlLoader(url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text)
docs = loader.load()
```
The returned docs contain no data for the given `url` itself. The problem is in the DFS traversal in the codebase: it doesn't handle the root case.
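A hedged workaround until the loader yields the root page, assuming the same extractor should apply to it:

```
# Fetch the root page separately and prepend it to the recursively loaded docs.
import requests
from langchain.schema import Document

root_html = requests.get(url).text
root_doc = Document(
    page_content=Soup(root_html, "html.parser").text,
    metadata={"source": url},
)
docs = [root_doc] + loader.load()
```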
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce the issue, execute the code snippet provided above.
### Expected behavior
RecursiveUrlLoader should include the content of the given `url` along with its children. | RecursiveUrlLoader doesn't include root URL content | https://api.github.com/repos/langchain-ai/langchain/issues/10172/comments | 2 | 2023-09-04T08:12:04Z | 2023-12-11T16:04:58Z | https://github.com/langchain-ai/langchain/issues/10172 | 1,879,747,922 | 10,172
[
"langchain-ai",
"langchain"
] | Hi team,
I am using get_openai_callback to fetch the total token usage for an agent.
Is there a way to get the token usage for each tool?
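One hedged approach, assuming each tool wraps a chain that itself calls an OpenAI model: open a separate `get_openai_callback` context inside the tool function. The `qa` chain and tool wiring below are illustrative assumptions:

```
from langchain.agents import Tool
from langchain.callbacks import get_openai_callback

def run_tool_with_usage(query: str) -> str:
    # Scope the callback to this single tool call.
    with get_openai_callback() as cb:
        result = qa.run(query)  # `qa` stands for the chain backing this tool
    print(f"Tool used {cb.total_tokens} tokens (cost: ${cb.total_cost:.4f})")
    return result

tool = Tool(
    name="Knowledge Base",
    func=run_tool_with_usage,
    description="Answers questions from the knowledge base.",
)
```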
Thanks | Can I get the token usage for each tool in agent? | https://api.github.com/repos/langchain-ai/langchain/issues/10170/comments | 4 | 2023-09-04T07:47:46Z | 2024-04-29T14:32:04Z | https://github.com/langchain-ai/langchain/issues/10170 | 1,879,709,967 | 10,170 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I would like to output the answer in a specific language (e.g. chinese) when I am using agent. But when I tried to do that with the code below, it gives me an error ```OutputParserException: Could not parse LLM output:```
```
# retrieval qa chain
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vector_store.as_retriever(),
)

from langchain.agents import Tool

tools = [
    Tool(
        name='Knowledge Base',
        func=qa.run,
        description=(
            '<some description>'
        ),
    ),
]

from langchain.agents import initialize_agent

agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    memory=conversational_memory,
)

prompt_prefix = f"""请用中文回答"""  # i.e. "Please answer in Chinese"
agent.agent.llm_chain.prompt += prompt_prefix

query = "<some question>"
agent(query)
```
Does anyone know how to add a custom prompt to the agent to enforce the output language?
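One hedged alternative is to pass the language instruction through the agent's system message at initialization instead of mutating the prompt afterwards; whether `system_message` is honored by this agent type is an assumption:

```
system_message = "You are a helpful assistant. 请用中文回答 (please answer in Chinese)."

agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    memory=conversational_memory,
    agent_kwargs={"system_message": system_message},
)
```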
### Suggestion:
_No response_ | How can I append suffix to the prompt when I am using agents to control the output language | https://api.github.com/repos/langchain-ai/langchain/issues/10161/comments | 2 | 2023-09-04T04:09:02Z | 2023-12-11T16:05:08Z | https://github.com/langchain-ai/langchain/issues/10161 | 1,879,470,162 | 10,161 |
[
"langchain-ai",
"langchain"
] | ### System Info
Windows 10
langchain 0.0.279
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings import HuggingFaceBgeEmbeddings

model_name = "BAAI/bge-small-en"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
)
```
### Expected behavior

`from langchain.embeddings import HuggingFaceBgeEmbeddings` fails: `HuggingFaceBgeEmbeddings` cannot be found. | HuggingFaceBgeEmbeddings error | https://api.github.com/repos/langchain-ai/langchain/issues/10159/comments | 2 | 2023-09-04T02:12:44Z | 2023-09-04T02:26:11Z | https://github.com/langchain-ai/langchain/issues/10159 | 1,879,386,202 | 10,159
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | why not found ernie bot from baidu in llms package? Thanks | https://api.github.com/repos/langchain-ai/langchain/issues/10150/comments | 2 | 2023-09-03T15:08:29Z | 2023-09-04T02:48:52Z | https://github.com/langchain-ai/langchain/issues/10150 | 1,879,131,847 | 10,150 |
[
"langchain-ai",
"langchain"
] | How can I stream the OpenAI response in DRF? | Issue: DRF response streaming | https://api.github.com/repos/langchain-ai/langchain/issues/10143/comments | 3 | 2023-09-03T09:47:09Z | 2024-03-16T16:04:31Z | https://github.com/langchain-ai/langchain/issues/10143 | 1,879,029,116 | 10,143 |
[
"langchain-ai",
"langchain"
] | ### Feature request
[Graph Of Thoughts](https://arxiv.org/pdf/2308.09687.pdf) looks promising.
Is it possible to implement it with LangChain?
### Motivation
A more performant prompting technique.
### Your contribution
I can help with documentation. | Graph Of Thoughts | https://api.github.com/repos/langchain-ai/langchain/issues/10137/comments | 5 | 2023-09-02T23:14:01Z | 2024-01-30T16:17:48Z | https://github.com/langchain-ai/langchain/issues/10137 | 1,878,874,363 | 10,137 |
[
"langchain-ai",
"langchain"
] | ### System Info
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 179, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x159e351d0>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x159e351d0>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/fsndzomga/Downloads/react-mega/LCEL2.py", line 29, in <module>
vectorstore = Chroma.from_texts([obama_text], embedding=OpenAIEmbeddings())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 576, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 186, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 478, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 331, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken_ext/openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/load.py", line 114, in load_tiktoken_bpe
contents = read_file_cached(tiktoken_bpe_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/load.py", line 46, in read_file_cached
contents = read_file(blobpath)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/load.py", line 24, in read_file
return requests.get(blobpath).content
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 553, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x159e351d0>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)'))
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code:
```
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.output_parser import StrOutputParser
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from operator import itemgetter
from apikey import OPENAI_API_KEY
import os

# Set the OpenAI API key
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY

# Initialize the ChatOpenAI model
model = ChatOpenAI()

# Create a long text about Barack Obama to serve as the context
obama_text = """
Barack Obama served as the 44th President of the United States from 2009 to 2017.
He was born in Honolulu, Hawaii, on August 4, 1961. Obama is a graduate of Columbia University
and Harvard Law School, where he served as president of the Harvard Law Review. He was a community
organizer in Chicago before earning his law degree and worked as a civil rights attorney and taught
constitutional law at the University of Chicago Law School between 1992 and 2004. He served three
terms representing the 13th District in the Illinois Senate from 1997 until 2004, when he ran for the
U.S. Senate. Obama received the Nobel Peace Prize in 2009.
"""

# Create the retriever with the Obama text as the context
vectorstore = Chroma.from_texts([obama_text], embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# Define the prompt template
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# Create the chain for answering questions
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

# Invoke the chain to answer a question
print(chain.invoke("When was Barack Obama born?"))

# Create a new prompt template that allows for translation
template_with_language = """Answer the question based only on the following context:
{context}

Question: {question}

Answer in the following language: {language}
"""
prompt_with_language = ChatPromptTemplate.from_template(template_with_language)

# Create the chain for answering questions in different languages
chain_with_language = {
    "context": itemgetter("question") | retriever,
    "question": itemgetter("question"),
    "language": itemgetter("language"),
} | prompt_with_language | model | StrOutputParser()

# Invoke the chain to answer a question in Italian
print(chain_with_language.invoke({"question": "When was Barack Obama born?", "language": "italian"}))
```
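A hedged note on the failure itself: the timeout happens while tiktoken downloads its BPE file from `openaipublic.blob.core.windows.net`. If that host is unreachable from your network, one workaround is to point tiktoken at a local cache that already contains `cl100k_base.tiktoken` (the path below is a placeholder):

```
import os

# Must be set before tiktoken is first used.
os.environ["TIKTOKEN_CACHE_DIR"] = "/path/to/tiktoken_cache"  # placeholder
```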
### Expected behavior
a response to my question | Connection Timeout and Max Retries Exceeded in HTTPS Request | https://api.github.com/repos/langchain-ai/langchain/issues/10135/comments | 1 | 2023-09-02T22:18:19Z | 2023-09-03T19:47:46Z | https://github.com/langchain-ai/langchain/issues/10135 | 1,878,862,756 | 10,135 |
[
"langchain-ai",
"langchain"
] | ### System Info
Latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use ConversationalRetrievalChain with an Azure OpenAI GPT model.

This is the code:

```
llm = AzureOpenAI(engine=OPENAI_DEPLOYMENT_NAME, model=OPENAI_MODEL, temperature=0.0)
qa = ConversationalRetrievalChain.from_llm(llm, retriever)
```

I get the following error:

```
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for LLMChain
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```
### Expected behavior
The code executes without any error | ConversationalRetrievalChain doesn't work with Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/10128/comments | 4 | 2023-09-02T15:34:17Z | 2024-02-18T23:59:07Z | https://github.com/langchain-ai/langchain/issues/10128 | 1,878,736,357 | 10,128 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.279 / Langchain experimental 0.0.12/ Python 3.10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the guideline: https://python.langchain.com/docs/guides/privacy/presidio_data_anonymization
I was able to make it work by downloading the source of langchain-experimental and copying the missing folder into the local lib folder.
However, this won't be a good option when building a docker image. Why isn't it part of the experimental library 0.0.12?
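A hedged check, assuming a release after 0.0.12 ships the module (upgrade first with `pip install -U langchain-experimental`):

```
from importlib.metadata import version

print(version("langchain-experimental"))  # should be newer than 0.0.12
from langchain_experimental.data_anonymizer import PresidioAnonymizer  # should now import
```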
### Expected behavior
Example codes will work. | ModuleNotFoundError: No module named 'langchain_experimental.data_anonymizer' | https://api.github.com/repos/langchain-ai/langchain/issues/10126/comments | 8 | 2023-09-02T14:16:42Z | 2024-02-13T05:16:02Z | https://github.com/langchain-ai/langchain/issues/10126 | 1,878,707,467 | 10,126 |
[
"langchain-ai",
"langchain"
] | ### Feature request
With the new announced [support for Streaming in SageMaker Endpoints](https://aws.amazon.com/blogs/machine-learning/elevating-the-generative-ai-experience-introducing-streaming-support-in-amazon-sagemaker-hosting/), Langchain can add a Streaming capability to the `SagemakerEndpoint` class.
We can leverage the code that already exists as part of the blog post and extend where required by using the [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime/client/invoke_endpoint_with_response_stream.html) API
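A hedged sketch of what the underlying streaming call looks like with the boto3 API referenced above; the endpoint name and payload shape are assumptions:

```
import json

import boto3

client = boto3.client("sagemaker-runtime")
resp = client.invoke_endpoint_with_response_stream(
    EndpointName="my-llm-endpoint",  # placeholder
    ContentType="application/json",
    Body=json.dumps({"inputs": "Tell me a joke", "parameters": {"max_new_tokens": 128}}),
)

# Each event carries a PayloadPart with raw bytes of the generated text.
for event in resp["Body"]:
    chunk = event.get("PayloadPart", {}).get("Bytes")
    if chunk:
        print(chunk.decode("utf-8"), end="", flush=True)
```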
### Motivation
Decreasing latency is important for developers. Streaming is currently available in `OpenAI`, `ChatOpenAI`, `ChatAnthropic`, `Hugging Face Text Generation Inference`, and `Replicate`. Having `SageMaker` opens the possibilities to a wide range of developers that leverage this AWS Service.
### Your contribution
I can work on this feature to start the process with guidance from the core development team and the community. | SageMaker Endpoints with Streaming Capability | https://api.github.com/repos/langchain-ai/langchain/issues/10125/comments | 3 | 2023-09-02T13:37:32Z | 2024-02-13T16:12:54Z | https://github.com/langchain-ai/langchain/issues/10125 | 1,878,693,225 | 10,125 |
[
"langchain-ai",
"langchain"
] | ### Feature request
**Issue Description:**
I would like to propose the addition of a new Agent called something like 'ConverseUntilAnswered.' This new agent would engage in multi-step interactions with users to obtain an answer according to a required schema, and self-terminate when an answer is obtained.
**Example Use Case:**
Let's consider a scenario where a chatbot needs to collect user satisfaction ratings for its website. The 'ConverseUntilAnswered' agent could be used to facilitate this interaction. Perhaps this is achieved by the LangChain class injecting System messages to ChatGPT at specific times. Consider in the following dialogue that all System messages are injected by the LangChain class.
1. **System Message:** Your task is to ask the user how satisfied they are with this website on a scale from 1 to 10, where 10 is the highest. You are limited to a maximum of 2 interactions with the user. If you are reasonably certain of the user’s score, you are permitted to guess their answer. If you are unable to determine a numerical score, give a rating of “NA”. When you are ready to give a score, reply in the following format: ```ANSWER:<integer score>``` for example, for an answer of 5, you would reply ```ANSWER:5``` and add nothing else to your response.
2. **Chatbot:** On a scale from 1 to 10, how satisfied are you with this website?
3. **User:** Oh, it’s OK.
4. **Chatbot:** It sounds like you are moderately satisfied. Would you say your satisfaction is a 5 out of 10?
5. **User:** Maybe, it depends on the day.
6. **System Message:** You have used both of your 2 available interactions with the user. Please no longer interact with the user, but simply state your final response in the format previously specified.
7. **Chatbot:** ```ANSWER:NA```
By introducing a 'ConverseUntilAnswered' agent, we make it possible to develop an application that conducts a lengthy interview over a series of questions as defined by the programmer. For example:
1. Ask the customer their satisfaction with the website (1-10 scale)
2. Ask whether the customer is likely to recommend this website to their friends (Why or Why not)
3. Ask what feature they would like to see added to this website.
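A minimal sketch of the proposed loop, assuming a chat model interface like `ChatOpenAI` and the `ANSWER:<value>` convention from the dialogue above:

```
import re

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

def converse_until_answered(task: str, max_turns: int = 2) -> str:
    llm = ChatOpenAI(temperature=0)
    messages = [SystemMessage(content=task)]
    for _ in range(max_turns):
        reply = llm(messages)  # returns an AIMessage
        if (match := re.match(r"ANSWER:(.*)", reply.content)):
            return match.group(1).strip()
        print(reply.content)  # show the chatbot's question
        messages += [reply, HumanMessage(content=input("> "))]
    # Out of turns: inject a closing system message and force a final answer.
    messages.append(SystemMessage(
        content="You are out of interactions. Reply only in the format ANSWER:<value>."
    ))
    return re.sub(r"^ANSWER:", "", llm(messages).content).strip()
```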
### Motivation
**Justification:**
There is a need for specialized functionality for chatbots that can conduct multi-step conversations and self-terminate once a satisfactory response is obtained. Currently, the only way I know to do this is to equip an Agent with a tool that it calls when a task has been completed, but there is no guarantee the agent will ever call that tool, leaving the chatbot open to a lengthy, never-ending conversation with the user. By introducing a specialized 'ConverseUntilAnswered' agent, we can minimize calls to the LLM API and ensure closure of a line of inquiry with the user.
### Your contribution
I have only once before contributed to an OpenSource project, and this would be my first contribution to the LangChain project. I humbly ask for the community's guidance regarding possible best approaches for implementation using existing LangChain features, and what else should be considered before making a Pull Request.
| Proposal for 'ConverseUntilAnswered' Agent | https://api.github.com/repos/langchain-ai/langchain/issues/10122/comments | 2 | 2023-09-02T08:56:43Z | 2023-12-09T16:04:16Z | https://github.com/langchain-ai/langchain/issues/10122 | 1,878,495,416 | 10,122 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
There is no documentation about the `SQLRecordManager` in the [langchain.indexes.base.RecordManager](https://api.python.langchain.com/en/latest/indexes/langchain.indexes.base.RecordManager.html#langchain.indexes.base.RecordManager) documentation, although is used in an indexing example [here](https://python.langchain.com/docs/modules/data_connection/indexing#quickstart).
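For context, a minimal usage sketch mirroring the quickstart linked above:

```
from langchain.indexes import SQLRecordManager, index

record_manager = SQLRecordManager(
    "chromadb/my_docs", db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()
# record_manager can then be passed to index(...) as in the quickstart.
```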
### Idea or request for content:
Please include the documentation for `SQLRecordManager`, as well as other supported databases that may be currently available to use as a Record Manager. | DOC: Inexistent API documentation about SQLRecordManager | https://api.github.com/repos/langchain-ai/langchain/issues/10120/comments | 2 | 2023-09-02T02:38:30Z | 2023-12-30T16:06:39Z | https://github.com/langchain-ai/langchain/issues/10120 | 1,878,286,644 | 10,120 |