issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
When there are multiple GPUs available, the Ollama API provides the `main_gpu` option to specify which GPU to use as the main one. Please modify LangChain's `ChatOllama` to also expose this option.
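For reference, the underlying Ollama REST API accepts `main_gpu` inside the `options` object of a request body; below is a minimal pure-Python sketch of such a payload (the model name, endpoint shape, and GPU index here are illustrative assumptions, not LangChain API):

```python
import json

# Hypothetical request body for Ollama's /api/chat endpoint; "main_gpu"
# selects which GPU holds the main context when several are available.
payload = {
    "model": "llama2",
    "messages": [{"role": "user", "content": "Hello"}],
    "options": {"main_gpu": 1},  # assumption: use the second GPU as the main one
}

body = json.dumps(payload)
print(body)
```

A `ChatOllama` parameter would presumably just forward this value into `options`.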

### Motivation
When running multiple tasks simultaneously on a server, it is necessary to designate specific processes for Ollama.
### Your contribution
None | Please modify ChatOllama to allow the option to specify main_gpu | https://api.github.com/repos/langchain-ai/langchain/issues/15924/comments | 2 | 2024-01-12T01:38:21Z | 2024-01-12T02:59:13Z | https://github.com/langchain-ai/langchain/issues/15924 | 2,077,910,530 | 15,924 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
With langchain version == 0.1.0
The `stop` param of the `VLLM` class does not work.
For instance, this code has no effect with regard to stop words:
```python
model = VLLM(
    stop=["stop_word"],
    model=model_name,
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    temperature=0.2,
    vllm_kwargs=vllm_kwargs,
)
```
But this code works:
```python
prompt = PromptTemplate(
    template=template, input_variables=["system_message", "question"]
)
llm_chain = LLMChain(prompt=prompt, llm=model)
llm_chain.run(
    {"system_message": system_message, "question": question, "stop": ["stop_word"]}
)
```
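As a workaround while the `stop` parameter is broken, stop sequences can also be applied after generation by truncating the text at the first stop word; a minimal pure-Python sketch, independent of vLLM:

```python
def truncate_at_stop(text: str, stop: list) -> str:
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("Answer: 42 stop_word and more", ["stop_word"]))
```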
### Idea or request for content:
Clarify the documentation; otherwise this is a bug. | DOC: stop params does not work with langchain_community.llms import VLLM but work in LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/15921/comments | 1 | 2024-01-12T00:02:57Z | 2024-04-19T16:19:52Z | https://github.com/langchain-ai/langchain/issues/15921 | 2,077,798,404 | 15,921 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
metadata_field_info = [
    AttributeInfo(name="source", description="The document this chunk is from.", type="string"),
    AttributeInfo(name="origin", description="The origin the document came from. Comes from either scraped websites like TheKinection.org, Kinecta.org or database files like Bancworks. Bancworks is the higher priority.", type="string"),
    AttributeInfo(name="date_day", description="The day the document was uploaded.", type="string"),
    AttributeInfo(name="date_uploaded", description="The month year the document is current to.", type="string"),
    AttributeInfo(name="date_month", description="The month the document was uploaded.", type="string"),
    AttributeInfo(name="date_month_name", description="The month name the document was uploaded.", type="string"),
    AttributeInfo(name="date_year_long", description="The full year the document was uploaded.", type="string"),
    AttributeInfo(name="date_year_short", description="The short year the document was uploaded.", type="string"),
]

llm = ChatOpenAI(temperature=0)
vectorstore = Pinecone.from_existing_index(index_name="test", embedding=get_embedding_function())  # load existing vector store
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    "Information about when the document was created and where it was grabbed from.",
    metadata_field_info,
)
```
```python
question = "Give the minimum opening deposits for each accounts for the rate sheets in January"
retriever.get_relevant_documents(question)
```
### Description
When I ask to fetch relevant documents with the following query:
- "Give the minimum opening deposits for each accounts for the rate sheets in January"
There is no problem. However, if I make this query a little more robust...
- "Give the minimum opening deposits for each accounts for the rate sheets in January 2023"
I get a CNAME "and" error. This happens in both Pinecone and ChromaDB. Something is wrong with how the query translator is operating or I am missing some crucial step. We should be able to use multiple metadata flags at once.
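For context, combining multiple metadata conditions requires the translator to emit an explicit `$and` operator in Chroma-style filters; the query for "January 2023" would need to produce something like the following sketch (field names taken from the `AttributeInfo` list above):

```python
# Chroma-style "where" filter combining two metadata conditions with "$and".
filter_ = {
    "$and": [
        {"date_month_name": {"$eq": "January"}},
        {"date_year_long": {"$eq": "2023"}},
    ]
}

print(filter_)
```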
### System Info
Python 3.11
Langchain 0.1.0
Chroma 0.4.22
Pinecone 2.2.4
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Issues with SelfQueryRetriever and the "AND" operator failing in queries that search for multiple metadata flags | https://api.github.com/repos/langchain-ai/langchain/issues/15919/comments | 8 | 2024-01-11T23:03:50Z | 2024-06-08T16:09:06Z | https://github.com/langchain-ai/langchain/issues/15919 | 2,077,750,276 | 15,919 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
See description.
### Description
I am using `SelfQueryRetriever`. For a response JSON that contains a null query, for example:
```json
{
    "query": null,
    "filter": ...
}
```
The output parser throws OutputParserException at [line 51](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/query_constructor/base.py#L51).
OutputParserException('Parsing text\n```json\n{\n "query": null,\n "filter": "eq(\\"kategorie\\", \\"Pravo\\")"\n}\n```\n raised following error:\nobject of type \'NoneType\' has no len()')Traceback (most recent call last):
File "/home/MetaExponential/.local/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py", line 51, in parse
if len(parsed["query"]) == 0:
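The failing check can be reproduced without LangChain: `len(parsed["query"])` raises a `TypeError` when the value is `None`, whereas a guarded lookup does not. A minimal sketch of the defensive pattern (the fallback query string is an assumption):

```python
parsed = {"query": None, "filter": 'eq("kategorie", "Pravo")'}

# Direct len() on a possibly-null value reproduces the reported failure:
try:
    broken = len(parsed["query"]) == 0
except TypeError as e:
    broken = str(e)

# Guarded version: treat a null/empty query as "no query".
query = parsed.get("query") or " "
print(broken, repr(query))
```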
### System Info
absl-py==2.0.0
ai21==1.2.8
aioboto3==12.0.0
aiobotocore==2.7.0
aiohttp==3.8.4
aioitertools==0.11.0
aiosignal==1.3.1
altgraph @ file:///AppleInternal/Library/BuildRoots/9dd5efe2-7fad-11ee-b588-aa530c46a9ea/Library/Caches/com.apple.xbs/Sources/python3/altgraph-0.17.2-py2.py3-none-any.whl
annotated-types==0.6.0
annoy==1.17.3
antlr4-python3-runtime==4.9.3
anyio==3.7.1
argilla==1.7.0
astunparse==1.6.3
async-generator==1.10
async-timeout==4.0.2
attrs==23.1.0
Babel==2.12.1
backoff==2.2.1
bcrypt==4.0.1
beautifulsoup4==4.12.2
blinker==1.6.2
boto3==1.28.64
botocore==1.31.64
build==0.10.0
CacheControl==0.12.11
cachetools==5.3.1
camel-converter==3.1.0
certifi==2022.12.7
cffi==1.15.1
cfgv==3.3.1
chardet==5.2.0
charset-normalizer==3.1.0
Chroma==0.2.0
chroma-hnswlib==0.7.3
chromadb==0.4.13
cleo==2.0.1
click==8.1.7
clickhouse-connect==0.6.18
CoffeeScript==2.0.3
cohere==4.31
coloredlogs==15.0.1
commonmark==0.9.1
contourpy==1.0.7
crashtest==0.4.1
cryptography==40.0.2
cssselect==1.2.0
cycler==0.11.0
dataclasses-json==0.5.7
datasets==2.12.0
decorator==5.1.1
Deprecated==1.2.13
deprecation==2.1.0
dill==0.3.7
distlib==0.3.6
distro==1.8.0
dnspython==2.3.0
docutils==0.19
duckdb==0.7.1
dulwich==0.21.5
effdet==0.3.0
elastic-transport==8.10.0
elasticsearch==7.13.4
et-xmlfile==1.1.0
exceptiongroup==1.1.1
facebook-sdk==3.1.0
facebooktoken==0.0.1
faiss-cpu==1.7.4
fastapi==0.103.2
fastavro==1.8.2
feedfinder2==0.0.4
feedparser==6.0.11
filelock==3.12.0
Flask==2.3.2
Flask-Cors==4.0.0
Flask-Limiter==3.4.1
Flask-Mail==0.9.1
flatbuffers==23.5.26
fonttools==4.39.4
frozenlist==1.3.3
fsspec==2023.6.0
future @ file:///AppleInternal/Library/BuildRoots/9dd5efe2-7fad-11ee-b588-aa530c46a9ea/Library/Caches/com.apple.xbs/Sources/python3/future-0.18.2-py3-none-any.whl
fuzzywuzzy==0.18.0
gast==0.5.4
google-ai-generativelanguage==0.4.0
google-api-core==2.12.0
google-auth==2.23.3
google-auth-oauthlib==1.0.0
google-cloud-aiplatform==1.38.1
google-cloud-bigquery==3.12.0
google-cloud-core==2.3.3
google-cloud-resource-manager==1.10.4
google-cloud-storage==2.12.0
google-crc32c==1.5.0
google-generativeai==0.3.2
google-pasta==0.2.0
google-resumable-media==2.6.0
googleapis-common-protos==1.56.4
grpc-gateway-protoc-gen-openapiv2==0.1.0
grpc-google-iam-v1==0.12.6
grpcio==1.59.0
grpcio-status==1.59.0
grpcio-tools==1.59.0
h11==0.14.0
h2==4.1.0
h5py==3.10.0
hnswlib==0.7.0
hpack==4.0.0
html5lib==1.1
httpcore==0.16.3
httptools==0.5.0
httpx==0.23.3
huggingface-hub==0.14.1
humanfriendly==10.0
humbug==0.3.2
hyperframe==6.0.1
identify==2.5.23
idna==3.4
importlib-metadata==6.6.0
importlib-resources==5.12.0
iniconfig==2.0.0
install==1.3.5
installer==0.7.0
iopath==0.1.10
itsdangerous==2.1.2
jaraco.classes==3.2.3
jieba3k==0.35.1
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
jq==1.6.0
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.17.3
jwt==1.3.1
keras==2.14.0
keyring==23.13.1
kiwisolver==1.4.4
lancedb==0.3.2
langchain==0.1.0
langchain-community==0.0.11
langchain-core==0.1.9
langchain-google-genai==0.0.5
langsmith==0.0.77
lark==1.1.7
layoutparser==0.3.4
Levenshtein==0.23.0
libclang==16.0.6
libdeeplake==0.0.84
limits==3.5.0
llama-cpp-python==0.1.39
lockfile==0.12.2
loguru==0.7.0
lxml==4.9.2
lz4==4.3.2
macholib @ file:///AppleInternal/Library/BuildRoots/9dd5efe2-7fad-11ee-b588-aa530c46a9ea/Library/Caches/com.apple.xbs/Sources/python3/macholib-1.15.2-py2.py3-none-any.whl
Mako==1.2.4
Markdown==3.4.3
markdown2==2.4.8
MarkupSafe==2.1.2
marshmallow==3.19.0
marshmallow-enum==1.5.1
matplotlib==3.7.1
meilisearch==0.28.4
ml-dtypes==0.2.0
monotonic==1.6
more-itertools==9.1.0
mpmath==1.3.0
msg-parser==1.2.0
msgpack==1.0.5
multidict==6.0.4
multiprocess==0.70.15
mypy-extensions==1.0.0
nest-asyncio==1.5.8
networkx==3.1
newspaper3k==0.2.8
nltk==3.8.1
nodeenv==1.7.0
numexpr==2.8.4
numpy==1.26.1
oauthlib==3.2.2
olefile==0.46
omegaconf==2.3.0
onnxruntime==1.14.1
openai==1.3.5
openapi-schema-pydantic==1.2.4
opencv-python==4.7.0.72
openpyxl==3.1.2
opt-einsum==3.3.0
ordered-set==4.1.0
outcome==1.2.0
overrides==7.4.0
packaging==23.2
pandas==1.5.3
pathos==0.3.1
pdf2image==1.16.3
pdfminer.six==20221105
pdfplumber==0.9.0
pexpect==4.8.0
Pillow==9.5.0
pinecone-client==2.2.4
pkginfo==1.9.6
platformdirs==2.6.2
Plim==1.0.0
pluggy==1.3.0
poetry==1.4.2
poetry-core==1.5.2
poetry-plugin-export==1.3.1
poppler-utils==0.1.0
portalocker==2.7.0
posthog==3.0.1
pox==0.3.3
ppft==1.7.6.7
pre-commit==3.2.2
proto-plus==1.22.3
protobuf==4.24.4
ptyprocess==0.7.0
pulsar-client==3.3.0
py==1.11.0
pyarrow==12.0.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycocotools==2.0.6
pycparser==2.21
pydantic==2.4.2
pydantic_core==2.10.1
PyExecJS==1.5.1
pyfb==0.6.0
Pygments==2.15.1
PyJWT==2.7.0
pylance==0.8.7
PyMuPDF==1.22.3
pypandoc==1.11
pyparsing==3.0.9
pypdf==3.8.1
PyPDF2==3.0.1
PyPika==0.48.9
pyproject_hooks==1.0.0
pyrsistent==0.19.3
pyScss==1.4.0
PySocks==1.7.1
pytesseract==0.3.10
pytest==7.4.4
python-dateutil==2.8.2
python-docx==0.8.11
python-dotenv==1.0.0
python-Levenshtein==0.23.0
python-magic==0.4.27
python-multipart==0.0.6
python-pptx==0.6.21
pytz==2023.3
PyYAML==6.0
qdrant-client==1.6.4
rank-bm25==0.2.2
rapidfuzz==3.4.0
ratelimiter==1.2.0.post0
readability-lxml==0.8.1
redis==5.0.1
regex==2023.3.23
requests==2.31.0
requests-file==1.5.1
requests-oauthlib==1.3.1
requests-toolbelt==0.10.1
responses==0.18.0
retry==0.9.2
rfc3986==1.5.0
rich==13.0.1
rsa==4.9
s3transfer==0.7.0
safetensors==0.3.1
scikit-learn==1.2.2
scipy==1.10.1
selenium==4.9.1
semver==3.0.2
sentence-transformers==2.2.2
sentencepiece==0.1.98
sgmllib3k==1.0.0
shapely==2.0.2
shellingham==1.5.0.post1
simplejson==3.19.1
six @ file:///AppleInternal/Library/BuildRoots/9dd5efe2-7fad-11ee-b588-aa530c46a9ea/Library/Caches/com.apple.xbs/Sources/python3/six-1.15.0-py2.py3-none-any.whl
snakeviz==2.2.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.5
SQLAlchemy==2.0.16
sqlean.py==0.21.8.4
starlette==0.27.0
stylus==0.1.2
sympy==1.11.1
tenacity==8.2.2
tensorboard==2.14.1
tensorboard-data-server==0.7.2
tensorflow==2.14.0
tensorflow-estimator==2.14.0
tensorflow-io-gcs-filesystem==0.34.0
tensorflow-macos==2.14.0
termcolor==2.3.0
threadpoolctl==3.1.0
tiktoken==0.4.0
timm==0.9.1
tinysegmenter==0.3
tldextract==5.1.1
tokenizers==0.13.3
tomli==2.0.1
tomlkit==0.11.8
torch==2.1.0
torchvision==0.15.1
tornado==6.2
tqdm==4.65.0
transformers==4.28.1
trio==0.22.0
trio-websocket==0.10.3
trove-classifiers==2023.5.2
typer==0.9.0
typing-inspect==0.8.0
typing_extensions==4.8.0
tzdata==2023.3
unstructured==0.6.6
unstructured-inference==0.4.4
urllib3==1.26.15
uvicorn==0.22.0
uvloop==0.17.0
virtualenv==20.21.1
Wand==0.6.11
watchfiles==0.19.0
webencodings==0.5.1
websockets==11.0.2
Werkzeug==2.3.6
wrapt==1.14.1
wsproto==1.2.0
xattr==0.10.1
XlsxWriter==3.1.0
xxhash==3.2.0
yarl==1.9.2
zipp==3.15.0
zstandard==0.21.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | query_constructor throws OutputParserException is query is null | https://api.github.com/repos/langchain-ai/langchain/issues/15914/comments | 1 | 2024-01-11T21:54:11Z | 2024-04-18T16:21:30Z | https://github.com/langchain-ai/langchain/issues/15914 | 2,077,661,061 | 15,914 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Make it easy to use `tokenizers` for HF tokenizers instead of `transformers`
### Motivation
`tokenizers` has far fewer dependencies | Add ability to use `tokenizers` instead of `transformers` for HF tokenizers | https://api.github.com/repos/langchain-ai/langchain/issues/15902/comments | 1 | 2024-01-11T18:42:00Z | 2024-04-18T16:30:29Z | https://github.com/langchain-ai/langchain/issues/15902 | 2,077,368,838 | 15,902 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Tool for OpenAI image generation API using openai's v1 sdk https://platform.openai.com/docs/guides/images
### Motivation
Useful for image-gen applications with language interfaces | Integration for OpenAI image gen with v1 sdk | https://api.github.com/repos/langchain-ai/langchain/issues/15901/comments | 3 | 2024-01-11T18:37:41Z | 2024-06-01T00:19:27Z | https://github.com/langchain-ai/langchain/issues/15901 | 2,077,362,764 | 15,901 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Tool for OpenAI speech-to-text (using openai v1) https://platform.openai.com/docs/guides/speech-to-text
### Motivation
Useful for building voice interfaces | OpenAI speech-to-text API integration | https://api.github.com/repos/langchain-ai/langchain/issues/15900/comments | 2 | 2024-01-11T18:35:31Z | 2024-06-15T16:06:57Z | https://github.com/langchain-ai/langchain/issues/15900 | 2,077,359,821 | 15,900 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
llm.invoke("how can langsmith help with testing?")
### Description
site-packages\langchain_openai\chat_models\base.py", line 454, in _create_chat_result
response = response.dict()
AttributeError: 'str' object has no attribute 'dict'
### System Info
Python 3.10.12
langchain 0.1.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | The sample code of version 0.1.0 of the official website cannot be executed. | https://api.github.com/repos/langchain-ai/langchain/issues/15888/comments | 13 | 2024-01-11T15:14:23Z | 2024-07-14T13:05:49Z | https://github.com/langchain-ai/langchain/issues/15888 | 2,076,928,679 | 15,888 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code:
```python
from arango import ArangoClient
from langchain_community.graphs import ArangoGraph
from langchain.chains import ArangoGraphQAChain
# Initialize the ArangoDB client.
client = ArangoClient(hosts='http://localhost:8529')
# Connect to Database
db = client.db('mydb', username='myuser', password='mypass')
# Instantiate the ArangoDB-LangChain Graph
graph = ArangoGraph(db)
```
Produces the following exception:
Traceback (most recent call last):
File "/Users/vgreen/working_dir/xpm/graph_qa_01.py", line 19, in <module>
graph = ArangoGraph(db)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/langchain_community/graphs/arangodb_graph.py", line 23, in __init__
self.set_db(db)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/langchain_community/graphs/arangodb_graph.py", line 42, in set_db
self.set_schema()
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/langchain_community/graphs/arangodb_graph.py", line 49, in set_schema
self.__schema = self.generate_schema() if schema is None else schema
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/langchain_community/graphs/arangodb_graph.py", line 96, in generate_schema
for doc in self.__db.aql.execute(aql):
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/arango/aql.py", line 453, in execute
return self._execute(request, response_handler)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/arango/api.py", line 74, in _execute
return self._executor.execute(request, response_handler)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/arango/executor.py", line 66, in execute
return response_handler(resp)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/arango/aql.py", line 450, in response_handler
raise AQLQueryExecuteError(resp, request)
arango.exceptions.AQLQueryExecuteError: [HTTP 400][ERR 1501] AQL: syntax error, unexpected FOR declaration near 'for
LIMIT 1
...' at position 2:37 (while parsing)
### Description
I'm trying to use LangChain to connect to an ArangoDB graph database to perform question answering, and when attempting to instantiate an `ArangoGraph` object it throws an `AQLQueryExecuteError`.
### System Info
Langchain version: v0.1.0
Platform: Mac OS Sonoma
Python version: 3.10 (venv)
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Instantiating ArangoGraph(db) produces [HTTP 400][ERR 1501] AQL: syntax error | https://api.github.com/repos/langchain-ai/langchain/issues/15886/comments | 1 | 2024-01-11T14:50:40Z | 2024-04-18T16:21:27Z | https://github.com/langchain-ai/langchain/issues/15886 | 2,076,869,945 | 15,886 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from transformers import AutoTokenizer, pipeline
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain_community.llms import HuggingFacePipeline

model_name = "Intel/dynamic_tinybert"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding=True, truncation=True, max_length=512)
question_answerer = pipeline(
    "question-answering",
    model=model_name,
    tokenizer=tokenizer,
    return_tensors="pt",
)
llm = HuggingFacePipeline(
    pipeline=question_answerer,
    model_kwargs={"temperature": 0.7, "max_length": 50},
)

prompt_template = """
As literature critic answer me
question: {question}
context: {context}
"""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,  # retriever is defined elsewhere
    chain_type_kwargs={"prompt": prompt},
)

question = "Who is Hamlet ?"
answer = chain.invoke({"query": question})  # issue here <--
print(answer)
```
### Description
I tried to implement a simple `RetrievalQA` over a LangChain FAISS vector store, but I hit the following assertion error:

`argument needs to be of type (SquadExample, dict)`

The failure occurs on this line:

`answer = chain.invoke({"query": question})`

Thank you in advance.
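For context, the assertion comes from the Hugging Face question-answering pipeline, which expects its input as a `SquadExample` or a dict with `question` and `context` keys, while `RetrievalQA` hands the `HuggingFacePipeline` wrapper a single prompt string. A sketch of the shape the pipeline accepts:

```python
# Shape accepted by a Hugging Face "question-answering" pipeline:
qa_input = {
    "question": "Who is Hamlet?",
    "context": "Hamlet is the prince of Denmark in Shakespeare's tragedy.",
}

# A bare prompt string, like the one RetrievalQA builds, fails the
# (SquadExample, dict) type check inside the pipeline.
prompt_string = "question: Who is Hamlet?\ncontext: ..."
print(isinstance(qa_input, dict), isinstance(prompt_string, dict))
```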
### System Info
Windows 10, python 3.11, langchain 0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | argument needs to be of type (SquadExample, dict) | https://api.github.com/repos/langchain-ai/langchain/issues/15884/comments | 18 | 2024-01-11T14:29:24Z | 2024-06-08T16:09:01Z | https://github.com/langchain-ai/langchain/issues/15884 | 2,076,818,792 | 15,884 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
- With `LLM`:
```py
import os
from typing import Any, List

import requests
from langchain.callbacks.base import Callbacks
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from langchain.prompts import PromptTemplate
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM

required_envs = ["API_BASE", "API_KEY", "DEPLOYMENT_NAME"]
for env in required_envs:
    if env not in os.environ:
        raise ValueError(f"Missing required environment variable: {env}")


class CustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "CustomLLM"

    def _call(
        self,
        prompt: str,
        stop: List[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        """Call the API with the given prompt and return the result."""
        self._api_endpoint: str = str(os.getenv("API_BASE"))
        self._api_key: str = str(os.getenv("API_KEY"))
        self._deployment_name: str = str(os.getenv("DEPLOYMENT_NAME"))
        result = requests.post(
            f"{self._api_endpoint}/llm/deployments/{self._deployment_name}/chat/completions?api-version=2023-05-15",
            headers={
                "Content-Type": "application/json",
                "api-key": self._api_key,
            },
            json={
                "messages": prompt,
                "temperature": 0,
                "top_p": 0,
                "model": "gpt-4-32k",
            },
        )
        if result.status_code != 200:
            raise RuntimeError(
                f"Failed to call API: {result.status_code} {result.content}"
            )
        else:
            return result.json()["choices"][0]["message"]


def get_chain(prompt: PromptTemplate, callbacks: Callbacks = []) -> Chain:
    """
    This function initializes and returns an LLMChain with a given prompt and callbacks.

    Args:
        prompt (str): The prompt to initialize the LLMChain with.
        callbacks (Callbacks): Langchain callbacks fo

    Returns:
        Chain: An instance of LLMChain.
    """
    llm = CustomLLM()
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
    return chain


if __name__ == "__main__":
    prompt_template = """
    You are an insurance agent. You are provided with instructions, and you must provide an answer.
    Question: {question}
    """
    prompt = PromptTemplate(
        template=prompt_template,
        input_variables=["question"],
    )
    chain = get_chain(prompt)
    result = chain.invoke({"question": "What is the best insurance policy for me?"})
    print(result)
```
- With `Runnable`:
```py
import os
from typing import Any, List, Optional

import requests
from langchain.callbacks.base import Callbacks
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from langchain.prompts import PromptTemplate
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain.schema.runnable import Runnable, RunnableConfig
from langchain.schema.language_model import LanguageModelInput

required_envs = ["API_BASE", "API_KEY", "DEPLOYMENT_NAME"]
for env in required_envs:
    if env not in os.environ:
        raise ValueError(f"Missing required environment variable: {env}")


class CustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "CustomLLM"

    def invoke(
        self, input: LanguageModelInput, config: RunnableConfig | None = None
    ) -> str:
        return super().invoke(input)

    def _call(
        self,
        prompt: str,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        """Call the API with the given prompt and return the result."""
        self._api_endpoint: str = str(os.getenv("OPENAI_API_BASE"))
        self._api_key: str = str(os.getenv("OPENAI_API_BASE"))
        self._deployment_name: str = str(os.getenv("DEPLOYMENT_NAME"))
        result = requests.post(
            f"{self._api_endpoint}/llm/deployments/{self._deployment_name}/chat/completions?api-version=2023-05-15",
            headers={
                "Content-Type": "application/json",
                "api-key": self._api_key,
            },
            json={
                "messages": prompt,
                "temperature": 0,
                "top_p": 0,
                "model": "gpt-4-32k",
            },
        )
        if result.status_code != 200:
            raise RuntimeError(
                f"Failed to call API: {result.status_code} {result.content}"
            )
        else:
            return result.json()["choices"][0]["message"]


def get_chain(prompt: PromptTemplate, callbacks: Callbacks = []) -> Chain:
    """
    This function initializes and returns an LLMChain with a given prompt and callbacks.

    Args:
        prompt (str): The prompt to initialize the LLMChain with.
        callbacks (Callbacks): Langchain callbacks fo

    Returns:
        Chain: An instance of LLMChain.
    """
    llm = CustomLLM()
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
    return chain


if __name__ == "__main__":
    prompt_template = """
    You are an insurance agent. You are provided with instructions, and you must provide an answer.
    Question: {question}
    """
    prompt = PromptTemplate(
        template=prompt_template,
        input_variables=["question"],
    )
    chain = get_chain(prompt)
    result = chain.invoke({"question": "What is the best insurance policy for me?"})
    print(result)
```
### Description
Hi!
I'm not exactly sure whether this is a bug or expected behavior.
I'm in a situation where I cannot use the LLM directly, and instead need to use APIs that interact with the LLM itself.
I've therefore decided to create a `CustomLLM` using the documentation [here](https://python.langchain.com/docs/modules/model_io/llms/custom_llm) so I can keep leveraging `Chain` features.
Here are the problems I've been facing:
- When using the `LLM` class as the Base class of my `CustomLLM` class, I run into the following error:
```
Traceback (most recent call last):
File "custom_llm.py", line 83, in <module>
chain = get_chain(prompt)
^^^^^^^^^^^^^^^^^
File "custom_llm.py", line 70, in get_chain
chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File ".venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
- Following this error, I've decided to modify the class, so it extends from `Runnable` (cf second code snippet in the example) but when running the new code I get this:
```
Traceback (most recent call last):
File "utils/custom_llm.py", line 90, in <module>
result = chain.invoke({"question": "What is the best insurance policy for me?"})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 87, in invoke
return self(
^^^^^
File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File ".venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 108, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 127, in generate
results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 2658, in batch
return self.bound.batch(
^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 321, in batch
return cast(List[Output], [invoke(inputs[0], configs[0])])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 317, in invoke
return self.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: CustomLLM.invoke() got an unexpected keyword argument 'stop'
```
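The second error can be reproduced with plain Python: overriding a method with a narrower signature breaks any caller that passes extra keyword arguments such as `stop`. A minimal sketch, with no LangChain involved:

```python
class Base:
    def invoke(self, input, config=None, **kwargs):
        return f"base:{input}"

class Narrow(Base):
    # The override drops **kwargs, so callers passing stop=... now fail.
    def invoke(self, input, config=None):
        return super().invoke(input, config)

try:
    Narrow().invoke("hi", stop=["\n"])
    outcome = "ok"
except TypeError as e:
    outcome = str(e)
print(outcome)
```

This is why the `Runnable`-based override fails when `LLMChain` internally calls `invoke(..., stop=stop)`.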
### System Info
langchain==0.0.329
langchain-core==0.1.9
Platform: MacOS 13.6.2
Python: 3.11
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | CustomLLM cannot be used to build `Chains` when using `LLM` or `Runnable` | https://api.github.com/repos/langchain-ai/langchain/issues/15880/comments | 5 | 2024-01-11T13:49:44Z | 2024-06-05T07:44:12Z | https://github.com/langchain-ai/langchain/issues/15880 | 2,076,708,819 | 15,880 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import os
import sys
import datetime

import openai
import panel as pn  # GUI
from dotenv import load_dotenv, find_dotenv

pn.extension()
_ = load_dotenv(find_dotenv())  # read local .env file

current_date = datetime.datetime.now().date()
if current_date < datetime.date(2023, 9, 2):
    llm_name = "gpt-3.5-turbo-0301"
else:
    llm_name = "gpt-3.5-turbo"
print(llm_name)

from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import TokenTextSplitter

docs = PyPDFLoader("sameer_mahajan.pdf").load()
text_splitter = TokenTextSplitter(chunk_size=1, chunk_overlap=0)
splits = text_splitter.split_documents(docs)

embedding = OpenAIEmbeddings(
    deployment="embeddings",
    openai_api_key=os.environ["OPENAI_API_KEY"],
    openai_api_base=os.environ["OPENAI_ENDPOINT"],
    openai_api_version=os.environ["OPENAI_DEPLOYMENT_VERSION"],
    openai_api_type="azure",
    chunk_size=1,
)

vectordb = Chroma.from_documents(
    documents=splits,
    embedding=embedding,
    persist_directory=persist_directory,  # defined elsewhere
)
```
### Description
I expect `vectordb` to persist for my chatbot; however, I get the following exception:
`NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}`
### System Info
python 3.10.2
embedding model text-embedding-ada-002
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Resource not found error trying to use chromadb with Azure Open AI | https://api.github.com/repos/langchain-ai/langchain/issues/15878/comments | 7 | 2024-01-11T13:05:25Z | 2024-06-01T00:07:38Z | https://github.com/langchain-ai/langchain/issues/15878 | 2,076,601,164 | 15,878 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vector = embeddings.embed_query("hello, world!")
```
### Description
langchain_google_genai._common.GoogleGenerativeAIError: Error embedding content: Deadline of 60.0s exceeded while calling target function
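Until the root cause is addressed, a common mitigation for transient deadline errors is retrying with backoff. A minimal, hedged sketch (the helper and its retry policy are illustrative, not part of langchain-google-genai; `embeddings.embed_query` is the call you would wrap):

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))

# Usage (assumes `embeddings` from the snippet above):
# vector = with_retries(lambda: embeddings.embed_query("hello, world!"))
```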
### System Info
langchain 0.1.0
langchain-community 0.0.10
langchain-core 0.1.8
langchain-google-genai 0.0.5
langchain-openai 0.0.2
langchainhub 0.1.14
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | langchain_google_genai._common.GoogleGenerativeAIError: Error embedding content: Deadline of 60.0s exceeded while calling target function | https://api.github.com/repos/langchain-ai/langchain/issues/15876/comments | 1 | 2024-01-11T12:44:17Z | 2024-04-18T16:07:30Z | https://github.com/langchain-ai/langchain/issues/15876 | 2,076,546,582 | 15,876 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Here is my custom parser code:
```
def parse(output):
    # If no function was invoked, return to user
    if "function_call" not in output.additional_kwargs:
        return AgentFinish(
            return_values={"answer": output.content, "sources": ""},
            log=output.content,
        )

    # Parse out the function call
    function_call = output.additional_kwargs["function_call"]
    name = function_call["name"]
    inputs = json.loads(function_call["arguments"])

    # If the Response function was invoked, return to the user with the function inputs
    if name == "Response":
        return AgentFinish(return_values=inputs, log=str(function_call))
    # Otherwise, return an agent action
    else:
        return AgentActionMessageLog(
            tool=name, tool_input=inputs, log="", message_log=[output]
        )
```
Here is my agent code:
```
agent = (
    {
        "input": itemgetter("input"),
        # Format agent scratchpad from intermediate steps
        "agent_scratchpad": lambda x: format_to_openai_functions(
            x["intermediate_steps"]),
        "history": lambda x: x["history"],
    }
    | prompt
    | condense_prompt
    | llm_with_tools
    | parse
)

agent_executor = AgentExecutor(
    tools=[retriever_tool],
    agent=agent,
    memory=st.session_state.agentmemory,
    verbose=True,
    handle_parsing_errors=True,
)
```
### Description
I get the following error when I call `agent_executor.invoke`: `An error occurred: Invalid control character at: line 2 column 129 (char 130)`. It only happens when my retriever returns special characters such as "•" (bullet points).

I am using the custom parser above. How can I adapt the output-parser solution from the link below to a custom parser — for example, by passing `strict=False` to the JSON parsing? Or is there another solution?
https://github.com/langchain-ai/langchain/issues/9460
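For reference, `json.loads` does accept `strict=False`, which permits raw control characters (such as the newlines that bullet-formatted retriever output can introduce) inside JSON strings. A hedged sketch of the adjustment — only the `json.loads` call in the custom `parse` function would change:

```python
import json

# strict=True (the default) rejects raw control characters inside strings:
raw = '{"answer": "first line\nsecond line"}'  # contains a real newline

try:
    json.loads(raw)
except json.JSONDecodeError as e:
    print("strict parse failed:", e.msg)

# strict=False accepts them, so in the custom parser you could write:
#     inputs = json.loads(function_call["arguments"], strict=False)
inputs = json.loads(raw, strict=False)
print(inputs["answer"])
```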
### System Info
langchain==0.0.315
pydantic==2.5.2
streamlit==1.29.0
openai==0.28
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | agent_executor.invoke method: An error occurred: Invalid control character at: line y column xxx (char xxx) | https://api.github.com/repos/langchain-ai/langchain/issues/15872/comments | 2 | 2024-01-11T08:29:48Z | 2024-01-11T09:11:37Z | https://github.com/langchain-ai/langchain/issues/15872 | 2,076,039,740 | 15,872 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableMap
from langchain.schema.messages import HumanMessage, SystemMessage
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.utils.openai_functions import convert_pydantic_to_openai_function
from typing import List
from pydantic import BaseModel, Field

class PopulationSearch(BaseModel):
    """Get the population size based on the given city"""
    city: str = Field(description="city")

population_function = convert_pydantic_to_openai_function(PopulationSearch)

model = ChatOpenAI(
    temperature=0,
    model_name="gpt4-turbo"
)

response = model.invoke("What is the population of Wuhan?", functions=[population_function])
print(response.additional_kwargs)
```
### Description
The result is `{}`. Why is the value empty?
The screenshots are as follows:

### System Info
Python version is 3.11
LangChain version is 0.0.343
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | No function called | https://api.github.com/repos/langchain-ai/langchain/issues/15871/comments | 2 | 2024-01-11T08:24:30Z | 2024-01-11T18:50:01Z | https://github.com/langchain-ai/langchain/issues/15871 | 2,076,031,634 | 15,871 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="<my confluence link>", username="<my user name>",
    api_key="<my token>"
)
documents = loader.load(space_key="<my space>", include_attachments=True, limit=1, max_pages=1)
```
### Description
I am trying to load all Confluence pages using `ConfluenceLoader`. I expect to get all the pages, but instead I get `AttributeError: 'str' object has no attribute 'get'`.
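This traceback typically means some code treated an API payload entry as a dict when the Confluence API returned a plain string (attachment handling is a frequent culprit). Until it's fixed upstream, two hedged workarounds: pass `include_attachments=False`, or apply a generic defensive lookup when post-processing payloads. The helper below is illustrative only — it is not `ConfluenceLoader`'s actual code:

```python
def safe_get(obj, key, default=None):
    """Return obj[key] only when obj is a dict; otherwise fall back to default."""
    return obj.get(key, default) if isinstance(obj, dict) else default

# Mixed payload of the kind that triggers the AttributeError:
entries = [{"title": "Page 1"}, "unexpected-string", {"title": "Page 2"}]
titles = [safe_get(e, "title") for e in entries if safe_get(e, "title")]
print(titles)  # only the dict entries survive
```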
### System Info
python version 3.10.2
langchain version 0.0.345
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ConfluenceLoader.load giving AttributeError: 'str' object has no attribute 'get' while reading all documents from space | https://api.github.com/repos/langchain-ai/langchain/issues/15869/comments | 11 | 2024-01-11T06:48:42Z | 2024-07-03T16:05:07Z | https://github.com/langchain-ai/langchain/issues/15869 | 2,075,891,242 | 15,869 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I noticed that the `MongoDBChatMessageHistory` class attempts to create an index on every connection, causing each request to take longer than usual. Index creation is a one-time process, so there is no need to create the index every time. By default, index creation is enabled. To address this, add an additional parameter, `index_creation`. If `index_creation` is set to `False`, this step should be skipped.
Current Code:
```
def __init__(
    self,
    connection_string: str,
    session_id: str,
    database_name: str = DEFAULT_DBNAME,
    collection_name: str = DEFAULT_COLLECTION_NAME,
):
    from pymongo import MongoClient, errors

    self.connection_string = connection_string
    self.session_id = session_id
    self.database_name = database_name
    self.collection_name = collection_name

    try:
        self.client: MongoClient = MongoClient(connection_string)
    except errors.ConnectionFailure as error:
        logger.error(error)

    self.db = self.client[database_name]
    self.collection = self.db[collection_name]
    self.collection.create_index("SessionId")
```
Proposed modification:
```
def __init__(
    self,
    connection_string: str,
    session_id: str,
    database_name: str = DEFAULT_DBNAME,
    collection_name: str = DEFAULT_COLLECTION_NAME,
    index_creation: bool = True,  # new argument
):
    from pymongo import MongoClient, errors

    self.connection_string = connection_string
    self.session_id = session_id
    self.database_name = database_name
    self.collection_name = collection_name

    try:
        self.client: MongoClient = MongoClient(connection_string)
    except errors.ConnectionFailure as error:
        logger.error(error)

    self.db = self.client[database_name]
    self.collection = self.db[collection_name]
    if index_creation:  # conditional index creation
        self.collection.create_index("SessionId")
```
### Motivation
Developers can make this modification themselves, but if it is built in, the feature ships with the package.
### Your contribution
Yes, I can do this — the proposed modification is shown above.
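The behaviour of the proposed `index_creation` flag can be illustrated in isolation with a stub collection (a sketch of the intended semantics only — `FakeCollection` and `init_history` are stand-ins, not the real pymongo or LangChain classes):

```python
class FakeCollection:
    """Stands in for a pymongo collection; records create_index calls."""
    def __init__(self):
        self.indexes = []

    def create_index(self, field):
        self.indexes.append(field)

def init_history(collection, index_creation=True):
    # Mirrors the tail of the proposed MongoDBChatMessageHistory.__init__.
    if index_creation:
        collection.create_index("SessionId")

first = FakeCollection()
init_history(first)                           # first connection: index is created
reconnect = FakeCollection()
init_history(reconnect, index_creation=False)  # later connections skip the step
print(first.indexes, reconnect.indexes)
```

With this in place, only the very first deployment step would pay the index-creation cost; all subsequent requests would pass `index_creation=False`.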
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
```python
import os

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.qdrant import Qdrant
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, \
    HumanMessagePromptTemplate
from qdrant_client import QdrantClient

os.environ['OPENAI_API_KEY'] = "mykey"

client = QdrantClient(host="192.168.0.313", port=6333)
COLLECTION_NAME = "embed"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

vectorstore = Qdrant.from_documents(
    client=client,
    collection_name=COLLECTION_NAME,
    embeddings=embeddings,
    search_params={"metric_type": "cosine"},
)

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "you are robot."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 100
retriever = vectorstore.as_retriever()

memory = ConversationTokenBufferMemory(
    llm=llm,
    prompt=prompt,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
)

chat_history = []
while True:
    memory.load_memory_variables({})
    question = input('ask:')
    result = qa.run({'question': question, 'chat_history': chat_history})
    print(result)
    chat_history.append([f'User: {question}', f'Ai: {result}'])
    print(chat_history)
    st_history = ' '.join(map(str, chat_history))
    res = embeddings.embed_query(st_history)
    print(f'ok: {res[:4]}...')
    if question.lower() == 'bye':
        break
```
### Idea or request for content:
embeddings
cosine | How can I store chat history in a database and retrieve results using cosine similarity when querying the database?" | https://api.github.com/repos/langchain-ai/langchain/issues/15866/comments | 1 | 2024-01-11T05:20:34Z | 2024-04-18T16:30:26Z | https://github.com/langchain-ai/langchain/issues/15866 | 2,075,774,096 | 15,866 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import os
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
os.environ["OPENAI_API_KEY"] = "*************************************"
os.environ["TAVILY_API_KEY"] = "*************************************"
search = TavilySearchResults()
tools = [search]
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0,
    verbose=True,
)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is the weather in SF?"})
```
### Description
Running the example code above will cause an error for both `gpt-3.5-turbo` and `gpt-4-0613`. The error message is:
```
openai.NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions (request id: XXXXX)', 'type': 'invalid_request_error', 'param': '', 'code': None}}
```
I searched for this error and found a solution, which involves adding the parameter `api-version="2023-07-01-preview"`. However, I couldn't find a place to input this parameter. After reading through some source code, I finally figured out how:
```python
function_obj = agent.middle[1]
if function_obj.kwargs:
    function_obj.kwargs["extra_query"] = {"api-version": "2023-07-01-preview"}
else:
    function_obj.kwargs = {"extra_query": {"api-version": "2023-07-01-preview"}}
```
This led to another error:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null. (request id: XXXXX", 'type': 'invalid_request_error', 'param': 'messages.[2].content', 'code': None}}
```
After some debugging, I found the reason. Inside `langchain_openai`, there is a function `_convert_message_to_dict`:
```python
elif isinstance(message, AIMessage):
    message_dict = {"role": "assistant", "content": message.content}
    if "function_call" in message.additional_kwargs:
        message_dict["function_call"] = message.additional_kwargs["function_call"]
        # If function call only, content is None not empty string
        if message_dict["content"] == "":
            message_dict["content"] = None
```
This code turns the search result into an `AIMessage` and, for some reason, does not allow the content to be an empty string, so it makes it `None`. However, the OpenAI API does not accept this. To make it work, I had to rewrite the code:
```python
def _convert_message_to_dict(message: BaseMessage) -> dict:
    ...
    elif isinstance(message, AIMessage):
        message_dict = {"role": "assistant", "content": message.content}
        if "function_call" in message.additional_kwargs:
            message_dict["function_call"] = message.additional_kwargs["function_call"]
            # If function call only, content is None not empty string
            # ATTENTION: CHANGE HERE
            # if message_dict["content"] == "":
            #     message_dict["content"] = None
        if "tool_calls" in message.additional_kwargs:
            message_dict["tool_calls"] = message.additional_kwargs["tool_calls"]
            # If tool calls only, content is None not empty string
            if message_dict["content"] == "":
                message_dict["content"] = None
    ...
    return message_dict

def new_create_message_dicts(
    self, messages: List[BaseMessage], stop: Optional[List[str]]
) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
    params = self._default_params
    if stop is not None:
        if "stop" in params:
            raise ValueError("`stop` found in both the input and default params.")
        params["stop"] = stop
    message_dicts = [_convert_message_to_dict(m) for m in messages]
    return message_dicts, params

llm.Config.extra = Extra.allow
llm._create_message_dicts = partial(new_create_message_dicts, llm)
```
I mean, really? I'm not sure what I did wrong, but it's certainly not easy to make it work. If it's not a bug, I hope to get a simpler and more elegant solution.
### System Info
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
langchain-openai==0.0.2
langchainhub==0.1.14
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | the agent example in the quickstart documentation failed to run | https://api.github.com/repos/langchain-ai/langchain/issues/15863/comments | 4 | 2024-01-11T03:54:33Z | 2024-05-07T16:07:53Z | https://github.com/langchain-ai/langchain/issues/15863 | 2,075,684,626 | 15,863 |
[
"langchain-ai",
"langchain"
] | I am creating a tool with `_run` and `_arun` methods in my FastAPI code so I can use it in an `AgentExecutor`. When I test my agent, I run into this `AttributeError`, which I am unable to resolve even with a debugger. Am I missing anything here?
from fastapi import Request
from langchain.tools import tool, BaseTool
from pydantic import BaseModel, Field
from typing import Type, Optional
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from settings import app_settings as settings
from ai.tools import run_chain
import httpx

class PlaceHolderSchema(BaseModel):
    dummy: Optional[str]

async def run_chain_tool(request: Request):
    class ChainTool(BaseTool):
        name = "run_chain"
        description = "This tool takes a user question as input and returns the answer using the Cube JSON extraction and Cube API response."
        args_schema: Type[PlaceHolderSchema] = PlaceHolderSchema

        def _run(
            self,
            question: str,
            run_manager: Optional[CallbackManagerForToolRun] = None,
            dummy: Optional[str] = None,
        ) -> str:
            """
            This function "synchronously" runs the tool.

            Args:
                question (str): User question about the Cube data model.

            Returns:
                answer (str): Answer to the user question by running a chain of steps such as generation of Cube JSON, calling Cube API, and generating final answer.
            """
            raise NotImplementedError("run_chain tool does not support synchronous execution")

        async def _arun(
            self,
            question: str,
            run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
            dummy: Optional[str] = None,
        ) -> str:
            """
            This function "asynchronously" runs the tool.

            Args:
                question (str): User question about the Cube data model.

            Returns:
                answer (str): Answer to the user question by running a chain of steps such as generation of Cube JSON, calling Cube API, and generating final answer.
            """
            try:
                answer = await run_chain(question, request)
                print(answer)
                return answer
            except Exception as e:
                print(f"Error: {e}")

    return ChainTool() | AttributeError: 'str' object has no attribute 'log' | https://api.github.com/repos/langchain-ai/langchain/issues/15861/comments | 2 | 2024-01-11T03:44:47Z | 2024-04-18T16:33:09Z | https://github.com/langchain-ai/langchain/issues/15861 | 2,075,674,679 | 15,861
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code:
```
import os
from langchain_community.utilities.google_trends import GoogleTrendsAPIWrapper
os.environ['SERPAPI_API_KEY'] = ''
tool = GoogleTrendsAPIWrapper()
tool.run("Something that will yield an error like totally")
```
will yield this:
```
Traceback (most recent call last):
File "/home/ubuntu/Work/luc/langchain-google-trends-issue-1/langchain_google_trends_issue_1/main.py", line 9, in <module>
tool.run("Something that will yield an error like totally")
File "/home/ubuntu/Work/luc/langchain-google-trends-issue-1/.venv/lib/python3.10/site-packages/langchain_community/utilities/google_trends.py", line 68, in run
total_results = client.get_dict()["interest_over_time"]["timeline_data"]
KeyError: 'interest_over_time'
```
### Description
* I'm trying to use the Google Trends tool with some AI agent.
* Now the (not-so-smart) agent ran a query for which Google Trends did NOT return what the implementation expected.
* I would expect the implementation to follow a more foolproof logic.
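Concretely, a more foolproof version of that lookup would use `dict.get` with defaults instead of direct indexing — a hedged sketch (the surrounding SerpAPI response handling is simplified here, and the payload shapes are illustrative):

```python
def extract_timeline(response: dict) -> list:
    """Safely pull timeline data out of a SerpAPI-style response dict."""
    return response.get("interest_over_time", {}).get("timeline_data", [])

good = {"interest_over_time": {"timeline_data": [{"date": "Jan 2024"}]}}
empty = {"search_metadata": {"status": "Success"}}  # no interest_over_time key

print(extract_timeline(good))
print(extract_timeline(empty))  # [] instead of a KeyError
```

The wrapper could then return a "no results" message when the list is empty rather than crashing the agent.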
### System Info
# pyproject.toml
python = "^3.10"
langchain = "0.0.354"
pytest = "^7.4.4"
google-search-results = "^2.4.2"
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Google Trend utility makes assumptions on keys from response | https://api.github.com/repos/langchain-ai/langchain/issues/15859/comments | 4 | 2024-01-11T03:03:52Z | 2024-04-18T16:21:26Z | https://github.com/langchain-ai/langchain/issues/15859 | 2,075,635,675 | 15,859 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Metadata
```python
doc.metadata["date_year_short"] = date_obj.strftime("%y") # 23
doc.metadata["date_year_long"] = date_obj.strftime("%Y") # 2023
doc.metadata["date_month"] = date_obj.strftime("%-m") # 12
doc.metadata["date_month_name"] = calendar.month_name[date_obj.month] # December
doc.metadata["date_day"] = date_obj.strftime("%-d") # 31
doc.metadata["date_uploaded"] = calendar.month_name[date_obj.month] + " " + date_obj.strftime("%Y") # January 2023
```
Self-Query Retriever + Pinecone DB Instatiation
```python
llm = ChatOpenAI(temperature=0)
vectorstore = Pinecone.from_existing_index(index_name="test", embedding=get_embedding_function())
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
"Information about when the document was created and where it was grabbed from.",
metadata_field_info,
)
# bancworks_docs[1359]
retriever.vectorstore.similarity_search_with_score(question)
```
### Description
I should be able to see my metadata in the string form I created it in, instead of it being converted to a datetime.
For example, my `date_month_name` field is "February 2023". It should not be converted to 2/1/2000.
### System Info
Docker image container, Python v3.11, Langchain v0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | SelfQueryRetriever with Pinecone Automatically Converts String Metadata into DateTime | https://api.github.com/repos/langchain-ai/langchain/issues/15856/comments | 5 | 2024-01-11T02:23:09Z | 2024-04-19T16:30:32Z | https://github.com/langchain-ai/langchain/issues/15856 | 2,075,587,256 | 15,856 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I wrote the following code for categorizing prompts.
```python
def prompt_router(input, embeddings, prompt_templates, prompt_embeddings):
    query_embedding = embeddings.embed_query(input["query"])
    similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]
    most_similar = prompt_templates[similarity.argmax()]
    print(most_similar)
    return PromptTemplate.from_template(most_similar)

def main_categorizer(message):
    global memory, entry
    formatted_history = memory.get_history_as_string()

    case1_template = """Description of what case1 does
Chat History:
{chat_history}
Here is a question:
{query}"""

    case2_template = """Description of what case2 does
Chat History:
{chat_history}
Here is a question:
{query}"""

    case3_template = """Description of what case3 does
Chat History:
{chat_history}
Here is a question:
{query}"""

    case4_template = """Description of what case4 does
Chat History:
{chat_history}
Here is a question:
{query}"""

    prompt_templates = [case1_template, case2_template, case3_template, case4_template]
    prompt_embeddings = embeddings.embed_documents(prompt_templates)

    chain = (
        {"query": RunnablePassthrough()}
        | RunnableLambda(prompt_router)
        | llm
        | StrOutputParser()
    )
```
### Description
Based on the document at https://python.langchain.com/docs/expression_language/cookbook/embedding_router,
I've tried to implement the embedding router.
What I would like to do is add the conversation history to the case prompts, so that the router can also use the historical conversation when deciding which category the user prompt belongs to.
I have no idea where to put the {chat_history} value, analogous to how the query is inserted with ```"query": RunnablePassthrough()```.
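One way to thread the history through is to carry both keys in the router's input dict. The sketch below is pure Python with a toy embedding function standing in for real embeddings — the point is the shape of the data flow, not the model: the router receives `{"query": ..., "chat_history": ...}` and formats the chosen template with both values.

```python
import math

def toy_embed(text):
    """Deterministic stand-in for a real embedding model (letter counts)."""
    return [text.count(c) for c in "abcdefghij"]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0  # guard against zero vectors
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

templates = [
    "Case 1 description.\nChat History:\n{chat_history}\nHere is a question:\n{query}",
    "Case 2 description.\nChat History:\n{chat_history}\nHere is a question:\n{query}",
]
template_vecs = [toy_embed(t) for t in templates]

def prompt_router(inputs):
    # `inputs` carries BOTH keys -- in LCEL this would come from something like
    # {"query": itemgetter("query"), "chat_history": itemgetter("chat_history")}
    vec = toy_embed(inputs["query"])
    sims = [cosine(vec, tv) for tv in template_vecs]
    best = templates[sims.index(max(sims))]
    return best.format(**inputs)

prompt = prompt_router({"query": "hi there", "chat_history": "User: hello"})
print(prompt)
```

In the LCEL version, that would mean routing only on `inputs["query"]` while still formatting with `inputs["chat_history"]` — an assumption about what you want, but it keeps the history out of the similarity computation.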
### System Info
langchian==0.0.352
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | embedding router with conversation history | https://api.github.com/repos/langchain-ai/langchain/issues/15854/comments | 6 | 2024-01-11T01:33:54Z | 2024-01-11T02:34:53Z | https://github.com/langchain-ai/langchain/issues/15854 | 2,075,537,332 | 15,854 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Description
`AmadeusClosestAirport` contains a "hardcoded" call to `ChatOpenAI` (see [here](https://github.com/langchain-ai/langchain/blob/a06db53c37344b5a9906fbf656173c3421109398/libs/community/langchain_community/tools/amadeus/closest_airport.py#L50)), while it would make sense to use the same `llm` passed to the chain/agent when initialized.
In addition, this implies that `AmadeusToolkit` implicitly depends on `openai`, which should not be the case.
Example (source code from the [docs](https://python.langchain.com/docs/integrations/toolkits/amadeus))
```py
from langchain_community.agent_toolkits.amadeus.toolkit import AmadeusToolkit
# Set environmental variables here
import os
os.environ["AMADEUS_CLIENT_ID"] = "CLIENT_ID"
os.environ["AMADEUS_CLIENT_SECRET"] = "CLIENT_SECRET"
os.environ["OPENAI_API_KEY"] = "API_KEY"
# os.environ["AMADEUS_HOSTNAME"] = "production" or "test"
toolkit = AmadeusToolkit()
tools = toolkit.get_tools()
llm = OpenAI(temperature=0) # this can be any `BaseLLM`
agent = initialize_agent(
    tools=tools,
    llm=llm,
    verbose=False,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
# ==> agent calls `ChatOpenAI` regardless of `llm` <===
agent.run("What is the name of the airport in Cali, Colombia?")
```
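The fix this issue asks for is plain dependency injection — the tool holding an optional `llm` field instead of constructing `ChatOpenAI` itself. An illustrative pure-Python sketch of that pattern (these are not the actual LangChain classes; `ClosestAirportTool` and `FakeLLM` are hypothetical names):

```python
class ClosestAirportTool:
    """Sketch: the LLM is injected at construction instead of hardcoded."""
    def __init__(self, llm=None):
        self.llm = llm

    def run(self, query):
        if self.llm is None:
            raise RuntimeError("no LLM configured; inject one at construction")
        return self.llm.invoke(f"Closest airport to: {query}")

class FakeLLM:
    """Any object with .invoke() works -- no openai dependency required."""
    def invoke(self, prompt):
        return f"echo({prompt})"

tool = ClosestAirportTool(llm=FakeLLM())
print(tool.run("Cali, Colombia"))  # prints "echo(Closest airport to: Cali, Colombia)"
```

Applied to `AmadeusToolkit`, the same idea would let the toolkit pass whatever `llm` the agent was initialized with down to `AmadeusClosestAirport`, removing the implicit `openai` dependency.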
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | AmadeusClosestAirport tool should accept any LLM | https://api.github.com/repos/langchain-ai/langchain/issues/15847/comments | 3 | 2024-01-10T22:22:03Z | 2024-01-12T12:00:49Z | https://github.com/langchain-ai/langchain/issues/15847 | 2,075,315,430 | 15,847 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
When running the `WebBaseLoader` it requires `bs4` installed which is not mentioned in the docs.
https://github.com/langchain-ai/langchain/blob/21a153894917e530cbe82a778be6f9cf10c9ae5f/docs/docs/get_started/quickstart.mdx#L185C1-L194C1
### Idea or request for content:
I think it should be mentioned just like `faiss` a few lines below. | DOC: Missing dependency when going through the Quickstart section | https://api.github.com/repos/langchain-ai/langchain/issues/15845/comments | 1 | 2024-01-10T21:47:23Z | 2024-01-11T03:32:56Z | https://github.com/langchain-ai/langchain/issues/15845 | 2,075,269,126 | 15,845 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Clicking on any of the `agent_types` [here](https://python.langchain.com/docs/modules/agents/agent_types) leads to faulty links with the following message:
Example from this link: https://python.langchain.com/docs/modules/agents/openai_tools
<img width="873" alt="image" src="https://github.com/langchain-ai/langchain/assets/8833114/d6d5268c-397a-4eda-9a07-e5ec4b4b2d13">
### Idea or request for content:
Update table to point to correct hyperlink i.e. https://python.langchain.com/docs/modules/agents/agent_types/openai_tools for the example above. | DOC: Page Not Found when clicking on different agent types in table | https://api.github.com/repos/langchain-ai/langchain/issues/15837/comments | 2 | 2024-01-10T18:42:33Z | 2024-01-24T20:11:42Z | https://github.com/langchain-ai/langchain/issues/15837 | 2,074,963,053 | 15,837 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation, with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from langchain.memory import ConversationBufferMemory
from langchain.prompts.chat import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder
from langchain.chains import LLMChain

class TennisPlayer(BaseModel):
    age: int = Field(description="Age of the player")
    nb_victories: int = Field(description="Nb of victories in major tournaments")

llm = None  # Instantiate the LLM here

parser = PydanticOutputParser(pydantic_object=TennisPlayer)

prompt = "You'll be asked information about a tennis player.\n" \
         "You'll answer with the following format:\n" \
         "{format_instructions}"

memory = ConversationBufferMemory(memory_key="chat_history", input_key="query", return_messages=True)

chat_prompt = ChatPromptTemplate.from_messages([SystemMessagePromptTemplate.from_template(prompt),
                                                MessagesPlaceholder(variable_name="chat_history"),
                                                HumanMessagePromptTemplate.from_template("{query}")])

chain = LLMChain(llm=llm, prompt=chat_prompt, memory=memory, output_parser=parser)
chain.invoke(input={"query": "Rafael Nadal", "format_instructions": parser.get_format_instructions()})
```
### Description
The previous code triggers an error while converting the output from the LLM to an AIMessage to place in the ConversationBufferMemory object. The problem is that it passes the constructed object (the output of PydanticOutputParser.parse) instead of the output message as a plain string.
### System Info
langchain 0.1.0
python 3.10.13
Windows 10
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Cannot combine an output parser and a conversation buffer memory | https://api.github.com/repos/langchain-ai/langchain/issues/15835/comments | 3 | 2024-01-10T18:08:47Z | 2024-04-18T16:33:06Z | https://github.com/langchain-ai/langchain/issues/15835 | 2,074,909,445 | 15,835 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation, with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
# Chunking the sentence with fixed size
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
all_splits = text_splitter.split_documents(documents)
```
```
# Creating Embdeddings of the sentences and storing it into Graph DB
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
from langchain.vectorstores import Neo4jVector
model_name = "BAAI/bge-small-en"
model_kwargs = {"device": "cuda"}
embeddings = HuggingFaceBgeEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
# storing embeddings in the vector store
vectorstore = Neo4jVector.from_documents(all_splits, embeddings)
```
```
# Instantiate Neo4j vector from documents
neo4j_new_index = Neo4jVector.from_documents(
documents,
HuggingFaceBgeEmbeddings(),
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"]
)
```
```
ERROR:neo4j.io:Failed to write data to connection ResolvedIPv4Address(('34.126.171.25', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection IPv4Address(('07e87ccd.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection ResolvedIPv4Address(('34.126.171.25', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection IPv4Address(('07e87ccd.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-25-7220f62f84c8>](https://localhost:8080/#) in <cell line: 2>()
1 # Instantiate Neo4j vector from documents
----> 2 neo4j_new_index = Neo4jVector.from_documents(
3 documents,
4 HuggingFaceBgeEmbeddings(),
5 url=os.environ["NEO4J_URI"],
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in __from(cls, texts, embeddings, embedding, metadatas, ids, create_id_index, search_type, **kwargs)
445 # If the index already exists, check if embedding dimensions match
446 elif not store.embedding_dimension == embedding_dimension:
--> 447 raise ValueError(
448 f"Index with name {store.index_name} already exists."
449 "The provided embedding function and vector index "
ValueError: Index with name vector already exists.The provided embedding function and vector index dimensions do not match.
Embedding function dimension: 1024
Vector index dimension: 384
```
### Description
```
ERROR:neo4j.io:Failed to write data to connection ResolvedIPv4Address(('34.126.171.25', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection IPv4Address(('07e87ccd.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection ResolvedIPv4Address(('34.126.171.25', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection IPv4Address(('07e87ccd.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-25-7220f62f84c8>](https://localhost:8080/#) in <cell line: 2>()
1 # Instantiate Neo4j vector from documents
----> 2 neo4j_new_index = Neo4jVector.from_documents(
3 documents,
4 HuggingFaceBgeEmbeddings(),
5 url=os.environ["NEO4J_URI"],
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in __from(cls, texts, embeddings, embedding, metadatas, ids, create_id_index, search_type, **kwargs)
445 # If the index already exists, check if embedding dimensions match
446 elif not store.embedding_dimension == embedding_dimension:
--> 447 raise ValueError(
448 f"Index with name {store.index_name} already exists."
449 "The provided embedding function and vector index "
ValueError: Index with name vector already exists.The provided embedding function and vector index dimensions do not match.
Embedding function dimension: 1024
Vector index dimension: 384
```
### System Info
Windows: `11`
pip == `23.3.1`
python == `3.10.10`
langchain ==` 0.1.0`
transformers == `4.36.2`
sentence_transformers == `2.2.2`
Neo4j == `5`
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ValueError: Index with name vector already exists.The provided embedding function and vector index dimensions do not match. | https://api.github.com/repos/langchain-ai/langchain/issues/15834/comments | 5 | 2024-01-10T18:02:46Z | 2024-01-12T12:40:02Z | https://github.com/langchain-ai/langchain/issues/15834 | 2,074,900,437 | 15,834 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain v0.1.0
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from typing import Any, Dict
from langchain_core.outputs.llm_result import LLMResult
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.base import BaseCallbackHandler
class CustomCallBack(BaseCallbackHandler):
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
print(f"on_llm_end => {response}")
def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:
print(f"on_chain_end => {outputs}")
def on_text(self, text: str, **kwargs: Any) -> Any:
print(f"on_text => {text}")
class TennisPlayer(BaseModel):
age: int = Field(description="Age of the player")
nb_victories: int = Field(description="Nb of victories in major tournaments")
# Instantiate the LLM here
llm = None
parser = PydanticOutputParser(pydantic_object=TennisPlayer)
prompt = "Give me some information about Rafael Nadal.\n" \
"You'll answer with the following format:\n" \
"{format_instructions}"
chat_prompt = ChatPromptTemplate.from_messages([HumanMessagePromptTemplate.from_template(prompt)])
chain = LLMChain(llm=llm, prompt=chat_prompt, callbacks=[CustomCallBack()], output_parser=parser)
chain.invoke(input={"format_instructions": parser.get_format_instructions()})
```
### Expected behavior
The custom callback handler makes it possible to intercept the prompt sent to the LLM (through _on_text_) and the output in _on_chain_end_. The problem is that when an output parser is involved, the _outputs_ dictionary of _on_chain_end_ associates the "text" key with the final constructed object and not the output message (containing the JSON data that has been marshalled to an object). And for an unknown reason, the _on_llm_end_ callback function isn't invoked...
When something goes wrong in the marshalling process, the analysis of the LLM's output message is mandatory. Well, it doesn't seem abnormal to get the final object produced by the chain in the _on_chain_end_ callback, but in that case I would expect the _on_llm_end_ callback to be called just before with the output message in parameter. But it is not. So, at this stage, it's not possible to intercept the raw LLM's output message for debugging purposes. | Intercepting the output message in a callback handler before it is sent to the output parser | https://api.github.com/repos/langchain-ai/langchain/issues/15830/comments | 5 | 2024-01-10T17:27:36Z | 2024-04-18T16:21:24Z | https://github.com/langchain-ai/langchain/issues/15830 | 2,074,843,378 | 15,830 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.1.0
python 3.10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
reproducing the error could be as follows
```python
...
#################################################################
## get multiple queries to be searched on the web
query_generation_chain = (
search_queries_prompt
| llm.bind(stop=TOGETHER_STOP_KEYWORDS)
| CommaSeparatedListOutputParser()
)
#################################################################
## scrape and summarize a webpages based on urls
summarize_chain = RunnablePassthrough.assign(
summary=RunnablePassthrough.assign(text=lambda x: scrape_webpage(x["url"])[:10_000])
| summarize_prompt
| llm
| StrOutputParser(),
) | (lambda x: f'URL: {x["url"]} \n\nSUMMARY: {x["summary"]}')
chain = (
RunnablePassthrough.assign(urls = query_generation_chain | fetch_links_from_web)
| RunnableLambda(lambda x: [{"question": x["question"], "url": url} for url in x["urls"]])
| summarize_chain.map() ## generate list of summarized article for each link
| (lambda reports: "\n\n".join(reports)) ## combine the summaries into a report
)
```
if i invoke `get_graph()` on the chain like this :
```python
chain.get_graph()
```
i get this error :
```console
Traceback (most recent call last):
File "/home/joede/dev/llm_playground/researcher/main.py", line 30, in <module>
report_writer_chain.get_graph().print_ascii()
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1690, in get_graph
step_graph = step.get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 379, in get_graph
graph = self.mapper.get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2282, in get_graph
step_graph = step.get_graph()
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 379, in get_graph
graph = self.mapper.get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2282, in get_graph
step_graph = step.get_graph()
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1690, in get_graph
step_graph = step.get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2906, in get_graph
graph = super().get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 399, in get_graph
output_node = graph.add_node(self.get_output_schema(config))
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 331, in get_output_schema
if inspect.isclass(root_type) and issubclass(root_type, BaseModel):
File "/usr/lib/python3.10/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
```
with further inspection i found it error when it gets here:
```python
def get_input_schema(...):
....
root_type = self.OutputType
if inspect.isclass(root_type) and issubclass(root_type, BaseModel):
return root_type
```
in the `langchain_core/runnable/base.py`
### Expected behavior
the expected output should be a graph of ascii characters that should look like this :
```console
+---------------------------------+
| Parallel<research_summary>Input |
+---------------------------------+
**** *******
*** *******
** ******
+---------------------+ ****
| Parallel<urls>Input | *
+---------------------+ *
*** **** *
**** *** *
** **** *
+--------------------+ ** *
| ChatPromptTemplate | * *
+--------------------+ * *
* * *
* * *
* * *
+---------------+ * *
| WithFallbacks | * *
+---------------+ * *
* * *
* * *
* * *
+--------------------------------+ * *
| CommaSeparatedListOutputParser | * *
+--------------------------------+ * *
* * *
* * *
* * *
+------------------------------+ +-------------+ *
| Lambda(fetch_links_from_web) | | Passthrough | *
+------------------------------+ *+-------------+ *
*** **** *
**** **** *
** ** *
+----------------------+ +-------------+
| Parallel<urls>Output | **| Passthrough |
+----------------------+ ******* +-------------+
**** ******
*** *******
** ****
+----------------------------------+
| Parallel<research_summary>Output |
+----------------------------------+
*
*
*
+--------------------+
| ChatPromptTemplate |
+--------------------+
*
*
*
+---------------+
| WithFallbacks |
+---------------+
*
*
*
+-----------------+
| StrOutputParser |
+-----------------+
*
*
*
+-----------------------+
| StrOutputParserOutput |
+-----------------------+
``` | `chain.get_graph()` doesn't play nicely with `chain.map()` or `list[str]` | https://api.github.com/repos/langchain-ai/langchain/issues/15828/comments | 1 | 2024-01-10T17:15:20Z | 2024-04-17T16:18:38Z | https://github.com/langchain-ai/langchain/issues/15828 | 2,074,820,818 | 15,828 |
[
"langchain-ai",
"langchain"
] | ### System Info
python=3.11
langchain= latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
while running a code with create_pandas_dataframe_agent it throwing key error
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI
import pandas as pd
from langchain_openai import OpenAI
df = pd.read_csv(r"C:\Users\rndbcpsoft\OneDrive\Desktop\test\chat_data_2024-01-05_13-20-11.csv")
# agent = create_pandas_dataframe_agent(
# ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
# df,
# verbose=True,
# agent_type=AgentType.OPENAI_FUNCTIONS,
# )
llm = ChatOpenAI(
temperature=0, model="gpt-3.5-turbo-0613", openai_api_key=openapi_key , streaming=True
)
pandas_df_agent = create_pandas_dataframe_agent(
llm,
df,
verbose=True,
agent_type=AgentType.OPENAI_FUNCTIONS,
handle_parsing_errors=True,
)
error:
KeyError Traceback (most recent call last)
Cell In[7], [line 12](vscode-notebook-cell:?execution_count=7&line=12)
[1](vscode-notebook-cell:?execution_count=7&line=1) # agent = create_pandas_dataframe_agent(
[2](vscode-notebook-cell:?execution_count=7&line=2) # ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
[3](vscode-notebook-cell:?execution_count=7&line=3) # df,
[4](vscode-notebook-cell:?execution_count=7&line=4) # verbose=True,
[5](vscode-notebook-cell:?execution_count=7&line=5) # agent_type=AgentType.OPENAI_FUNCTIONS,
[6](vscode-notebook-cell:?execution_count=7&line=6) # )
[8](vscode-notebook-cell:?execution_count=7&line=8) llm = ChatOpenAI(
[9](vscode-notebook-cell:?execution_count=7&line=9) temperature=0, model="gpt-3.5-turbo-0613", openai_api_key=openapi_key , streaming=True
[10](vscode-notebook-cell:?execution_count=7&line=10) )
---> [12](vscode-notebook-cell:?execution_count=7&line=12) pandas_df_agent = create_pandas_dataframe_agent(
[13](vscode-notebook-cell:?execution_count=7&line=13) llm,
[14](vscode-notebook-cell:?execution_count=7&line=14) df,
[15](vscode-notebook-cell:?execution_count=7&line=15) verbose=True,
[16](vscode-notebook-cell:?execution_count=7&line=16) agent_type=AgentType.OPENAI_FUNCTIONS,
[17](vscode-notebook-cell:?execution_count=7&line=17) handle_parsing_errors=True,
[18](vscode-notebook-cell:?execution_count=7&line=18) )
File [c:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\agents\agent_toolkits\pandas\base.py:322](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py:322), in create_pandas_dataframe_agent(llm, df, agent_type, callback_manager, prefix, suffix, input_variables, verbose, return_intermediate_steps, max_iterations, max_execution_time, early_stopping_method, agent_executor_kwargs, include_df_in_prompt, number_of_head_rows, extra_tools, **kwargs)
[313](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py:313) _prompt, base_tools = _get_functions_prompt_and_tools(
[314](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py:314) df,
[315](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py:315) prefix=prefix,
(...)
...
---> [57](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain/agents/openai_functions_agent/base.py:57) if not isinstance(values["llm"], ChatOpenAI):
[58](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain/agents/openai_functions_agent/base.py:58) raise ValueError("Only supported with ChatOpenAI models.")
[59](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain/agents/openai_functions_agent/base.py:59) return values
KeyError: 'llm'
### Expected behavior
while running a code with create_pandas_dataframe_agent it throwing key error
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI
import pandas as pd
from langchain_openai import OpenAI
df = pd.read_csv(r"C:\Users\rndbcpsoft\OneDrive\Desktop\test\chat_data_2024-01-05_13-20-11.csv")
# agent = create_pandas_dataframe_agent(
# ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
# df,
# verbose=True,
# agent_type=AgentType.OPENAI_FUNCTIONS,
# )
llm = ChatOpenAI(
temperature=0, model="gpt-3.5-turbo-0613", openai_api_key=openapi_key , streaming=True
)
pandas_df_agent = create_pandas_dataframe_agent(
llm,
df,
verbose=True,
agent_type=AgentType.OPENAI_FUNCTIONS,
handle_parsing_errors=True,
)
error:
KeyError Traceback (most recent call last)
Cell In[7], [line 12](vscode-notebook-cell:?execution_count=7&line=12)
[1](vscode-notebook-cell:?execution_count=7&line=1) # agent = create_pandas_dataframe_agent(
[2](vscode-notebook-cell:?execution_count=7&line=2) # ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
[3](vscode-notebook-cell:?execution_count=7&line=3) # df,
[4](vscode-notebook-cell:?execution_count=7&line=4) # verbose=True,
[5](vscode-notebook-cell:?execution_count=7&line=5) # agent_type=AgentType.OPENAI_FUNCTIONS,
[6](vscode-notebook-cell:?execution_count=7&line=6) # )
[8](vscode-notebook-cell:?execution_count=7&line=8) llm = ChatOpenAI(
[9](vscode-notebook-cell:?execution_count=7&line=9) temperature=0, model="gpt-3.5-turbo-0613", openai_api_key=openapi_key , streaming=True
[10](vscode-notebook-cell:?execution_count=7&line=10) )
---> [12](vscode-notebook-cell:?execution_count=7&line=12) pandas_df_agent = create_pandas_dataframe_agent(
[13](vscode-notebook-cell:?execution_count=7&line=13) llm,
[14](vscode-notebook-cell:?execution_count=7&line=14) df,
[15](vscode-notebook-cell:?execution_count=7&line=15) verbose=True,
[16](vscode-notebook-cell:?execution_count=7&line=16) agent_type=AgentType.OPENAI_FUNCTIONS,
[17](vscode-notebook-cell:?execution_count=7&line=17) handle_parsing_errors=True,
[18](vscode-notebook-cell:?execution_count=7&line=18) )
File [c:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\agents\agent_toolkits\pandas\base.py:322](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py:322), in create_pandas_dataframe_agent(llm, df, agent_type, callback_manager, prefix, suffix, input_variables, verbose, return_intermediate_steps, max_iterations, max_execution_time, early_stopping_method, agent_executor_kwargs, include_df_in_prompt, number_of_head_rows, extra_tools, **kwargs)
[313](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py:313) _prompt, base_tools = _get_functions_prompt_and_tools(
[314](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py:314) df,
[315](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py:315) prefix=prefix,
(...)
...
---> [57](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain/agents/openai_functions_agent/base.py:57) if not isinstance(values["llm"], ChatOpenAI):
[58](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain/agents/openai_functions_agent/base.py:58) raise ValueError("Only supported with ChatOpenAI models.")
[59](file:///C:/Users/rndbcpsoft/OneDrive/Desktop/test/envtest/Lib/site-packages/langchain/agents/openai_functions_agent/base.py:59) return values
KeyError: 'llm' | KeyError: 'llm' in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/15819/comments | 4 | 2024-01-10T13:34:12Z | 2024-04-18T16:36:53Z | https://github.com/langchain-ai/langchain/issues/15819 | 2,074,391,510 | 15,819 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can someone please help me pass llamaCPP instance into langchain's conversational retrieval chain that uses a retriever.
### Suggestion:
_No response_ | using LLamaCPP with conversational retrieval chain. | https://api.github.com/repos/langchain-ai/langchain/issues/15818/comments | 1 | 2024-01-10T13:29:03Z | 2024-04-17T16:16:51Z | https://github.com/langchain-ai/langchain/issues/15818 | 2,074,381,981 | 15,818 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The current document_loaders accept file path to process. But most of the time, especially if application deployed to somewhere, file is uploaded by user and not exist on file system.
Writing that in-memory bytes to disk and re-read is a unnecessary step.
It would be good to take BytesIO or some abstraction to process in-memory files.
### Motivation
It will eliminate writing in-memory files to disk and re-reading them from disk while using document_loaders.
### Your contribution
I can create a PR for this. | document_loaders to support BytesIO or an interface for in-memory objects | https://api.github.com/repos/langchain-ai/langchain/issues/15815/comments | 6 | 2024-01-10T12:36:25Z | 2024-04-17T16:20:25Z | https://github.com/langchain-ai/langchain/issues/15815 | 2,074,285,594 | 15,815 |
[
"langchain-ai",
"langchain"
] | ### System Info
LC version: 0.1.0
Platform: MacOS
Python version: 3.12.1
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
Python 3.12.1 (main, Jan 9 2024, 18:02:09) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from langchain_openai import AzureChatOpenAI
>>> from langchain_core.runnables import ConfigurableField
>>> ConfigurableAzureChatOpenAI = AzureChatOpenAI(
... openai_api_key = "asdfg",
... openai_api_version = "asdg",
... deployment_name='asdg',
... azure_endpoint="asdg",
... temperature=0.9
... ).configurable_fields(
... azure_endpoint=ConfigurableField(id="azure_endpoint"),
... openai_api_key=ConfigurableField(id="openai_api_key"),
... azure_deployment=ConfigurableField(id="deployment_name"),
... openai_api_version=ConfigurableField(id="openai_api_version"),
... )
Traceback (most recent call last):
File "<stdin>", line 7, in <module>
File "/Users/pramodh/.pyenv/versions/3.12.1/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1368, in configurable_fields
raise ValueError(
ValueError: Configuration key azure_deployment not found in client=<openai.resources.chat.completions.Completions object at 0x1079f4350> async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x1079f58e0> temperature=0.9 openai_api_key='asdfg' openai_proxy='' azure_endpoint='asdg' deployment_name='asdg' openai_api_version='asdg' openai_api_type='azure': available keys are {self.__fields__.keys()}
```
### Expected behavior
`azure_deployment` is an alias for `deployment_name` defined inside `AzureChatOpenAI`, but it cannot be set as a `ConfigurableField` - we instead have to set `deployment_name` as a ConfigurableField.
I would expect the above code to not throw an error, as they are just aliases. | AzureChatOpenAI: `Configuration key azure_deployment not found in client` | https://api.github.com/repos/langchain-ai/langchain/issues/15814/comments | 1 | 2024-01-10T12:30:52Z | 2024-04-17T16:27:44Z | https://github.com/langchain-ai/langchain/issues/15814 | 2,074,275,377 | 15,814 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
i have tried with several tests, even with the most basic e.g in the doc, nothing. Dissapointed because it got me superexcited at first:
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain.schema import HumanMessage
model = OllamaFunctions(model="dolphinmodel",)
model = model.bind(
functions=[
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, " "e.g. San Francisco, CA",
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["location"],
},
}
],
function_call={"name": "get_current_weather"},
)
model.invoke("what is the weather in Boston?")
### Idea or request for content:
_No response_ | DOC: <https://python.langchain.com/docs/integrations/chat/ollama_functions 'DOC: ' prefix>ollamafunctions not working at all | https://api.github.com/repos/langchain-ai/langchain/issues/15808/comments | 2 | 2024-01-10T09:17:02Z | 2024-07-04T16:07:33Z | https://github.com/langchain-ai/langchain/issues/15808 | 2,073,927,465 | 15,808 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
import os
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.chains import (
ConversationalRetrievalChain,
LLMChain
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.callbacks import CallbackManager
from qdrant_client import QdrantClient
from langchain.vectorstores import Qdrant
os.environ['OPENAI_API_KEY'] = "mykey"
embeddings = HuggingFaceEmbeddings(
model_name="all-MiniLM-L6-v2"
)
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
streaming_llm = OpenAI(
streaming=True,
callback_manager=CallbackManager([
StreamingStdOutCallbackHandler()
]),
verbose=True,
max_tokens=150,
temperature=0.2
)
condense_question_prompt = PromptTemplate.from_template(
"在幹嘛"
)
qa_prompt = PromptTemplate.from_template("測")
question_generator = LLMChain(
llm=llm,
prompt=condense_question_prompt
)
doc_chain = load_qa_chain(
llm=streaming_llm,
chain_type="stuff",
prompt=qa_prompt
)
client = QdrantClient(host="192.168.0.31", port=6333)
collection_name = "test"
vectorstore = Qdrant(client, collection_name,
embedding_function=embeddings.embed_query)
chatbot = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
combine_docs_chain=doc_chain,
question_generator=question_generator
)
chat_history = []
question = input("Hi! What are you looking for today?")
while True:
result = chatbot(
{"question": question, "chat_history": chat_history}
)
print("\n")
chat_history.append((result["question"], result["answer"]))
question = input()

### Suggestion:
Why can't I store and retrieve vectors? Please help me fix it. | Why can't I store and retrieve vectors? Please help me fix it. | https://api.github.com/repos/langchain-ai/langchain/issues/15806/comments | 1 | 2024-01-10T08:47:26Z | 2024-04-17T16:17:52Z | https://github.com/langchain-ai/langchain/issues/15806 | 2,073,877,371 | 15,806 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.chains import ConversationalRetrievalChain, ConversationChain, LLMChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, \
    HumanMessagePromptTemplate
from qdrant_client import QdrantClient, models
import os
from qdrant_client.grpc import PointStruct

os.environ['OPENAI_API_KEY'] = "mykey"
COLLECTION_NAME = "teeeeee"

embeddings = HuggingFaceEmbeddings(
    model_name="all-MiniLM-L6-v2"
)
print("Successfully connected to Qdrant")


def connection():
    client = QdrantClient(host="192.168.0.311", port=6333)
    client.recreate_collection(
        collection_name=COLLECTION_NAME,
        vectors_config=models.VectorParams(
            distance=models.Distance.COSINE,
            size=384),
        optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
        hnsw_config=models.HnswConfigDiff(on_disk=True, m=16, ef_construct=100)
    )
    return client


prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "You are Yemila."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 2000
memory = ConversationTokenBufferMemory(
    llm=llm,
    prompt=prompt,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)
chat_history = []


def upsert_vector(client, vectors, data):
    for i, vector in enumerate(vectors):
        client.upsert(
            collection_name=COLLECTION_NAME,
            points=[PointStruct(id=i,
                                vector=vectors[i],
                                payload=data[i])]
        )
    print("upsert finish")


def search_from_qdrant(client, vector, k=1):
    search_result = client.search(
        collection_name=COLLECTION_NAME,
        query_vector=vector,
        limit=k,
        append_payload=True,
    )
    return search_result


def get_embedding(text, model_name):
    while True:
        memory.load_memory_variables({})
        question = input('Question: ')
        result = conversation.run({'question': question, 'chat_history': chat_history})
        print(result)
        chat_history.append([f'User: {question}', f'Ai: {result}'])
        print(chat_history)
        st_history = ' '.join(map(str, chat_history))
        res = embeddings.embed_query(st_history)
        print(f'ok: {res[:4]}...')
        if question.lower() == 'bye':
            break
    return st_history


def main():
    qclient = connection()
    data_objs = [
        {
            "id": 1,
            "teeeeee": "I am A-Gou. Your name is A-Gou."
        },
    ]
    embedding_array = [get_embedding(text["teeeeee"], embeddings)
                       for text in data_objs]
    upsert_vector(qclient, embedding_array, data_objs)

    query_text = "Please repeat what I just said"
    query_embedding = get_embedding(query_text, embeddings)
    results = search_from_qdrant(qclient, query_embedding, k=1)
    print(f"Search for {query_text}:", results)


if __name__ == '__main__':
    main()
```
Execution result:
```
Traceback (most recent call last):
  File "C:\Users\syz\Downloads\Chat-Bot-using-gpt-3.5-turbo-main\models\測.py", line 117, in <module>
    main()
  File "C:\Users\syz\Downloads\Chat-Bot-using-gpt-3.5-turbo-main\models\測.py", line 109, in main
    upsert_vector(qclient, embedding_array, data_objs)
  File "C:\Users\syz\Downloads\Chat-Bot-using-gpt-3.5-turbo-main\models\測.py", line 63, in upsert_vector
    points=[PointStruct(id=i,
           ^^^^^^^^^^^^^^^^^
TypeError: Message must be initialized with a dict: qdrant.PointStruct
```
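The traceback points at the import `from qdrant_client.grpc import PointStruct`: the gRPC class is a protobuf message and cannot be built with keyword arguments the way the script does, while `qdrant_client.models.PointStruct` (the REST/pydantic model) can. Swapping the import is the likely fix; assembling the per-point data is ordinary Python either way. A minimal sketch (the import fix is left as a comment because it needs `qdrant-client` installed; the sample data below is made up):

```python
# Likely fix in the script above:
#     from qdrant_client.models import PointStruct   # instead of qdrant_client.grpc
#
# Building the id/vector/payload triples themselves is plain Python:
vectors = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]          # toy embeddings
payloads = [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}]

points = [
    {"id": i, "vector": vec, "payload": payload}
    for i, (vec, payload) in enumerate(zip(vectors, payloads))
]
print(len(points), points[0]["payload"]["text"])  # -> 2 hello
```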
### Suggestion:
_No response_ | Why can't you search vectors? | https://api.github.com/repos/langchain-ai/langchain/issues/15804/comments | 1 | 2024-01-10T07:35:35Z | 2024-04-17T16:22:20Z | https://github.com/langchain-ai/langchain/issues/15804 | 2,073,772,510 | 15,804 |
[
"langchain-ai",
"langchain"
] | ### System Info
Issue with current documentation:
I was reading the documentation and noticed a minor issue with the pagination navigation on the modules/model_io/concepts page. Both the "Previous" and "Next" links currently point to the same page ('model_io'), which may lead to confusion for users.

### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
// The error is on the documentation website.
### Expected behavior
Upon reviewing the content, I believe that the "Next" link should navigate users to the 'prompts' page of the 'model_io' section, providing a seamless transition for readers.
| DOC: modules/model_io/concepts in documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15803/comments | 1 | 2024-01-10T07:33:55Z | 2024-04-17T16:17:13Z | https://github.com/langchain-ai/langchain/issues/15803 | 2,073,770,325 | 15,803 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.chains import ConversationalRetrievalChain, ConversationChain, LLMChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, \
    HumanMessagePromptTemplate
from qdrant_client import QdrantClient, models
import os
from qdrant_client.grpc import PointStruct

os.environ['OPENAI_API_KEY'] = "mykey"
COLLECTION_NAME = "lyric"

embeddings = HuggingFaceEmbeddings(
    model_name="all-MiniLM-L6-v2"
)
print("Successfully connected to Qdrant")


def connection():
    client = QdrantClient(host="192.168.0.28", port=6333)
    client.recreate_collection(
        collection_name=COLLECTION_NAME,
        vectors_config=models.VectorParams(
            distance=models.Distance.COSINE,
            size=1536),
        optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
        hnsw_config=models.HnswConfigDiff(on_disk=True, m=16, ef_construct=100)
    )
    return client


prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "you are robot."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 2000
memory = ConversationTokenBufferMemory(
    llm=llm,
    prompt=prompt,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)
chat_history = []


def upsert_vector(client, vectors, data):
    for i, vector in enumerate(vectors):
        client.upsert(
            collection_name=COLLECTION_NAME,
            points=[PointStruct(id=i,
                                vector=vectors[i],
                                payload=data[i])]
        )
    print("upsert finish")


def search_from_qdrant(client, vector, k=1):
    search_result = client.search(
        collection_name=COLLECTION_NAME,
        query_vector=vector,
        limit=k,
        append_payload=True,
    )
    return search_result


def main():
    qclient = connection()
    while True:
        memory.load_memory_variables({})
        question = input('Question: ')
        result = conversation.run({'question': question, 'chat_history': chat_history})
        print(result)
        chat_history.append([f'User: {question}', f'Ai: {result}'])
        print(chat_history)
        st_history = ' '.join(map(str, chat_history))
        res = embeddings.embed_query(st_history)
        print(f'ok: {res[:4]}...')
        if question.lower() == 'bye':
            break
    data_objs = [
        {
            "id": 1,
            "lyric": f"{res}"
        },
    ]
    embedding_array = [res(text["lyric"], embeddings)
                       for text in data_objs]
    upsert_vector(qclient, embedding_array, data_objs)
    query_text = "Please repeat what I just said"
    query_embedding = res(query_text, embeddings)
    results = search_from_qdrant(qclient, query_embedding, k=1)
    print(f"select {query_text}:", results)


if __name__ == '__main__':
    main()
```
Why can't I pass 'res' to embedding_array and perform vector search? Also, please help me find out where else I might be going wrong.
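Two things stand out in the script above: `res` is a plain list of floats returned by `embed_query`, so `res(text["lyric"], embeddings)` calls a list and fails; and the collection is created with `size=1536` while `all-MiniLM-L6-v2` produces 384-dimensional vectors, so even correctly built points would not match the collection. A small guard (illustrative only; the helper name is made up) makes such mismatches fail loudly before the upsert:

```python
def check_dimensions(vector, expected_size):
    """Raise before upsert if the embedding size does not match the collection."""
    if len(vector) != expected_size:
        raise ValueError(
            f"embedding has {len(vector)} dimensions, "
            f"but the collection was created with size={expected_size}"
        )
    return vector

fake_minilm_vector = [0.0] * 384   # all-MiniLM-L6-v2 output size
check_dimensions(fake_minilm_vector, 384)        # passes
try:
    check_dimensions(fake_minilm_vector, 1536)   # the mismatch in the script
except ValueError as err:
    print(err)
```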

### Suggestion:
_No response_ | Why can't I pass 'res' to embedding_array and perform vector search? Also, please help me find out where else I might be going wrong | https://api.github.com/repos/langchain-ai/langchain/issues/15802/comments | 1 | 2024-01-10T06:35:58Z | 2024-04-17T16:25:14Z | https://github.com/langchain-ai/langchain/issues/15802 | 2,073,696,335 | 15,802 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.350
python==3.9.2rc1
### Who can help?
@agola11
Sample code
```
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.prompts import PromptTemplate
response_schemas = [
ResponseSchema(name="result", description="answer to the user's question"),
ResponseSchema(
name="source_documents",
description="source used to answer the user's question",
),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
#
format_instructions = output_parser.get_format_instructions()
#
llms = LlamaCpp(streaming=True,
model_path=r"C:\Users\PLNAYAK\Documents\Local_LLM_Inference\zephyr-7b-alpha.Q4_K_M.gguf",
max_tokens = 500,
temperature=0.75,
top_p=1,
model_kwargs={"gpu_layers":0,"stream":True},
verbose=True,n_threads = int(os.cpu_count()/2),
n_ctx=4096)
#
prompt = PromptTemplate(
template="Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:",
input_variables=["context","question"],
partial_variables={"format_instructions": format_instructions},
output_parser=output_parser
)
#
chain = prompt | llms | output_parser
chain.invoke({"question":query,"context":complete_context})
```
Error Log
```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[41], line 1
----> 1 chain.invoke({"question": query, "context": complete_context})

File site-packages\langchain_core\runnables\base.py:1514, in RunnableSequence.invoke(self, input, config)
   1512 try:
   1513     for i, step in enumerate(self.steps):
-> 1514         input = step.invoke(
   1515             input,
   1516             # mark each step as a child run
   1517             patch_config(
   1518                 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
   1519             ),
   1520         )
   1521     # finish the root run
   1522 except BaseException as e:

File site-packages\langchain_core\prompts\base.py:94, in BasePromptTemplate.invoke(self, input, config)
    91 def invoke(
    92     self, input: Dict, config: Optional[RunnableConfig] = None
    93 ) -> PromptValue:
--> 94     return self._call_with_config(
    95         self._format_prompt_with_error_handling,
    96         input,
    97         config,
    98         run_type="prompt",
    99     )

File site-packages\langchain_core\runnables\base.py:886, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
   879 run_manager = callback_manager.on_chain_start(
   880     dumpd(self),
   881     input,
   882     run_type=run_type,
   883     name=config.get("run_name"),
   884 )
   885 try:
--> 886     output = call_func_with_variable_args(
   887         func, input, config, run_manager, **kwargs
   888     )
   889 except BaseException as e:
   890     run_manager.on_chain_error(e)

File site-packages\langchain_core\runnables\config.py:308, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
   306 if run_manager is not None and accepts_run_manager(func):
   307     kwargs["run_manager"] = run_manager
--> 308 return func(input, **kwargs)

File site-packages\langchain_core\prompts\base.py:89, in BasePromptTemplate._format_prompt_with_error_handling(self, inner_input)
    83 except KeyError as e:
    84     raise KeyError(
    85         f"Input to {self.__class__.__name__} is missing variable {e}. "
    86         f" Expected: {self.input_variables}"
    87         f" Received: {list(inner_input.keys())}"
    88     ) from e
--> 89 return self.format_prompt(**input_dict)

File site-packages\langchain_core\prompts\string.py:161, in StringPromptTemplate.format_prompt(self, **kwargs)
   159 def format_prompt(self, **kwargs: Any) -> PromptValue:
   160     """Create Chat Messages."""
--> 161     return StringPromptValue(text=self.format(**kwargs))

File site-packages\langchain_core\prompts\prompt.py:132, in PromptTemplate.format(self, **kwargs)
   117 """Format the prompt with the inputs.
   118
   119 Args:
   (...)
   129     prompt.format(variable1="foo")
   130 """
   131 kwargs = self._merge_partial_and_user_variables(**kwargs)
--> 132 return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)

File Python39\lib\string.py:161, in Formatter.format(self, format_string, *args, **kwargs)
   160 def format(self, format_string, /, *args, **kwargs):
--> 161     return self.vformat(format_string, args, kwargs)

File site-packages\langchain_core\utils\formatting.py:29, in StrictFormatter.vformat(self, format_string, args, kwargs)
    24 if len(args) > 0:
    25     raise ValueError(
    26         "No arguments should be provided, "
    27         "everything should be passed as keyword arguments."
    28     )
--> 29 return super().vformat(format_string, args, kwargs)

File Python39\lib\string.py:166, in Formatter.vformat(self, format_string, args, kwargs)
   164 used_args = set()
   165 result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
--> 166 self.check_unused_args(used_args, args, kwargs)
   167 return result

File site-packages\langchain_core\utils\formatting.py:18, in StrictFormatter.check_unused_args(self, used_args, args, kwargs)
    16 extra = set(kwargs).difference(used_args)
    17 if extra:
--> 18     raise KeyError(extra)

KeyError: {'format_instructions'}
```
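The last two frames show the cause: `langchain_core`'s `StrictFormatter` raises `KeyError` for any keyword argument the template never references, and the template string above contains no `{format_instructions}` placeholder even though one is supplied via `partial_variables`. A simplified stdlib reproduction of that check (the class below mirrors the code visible in the traceback; it is not the real implementation):

```python
from string import Formatter

class StrictFormatter(Formatter):
    """Simplified copy of the check in langchain_core/utils/formatting.py."""
    def check_unused_args(self, used_args, args, kwargs):
        extra = set(kwargs).difference(used_args)
        if extra:
            raise KeyError(extra)

f = StrictFormatter()
template = "Question: {question}"            # no {format_instructions} -> KeyError
try:
    f.vformat(template, (), {"question": "q", "format_instructions": "fi"})
except KeyError as err:
    print("raised:", err)

fixed = "{format_instructions}\nQuestion: {question}"
print(f.vformat(fixed, (), {"question": "q", "format_instructions": "fi"}))
```

So adding `{format_instructions}` somewhere in the `template` string passed to `PromptTemplate` should resolve the error.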
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The sample code shown under System Info above reproduces the error.
### Expected behavior
It should return output in a structured format | Encounter Error (KeyError: {'format_instructions'})while using StructuredOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/15801/comments | 2 | 2024-01-10T06:00:17Z | 2024-06-14T16:08:42Z | https://github.com/langchain-ai/langchain/issues/15801 | 2,073,649,435 | 15,801 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Define LLM to generate response
llm = VertexAI(model_name='text-bison@001', max_output_tokens=512, temperature=0.2)
if not message:
message = request.form.get('userInput')
template = """
appropriate custom prompt context...
{context}
Question: {question}
"""
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
]
)
rag_chain = (
RunnablePassthrough.assign(
context=contextualized_question | sub_retriever
)
| qa_prompt
| llm
| remove_prefix
)
response = rag_chain.invoke(({"question": message, "chat_history": memory.get_history()}))
memory.add_interaction(message, response)
```
### Expected behavior
I want to get the intermediate output of the contextualized_question chain in
```python
RunnablePassthrough.assign(
context=contextualized_question | sub_retriever
)
```
so that I can easily debug the whole process.
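One way to do this in LangChain is to splice a pass-through step into the sequence, e.g. a `RunnableLambda` that prints its input and returns it unchanged. The underlying pattern is just an identity "tap"; a plain-Python sketch of that pattern (the steps below are stand-ins for `contextualized_question` and `sub_retriever`, not the real chain):

```python
def tap(label):
    """Identity step that logs whatever flows through it."""
    def _tap(value):
        print(f"[{label}] {value!r}")
        return value
    return _tap

# Stand-ins for the real chain steps (names are illustrative):
def contextualize(question):
    return question.strip().lower()

def retrieve(query):
    return [f"doc matching {query!r}"]

# Equivalent of: contextualized_question | tap(...) | sub_retriever
question = "  What Is RAG?  "
context = retrieve(tap("contextualized question")(contextualize(question)))
print(context)
```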
For now, I am just getting the final response from the chain, which is:
```python
response = rag_chain.invoke(({"question": message, "chat_history": memory.get_history()}))
``` | printing intermediate output from RAG chains | https://api.github.com/repos/langchain-ai/langchain/issues/15800/comments | 3 | 2024-01-10T05:52:27Z | 2024-01-11T01:16:50Z | https://github.com/langchain-ai/langchain/issues/15800 | 2,073,641,136 | 15,800 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Please look at the example below. I used ChatPromptTemplate to chat with GPT, but the output always has an "AI: " prefix; how can I remove it?
```python
def chat(self, messages):
history = [("system", SYSTEM)]
for message in messages:
if message["role"] == "user":
history.append(("human", message["content"]))
else:
history.append(("ai", message["content"]))
prompt = ChatPromptTemplate.from_messages(history)
chat_chain = prompt | self.model
res = chat_chain.stream({})
return res
```
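The prompt template itself does not add the prefix; with the history rendered as "ai"/"human" messages, some models simply imitate the transcript style and emit "AI: " themselves. A pragmatic guard is to strip the role marker from the returned text before displaying it. A minimal sketch (plain Python; the helper name is made up):

```python
def strip_role_prefix(text: str, prefix: str = "AI: ") -> str:
    """Remove a leading role marker if present; leave other text untouched."""
    if text.startswith(prefix):
        return text[len(prefix):]
    return text

print(strip_role_prefix("AI: Hello, how can I help?"))  # -> Hello, how can I help?
print(strip_role_prefix("Hello, how can I help?"))      # unchanged
```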

### Suggestion:
_No response_ | Issue: How to use ChatPromptTemplate? | https://api.github.com/repos/langchain-ai/langchain/issues/15797/comments | 5 | 2024-01-10T05:00:37Z | 2024-07-10T16:05:40Z | https://github.com/langchain-ai/langchain/issues/15797 | 2,073,590,016 | 15,797 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am developing in a Colab environment and I have a typing_extensions issue.
Package Version
-------------------------------- ---------------------
absl-py 1.4.0
aiohttp 3.9.1
aiosignal 1.3.1
alabaster 0.7.13
albumentations 1.3.1
altair 4.2.2
anyio 3.7.1
appdirs 1.4.4
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
array-record 0.5.0
arviz 0.15.1
astropy 5.3.4
astunparse 1.6.3
async-timeout 4.0.3
atpublic 4.0
attrs 23.2.0
audioread 3.0.1
autograd 1.6.2
Babel 2.14.0
backcall 0.2.0
backoff 2.2.1
beautifulsoup4 4.11.2
bidict 0.22.1
bigframes 0.18.0
bleach 6.1.0
blinker 1.4
blis 0.7.11
blosc2 2.0.0
bokeh 3.3.2
bqplot 0.12.42
branca 0.7.0
build 1.0.3
CacheControl 0.13.1
cachetools 5.3.2
catalogue 2.0.10
certifi 2023.11.17
cffi 1.16.0
chardet 5.2.0
charset-normalizer 3.3.2
chex 0.1.7
click 8.1.7
click-plugins 1.1.1
cligj 0.7.2
cloudpickle 2.2.1
cmake 3.27.9
cmdstanpy 1.2.0
cohere 4.41
colorcet 3.0.1
colorlover 0.3.0
colour 0.1.5
community 1.0.0b1
confection 0.1.4
cons 0.4.6
contextlib2 21.6.0
contourpy 1.2.0
cryptography 41.0.7
cufflinks 0.17.3
cupy-cuda12x 12.2.0
cvxopt 1.3.2
cvxpy 1.3.2
cycler 0.12.1
cymem 2.0.8
Cython 3.0.7
dask 2023.8.1
dataclasses-json 0.6.3
datascience 0.17.6
db-dtypes 1.2.0
dbus-python 1.2.18
debugpy 1.6.6
decorator 4.4.2
defusedxml 0.7.1
diskcache 5.6.3
distributed 2023.8.1
distro 1.7.0
dlib 19.24.2
dm-tree 0.1.8
docutils 0.18.1
dopamine-rl 4.0.6
duckdb 0.9.2
earthengine-api 0.1.384
easydict 1.11
ecos 2.0.12
editdistance 0.6.2
eerepr 0.0.4
en-core-web-sm 3.6.0
entrypoints 0.4
et-xmlfile 1.1.0
etils 1.6.0
etuples 0.3.9
exceptiongroup 1.2.0
fastai 2.7.13
fastavro 1.9.3
fastcore 1.5.29
fastdownload 0.0.7
fastjsonschema 2.19.1
fastprogress 1.0.3
fastrlock 0.8.2
filelock 3.13.1
fiona 1.9.5
firebase-admin 5.3.0
Flask 2.2.5
flatbuffers 23.5.26
flax 0.7.5
folium 0.14.0
fonttools 4.47.0
frozendict 2.4.0
frozenlist 1.4.1
fsspec 2023.6.0
future 0.18.3
gast 0.5.4
gcsfs 2023.6.0
GDAL 3.4.3
gdown 4.6.6
geemap 0.30.0
gensim 4.3.2
geocoder 1.38.1
geographiclib 2.0
geopandas 0.13.2
geopy 2.3.0
gin-config 0.5.0
glob2 0.7
google 2.0.3
google-ai-generativelanguage 0.4.0
google-api-core 2.11.1
google-api-python-client 2.84.0
google-auth 2.17.3
google-auth-httplib2 0.1.1
google-auth-oauthlib 1.2.0
google-cloud-aiplatform 1.38.1
google-cloud-bigquery 3.12.0
google-cloud-bigquery-connection 1.12.1
google-cloud-bigquery-storage 2.24.0
google-cloud-core 2.3.3
google-cloud-datastore 2.15.2
google-cloud-firestore 2.11.1
google-cloud-functions 1.13.3
google-cloud-iam 2.13.0
google-cloud-language 2.9.1
google-cloud-resource-manager 1.11.0
google-cloud-storage 2.8.0
google-cloud-translate 3.11.3
google-colab 1.0.0
google-crc32c 1.5.0
google-generativeai 0.3.2
google-pasta 0.2.0
google-resumable-media 2.7.0
googleapis-common-protos 1.62.0
googledrivedownloader 0.4
graphviz 0.20.1
greenlet 3.0.3
grpc-google-iam-v1 0.13.0
grpcio 1.60.0
grpcio-status 1.48.2
gspread 3.4.2
gspread-dataframe 3.3.1
gym 0.25.2
gym-notices 0.0.8
h11 0.14.0
h5netcdf 1.3.0
h5py 3.9.0
holidays 0.40
holoviews 1.17.1
html5lib 1.1
httpcore 1.0.2
httpimport 1.3.1
httplib2 0.22.0
httpx 0.26.0
huggingface-hub 0.20.2
humanize 4.7.0
hyperopt 0.2.7
ibis-framework 7.1.0
idna 3.6
imageio 2.31.6
imageio-ffmpeg 0.4.9
imagesize 1.4.1
imbalanced-learn 0.10.1
imgaug 0.4.0
importlib-metadata 6.11.0
importlib-resources 6.1.1
imutils 0.5.4
inflect 7.0.0
iniconfig 2.0.0
install 1.3.5
intel-openmp 2023.2.3
ipyevents 2.0.2
ipyfilechooser 0.6.0
ipykernel 5.5.6
ipyleaflet 0.18.1
ipython 7.34.0
ipython-genutils 0.2.0
ipython-sql 0.5.0
ipytree 0.2.2
ipywidgets 7.7.1
itsdangerous 2.1.2
jax 0.4.23
jaxlib 0.4.23+cuda12.cudnn89
jeepney 0.7.1
jieba 0.42.1
Jinja2 3.1.2
joblib 1.3.2
jsonpatch 1.33
jsonpickle 3.0.2
jsonpointer 2.4
jsonschema 4.19.2
jsonschema-specifications 2023.12.1
jupyter-client 6.1.12
jupyter-console 6.1.0
jupyter_core 5.7.0
jupyter-server 1.24.0
jupyterlab_pygments 0.3.0
jupyterlab-widgets 3.0.9
kaggle 1.5.16
kagglehub 0.1.4
keras 2.15.0
keyring 23.5.0
kiwisolver 1.4.5
langchain 0.1.0
langchain-community 0.0.11
langchain-core 0.1.8
langcodes 3.3.0
langsmith 0.0.79
launchpadlib 1.10.16
lazr.restfulclient 0.14.4
lazr.uri 1.0.6
lazy_loader 0.3
libclang 16.0.6
librosa 0.10.1
lida 0.0.10
lightgbm 4.1.0
linkify-it-py 2.0.2
llmx 0.0.15a0
llvmlite 0.41.1
locket 1.0.0
logical-unification 0.4.6
lxml 4.9.4
malloy 2023.1067
Markdown 3.5.1
markdown-it-py 3.0.0
MarkupSafe 2.1.3
marshmallow 3.20.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
matplotlib-venn 0.11.9
mdit-py-plugins 0.4.0
mdurl 0.1.2
miniKanren 1.0.3
missingno 0.5.2
mistune 0.8.4
mizani 0.9.3
mkl 2023.2.0
ml-dtypes 0.2.0
mlxtend 0.22.0
more-itertools 10.1.0
moviepy 1.0.3
mpmath 1.3.0
msgpack 1.0.7
multidict 6.0.4
multipledispatch 1.0.0
multitasking 0.0.11
murmurhash 1.0.10
music21 9.1.0
mypy-extensions 1.0.0
natsort 8.4.0
nbclassic 1.0.0
nbclient 0.9.0
nbconvert 6.5.4
nbformat 5.9.2
nest-asyncio 1.5.8
networkx 3.2.1
nibabel 4.0.2
nltk 3.8.1
notebook 6.5.5
notebook_shim 0.2.3
numba 0.58.1
numexpr 2.8.8
numpy 1.23.5
oauth2client 4.1.3
oauthlib 3.2.2
openai 1.7.0
opencv-contrib-python 4.8.0.76
opencv-python 4.8.0.76
opencv-python-headless 4.9.0.80
openpyxl 3.1.2
opt-einsum 3.3.0
optax 0.1.7
orbax-checkpoint 0.4.4
osqp 0.6.2.post8
packaging 23.2
pandas 1.5.3
pandas-datareader 0.10.0
pandas-gbq 0.19.2
pandas-stubs 1.5.3.230304
pandocfilters 1.5.0
panel 1.3.6
param 2.0.1
parso 0.8.3
parsy 2.1
partd 1.4.1
pathlib 1.0.1
pathy 0.10.3
patsy 0.5.6
peewee 3.17.0
pexpect 4.9.0
pickleshare 0.7.5
Pillow 9.4.0
pins 0.8.4
pip 23.3.2
pip-tools 6.13.0
platformdirs 4.1.0
plotly 5.15.0
plotnine 0.12.4
pluggy 1.3.0
polars 0.17.3
pooch 1.8.0
portpicker 1.5.2
prefetch-generator 1.0.3
preshed 3.0.9
prettytable 3.9.0
proglog 0.1.10
progressbar2 4.2.0
prometheus-client 0.19.0
promise 2.3
prompt-toolkit 3.0.43
prophet 1.1.5
proto-plus 1.23.0
protobuf 3.20.3
psutil 5.9.5
psycopg2 2.9.9
ptyprocess 0.7.0
py-cpuinfo 9.0.0
py4j 0.10.9.7
pyarrow 10.0.1
pyarrow-hotfix 0.6
pyasn1 0.5.1
pyasn1-modules 0.3.0
pycocotools 2.0.7
pycparser 2.21
pyct 0.5.0
pydantic 1.10.13
pydata-google-auth 1.8.2
pydot 1.4.2
pydot-ng 2.0.0
pydotplus 2.0.2
PyDrive 1.3.1
PyDrive2 1.6.3
pyerfa 2.0.1.1
pygame 2.5.2
Pygments 2.16.1
PyGObject 3.42.1
PyJWT 2.3.0
pymc 5.7.2
pymystem3 0.2.0
PyOpenGL 3.1.7
pyOpenSSL 23.3.0
pyparsing 3.1.1
pyperclip 1.8.2
pyproj 3.6.1
pyproject_hooks 1.0.0
pyshp 2.3.1
PySocks 1.7.1
pytensor 2.14.2
pytest 7.4.4
python-apt 0.0.0
python-box 7.1.1
python-dateutil 2.8.2
python-louvain 0.16
python-slugify 8.0.1
python-utils 3.8.1
pytz 2023.3.post1
pyviz_comms 3.0.0
PyWavelets 1.5.0
PyYAML 6.0.1
pyzmq 23.2.1
qdldl 0.1.7.post0
qudida 0.0.4
ratelim 0.1.6
referencing 0.32.0
regex 2023.6.3
requests 2.31.0
requests-oauthlib 1.3.1
requirements-parser 0.5.0
rich 13.7.0
rpds-py 0.16.2
rpy2 3.4.2
rsa 4.9
safetensors 0.4.1
scikit-image 0.19.3
scikit-learn 1.2.2
scipy 1.11.4
scooby 0.9.2
scs 3.2.4.post1
seaborn 0.12.2
SecretStorage 3.3.1
Send2Trash 1.8.2
setuptools 67.7.2
shapely 2.0.2
six 1.16.0
sklearn-pandas 2.2.0
smart-open 6.4.0
sniffio 1.3.0
snowballstemmer 2.2.0
sortedcontainers 2.4.0
soundfile 0.12.1
soupsieve 2.5
soxr 0.3.7
spacy 3.6.1
spacy-legacy 3.0.12
spacy-loggers 1.0.5
Sphinx 5.0.2
sphinxcontrib-applehelp 1.0.7
sphinxcontrib-devhelp 1.0.5
sphinxcontrib-htmlhelp 2.0.4
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.6
sphinxcontrib-serializinghtml 1.1.9
SQLAlchemy 2.0.24
sqlglot 19.9.0
sqlparse 0.4.4
srsly 2.4.8
stanio 0.3.0
statsmodels 0.14.1
sympy 1.12
tables 3.8.0
tabulate 0.9.0
tbb 2021.11.0
tblib 3.0.0
tenacity 8.2.3
tensorboard 2.15.1
tensorboard-data-server 0.7.2
tensorflow 2.15.0
tensorflow-datasets 4.9.4
tensorflow-estimator 2.15.0
tensorflow-gcs-config 2.15.0
tensorflow-hub 0.15.0
tensorflow-io-gcs-filesystem 0.35.0
tensorflow-metadata 1.14.0
tensorflow-probability 0.23.0
tensorstore 0.1.45
termcolor 2.4.0
terminado 0.18.0
text-unidecode 1.3
textblob 0.17.1
tf-slim 1.1.0
thinc 8.1.12
threadpoolctl 3.2.0
tifffile 2023.12.9
tiktoken 0.5.2
tinycss2 1.2.1
tokenizers 0.15.0
toml 0.10.2
tomli 2.0.1
toolz 0.12.0
torch 2.1.0+cu121
torchaudio 2.1.0+cu121
torchdata 0.7.0
torchsummary 1.5.1
torchtext 0.16.0
torchvision 0.16.0+cu121
tornado 6.3.2
tqdm 4.66.1
traitlets 5.7.1
traittypes 0.2.1
transformers 4.35.2
triton 2.1.0
tweepy 4.14.0
typer 0.9.0
types-pytz 2023.3.1.1
types-setuptools 69.0.0.20240106
typing_extensions 4.7.0
typing-inspect 0.9.0
tzlocal 5.2
uc-micro-py 1.0.2
uritemplate 4.1.1
urllib3 2.0.7
vega-datasets 0.9.0
wadllib 1.3.6
wasabi 1.1.2
wcwidth 0.2.12
webcolors 1.13
webencodings 0.5.1
websocket-client 1.7.0
Werkzeug 3.0.1
wheel 0.42.0
widgetsnbextension 3.6.6
wordcloud 1.9.3
wrapt 1.14.1
xarray 2023.7.0
xarray-einstats 0.6.0
xgboost 2.0.3
xlrd 2.0.1
xxhash 3.4.1
xyzservices 2023.10.1
yarl 1.9.4
yellowbrick 1.5
yfinance 0.2.33
zict 3.0.0
zipp 3.17.0
------------------------------------------------------------------------------------------------------------------------------------------
The error occurs at `embeddings = OpenAIEmbeddings()`:

```python
from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.embeddings import OpenAIEmbeddings
from langchain.vectorstores.chroma import Chroma

embeddings = OpenAIEmbeddings()  # <-- raises the ImportError below
emb = embeddings.embed_query("beef dishes")
# print(emb)

text_splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=100,
    chunk_overlap=0,
)
loader = TextLoader("/content/drive/MyDrive/food.txt", encoding='utf-8')
# loader = TextLoader("facts.txt")
docs = loader.load_and_split(
    text_splitter=text_splitter,
)

db = Chroma(embedding_function=embeddings)
db.add_documents(docs, persist_directory="emb")

results = db.similarity_search_with_score("looking for beef dishes?")
for result in results:
    print("\n")
    print(result[1])
    print(result[0].page_content)
```
-----------------------------------------------------------------------------------------
```
ImportError                               Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/langchain_community/embeddings/openai.py in validate_environment(cls, values)
    326         try:
--> 327             import openai
    328         except ImportError:

10 frames

ImportError: cannot import name 'Iterator' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/langchain_community/embeddings/openai.py in validate_environment(cls, values)
    327             import openai
    328         except ImportError:
--> 329             raise ImportError(
    330                 "Could not import openai python package. "
    331                 "Please install it with `pip install openai`."

ImportError: Could not import openai python package. Please install it with `pip install openai`.
```
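Not part of the original report: the first traceback is the real failure. `openai` 1.7.0 imports `Iterator` from `typing_extensions`, and the installed copy (the list above pins 4.7.0, though the file on disk may be older or stale) does not provide it, so upgrading or reinstalling `typing_extensions` is the usual fix, rather than reinstalling `openai`. A small stdlib helper to confirm which import is actually failing:

```python
import importlib

def can_import(module_name: str, attr: str) -> bool:
    """Check whether `from module_name import attr` would succeed."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

# The failing import from the traceback above:
print(can_import("typing_extensions", "Iterator"))
# The same check against the stdlib, which is always available:
print(can_import("typing", "Iterator"))  # True
```

If the first check prints `False` while `typing_extensions` reports a recent version, the installed files are stale and a forced reinstall should clear it.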
### Suggestion:
_No response_ | Issue: ImportError in Langchain Community Library When Importing OpenAI Package Due to Typing_Extensions Issue | https://api.github.com/repos/langchain-ai/langchain/issues/15795/comments | 1 | 2024-01-10T03:30:46Z | 2024-04-17T16:33:20Z | https://github.com/langchain-ai/langchain/issues/15795 | 2,073,517,905 | 15,795 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
end_response = chain.run(
    input=input["input"],
    question=input["question"],
    callbacks=[StreamingHandler()],
    tags=tags,
)
```

`StreamingHandler()` is an extension of the langchain class `BaseCallbackHandler` and overrides its methods:

```python
def on_llm_new_token(self, token: str, **kwargs) -> None:
    if token:
        self.queue_event(event_data=token)
```
With a regular `LLMChain`:
```
conv_chain = LLMChain(
    llm=llm,
    memory=memory,
    prompt=chain_prompt,
    verbose=True,
)
```
this `on_llm_new_token` method gets invoked each call with each new token.
However, with create_structured_output_chain, it seems to get invoked with empty tokens each time:
```
conv_chain = create_structured_output_chain(
    output_schema=APydanticClass,
    llm=llm,
    prompt=chain_prompt,
    verbose=True,
)
```
### Who can help?
@agola11 seems the right person to tag 🙏
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use create_structured_output_chain with a pydantic schema
2. Attach a callback with on_llm_new_token overriden
3. on_llm_new_token gets invoked with empty tokens.
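The empty tokens in step 3 can be reproduced without any API call. With function-calling/structured-output chains, the streamed content delta is empty while the JSON arrives as function-call arguments (in langchain these ride on the chunk's `additional_kwargs`; the plain dicts below are illustrative stand-ins, not the real chunk objects):

```python
import json

# Simulated streaming deltas, shaped like OpenAI function-calling chunks:
chunks = [
    {"content": "", "function_call": {"arguments": '{"matches": ["a"],'}},
    {"content": "", "function_call": {"arguments": ' "not_matches": ["b"]}'}},
]

class Handler:
    def __init__(self):
        self.tokens = []  # what on_llm_new_token receives as `token`
        self.args = []    # where the structured payload actually arrives

    def on_llm_new_token(self, token, chunk=None):
        self.tokens.append(token)
        fc = (chunk or {}).get("function_call")
        if fc:
            self.args.append(fc["arguments"])

h = Handler()
for c in chunks:
    h.on_llm_new_token(c["content"], chunk=c)

print(h.tokens)                     # ['', ''] -> the "empty tokens"
print(json.loads("".join(h.args)))  # {'matches': ['a'], 'not_matches': ['b']}
```

So a handler that wants the structured output as it streams has to read the function-call arguments off the chunk rather than the `token` argument.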
### Expected behavior
Tokens streamed back in the json format of the schema requested. E.g. if the schema is:
```
class XYZ(BaseModel):
    matches: Optional[List[str]] = Field(
        default=None, description="abc"
    )
    not_matches: Optional[List[str]] = Field(
        default=None,
        description="def",
    )
```
I'd expect it to be streamed back token by token or even category by category. | create_structured_output_chain doesn't invoke the given callback and on_llm_new_token with tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15790/comments | 2 | 2024-01-10T02:43:26Z | 2024-04-18T16:21:24Z | https://github.com/langchain-ai/langchain/issues/15790 | 2,073,482,807 | 15,790 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers import (
    GoogleVertexAIMultiTurnSearchRetriever,
    GoogleVertexAISearchRetriever,
    GoogleCloudEnterpriseSearchRetriever,
)

PROJECT_ID = "my_project_id"
SEARCH_ENGINE_ID = "I tried both for datastore_id and app_id at Vertex Search"
LOCATION_ID = "us"

retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    location_id=LOCATION_ID,
    max_documents=3,
)

while 1:
    message = input()
    result = retriever.get_relevant_documents(message)
    for doc in result:
        print(doc)
```
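One detail worth checking before the configuration itself: the 404 below shows the resource path literally contains `['datastore_id']`, i.e. a Python list repr (or an unsubstituted placeholder) was interpolated into the path instead of a plain string ID. A small sketch of validating the ID before building the path (the path shape is copied from the error; the helper itself is illustrative, not part of the library):

```python
def datastore_path(project: str, location: str, datastore_id) -> str:
    """Build the datastore resource path; reject non-string IDs early."""
    if not isinstance(datastore_id, str):
        raise TypeError(
            f"datastore_id must be a str, got {type(datastore_id).__name__}: {datastore_id!r}"
        )
    return (
        f"projects/{project}/locations/{location}"
        f"/collections/default_collection/dataStores/{datastore_id}"
    )

print(datastore_path("500618827687", "us", "my-datastore"))
# Interpolating a list reproduces the malformed path from the 404:
print(f"dataStores/{['datastore_id']}")  # dataStores/['datastore_id']
```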
### Expected behavior
I expected it to work with the defined datastore, but it returned this error:
```
google.api_core.exceptions.NotFound: 404 DataStore projects/500618827687/locations/us/collections/default_collection/dataStores/['datastore_id'] not found
``` | GoogleCloudEnterpriseSearchRetriever returned 'datastore not found' error even with the 'us' configurations | https://api.github.com/repos/langchain-ai/langchain/issues/15785/comments | 7 | 2024-01-10T00:05:52Z | 2024-01-22T23:17:32Z | https://github.com/langchain-ai/langchain/issues/15785 | 2,073,361,082 | 15,785 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am currently utilizing LangChain version 0.0.335 in my Fast API Python application. In the main.py file, the following code snippet is implemented:
main.py
```
streaming_model = ChatOpenAI(
    model_name="gpt-4",
    temperature=0.1,
    openai_api_key=os.getenv("OPENAI_API_KEY2"),
)
non_streaming_model = ChatOpenAI(
    model_name="gpt-4",
    temperature=0.1,
    openai_api_key=os.getenv("OPENAI_API_KEY2"),
)
retriever = vector_store.as_retriever()
sales_persona_prompt = PromptTemplate.from_template(SALES_PERSONA_PROMPT)
condense_prompt = PromptTemplate.from_template(CONDENSE_PROMPT)
chain = ConversationalRetrievalChain.from_llm(
    llm=streaming_model,
    retriever=retriever,
    condense_question_prompt=condense_prompt,
    condense_question_llm=non_streaming_model,
    combine_docs_chain_kwargs={"prompt": sales_persona_prompt},
    verbose=True,
)
try:  # the matching try: was missing from the pasted snippet
    return chain(
        {"question": sanitized_question, "chat_history": conversation_history}
    )
except Exception as e:
    return {"error": str(e)}
```
However, this implementation throws the following error:
` "error": "2 validation errors for LLMChain\nllm\n instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)\nllm\n instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)"`
Expected Behavior:
I expected the code to execute without errors. The issue seems to be related to the expected types for the llm parameter in the ConversationalRetrievalChain.from_llm method.
Request for Assistance:
I kindly request assistance in understanding and resolving this issue. Any insights, recommendations, or specific steps to address the error would be highly appreciated.
Thank you for your time and support.
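Not from the report, but a common mechanism behind "instance of Runnable expected" when the object being passed looks correct: two different copies or versions of the base class end up importable (e.g. langchain 0.0.335 alongside a newer split-out core package, or a duplicated install), so the `isinstance` check inside validation fails. A stdlib sketch of that failure mode:

```python
import types

# Two "copies" of the same base class, as happens with mismatched installs:
core_v1 = types.ModuleType("core_v1")
core_v2 = types.ModuleType("core_v2")

class RunnableV1:  # what the chain object was built against
    pass

class RunnableV2:  # what the validator checks against
    pass

core_v1.Runnable = RunnableV1
core_v2.Runnable = RunnableV2

obj = core_v1.Runnable()  # "looks" like a Runnable...
print(isinstance(obj, core_v2.Runnable))  # False -> "instance of Runnable expected"
```

Checking that the installed langchain packages are mutually compatible versions (e.g. via `pip list`) is a reasonable first step.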
### Suggestion:
_No response_ | Issue with LangChain v0.0.335 - Error in ChatOpenAI Callbacks Expected Runnable Instances | https://api.github.com/repos/langchain-ai/langchain/issues/15779/comments | 4 | 2024-01-09T21:13:36Z | 2024-03-02T01:26:21Z | https://github.com/langchain-ai/langchain/issues/15779 | 2,073,178,041 | 15,779 |
[
"langchain-ai",
"langchain"
] | ### System Info
**Platform**: Ubuntu 22.04
**Python**: 3.10
**Langchain**:
langchain 0.1.0
langchain-community 0.0.10
langchain-core 0.1.8
langchain-openai 0.0.2
langsmith 0.0.78
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I used this code :
```py
out = chain.batch(entries, config={"max_concurrency": 3})
```
I can see in LangSmith that more than 12 requests were made in parallel, causing a rate-limit failure with the OpenAI API (TPM).
```
RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-3.5-turbo-1106 in organization org-W83OoPhCAmgMx2r35aLyv9Tr on tokens per min (TPM): Limit 60000, Used 54134, Requested 6465. Please try again in 599ms. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
```
### Expected behavior
I would expect max_concurrency to limit the amount of concurrency used, but actually that doesn't seem to be the case.
Batch doesn't seem to limit concurrency at all.
This code works perfectly :
```py
from concurrent.futures import ThreadPoolExecutor

def batch_chain(inputs: list) -> list:
    with ThreadPoolExecutor(max_workers=3) as executor:
        return list(executor.map(chain.invoke, inputs))

out = batch_chain(entries)
``` | chain.batch() doesn't use config options properly (max concurrency) | https://api.github.com/repos/langchain-ai/langchain/issues/15767/comments | 9 | 2024-01-09T18:34:52Z | 2024-06-11T15:43:01Z | https://github.com/langchain-ai/langchain/issues/15767 | 2,072,940,890 | 15,767 |
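To tell whether a concurrency cap is actually honored (for `batch` or the `ThreadPoolExecutor` fallback), peak concurrency can be measured directly. Stdlib sketch, where `fake_invoke` is a stand-in for `chain.invoke`:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class ConcurrencyProbe:
    """Context manager that records how many workers run inside it at once."""

    def __init__(self):
        self._lock = threading.Lock()
        self.current = 0
        self.peak = 0

    def __enter__(self):
        with self._lock:
            self.current += 1
            self.peak = max(self.peak, self.current)

    def __exit__(self, *exc):
        with self._lock:
            self.current -= 1

probe = ConcurrencyProbe()

def fake_invoke(x):  # stand-in for the real LLM call
    with probe:
        time.sleep(0.05)
    return x

with ThreadPoolExecutor(max_workers=3) as executor:
    list(executor.map(fake_invoke, range(12)))

print(probe.peak)  # at most 3, the pool size
```

Wrapping the real `chain.invoke` the same way would show whether `max_concurrency` is being applied.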
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How can EnsembleRetriever be called asynchronously? I have a dataset with ~1k questions and I wish to find the documents that can best answer each of them. However, calling it sequentially takes a lot of time. Can I run the retriever in parallel for all rows (or chunks of it)? Or is there a different way to optimise the run times?
I'm calling it like this now but it gives out a segmentation fault after getting stuck for an hour
```
import asyncio

queries = [query1, query2, ...]

async def process_query(profile):
    result = await ensemble_retriever.aget_relevant_documents(profile)
    return result

async def process_all_queries():
    tasks = [process_query(query) for query in queries]
    results = await asyncio.gather(*tasks)
    return results

results = asyncio.run(process_all_queries())
```
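`gather` with ~1k bare tasks starts them all at once, which is a plausible cause of the stall; bounding the fan-out with a semaphore usually helps (and plays nicer with rate limits). Stdlib sketch, where `fake_retrieve` is a stand-in for `ensemble_retriever.aget_relevant_documents`:

```python
import asyncio

async def fake_retrieve(query):  # stand-in for the real async retrieval call
    await asyncio.sleep(0.01)
    return f"docs for {query}"

async def bounded_gather(queries, limit=16):
    sem = asyncio.Semaphore(limit)  # caps the number of in-flight retrievals

    async def one(query):
        async with sem:
            return await fake_retrieve(query)

    return await asyncio.gather(*(one(q) for q in queries))

results = asyncio.run(bounded_gather([f"q{i}" for i in range(100)]))
print(len(results), results[0])
```

Note that BM25 and other purely local retrievers may run synchronous code per call even under the async API, so a bound also limits how much of that work piles up at once.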
### Suggestion:
_No response_ | Async with EnsembleRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/15764/comments | 6 | 2024-01-09T17:13:31Z | 2024-04-18T17:00:46Z | https://github.com/langchain-ai/langchain/issues/15764 | 2,072,810,448 | 15,764 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am developing a Streamlit application where I aim to stream the agent's responses to the UI. Previously, I was able to achieve this by utilizing chains with a simple call to ```chain.stream()```. However, after switching to agents, I cannot stream its response in the same way given that it is implemented in LCEL.
I've tried to use ```StreamingStdOutCallbackHandler``` but the response gets streamed in the terminal only and not to the UI.
Any insights, guidance, or fixes regarding this issue would be greatly appreciated
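One pattern that works regardless of chain vs. agent: have the callback push tokens onto a queue, and let the UI thread drain a generator of them (Streamlit can render from such a loop). Stdlib sketch, where `fake_agent_run` is a stand-in for invoking the agent executor with the handler in its `callbacks`:

```python
import queue
import threading

SENTINEL = object()

class QueueTokenHandler:
    """Callback-style handler that hands tokens to a queue a UI loop can drain."""

    def __init__(self):
        self.q = queue.Queue()

    def on_llm_new_token(self, token, **kwargs):
        self.q.put(token)

    def on_llm_end(self, *args, **kwargs):
        self.q.put(SENTINEL)

def token_stream(handler):
    """Generator the UI consumes until the run finishes."""
    while True:
        tok = handler.q.get()
        if tok is SENTINEL:
            return
        yield tok

handler = QueueTokenHandler()

def fake_agent_run():  # stand-in for agent_executor.invoke(..., callbacks=[handler])
    for t in ["Hel", "lo ", "world"]:
        handler.on_llm_new_token(t)
    handler.on_llm_end()

threading.Thread(target=fake_agent_run).start()
streamed = "".join(token_stream(handler))
print(streamed)  # Hello world
```

In a Streamlit app, the agent call would run on a worker thread while the main script iterates `token_stream(handler)` and writes each token to a placeholder.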
### Suggestion:
_No response_ | Issue: Streaming agent's response to Streamlit UI | https://api.github.com/repos/langchain-ai/langchain/issues/15747/comments | 1 | 2024-01-09T13:06:25Z | 2024-01-09T14:42:49Z | https://github.com/langchain-ai/langchain/issues/15747 | 2,072,340,407 | 15,747 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Everything was working fine, but now I'm suddenly receiving all sorts of LangChain deprecation warnings.
I installed the langchain_openai package and the langchain_community package, and replaced all the imports with the ones suggested in the error. That went well, but now I'm stuck on this issue.
The error is:
```
/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:115: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
```
In my code I've replaced all `run` calls with `invoke`, but I don't know why this warning still comes up.
I'm also using a LangChain summarizer, and my usage matches the documentation exactly.
I don't know how to get rid of that deprecation warning now. I don't want to suppress it; I want to resolve it so it won't cause any issues in the future.
**This is the only code that I've related to LangChain:**
```
# Langchain Libraries
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.docstore.document import Document
from langchain_community.callbacks import get_openai_callback
from langchain.text_splitter import TokenTextSplitter
from langchain.chains.summarize import load_summarize_chain
from langchain_core.output_parsers import StrOutputParser

# ------------------------------------------------------------
# General ChatGPT function that's required for all the Call-type Prompts
def chatgpt_function(prompt, transcript):
    model_kwargs = {"seed": 235, "top_p": 0.01}
    llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0, model_kwargs=model_kwargs, max_tokens=tokens)
    template = """
    {prompt}
    Call Transcript: ```{text}```
    """
    prompt_main = PromptTemplate(
        input_variables=["prompt", "text"],
        template=template,
    )
    with get_openai_callback() as cb:
        # llm_chain = LLMChain(llm=llm, prompt=prompt_main)
        output_parser = StrOutputParser()
        llm_chain = prompt_main | llm | output_parser
        all_text = str(template) + str(prompt) + str(transcript)
        threshold = (llm.get_num_tokens(text=all_text) + tokens)
        # print("Total Tokens:", threshold)
        if int(threshold) <= 4000:
            chatgpt_output = llm_chain.invoke({"prompt": prompt, "text": transcript})
        else:
            transcript_ = token_limiter(transcript)
            chatgpt_output = llm_chain.invoke({"prompt": prompt, "text": transcript_})
    return chatgpt_output
# -------------------------------------------------------
# Function to get refined summary if Transcript is long
def token_limiter(transcript):
    text_splitter = TokenTextSplitter(chunk_size=3000, chunk_overlap=200)
    texts = text_splitter.split_text(transcript)
    docs = [Document(page_content=text) for text in texts]
    question_prompt_template = """
    I'm providing you a call transcript refined summary enclosed in triple backticks. Summarize it further.
    Call Transcript: ```{text}```
    Provide me a summary transcript. Do not add any title/heading like summary or anything else. Just give the summary text.
    """
    question_prompt = PromptTemplate(
        template=question_prompt_template, input_variables=["text"]
    )
    refine_prompt_template = """
    Write a summary of the following text enclosed in triple backticks (```).
    ```{text}```
    """
    refine_prompt = PromptTemplate(
        template=refine_prompt_template, input_variables=["text"]
    )
    llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0, max_tokens=800)
    refine_chain = load_summarize_chain(
        llm,
        chain_type="refine",
        question_prompt=question_prompt,
        refine_prompt=refine_prompt,
        return_intermediate_steps=True,
    )
    summary_refine = refine_chain({"input_documents": docs}, return_only_outputs=True)
    return summary_refine['output_text']
```
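One remaining deprecated call in this code is likely `refine_chain({"input_documents": docs}, ...)` in `token_limiter`: calling the chain object directly is exactly the `__call__` the warning refers to, so switching it to `refine_chain.invoke(...)` should help (exact keyword handling may differ; worth checking the chain docs). To pinpoint any other remaining call site, the warning can be promoted to an error so the traceback names the caller. Stdlib-only sketch, with `legacy_call` standing in for the deprecated call:

```python
import warnings

def legacy_call():
    """Stand-in for a chain still being invoked via its deprecated __call__."""
    warnings.warn(
        "The function `__call__` was deprecated in LangChain 0.1.0",
        DeprecationWarning,
        stacklevel=2,
    )
    return "result"

caught = None
with warnings.catch_warnings():
    # Promote just this warning to an error; the traceback then shows the call site.
    warnings.filterwarnings("error", message=".*__call__.*deprecated.*")
    try:
        legacy_call()
    except DeprecationWarning as exc:
        caught = exc

print("deprecated call site found:", caught)
```

Running the real application with the same filter active turns the silent warning into a traceback that points at the offending line.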
### Suggestion:
Please let me know what I need to change in my code to get rid of that Deprecation warning. Thank you | Issue: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead. warn_deprecated( | https://api.github.com/repos/langchain-ai/langchain/issues/15741/comments | 11 | 2024-01-09T10:53:09Z | 2024-04-14T20:26:29Z | https://github.com/langchain-ai/langchain/issues/15741 | 2,072,124,775 | 15,741 |
[
"langchain-ai",
"langchain"
] | ### System Info
From pyproject.toml:
python=3.11.5
crewai = "0.1.6"
langchain = '==0.0.335'
openai = '==0.28.1'
unstructured = '==0.10.25'
pyowm = '3.3.0'
tools = "^0.1.9"
wikipedia = "1.4.0"
yfinance = "0.2.33"
sec-api = "1.0.17"
tiktoken = "0.5.2"
faiss-cpu = "1.7.4"
python-dotenv = "1.0.0"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running any of the scripts in crewAI (https://github.com/joaomdmoura/crewAI), I get the following:
Connection error caused failure to patch http://localhost:1984/runs/7fdd9cf2-4f50-4ee1-8fef-9202b07cc756 in LangSmith API. Please confirm your LANGCHAIN_ENDPOINT. ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=1984): Max retries exceeded with url: /runs/7fdd9cf2-4f50-4ee1-8fef-9202b07cc756 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x130d51e10>: Failed to establish a new connection: [Errno 61] Connection refused'))"))
Connection error caused failure to post http://localhost:1984/runs in LangSmith API. Please confirm your LANGCHAIN_ENDPOINT. ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=1984): Max retries exceeded with url: /runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x130d6b390>: Failed to establish a new connection: [Errno 61] Connection refused'))"))
I ma not running LangSmith nor do I have any access to it. I have tried setting in my .env to no effect.
LANGCHAIN_TRACING=false
LANGCHAIN_TRACING_V2=false
LANGCHAIN_HANDLER=None
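A possible reason the .env settings have no effect (an assumption, not confirmed for crewAI): some code paths check only for a tracing variable's *presence*, so setting it to "false"/"None" behaves differently from removing it, and values loaded after the tracer initializes arrive too late. A stdlib sketch to inspect, then fully unset, the relevant variables from inside the process (the variable list is a guess at what gets checked):

```python
import os

TRACING_VARS = [
    "LANGCHAIN_TRACING",
    "LANGCHAIN_TRACING_V2",
    "LANGCHAIN_ENDPOINT",
    "LANGCHAIN_API_KEY",
    "LANGCHAIN_HANDLER",
]

def effective_tracing_env() -> dict:
    """What the current process actually sees, regardless of .env contents."""
    return {name: os.environ.get(name) for name in TRACING_VARS}

print(effective_tracing_env())

# Unset rather than set-to-"false", in case a consumer treats any value as truthy:
for name in TRACING_VARS:
    os.environ.pop(name, None)

print(effective_tracing_env())  # all None now
```

Running this before importing crewAI/langchain would rule out stale environment state as the cause.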
### Expected behavior
I don't expect to see these error reports. Note that not all users are seeing this error.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    MessagesPlaceholder,
    HumanMessagePromptTemplate,
)

os.environ['OPENAI_API_KEY'] = "key"

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
    host="1x2.1x8.xx.xx",
    port=5432,
    database="Ai",
    user="xxxxxxxxx",
    password=quote_plus("xxxxxx@xx"),
)
vectordb = PGVector(
    embedding_function=embeddings,
    collection_name="tmp04",
    connection_string=CONNECTION_STRING,
)

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "i am robot"
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

retriever = vectordb.as_retriever()
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 1000
memory = ConversationTokenBufferMemory(
    llm=llm,
    prompt=prompt,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
)

chat_history = []
while True:
    memory.load_memory_variables({})
    question = input('ask:')
    # question = retriever.get_relevant_documents(input('ask:'))
    result = qa.run({'question': question, 'chat_history': chat_history})
    print(result)
    chat_history.append([f'User: {question}', f'Ai: {result}'])
    print(chat_history)
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    st_history = ' '.join(map(str, chat_history))
    res = embeddings.embed_query(st_history)
    vectordb.add_vector(res)
    select_vdb = vectordb.similarity_search(question, k=5)
    print(select_vdb)
    print(f'ok: {res[:4]}...')
    if question.lower() == 'bye':
        break
```
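On the "designated table" question (an explanation, not a confirmed fix): PGVector generally keeps all collections in shared `langchain_pg_collection`/`langchain_pg_embedding` tables and scopes every query by `collection_name`, so pointing searches at your own data usually means constructing the store with your own collection name; a fully custom table would need a custom store. Also, the `ChatPromptTemplate` above is only handed to the memory, which does not use it, which is why the prompt seems to disappear; chain prompts are normally supplied via `combine_docs_chain_kwargs`. A stdlib stand-in for "search only the named table":

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stand-in for per-collection storage; names are illustrative only.
tables = {
    "tmp04":        [([1.0, 0.0], "old chat about beef dishes")],
    "other_corpus": [([0.0, 1.0], "unrelated document")],
}

def similarity_search(table: str, query_vec, k: int = 1):
    rows = tables[table]  # only the named table/collection is searched
    return sorted(rows, key=lambda r: -cosine(r[0], query_vec))[:k]

print(similarity_search("tmp04", [0.9, 0.1]))
```

The real equivalent is constructing `PGVector(collection_name="tmp04", ...)` as above: its `similarity_search` is already restricted to that collection.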
### Suggestion:
How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared | How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared | https://api.github.com/repos/langchain-ai/langchain/issues/15735/comments | 2 | 2024-01-09T08:59:07Z | 2024-04-16T16:20:31Z | https://github.com/langchain-ai/langchain/issues/15735 | 2,071,916,307 | 15,735 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
```python
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    MessagesPlaceholder,
    HumanMessagePromptTemplate,
)

os.environ['OPENAI_API_KEY'] = "key"

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
    host="1x2.1x8.xx.xx",
    port=5432,
    database="Ai",
    user="xxxxxxxxx",
    password=quote_plus("xxxxxx@xx"),
)
vectordb = PGVector(
    embedding_function=embeddings,
    collection_name="tmp04",
    connection_string=CONNECTION_STRING,
)

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "i am robot"
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

retriever = vectordb.as_retriever()
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 1000
memory = ConversationTokenBufferMemory(
    llm=llm,
    prompt=prompt,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
)

chat_history = []
while True:
    memory.load_memory_variables({})
    question = input('ask:')
    # question = retriever.get_relevant_documents(input('ask:'))
    result = qa.run({'question': question, 'chat_history': chat_history})
    print(result)
    chat_history.append([f'User: {question}', f'Ai: {result}'])
    print(chat_history)
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    st_history = ' '.join(map(str, chat_history))
    res = embeddings.embed_query(st_history)
    vectordb.add_vector(res)
    select_vdb = vectordb.similarity_search(question, k=5)
    print(select_vdb)
    print(f'ok: {res[:4]}...')
    if question.lower() == 'bye':
        break
```

### Idea or request for content:
How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared. | How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared. | https://api.github.com/repos/langchain-ai/langchain/issues/15734/comments | 4 | 2024-01-09T08:39:38Z | 2024-01-09T08:48:55Z | https://github.com/langchain-ai/langchain/issues/15734 | 2,071,885,054 | 15,734 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I just installed langchain 0.1.0 and according to the documentation
https://api.python.langchain.com/en/latest/_modules/langchain_openai/chat_models/azure.html#
AzureChatOpenAI should be in `langchain_openai.chat_models`, but it's instead in `langchain_community.chat_models`.
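Not part of the report: during the 0.1.0 package split, classes are often re-exported from more than one package, so the docs and the installed layout can disagree depending on which packages are installed. A stdlib helper to check which installed modules actually export a class (the langchain module names in the comment are taken from the issue; the Counter example is just a runnable stand-in):

```python
import importlib

def locate(class_name, candidate_modules):
    """Return which of the candidate modules actually export the class."""
    hits = []
    for mod_name in candidate_modules:
        try:
            mod = importlib.import_module(mod_name)
        except ImportError:
            continue
        if hasattr(mod, class_name):
            hits.append(mod_name)
    return hits

# For the issue above, one would check:
#   locate("AzureChatOpenAI", ["langchain_openai.chat_models",
#                              "langchain_community.chat_models"])
# Stdlib demonstration:
print(locate("Counter", ["collections", "json"]))  # ['collections']
```

If `langchain_openai` is not installed at all, the class can only resolve from `langchain_community`, which would explain the observed layout.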
### Idea or request for content:
_No response_ | DOC: AzureChatOpenAI in documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15733/comments | 1 | 2024-01-09T08:21:31Z | 2024-04-16T16:07:23Z | https://github.com/langchain-ai/langchain/issues/15733 | 2,071,858,324 | 15,733 |
[
"langchain-ai",
"langchain"
] | ### System Info
This is a random occurrence, perhaps after I ask many questions. When it happens, only clearing the memory recovers it.
The code used to ask:
```python
async for chunk in runnable.astream(  # or call astream_log
    question,
    config,
):
    await res.stream_token(chunk)
```
error information:
2024-01-09 13:32:02 - Error in LangchainTracer.on_llm_error callback: IndexError('list index out of range')
2024-01-09 13:32:02 -
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/chainlit/utils.py", line 39, in wrapper
return await user_function(**params_values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rag-app/main.py", line 164, in onMessage
await app.question_anwsering(message.content, False)
File "/rag-app/app.py", line 367, in question_anwsering
async for chunk in runnable.astream_log(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 752, in astream_log
await task
File "/usr/local/lib/python3.11/asyncio/futures.py", line 290, in __await__
return self.result() # May raise too.
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/futures.py", line 203, in result
raise self._exception.with_traceback(self._exception_tb)
File "/usr/local/lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 706, in consume_astream
async for chunk in self.astream(input, config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2158, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2141, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1308, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/futures.py", line 287, in __await__
yield self # This tells Task to wait for completion.
^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
future.result()
File "/usr/local/lib/python3.11/asyncio/futures.py", line 203, in result
raise self._exception.with_traceback(self._exception_tb)
File "/usr/local/lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2111, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1283, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 806, in atransform
async for output in self.astream(final, config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 307, in astream
raise e
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 299, in astream
assert generation is not None
^^^^^^^^^^^^^^^^^^^^^^
AssertionError
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
This is a random occurrence, maybe after I ask many questions. When it happens, only clearing the memory can recover.
### Expected behavior
fix it | LangchainTracer.on_llm_error callback: IndexError('list index out of range') | https://api.github.com/repos/langchain-ai/langchain/issues/15732/comments | 3 | 2024-01-09T07:20:53Z | 2024-04-17T16:32:32Z | https://github.com/langchain-ai/langchain/issues/15732 | 2,071,773,809 | 15,732 |
[
"langchain-ai",
"langchain"
] | ### System Info
- Langchain 0.1.0
- Python 3.11.7
- Fedora 39
<details><summary>requirements.txt</summary>
- aiohttp==3.9.1
- aiosignal==1.3.1
- annotated-types==0.6.0
- anyio==4.2.0
- argon2-cffi==23.1.0
- argon2-cffi-bindings==21.2.0
- arrow==1.3.0
- asgiref==3.7.2
- asttokens==2.4.1
- async-lru==2.0.4
- attrs==23.2.0
- Babel==2.14.0
- backoff==2.2.1
- bcrypt==4.1.2
- beautifulsoup4==4.12.2
- bleach==6.1.0
- build==1.0.3
- cachetools==5.3.2
- certifi==2023.11.17
- cffi==1.16.0
- chardet==5.2.0
- charset-normalizer==3.3.2
- chroma-hnswlib==0.7.3
- chromadb==0.4.22
- click==8.1.7
- coloredlogs==15.0.1
- comm==0.2.1
- dataclasses-json==0.6.3
- debugpy==1.8.0
- decorator==5.1.1
- defusedxml==0.7.1
- Deprecated==1.2.14
- distro==1.9.0
- docarray==0.40.0
- emoji==2.9.0
- executing==2.0.1
- fastapi==0.108.0
- fastjsonschema==2.19.1
- filelock==3.13.1
- filetype==1.2.0
- flatbuffers==23.5.26
- fqdn==1.5.1
- frozenlist==1.4.1
- fsspec==2023.12.2
- gitdb==4.0.11
- GitPython==3.1.40
- google-auth==2.26.1
- googleapis-common-protos==1.62.0
- greenlet==3.0.3
- grpcio==1.60.0
- h11==0.14.0
- httpcore==1.0.2
- httptools==0.6.1
- httpx==0.26.0
- huggingface-hub==0.20.2
- humanfriendly==10.0
- idna==3.6
- importlib-metadata==6.11.0
- importlib-resources==6.1.1
- ipykernel==6.28.0
- ipython==8.19.0
- isoduration==20.11.0
- jedi==0.19.1
- Jinja2==3.1.2
- joblib==1.3.2
- json5==0.9.14
- jsonpatch==1.33
- jsonpath-python==1.0.6
- jsonpointer==2.4
- jsonschema==4.20.0
- jsonschema-specifications==2023.12.1
- jupyter-events==0.9.0
- jupyter-lsp==2.2.1
- jupyter_client==8.6.0
- jupyter_core==5.7.0
- jupyter_server==2.12.2
- jupyter_server_terminals==0.5.1
- jupyterlab==4.0.10
- jupyterlab_pygments==0.3.0
- jupyterlab_server==2.25.2
- kubernetes==28.1.0
- langchain==0.1.0
- langchain-community==0.0.9
- langchain-core==0.1.7
- langchain-openai==0.0.2
- langdetect==1.0.9
- langsmith==0.0.77
- lxml==5.1.0
- Markdown==3.5.1
- markdown-it-py==3.0.0
- MarkupSafe==2.1.3
- marshmallow==3.20.1
- matplotlib-inline==0.1.6
- mdurl==0.1.2
- mistune==3.0.2
- mmh3==4.0.1
- monotonic==1.6
- mpmath==1.3.0
- multidict==6.0.4
- mypy-extensions==1.0.0
- nbclient==0.9.0
- nbconvert==7.14.0
- nbformat==5.9.2
- nest-asyncio==1.5.8
- nltk==3.8.1
- notebook_shim==0.2.3
- numpy==1.26.3
- oauthlib==3.2.2
- onnxruntime==1.16.3
- openai==1.6.1
- opentelemetry-api==1.22.0
- opentelemetry-exporter-otlp-proto-common==1.22.0
- opentelemetry-exporter-otlp-proto-grpc==1.22.0
- opentelemetry-instrumentation==0.43b0
- opentelemetry-instrumentation-asgi==0.43b0
- opentelemetry-instrumentation-fastapi==0.43b0
- opentelemetry-proto==1.22.0
- opentelemetry-sdk==1.22.0
- opentelemetry-semantic-conventions==0.43b0
- opentelemetry-util-http==0.43b0
- orjson==3.9.10
- overrides==7.4.0
- packaging==23.2
- pandocfilters==1.5.0
- parso==0.8.3
- pexpect==4.9.0
- platformdirs==4.1.0
- posthog==3.1.0
- prometheus-client==0.19.0
- prompt-toolkit==3.0.43
- protobuf==4.25.1
- psutil==5.9.7
- ptyprocess==0.7.0
- pulsar-client==3.4.0
- pure-eval==0.2.2
- pyasn1==0.5.1
- pyasn1-modules==0.3.0
- pycparser==2.21
- pydantic==2.5.3
- pydantic_core==2.14.6
- Pygments==2.17.2
- PyPika==0.48.9
- pyproject_hooks==1.0.0
- python-dateutil==2.8.2
- python-dotenv==1.0.0
- python-iso639==2024.1.2
- python-json-logger==2.0.7
- python-magic==0.4.27
- PyYAML==6.0.1
- pyzmq==25.1.2
- rapidfuzz==3.6.1
- referencing==0.32.1
- regex==2023.12.25
- requests==2.31.0
- requests-oauthlib==1.3.1
- rfc3339-validator==0.1.4
- rfc3986-validator==0.1.1
- rich==13.7.0
- rpds-py==0.16.2
- rsa==4.9
- Send2Trash==1.8.2
- six==1.16.0
- smmap==5.0.1
- sniffio==1.3.0
- soupsieve==2.5
- SQLAlchemy==2.0.25
- stack-data==0.6.3
- starlette==0.32.0.post1
- sympy==1.12
- tabulate==0.9.0
- tenacity==8.2.3
- terminado==0.18.0
- tiktoken==0.5.2
- tinycss2==1.2.1
- tokenizers==0.15.0
- tornado==6.4
- tqdm==4.66.1
- traitlets==5.14.1
- typer==0.9.0
- types-python-dateutil==2.8.19.20240106
- types-requests==2.31.0.20240106
- typing-inspect==0.9.0
- typing_extensions==4.9.0
- unstructured==0.11.8
- unstructured-client==0.15.2
- uri-template==1.3.0
- urllib3==1.26.18
- uvicorn==0.25.0
- uvloop==0.19.0
- watchfiles==0.21.0
- wcwidth==0.2.13
- webcolors==1.13
- webencodings==0.5.1
- websocket-client==1.7.0
- websockets==12.0
- wrapt==1.16.0
- yarl==1.9.4
- zipp==3.17.0
</details>
### Who can help?
@ey
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code sample to reproduce where `my-codebase` is a directory with a heterogeneous collection of files (.tsx, .json, .ts, .js, .md)
```
# Document loading: Load codebase from local directory
from langchain_community.document_loaders import DirectoryLoader
project_path = "my-codebase"
loader = DirectoryLoader(project_path, use_multithreading=False)
my_codebase_data = loader.load()
```
This creates the following error:
```
{
"name": "ValueError",
"message": "Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output.",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[16], line 8
4 project_path = \"my-codebase\"
6 loader = DirectoryLoader(project_path, use_multithreading=False)
----> 8 my_codebase_data = loader.load()
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/directory.py:157, in DirectoryLoader.load(self)
155 else:
156 for i in items:
--> 157 self.load_file(i, p, docs, pbar)
159 if pbar:
160 pbar.close()
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/directory.py:106, in DirectoryLoader.load_file(self, item, path, docs, pbar)
104 logger.warning(f\"Error loading file {str(item)}: {e}\")
105 else:
--> 106 raise e
107 finally:
108 if pbar:
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/directory.py:100, in DirectoryLoader.load_file(self, item, path, docs, pbar)
98 try:
99 logger.debug(f\"Processing file: {str(item)}\")
--> 100 sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
101 docs.extend(sub_docs)
102 except Exception as e:
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/unstructured.py:87, in UnstructuredBaseLoader.load(self)
85 def load(self) -> List[Document]:
86 \"\"\"Load file.\"\"\"
---> 87 elements = self._get_elements()
88 self._post_process_elements(elements)
89 if self.mode == \"elements\":
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/unstructured.py:173, in UnstructuredFileLoader._get_elements(self)
170 def _get_elements(self) -> List:
171 from unstructured.partition.auto import partition
--> 173 return partition(filename=self.file_path, **self.unstructured_kwargs)
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/unstructured/partition/auto.py:480, in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, skip_infer_table_types, ssl_verify, ocr_languages, languages, detect_language_per_element, pdf_infer_table_structure, pdf_extract_images, pdf_extract_element_types, pdf_image_output_dir_path, pdf_extract_to_payload, xml_keep_tags, data_source_metadata, metadata_filename, request_timeout, hi_res_model_name, model_name, **kwargs)
478 elif filetype == FileType.JSON:
479 if not is_json_processable(filename=filename, file=file):
--> 480 raise ValueError(
481 \"Detected a JSON file that does not conform to the Unstructured schema. \"
482 \"partition_json currently only processes serialized Unstructured output.\",
483 )
484 elements = partition_json(filename=filename, file=file, **kwargs)
485 elif (filetype == FileType.XLSX) or (filetype == FileType.XLS):
ValueError: Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output."
}
```
### Expected behavior
To get the expected behavior, set `use_multithreading` to True:
```
loader = DirectoryLoader(project_path, use_multithreading=True)
```
Doing this loads the files without error.
Curiously, I get the same successful load if I just set `silent_errors` to True:
```
loader = DirectoryLoader(project_path, use_multithreading=False, silent_errors=True)
```
In this case, the error is printed, but the execution is not halted.
Curiously, if I set `use_multithreading` to True and have `silent_errors` set to True, I get the same behaviour as for `use_multithreading=False`: this time it acknowledges that there are errors, whereas in the silent case it just ignores them and doesn't even print them.
```
loader = DirectoryLoader(project_path, use_multithreading=True, silent_errors=True)
```
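Until this is resolved, a workaround sketch is to partition the directory by extension so that `.json` files never reach Unstructured's auto-partitioner. `glob` and `loader_cls` are real `DirectoryLoader` parameters, but treat the snippet as an untested sketch:

```python
# Sketch only: route JSON (and other plain-text code files) through TextLoader,
# and let the default Unstructured loader handle the rest via per-extension globs.
from langchain_community.document_loaders import DirectoryLoader, TextLoader

json_docs = DirectoryLoader(project_path, glob="**/*.json", loader_cls=TextLoader).load()
ts_docs = DirectoryLoader(project_path, glob="**/*.ts", loader_cls=TextLoader).load()
md_docs = DirectoryLoader(project_path, glob="**/*.md").load()
docs = json_docs + ts_docs + md_docs
```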
### Additional thoughts
- This might need to be broken up into different issues
- I am also noticing that the `recursive` parameter is set to False by default, but it still recursively goes through each subdirectory in the directory. Is this expected?
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 20.04
I got this while reading a book PDF with `extract_images=True`.
```
File ~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py

    113         if xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITHOUT_LOSS:
    114             height, width = xObject[obj]["/Height"], xObject[obj]["/Width"]
    116             images.append(
--> 117                 np.frombuffer(xObject[obj].get_data(), dtype=np.uint8).reshape(
    118                     height, width, -1
    119                 )
    120             )
    121         elif xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITH_LOSS:
    122             images.append(xObject[obj].get_data())

ValueError: cannot reshape array of size 293 into shape (193,121,newaxis)
```

### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain_community.document_loaders import PyPDFLoader
loader = PyPDFLoader("./book.pdf", extract_images=True)
```
### Expected behavior
It should load the PDF and also extract info from the images. When I set `extract_images=False`, it works fine.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've seen code in the LangChain documentation for vector search in Neo4j that takes `OpenAIEmbeddings()` as a parameter in order to embed the input query:
```python
index_name = "vector" # default index name
store = Neo4jVector.from_existing_index(
OpenAIEmbeddings(),
url=url,
username=username,
password=password,
index_name=index_name,
)
```
What I wonder is: can we pass another embedding model, e.g. a Hugging Face model, into that parameter instead of OpenAI itself? In many cases there is an incompatible dimension when we already have an existing index that was embedded by another off-the-shelf model rather than an embedding model from OpenAI.
Moreover, I took a look at the source code in case there is no way to add a Hugging Face model.
https://github.com/langchain-ai/langchain/blob/04caf07dee2e2843ab720e5b8f0c0e83d0b86a3e/libs/community/langchain_community/vectorstores/neo4j_vector.py#L111-L147
What I've found is that the `embedding` parameter of the `Neo4jVector` object should be any embedding function implementing the `langchain.embeddings.base.Embeddings` interface. Here is the code describing that class:
https://github.com/langchain-ai/langchain/blob/04caf07dee2e2843ab720e5b8f0c0e83d0b86a3e/libs/core/langchain_core/embeddings.py#L7-L24
Does it mean that we must construct a class that inherits from it in order to implement it effectively? If yes, please provide an example.
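For reference, the interface above only requires two methods, `embed_documents` and `embed_query`. A minimal standalone sketch of that shape (stdlib only; the deterministic hash-based vectorizer is a toy stand-in for a real Hugging Face model, and in practice `HuggingFaceEmbeddings` already implements this interface):

```python
import hashlib
from typing import List


class ToyEmbeddings:
    """Implements the same duck-typed interface as
    langchain.embeddings.base.Embeddings: embed_documents + embed_query.
    A real implementation would call a Hugging Face model instead of the
    deterministic hash trick used here."""

    def __init__(self, dim: int = 8):
        self.dim = dim

    def _vectorize(self, text: str) -> List[float]:
        # Deterministic stand-in for a model forward pass.
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        return [b / 255.0 for b in digest[: self.dim]]

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self._vectorize(t) for t in texts]

    def embed_query(self, text: str) -> List[float]:
        return self._vectorize(text)


embedder = ToyEmbeddings(dim=8)
vectors = embedder.embed_documents(["graph databases", "vector search"])
```

With LangChain installed, any object exposing these two methods (e.g. `HuggingFaceEmbeddings(model_name=...)`) should be usable as the first argument of `Neo4jVector.from_existing_index(...)`, provided the existing index dimension matches the model's output dimension.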
### Suggestion:
Any suggested way to handle this case is welcome. If Hugging Face models are currently not supported with the Neo4j vector store, I will help contribute it and make a PR.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import (ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder,
    HumanMessagePromptTemplate)
os.environ['OPENAI_API_KEY'] = "mykey"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host="192.xxx.xx.xxx",
port=5432,
database="xxx",
user="xxx",
password=quote_plus("xx@xxxxxr"),
)
vectordb = PGVector(embedding_function=embeddings,
collection_name="tmp06",
connection_string=CONNECTION_STRING,
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"請把使用者的對話紀錄當作參考作為回覆,回答只能使用繁體中文字"
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
retriever = vectordb.as_retriever()
memory = ConversationTokenBufferMemory(
llm=llm,
prompt=prompt,
memory_key="chat_history",
return_messages=True,
)
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
memory=memory,
verbose=True,
)
chat_history = []
while True:
memory.load_memory_variables({})
question = input('提問:')
result = qa.run({'question': question, 'chat_history': chat_history})
print(result)
chat_history.append([f'User: {question}', f'Ai: {result}'])
print(chat_history)
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
st_history = ' '.join(map(str, chat_history))
res = embeddings.embed_query(st_history)
vectordb.add_vector(res)
select_vdb = vectordb.nearest(res, n=1)
print(f'ok: {res[:4]}...')
if question.lower() == 'bye':
break
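One likely cause: the custom `prompt` is built but never passed to the chain, and `ConversationTokenBufferMemory` does not accept or use a `prompt` argument. A non-runnable sketch of the usual fix (the `combine_docs_chain_kwargs` keyword and the required `{context}` variable are assumptions to verify against your LangChain version):

```python
# Sketch only: the combine-docs prompt must accept the retrieved documents
# via {context} in addition to {question}.
qa_prompt = ChatPromptTemplate.from_messages([
    ("system", "Use the chat history and the context below; reply only in Traditional Chinese.\n\n{context}"),
    ("human", "{question}"),
])
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
```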
### Suggestion:
The title translates to "Unable to retrieve my prompt when starting the conversation" in English; i.e., the custom prompt does not seem to be applied.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a question: is it possible to get two different types of answer from one prompt? I want my question to be converted from natural language to SQL queries in some cases, and to return a common ChatGPT answer in others. For example, for "show data purchasing" the answer should be a query, while for "show me rate usd today" it should return a result from the internet.
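The usual pattern for this is a router: classify the question first, then dispatch to a SQL-generation chain, a web-search tool, or a plain LLM answer (in LangChain this maps to `RunnableBranch`-style routing). A stdlib-only sketch of the dispatch step; the keyword lists are illustrative assumptions, and a real router would typically use an LLM or classifier:

```python
from typing import Callable, Dict

# Illustrative keyword lists; a production router would classify with an LLM.
SQL_HINTS = ("show data", "table", "select", "purchasing", "records")
WEB_HINTS = ("today", "rate", "news", "weather", "current")


def route(question: str) -> str:
    """Return the intent label for a question: 'sql', 'web', or 'chat'."""
    q = question.lower()
    if any(h in q for h in SQL_HINTS):
        return "sql"
    if any(h in q for h in WEB_HINTS):
        return "web"
    return "chat"


def answer(question: str, handlers: Dict[str, Callable[[str], str]]) -> str:
    # Dispatch to whichever chain is registered for the detected intent.
    return handlers[route(question)](question)


handlers = {
    "sql": lambda q: f"SQL chain would translate: {q!r}",
    "web": lambda q: f"web-search tool would handle: {q!r}",
    "chat": lambda q: f"plain LLM answer for: {q!r}",
}
```

Each lambda above stands in for a real chain (SQL agent, search tool, plain chat model); only the routing logic is shown.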
### Suggestion:
_No response_ | promt result | https://api.github.com/repos/langchain-ai/langchain/issues/15719/comments | 1 | 2024-01-08T19:53:03Z | 2024-01-08T19:53:28Z | https://github.com/langchain-ai/langchain/issues/15719 | 2,071,122,353 | 15,719 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
➜ ~ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
```
```
In [2]: langchain.__version__
Out[2]: '0.0.354'
```
```
In [4]: from langchain_core import __version__
In [5]: __version__
Out[5]: '0.1.8'
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Description
`create_extraction_chain_pydantic` does not work for valid pydantic schemas.
```python
from typing import Optional, List

from langchain_core.pydantic_v1 import BaseModel  # missing from the original snippet

from langchain.chains import create_extraction_chain_pydantic
from langchain_openai import ChatOpenAI
class Person(BaseModel):
"""Identifying information about a person in a text."""
person_name: str
person_height: Optional[int]
person_hair_color: Optional[str]
dog_breed: Optional[str]
dog_name: Optional[str]
# Chain for an extraction approach based on OpenAI Functions
extraction_chain = create_extraction_chain_pydantic(Person, ChatOpenAI(temperature=0))
extraction_chain.invoke("My name is tom and i'm 6 feet tall")
```
However, more complex pydantic definitions fail:
```
class People(BaseModel):
"""Identifying information about all people in a text."""
__root__: List[Person]
# Chain for an extraction approach based on OpenAI Functions
extraction_chain = create_extraction_chain_pydantic(People, ChatOpenAI(temperature=0))
extraction_chain.invoke("My name is tom and i'm 6 feet tall")
```

```
class NestedPeople(BaseModel):
"""Identifying information about all people in a text."""
people: List[Person]
# Chain for an extraction approach based on OpenAI Functions
extraction_chain = create_extraction_chain_pydantic(NestedPeople, ChatOpenAI(temperature=0))
extraction_chain.invoke("My name is tom and i'm 6 feet tall")
```

---
## Acceptance criteria
1. Code does not affect backwards compatibility if possible. If it must be a breaking change, perhaps we should create a new function for this purpose.
2. Should we replace LLMChain with an LCEL chain and determine what is the correct output interface for extractions? User may want error information to be returned rather than raised.
3. Unit-tests must cover above cases
### Expected behavior
All shown cases should work properly and not fail during initialization time. | Extraction: create_extraction_chain_pydantic | https://api.github.com/repos/langchain-ai/langchain/issues/15715/comments | 3 | 2024-01-08T19:11:32Z | 2024-03-08T16:39:50Z | https://github.com/langchain-ai/langchain/issues/15715 | 2,071,064,930 | 15,715 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I'm trying to extend AgentExecutor with custom logic and I want to override how the agent perform actions.
What I'd really need is only to override the `_aperform_agent_action` logic; however, this logic is defined inline in the `_aiter_next_step` function, making it necessary to override the whole function.
This obviously comes with the drawbacks of more code and having to reconcile future updates.
In my opinion, the relevant part of `_aiter_next_step` could be extracted into an instance or static method, allowing one to override only the relevant parts.
Also, a similar problem arises for the synchronous version `_iter_next_step`, as `_perform_agent_action` is not defined at all.
The relevant code can be extracted into a method, making it easier to override.
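A sketch of the proposed refactor (pseudocode against the current `AgentExecutor` internals; the extracted method names are my suggestion, not existing API):

```python
# Pseudocode sketch of the proposed hook extraction.
class AgentExecutor:
    def _perform_agent_action(self, name_to_tool_map, color_mapping,
                              agent_action, run_manager):
        # Body currently inlined in _iter_next_step moves here,
        # so subclasses can override just this step.
        ...

    def _iter_next_step(self, *args, **kwargs):
        ...
        for agent_action in actions:
            yield self._perform_agent_action(
                name_to_tool_map, color_mapping, agent_action, run_manager
            )

    async def _aperform_agent_action(self, *args, **kwargs):
        # Async counterpart, extracted from _aiter_next_step the same way.
        ...
```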
### Motivation
This update would allow for better extendibility of the AgentExecutor class
### Your contribution
I can submit a PR to address the issue | Extract _aperform_agent_action from _aiter_next_step from AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/15706/comments | 1 | 2024-01-08T14:12:40Z | 2024-01-24T02:22:10Z | https://github.com/langchain-ai/langchain/issues/15706 | 2,070,544,706 | 15,706 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Dear all
I have this pipeline
```python
translation_cache = ToJSON(key=key, out_dir=Path("results/sabadel/translation"))
translation_prompt = Prompt.from_yaml(Path("prompts/translate.yml"))
translation_chain = (
{
"transcription": lambda data: format_transcription_for_prompt(
data["transcription"]
)
}
| translation_prompt.template
| model
| {
"transcription": RunnableLambda(
lambda res: translation_output_parser(res, transcription)
)
}
| {"transcription": lambda x: translation_cache(x)}
)
translation_chain = (
RunnableLambda(lambda data: {**data, "transcription": translation_cache.load()})
if translation_cache.exists
else translation_chain
)
# evaluation
evaluation_prompt = Prompt.from_yaml(Path("prompts/sabadell/evaluation.yml"))
evaluation_cache = ToJSON(key=key, out_dir=Path("results/sabadel/evaluation"))
evaluation_chain = (
evaluation_prompt.template | model | evaluation_output_parser | evaluation_cache
)
evaluation_chain = (
RunnableLambda(lambda data: {**data, "evaluations": evaluation_cache.load()})
if evaluation_cache.exists
else evaluation_chain
)
# retention
retention_prompt = Prompt.from_yaml(Path("prompts/sabadell/evaluation.retention.yml"))
retention_cache = ToJSON(key=key, out_dir=Path("results/sabadel/retention"))
retention_chain = (
retention_prompt.template | model | retention_output_parser | retention_cache
)
retention_chain = RunnableLambda(
lambda data: {
**data,
"retention": retention_cache.load()
if retention_cache.exists
else retention_chain(**data),
}
)
# final chain
# chain = translation_chain | retention_chain
# print(translation_chain.invoke({"transcription": transcription}))
chain = translation_chain | evaluation_chain | retention_chain
print(chain.get_graph().print_ascii())
print(
chain.invoke({"transcription": transcription, "retention_script": retention_script})
)
```
Now what I'd like is for `evaluation_chain` to put its output under an `evaluations` key and pass along the original data dict plus that key to `retention_chain`; `retention_chain` should then put its output under a `retention` key and pass along the original dict plus all the outputs.
How can I do this?
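In LCEL this is what `RunnablePassthrough.assign(...)` is for, e.g. `RunnablePassthrough.assign(evaluations=evaluation_chain) | RunnablePassthrough.assign(retention=retention_chain)` (treat the exact spelling as something to verify against your LangChain version). The semantics, shown stdlib-only with stand-in steps:

```python
from typing import Any, Callable, Dict


def assign(**steps: Callable[[Dict[str, Any]], Any]):
    """Plain-Python equivalent of LCEL's RunnablePassthrough.assign:
    run each step on the full input dict and merge its result back in
    under the given key, passing everything else through untouched."""
    def run(data: Dict[str, Any]) -> Dict[str, Any]:
        return {**data, **{key: fn(data) for key, fn in steps.items()}}
    return run


# Stand-ins for the real evaluation / retention chains.
evaluation_step = assign(evaluations=lambda d: f"eval of {d['transcription']}")
retention_step = assign(retention=lambda d: f"retention using {d['evaluations']}")

result = retention_step(evaluation_step(
    {"transcription": "hola", "retention_script": "s"}
))
```

Each step sees the accumulated dict, adds its own key, and the original inputs survive to the end.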
### Idea or request for content:
_No response_ | DOC: Data Pipeline for humans | https://api.github.com/repos/langchain-ai/langchain/issues/15705/comments | 3 | 2024-01-08T14:10:39Z | 2024-01-09T14:42:08Z | https://github.com/langchain-ai/langchain/issues/15705 | 2,070,541,205 | 15,705 |
[
"langchain-ai",
"langchain"
] | ### Feature request
They provide a [python client](https://docs.mistral.ai/platform/endpoints/) to access the embedding model
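A sketch of what the integration could look like. The client is injected so the example runs without network access; the `client.embeddings(model=..., input=[...])` call shape is an assumption taken from the linked docs, and a real PR would subclass `langchain_core.embeddings.Embeddings`:

```python
from typing import List


class MistralAIEmbeddingsSketch:
    """Duck-typed LangChain-style embeddings wrapper around an injected client.

    Assumption: the client exposes `embeddings(model=..., input=[...])`
    returning objects with a `.data[i].embedding` list; verify against the
    real mistralai client before relying on this."""

    def __init__(self, client, model: str = "mistral-embed"):
        self.client = client
        self.model = model

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        response = self.client.embeddings(model=self.model, input=texts)
        return [item.embedding for item in response.data]

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]


# Offline stand-in for the real client, for demonstration only.
class _FakeItem:
    def __init__(self, embedding):
        self.embedding = embedding


class _FakeResponse:
    def __init__(self, data):
        self.data = data


class _FakeClient:
    def embeddings(self, model, input):
        # Returns one fake vector per input text.
        return _FakeResponse([_FakeItem([float(len(t)), 0.0]) for t in input])


emb = MistralAIEmbeddingsSketch(client=_FakeClient())
doc_vectors = emb.embed_documents(["hello", "hi"])
```

Swapping `_FakeClient` for the real Mistral client (with an API key) is the only change needed to make this a live integration sketch.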
### Motivation
It would be great if we added the new embedding service from Mistral!
### Your contribution
I can work on this and submit a PR | Add support for the Mistral AI Embedding Model | https://api.github.com/repos/langchain-ai/langchain/issues/15702/comments | 2 | 2024-01-08T12:35:54Z | 2024-04-16T16:15:00Z | https://github.com/langchain-ai/langchain/issues/15702 | 2,070,370,106 | 15,702 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hi.
I am a newcomer to Langchain following the Quickstart tutorial in a Jupyter Notebook, using the setup recommended by the installation guide. I am following the OpenAI tutorial, rather than the local LLM version.
I followed the exact code in the docs by pasting the cells into my notebook. All code works perfectly without a single error or warning. However, the code fails at this point:
```python
response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
print(response["answer"])
# LangSmith offers several features that can help with testing:...
```
When I attempt to run this code, I get the following output in my notebook:
```
{
"name": "ValidationError",
"message": "2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [-0.0144587... -0.015377209573652503]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing
metadata
Field required [type=missing, input_value={'embedding': [-0.0144587... -0.015377209573652503]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing",
"stack": "---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[18], line 1
----> 1 response = retrieval_chain.invoke({\"input\": \"how can langsmith help with testing?\"})
2 print(response[\"answer\"])
4 # LangSmith offers several features that can help with testing:...
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:3590, in RunnableBindingBase.invoke(self, input, config, **kwargs)
3584 def invoke(
3585 self,
3586 input: Input,
3587 config: Optional[RunnableConfig] = None,
3588 **kwargs: Optional[Any],
3589 ) -> Output:
-> 3590 return self.bound.invoke(
3591 input,
3592 self._merge_configs(config),
3593 **{**self.kwargs, **kwargs},
3594 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:1762, in RunnableSequence.invoke(self, input, config)
1760 try:
1761 for i, step in enumerate(self.steps):
-> 1762 input = step.invoke(
1763 input,
1764 # mark each step as a child run
1765 patch_config(
1766 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
1767 ),
1768 )
1769 # finish the root run
1770 except BaseException as e:
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\passthrough.py:415, in RunnableAssign.invoke(self, input, config, **kwargs)
409 def invoke(
410 self,
411 input: Dict[str, Any],
412 config: Optional[RunnableConfig] = None,
413 **kwargs: Any,
414 ) -> Dict[str, Any]:
--> 415 return self._call_with_config(self._invoke, input, config, **kwargs)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:975, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
971 context = copy_context()
972 context.run(var_child_runnable_config.set, child_config)
973 output = cast(
974 Output,
--> 975 context.run(
976 call_func_with_variable_args,
977 func, # type: ignore[arg-type]
978 input, # type: ignore[arg-type]
979 config,
980 run_manager,
981 **kwargs,
982 ),
983 )
984 except BaseException as e:
985 run_manager.on_chain_error(e)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\config.py:323, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
321 if run_manager is not None and accepts_run_manager(func):
322 kwargs[\"run_manager\"] = run_manager
--> 323 return func(input, **kwargs)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\passthrough.py:402, in RunnableAssign._invoke(self, input, run_manager, config, **kwargs)
389 def _invoke(
390 self,
391 input: Dict[str, Any],
(...)
394 **kwargs: Any,
395 ) -> Dict[str, Any]:
396 assert isinstance(
397 input, dict
398 ), \"The input to RunnablePassthrough.assign() must be a dict.\"
400 return {
401 **input,
--> 402 **self.mapper.invoke(
403 input,
404 patch_config(config, callbacks=run_manager.get_child()),
405 **kwargs,
406 ),
407 }
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2327, in RunnableParallel.invoke(self, input, config)
2314 with get_executor_for_config(config) as executor:
2315 futures = [
2316 executor.submit(
2317 step.invoke,
(...)
2325 for key, step in steps.items()
2326 ]
-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}
2328 # finish the root run
2329 except BaseException as e:
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2327, in <dictcomp>(.0)
2314 with get_executor_for_config(config) as executor:
2315 futures = [
2316 executor.submit(
2317 step.invoke,
(...)
2325 for key, step in steps.items()
2326 ]
-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}
2328 # finish the root run
2329 except BaseException as e:
File C:\\ProgramData\\miniconda3\\Lib\\concurrent\\futures\\_base.py:456, in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File C:\\ProgramData\\miniconda3\\Lib\\concurrent\\futures\\_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File C:\\ProgramData\\miniconda3\\Lib\\concurrent\\futures\\thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:3590, in RunnableBindingBase.invoke(self, input, config, **kwargs)
3584 def invoke(
3585 self,
3586 input: Input,
3587 config: Optional[RunnableConfig] = None,
3588 **kwargs: Optional[Any],
3589 ) -> Output:
-> 3590 return self.bound.invoke(
3591 input,
3592 self._merge_configs(config),
3593 **{**self.kwargs, **kwargs},
3594 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:1762, in RunnableSequence.invoke(self, input, config)
1760 try:
1761 for i, step in enumerate(self.steps):
-> 1762 input = step.invoke(
1763 input,
1764 # mark each step as a child run
1765 patch_config(
1766 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
1767 ),
1768 )
1769 # finish the root run
1770 except BaseException as e:
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\retrievers.py:121, in BaseRetriever.invoke(self, input, config)
117 def invoke(
118 self, input: str, config: Optional[RunnableConfig] = None
119 ) -> List[Document]:
120 config = ensure_config(config)
--> 121 return self.get_relevant_documents(
122 input,
123 callbacks=config.get(\"callbacks\"),
124 tags=config.get(\"tags\"),
125 metadata=config.get(\"metadata\"),
126 run_name=config.get(\"run_name\"),
127 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\retrievers.py:223, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
221 except Exception as e:
222 run_manager.on_retriever_error(e)
--> 223 raise e
224 else:
225 run_manager.on_retriever_end(
226 result,
227 **kwargs,
228 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\retrievers.py:216, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
214 _kwargs = kwargs if self._expects_other_args else {}
215 if self._new_arg_supported:
--> 216 result = self._get_relevant_documents(
217 query, run_manager=run_manager, **_kwargs
218 )
219 else:
220 result = self._get_relevant_documents(query, **_kwargs)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\vectorstores.py:654, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
650 def _get_relevant_documents(
651 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
652 ) -> List[Document]:
653 if self.search_type == \"similarity\":
--> 654 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
655 elif self.search_type == \"similarity_score_threshold\":
656 docs_and_similarities = (
657 self.vectorstore.similarity_search_with_relevance_scores(
658 query, **self.search_kwargs
659 )
660 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_community\\vectorstores\\docarray\\base.py:127, in DocArrayIndex.similarity_search(self, query, k, **kwargs)
115 def similarity_search(
116 self, query: str, k: int = 4, **kwargs: Any
117 ) -> List[Document]:
118 \"\"\"Return docs most similar to query.
119
120 Args:
(...)
125 List of Documents most similar to the query.
126 \"\"\"
--> 127 results = self.similarity_search_with_score(query, k=k, **kwargs)
128 return [doc for doc, _ in results]
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_community\\vectorstores\\docarray\\base.py:106, in DocArrayIndex.similarity_search_with_score(self, query, k, **kwargs)
94 \"\"\"Return docs most similar to query.
95
96 Args:
(...)
103 Lower score represents more similarity.
104 \"\"\"
105 query_embedding = self.embedding.embed_query(query)
--> 106 query_doc = self.doc_cls(embedding=query_embedding) # type: ignore
107 docs, scores = self.doc_index.find(query_doc, search_field=\"embedding\", limit=k)
109 result = [
110 (Document(page_content=doc.text, metadata=doc.metadata), score)
111 for doc, score in zip(docs, scores)
112 ]
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\pydantic\\main.py:164, in BaseModel.__init__(__pydantic_self__, **data)
162 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
163 __tracebackhide__ = True
--> 164 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
ValidationError: 2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [-0.0144587... -0.015377209573652503]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing
metadata
Field required [type=missing, input_value={'embedding': [-0.0144587... -0.015377209573652503]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing"
}
```
### Idea or request for content:
As I am a newcomer, I do not understand exactly what the issue is. Thus, I would like to request that the documentation be updated so that the code works correctly. In the meantime, I would appreciate any assistance so I can continue to learn Langchain through the quickstart and work my way through the rest of the docs. | DOC: Quickstart Code Fails for Retrieval Chain | https://api.github.com/repos/langchain-ai/langchain/issues/15700/comments | 5 | 2024-01-08T10:23:26Z | 2024-01-08T15:54:43Z | https://github.com/langchain-ai/langchain/issues/15700 | 2,070,146,142 | 15,700
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.12
langchain 0.0.354
### Who can help?
@hwch
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents.agent_toolkits.slack.toolkit import SlackToolkit
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.agents.agent import AgentExecutor

stoolkit = SlackToolkit()
tools = stoolkit.get_tools()

agent = OpenAIAssistantRunnable.create_assistant(
    name="Sales assistant",
    instructions="""You are an admin agent, tasked with the following jobs:
    2. Read and post messages on Slack""",
    tools=tools,
    model="gpt-4-1106-preview",
    as_agent=True
)

agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
agent_executor.invoke({"content": "list all messages in #budget-decisions"})
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[11], line 13
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
---> agent_executor.invoke({"content":"list all messages in #budget-decisions"})
File ~/smith/lib/python3.10/site-packages/langchain/chains/base.py:93, in Chain.invoke(self, input, config, **kwargs)
86 def invoke(
87 self,
88 input: Dict[str, Any],
89 config: Optional[RunnableConfig] = None,
90 **kwargs: Any,
91 ) -> Dict[str, Any]:
92 config = ensure_config(config)
---> 93 return self(
94 input,
95 callbacks=config.get("callbacks"),
96 tags=config.get("tags"),
97 metadata=config.get("metadata"),
98 run_name=config.get("run_name"),
99 **kwargs,
100 )
File ~/smith/lib/python3.10/site-packages/langchain/chains/base.py:316, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
314 except BaseException as e:
315 run_manager.on_chain_error(e)
--> 316 raise e
317 run_manager.on_chain_end(outputs)
318 final_outputs: Dict[str, Any] = self.prep_outputs(
319 inputs, outputs, return_only_outputs
320 )
File ~/smith/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
303 run_manager = callback_manager.on_chain_start(
304 dumpd(self),
305 inputs,
306 name=run_name,
307 )
308 try:
309 outputs = (
--> 310 self._call(inputs, run_manager=run_manager)
311 if new_arg_supported
312 else self._call(inputs)
313 )
314 except BaseException as e:
315 run_manager.on_chain_error(e)
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:1312, in AgentExecutor._call(self, inputs, run_manager)
1310 # We now enter the agent loop (until it returns something).
1311 while self._should_continue(iterations, time_elapsed):
-> 1312 next_step_output = self._take_next_step(
1313 name_to_tool_map,
1314 color_mapping,
1315 inputs,
1316 intermediate_steps,
1317 run_manager=run_manager,
1318 )
1319 if isinstance(next_step_output, AgentFinish):
1320 return self._return(
1321 next_step_output, intermediate_steps, run_manager=run_manager
1322 )
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:1038, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1029 def _take_next_step(
1030 self,
1031 name_to_tool_map: Dict[str, BaseTool],
(...)
1035 run_manager: Optional[CallbackManagerForChainRun] = None,
1036 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1037 return self._consume_next_step(
-> 1038 [
1039 a
1040 for a in self._iter_next_step(
1041 name_to_tool_map,
1042 color_mapping,
1043 inputs,
1044 intermediate_steps,
1045 run_manager,
1046 )
1047 ]
1048 )
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:1038, in <listcomp>(.0)
1029 def _take_next_step(
1030 self,
1031 name_to_tool_map: Dict[str, BaseTool],
(...)
1035 run_manager: Optional[CallbackManagerForChainRun] = None,
1036 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1037 return self._consume_next_step(
-> 1038 [
1039 a
1040 for a in self._iter_next_step(
1041 name_to_tool_map,
1042 color_mapping,
1043 inputs,
1044 intermediate_steps,
1045 run_manager,
1046 )
1047 ]
1048 )
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:1134, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1132 tool_run_kwargs["llm_prefix"] = ""
1133 # We then call the tool on the tool input to get an observation
-> 1134 observation = tool.run(
1135 agent_action.tool_input,
1136 verbose=self.verbose,
1137 color=color,
1138 callbacks=run_manager.get_child() if run_manager else None,
1139 **tool_run_kwargs,
1140 )
1141 else:
1142 tool_run_kwargs = self.agent.tool_run_logging_kwargs()
File ~/smith/lib/python3.10/site-packages/langchain_core/tools.py:365, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
363 except (Exception, KeyboardInterrupt) as e:
364 run_manager.on_tool_error(e)
--> 365 raise e
366 else:
367 run_manager.on_tool_end(
368 str(observation), color=color, name=self.name, **kwargs
369 )
File ~/smith/lib/python3.10/site-packages/langchain_core/tools.py:337, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
334 try:
335 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
336 observation = (
--> 337 self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
338 if new_arg_supported
339 else self._run(*tool_args, **tool_kwargs)
340 )
341 except ToolException as e:
342 if not self.handle_tool_error:
TypeError: SlackGetChannel._run() got multiple values for argument 'run_manager'
### Expected behavior
The slack agent should send a message on the said channel. | TypeError: SlackGetChannel._run() got multiple values for argument 'run_manager' | https://api.github.com/repos/langchain-ai/langchain/issues/15698/comments | 2 | 2024-01-08T09:58:38Z | 2024-04-15T16:25:31Z | https://github.com/langchain-ai/langchain/issues/15698 | 2,070,099,650 | 15,698 |
[
"langchain-ai",
"langchain"
] | ### System Info
Chroma 0.4.22
Langchain 0.0.354
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a SelfQueryRetriever
2. Create AttributeInfo metadata list in preparation for filtering based off metadata.
```python
self_query_retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
"Information about when document was published and where it originated from",
metadata_field_info
)
# retriever = MergerRetriever(retrievers=[parent_retriever, self_query_retriever])
retriever = self_query_retriever
template = """
### Instruction: You're an assistant who knows the following information:
### {context}
If you don't know the answer, then say you don't know and refer the user to the respective department for extra information.
Absolutely do not mention you are an AI language model. Use only the chat history and the following information.
### {chat_history}
### Input: {question}
### Response:
""".strip()
prompt = PromptTemplate(input_variables=["context", "chat_history", "question"], template=template)
chain = ConversationalRetrievalChain.from_llm(
llm,
chain_type="stuff",
retriever=retriever,
combine_docs_chain_kwargs={"prompt": prompt},#, "metadata_weights": metadata_weights},
return_source_documents=True,
verbose=False,
rephrase_question=True,
max_tokens_limit=16000,
response_if_no_docs_found="""I'm sorry, but I was not able to find the answer to your question based on the information I know. You may have to reach out to the respective internal department for more details regarding your inquiry."""
)
return chain
def score_unstructured(model, data, query, **kwargs) -> str:
"""Custom model hook for making completions with our knowledge base.
When requesting predictions from the deployment, pass a dictionary
with the following keys:
- 'question' the question to be passed to the retrieval chain
- 'chat_history' (optional) a list of two-element lists corresponding to
preceding dialogue between the Human and AI, respectively
datarobot-user-models (DRUM) handles loading the model and calling
this function with the appropriate parameters.
Returns:
--------
rv : str
Json dictionary with keys:
- 'question' user's original question
- 'chat_history' chat history that was provided with the original question
- 'answer' the generated answer to the question
- 'references' list of references that were used to generate the answer
- 'error' - error message if exception in handling request
"""
import json
try:
chain = model
data_dict = json.loads(data)
if 'chat_history' in data_dict:
chat_history = [(human, ai,) for human, ai in data_dict['chat_history']]
else:
chat_history = []# model.chat_history
rv = chain(
inputs={
'question': data_dict['question'],
'chat_history': chat_history,
},
)
source_docs = rv.pop('source_documents')
rv['references'] = [doc.metadata['source'] for doc in source_docs]
if len(source_docs) > 0:
rv["top_reference_text"] = [doc.page_content for doc in source_docs]
else:
rv["top_reference_text"] = ""
except Exception as e:
rv = {'error': f"{e.__class__.__name__}: {str(e)}"}
return json.dumps(rv)
model = load_model(".")
```
I asked the following question:
```python
questions = ["What is the minimum opening deposit for each account as of January 2023?"]
os.environ["TOKENIZERS_PARALLELISM"] = "false"
for question in questions:
rv = score_unstructured(model, json.dumps(
{
"question": question
# "chat_history": []
}
),
None)
print(rv)
print(question.upper())
print(json.loads(rv)["answer"])
print(json.loads(rv))
print("------------------------------------------------")
```
The issue I got was ```ValueError: Expected where operand value to be a str, int, float, or list of those type, got {'date': '2023-01-01', 'type': 'date'}```
It looks like the SelfQueryRetriever converted my question that had January 2023 to a date object. This date object throws an error. I'm not sure how to resolve this issue on my end.
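Until there is an upstream fix, one workaround I am experimenting with is normalizing the structured query's filter before it reaches Chroma, flattening any `{'date': ..., 'type': 'date'}` operand into the plain ISO date string that Chroma's `where` clause accepts. The helper below is a plain-Python sketch of that idea (the function name and the exact filter shape are assumptions on my side, not LangChain API):

```python
def normalize_filter_value(value):
    """Recursively replace {'date': ..., 'type': 'date'} operands
    with the plain date string, since Chroma's `where` clause only
    accepts str/int/float (or lists of those)."""
    if isinstance(value, dict):
        if value.get("type") == "date" and "date" in value:
            return value["date"]  # keep just the ISO date string
        return {k: normalize_filter_value(v) for k, v in value.items()}
    if isinstance(value, list):
        return [normalize_filter_value(v) for v in value]
    return value

where = {"publish_date": {"$gte": {"date": "2023-01-01", "type": "date"}}}
print(normalize_filter_value(where))  # -> {'publish_date': {'$gte': '2023-01-01'}}
```

The flattened filter then compares dates lexicographically, which works for `YYYY-MM-DD` strings as long as the stored metadata uses the same format.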
### Expected behavior
Query with a date and receive an answer from the SelfQueryRetriever. | SelfQueryRetriever, ValueError: Expected where operand value to be a str, int, float, or list of those type | https://api.github.com/repos/langchain-ai/langchain/issues/15696/comments | 11 | 2024-01-08T09:48:39Z | 2024-06-10T14:52:24Z | https://github.com/langchain-ai/langchain/issues/15696 | 2,070,080,675 | 15,696 |
[
"langchain-ai",
"langchain"
] | ### Feature request
- I want the local LLM (LlamaCpp) to maintain its context, which will significantly improve the efficiency of follow-up questions.
- Currently, the context of LlamaCpp is lost after the first call, necessitating the reprocessing of all tokens for any subsequent question.
- **Proposed Solution:** Utilize the internal KV cache of LlamaCpp to retain context and avoid reprocessing the same tokens repeatedly.
### Motivation
- My motivation is to address the inefficiency in the current process where the context is not preserved between queries.
- There seems to be no existing solution for this specific issue as per my research, for example, [LangChain Caching Documentation](https://python.langchain.com/docs/modules/model_io/chat/chat_model_caching#in-memory-cache).
A minimized example which shows my current workaround:
```
from langchain.llms import LlamaCpp
from langchain.chains import LLMChain
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from pathlib import Path
class LlmRunner:
def __init__(self, path_to_model: Path) -> None:
self.llm_instance = LlamaCpp(
model_path=str(path_to_model),
n_ctx=16384,
max_tokens=-1,
temperature=0,
repeat_penalty=1.15,
n_gpu_layers=1,
n_threads=8,
verbose=False,
)
def run(self, invoice_text: str):
# Initial processing filling up kv-cache and context
initial_prompt = """
Extract and format keys from the invoice text into JSON.
<context>
{input}
</context>
"""
chain1 = LLMChain(
llm=self.llm_instance,
prompt=ChatPromptTemplate.from_messages([
HumanMessagePromptTemplate.from_template(initial_prompt)
])
)
response1 = chain1.invoke({'input': invoice_text})
# Follow-up processing which COULD reuse the context, but doesn't
follow_up_prompt = """
Review your results and normalize the dates to YYYY-MM-DD.
<input>
{input}
</input>
<context>
{invoice_text}
</context>"""
chain2 = LLMChain(
llm=self.llm_instance,
prompt=ChatPromptTemplate.from_messages([
HumanMessagePromptTemplate.from_template(follow_up_prompt)
])
)
response2 = chain2.invoke({'invoice_text' : invoice_text, 'input': response1['text']})
return response2['text']
# Example usage
path_to_model = Path("path_to_your_model")
runner = LlmRunner(path_to_model)
invoice_text = "Your invoice text here"
result = runner.run(invoice_text)
print(result)
```
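For what it's worth, llama-cpp-python appears to reuse its internal KV cache when a new prompt shares a prefix with the previous call on the same `Llama` instance, so a partial workaround is to phrase every follow-up as a strict extension of the first prompt instead of building a second, differently-shaped prompt. A rough sketch of that prompt layout (plain string handling only, no LangChain API involved):

```python
def build_initial_prompt(invoice_text: str) -> str:
    return (
        "[INST] Extract and format keys from the invoice text into JSON.\n"
        f"<context>\n{invoice_text}\n</context> [/INST]\n"
    )

def build_follow_up_prompt(initial_prompt: str, first_answer: str) -> str:
    # Append to the *identical* first prompt so the shared prefix
    # can be matched against the model's KV cache.
    return (
        initial_prompt
        + first_answer
        + "\n[INST] Review your results and normalize the dates to YYYY-MM-DD. [/INST]\n"
    )

p1 = build_initial_prompt("Invoice 42, 01.02.2023, total 10 EUR")
p2 = build_follow_up_prompt(p1, '{"total": "10 EUR"}')
assert p2.startswith(p1)  # the cached prefix is preserved verbatim
```

With this layout, only the suffix after the first answer would need to be evaluated; whether the cache is actually hit depends on the llama-cpp-python version and settings, so treat this as an assumption to verify.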
### Your contribution
- If something like this already exists, I am willing to provide an example and update the documentation.
- If you point me in the right direction and it's just a few hundred LOC, I am willing to submit a PR. | Reuse KV-Cache with local LLM (LlamaCpp) instead of expensive reprocessing of all history tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15695/comments | 3 | 2024-01-08T09:47:45Z | 2024-03-23T22:37:54Z | https://github.com/langchain-ai/langchain/issues/15695 | 2,070,079,179 | 15,695
[
"langchain-ai",
"langchain"
] | ### Feature request
Every time i create a milvus object, i load the collection, but there is no way to dynamically know the replica_number of the currently loaded collection, so there is a disadvantage that i have to hand over the different replica_number for each collection as an argument. Therefore, when creating a milvus object, I would like to add a flag that can determine whether to load or not.
### Motivation
Always loading a collection can cause an unexpected error.
### Your contribution
https://github.com/langchain-ai/langchain/pull/15693 | feat: add a flag that determines whether to load the milvus collection | https://api.github.com/repos/langchain-ai/langchain/issues/15694/comments | 1 | 2024-01-08T09:14:35Z | 2024-01-15T19:25:25Z | https://github.com/langchain-ai/langchain/issues/15694 | 2,070,024,246 | 15,694 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
````python
def generate_custom_prompt(query=None, name=None, not_uuid=None, chroma_db_path=None):
    check = query.lower()
    embedding = OpenAIEmbeddings()
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
    retriever = vectordb.as_retriever(search_kwargs={"k": 2})
    relevant_document = retriever.get_relevant_documents(query)
    print(relevant_document, "*****************************************")
    context_text = "\n\n---\n\n".join([doc.page_content for doc in relevant_document])
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']

    if check in greetings:
        custom_prompt_template = f"""
        Just simply reply with "Hello {name}! How can I assist you today?"
        """
    elif check not in greetings and user_experience_inst.custom_prompt:
        custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
        You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
        Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
        If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
        - Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
        User's Question: ```{check}```
        AI Answer:"""
    else:
        custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
        Generate your answer in points in the following format:
        1. Point no 1
        1.1 Its subpoint in details
        1.2 More information if needed.
        2. Point no 2
        2.1 Its subpoint in details
        2.2 More information if needed.
        …
        N. Another main point.
        If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
        However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
        Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
        User's Question: ```{{check}} ```
        AI Answer:"""

    custom_prompt = ChatPromptTemplate.from_template(template=custom_prompt_template)
    formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
    llm = ChatOpenAI(temperature=0.1)
    memory = ConversationBufferMemory(llm=llm, output_key='answer', memory_key='chat_history', return_messages=True)
    qa = ConversationalRetrievalChain.from_llm(llm=llm, memory=memory, chain_type="stuff", retriever=retriever,
                                               return_source_documents=True, get_chat_history=lambda h: h, verbose=True,
                                               combine_docs_chain_kwargs={"prompt": PromptTemplate(
                                                   template=custom_prompt_template, input_variables=["context_text", "check"])})
    return qa
````
How can I add the chat history to the prompt template as well, in the above function?
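In case it helps others with the same question: the pattern I would try is to add a `{chat_history}` placeholder to the combine-docs template and declare it in `input_variables`, so that it matches the memory's `memory_key='chat_history'`. The exact variable names the chain expects may differ from mine, so treat this as a sketch; the string plumbing alone looks like:

```python
custom_prompt_template = """Answer using the context and the conversation so far.
### Chat history:
{chat_history}
### Context:
{context_text}
### Question:
{check}
AI Answer:"""

# The chain would then receive, e.g.:
# PromptTemplate(template=custom_prompt_template,
#                input_variables=["chat_history", "context_text", "check"])
filled = custom_prompt_template.format(
    chat_history="Human: hi\nAI: Hello!",
    context_text="Some retrieved passage.",
    check="what did I just say?",
)
print(filled)
```

The same template string would then be passed inside `combine_docs_chain_kwargs`, with `"chat_history"` added to `input_variables` so it lines up with `memory_key='chat_history'`.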
### Suggestion:
_No response_ | Issue: How to add chat history in prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15692/comments | 5 | 2024-01-08T08:54:42Z | 2024-04-15T16:20:34Z | https://github.com/langchain-ai/langchain/issues/15692 | 2,069,993,594 | 15,692 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I suggest supporting the Milvus vector database's new [Dynamic Schema](https://milvus.io/docs/dynamic_schema.md) feature.
### Motivation
According to Milvus:
> Dynamic schema enables users to insert entities with new fields into a Milvus collection without modifying the existing schema. This means that users can insert data without knowing the full schema of a collection and can include fields that are not yet defined.
I think it is good to allow Langchain to have this feature when multiple types or schema of documents are added to the database.
### Your contribution
I propose to add a "dynamic_schema" flag to the `__init__` and `from_texts` method of the Milvus class:
`__init__` method:
https://github.com/langchain-ai/langchain/blob/4c47f39fcb539fdeff6dd6d9b1f483cd9a1af69b/libs/community/langchain_community/vectorstores/milvus.py#L107-L125
Change to:
```python
def __init__(
self,
embedding_function: Embeddings,
collection_name: str = "LangChainCollection",
collection_description: str = "",
connection_args: Optional[dict[str, Any]] = None,
consistency_level: str = "Session",
index_params: Optional[dict] = None,
search_params: Optional[dict] = None,
drop_old: Optional[bool] = False,
*,
primary_field: str = "pk",
text_field: str = "text",
vector_field: str = "vector",
metadata_field: Optional[str] = None,
partition_names: Optional[list] = None,
replica_number: int = 1,
timeout: Optional[float] = None,
dynamic_schema = False,
):
```
`from_texts` method:
https://github.com/langchain-ai/langchain/blob/4c47f39fcb539fdeff6dd6d9b1f483cd9a1af69b/libs/community/langchain_community/vectorstores/milvus.py#L839-L887
Change to:
```python
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
collection_name: str = "LangChainCollection",
connection_args: dict[str, Any] = DEFAULT_MILVUS_CONNECTION,
consistency_level: str = "Session",
index_params: Optional[dict] = None,
search_params: Optional[dict] = None,
drop_old: bool = False,
dynamic_schema = False,
**kwargs: Any,
) -> Milvus:
```
I may later submit a PR for this suggestion. | Add Dynamic Schema support for the Milvus vector store | https://api.github.com/repos/langchain-ai/langchain/issues/15690/comments | 3 | 2024-01-08T08:06:51Z | 2024-08-07T16:06:24Z | https://github.com/langchain-ai/langchain/issues/15690 | 2,069,926,013 | 15,690 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.pgvector import DistanceStrategy

os.environ['OPENAI_API_KEY'] = "mykey"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
    host="192.168.xxx.xx",
    port=5432,
    database="xxxxx",
    user="xxxxxxxxx",
    password=quote_plus("xxxx@r"),
)
vectordb = PGVector(embedding_function=embeddings,
                    collection_name="tmp06",
                    connection_string=CONNECTION_STRING,
                    distance_strategy=DistanceStrategy.COSINE,
                    )
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 100
retriever = vectordb.as_retriever()
memory = ConversationTokenBufferMemory(
    llm=llm,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True
)
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
)
chat_history = []
while True:
    memory.load_memory_variables({})
    question = input('ask:')
    result = qa.run({'question': question, 'chat_history': chat_history})
    print(result)
    chat_history.append([f'User: {question}', f'Ai: {result}'])
    print(chat_history)
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    st_history = ' '.join(map(str, chat_history))
    res = embeddings.embed_query(st_history)
    vectordb.add_vector(res)
    best_solution = vectordb.nearest(res, n=1)
    print(f'ok: {res[:4]}...')
    if question.lower() == 'bye':
        break
```
```
Traceback (most recent call last):
  File "C:\Users\syz\Downloads\ChatBotgpt-3.5-turbo-main\models\1227.py", line 53, in <module>
    vectordb.add_vector(res)
    ^^^^^^^^^^^^^^^^^^^
AttributeError: 'PGVector' object has no attribute 'add_vector'
```
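While looking into this: `PGVector` exposes no `add_vector`/`nearest` methods; the supported calls are along the lines of `add_texts`/`add_documents` and `similarity_search`/`similarity_search_by_vector` from the VectorStore interface. The underlying long-term-memory idea (store one embedding per past turn, later fetch the closest one) can be sketched in plain Python like this:

```python
import math

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

history_vectors = []  # list of (text, embedding) per past turn

def remember(text, embedding):
    history_vectors.append((text, embedding))

def nearest(query_embedding, n=1):
    ranked = sorted(history_vectors,
                    key=lambda item: cosine_sim(item[1], query_embedding),
                    reverse=True)
    return [text for text, _ in ranked[:n]]

remember("User: hi / Ai: hello", [1.0, 0.0])
remember("User: my name is Bob / Ai: nice", [0.0, 1.0])
print(nearest([0.1, 0.9]))  # -> ['User: my name is Bob / Ai: nice']
```

With the real store, `remember` would map to something like `vectordb.add_texts([st_history])` and `nearest` to `vectordb.similarity_search(question, k=1)` (double-check the method names against your installed version).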
### Suggestion:
How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot? | Issue: <How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15689/comments | 1 | 2024-01-08T07:57:56Z | 2024-04-15T16:24:00Z | https://github.com/langchain-ai/langchain/issues/15689 | 2,069,914,553 | 15,689 |
[
"langchain-ai",
"langchain"
] | ### Feature request
```python
from langchain_experimental.sql import SQLDatabaseChain
from langchain.sql_database import SQLDatabase
```
I'm using the above packages to connect to a Databricks database (SQLDatabase) and passing it to the model chain (SQLDatabaseChain) to generate the SQL query. But I want to close the database connection after each response. I couldn't find anything to close the database connection using this SQLDatabase package, and even in the SQLDatabase documentation I couldn't find anything. So I need some close() function to close the database connection.
### Motivation
Because this close() functionality is not available in the SQLDatabase package, I'm getting (sqlalchemy.exc.OperationalError), and I have to reboot the server to work around it, which is not a feasible solution. I also can't use a different package to connect to my database, because the model chain accepts only a SQLDatabase as its parameter.
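A possible stopgap, hedged because it relies on a private attribute: `SQLDatabase` wraps a SQLAlchemy engine internally, so something like `db._engine.dispose()` may release the pooled connections after each response (SQLAlchemy engines transparently reconnect after `dispose()`). The lifecycle pattern itself, always disposing when the request is done, can be sketched with a stand-in engine:

```python
from contextlib import contextmanager

class FakeEngine:
    """Stand-in for a SQLAlchemy Engine, just to show the lifecycle."""
    def __init__(self):
        self.disposed = False
    def dispose(self):
        self.disposed = True

@contextmanager
def disposing(engine):
    try:
        yield engine
    finally:
        engine.dispose()  # always release pooled connections

engine = FakeEngine()
with disposing(engine):
    pass  # run the SQLDatabaseChain here
assert engine.disposed
```

If `_engine` exists on your installed version, `with disposing(db._engine): ...` would apply the same pattern to the real chain; treat the attribute name as an assumption to verify.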
### Your contribution
Please add close() functionality to the SQLDatabase.py file so the database connection can be closed and I won't face these issues in the future.
Thanks in advance. | No close() functionality in langchain.sql_database import SQLDatabase package | https://api.github.com/repos/langchain-ai/langchain/issues/15687/comments | 1 | 2024-01-08T07:38:59Z | 2024-04-15T16:15:25Z | https://github.com/langchain-ai/langchain/issues/15687 | 2,069,891,752 | 15,687 |
[
"langchain-ai",
"langchain"
] | Hi,
I have built a RAG app with RetrievalQA and now want to try out a new approach. I am using an English LLM, but the responses should be in German. E.g., if the user asks something in German ("Hallo, wer bist du?"), the user query should be translated to "Hello, who are you?" before feeding it into the RAG pipeline.
After the model has produced its response in English ("I am a helpful assistant"), the output should be translated back to German ("Ich bin ein hilfreicher Assistent").
As translator I am using `googletrans==3.1.0a0`
Here is my RetrievalQA Chain:
```
from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferWindowMemory
import box
import yaml
from src.utils import set_prompt, setup_dbqa, build_retrieval_qa
from src.llm import build_llm
from src.prompts import mistral_prompt
from langchain.vectorstores import FAISS
with open('config/config.yml', 'r', encoding='utf8') as ymlfile:
cfg = box.Box(yaml.safe_load(ymlfile))
def build_retrieval_qaa(llm, prompt, vectordb):
chain_type_kwargs={ "prompt": prompt,
"memory": ConversationBufferWindowMemory(
memory_key="chat_history",
input_key="question",
#output_key="answer",
k=8,
return_messages=True),
"verbose": False
}
dbqa = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=vectordb,
return_source_documents=cfg.RETURN_SOURCE_DOCUMENTS,
chain_type_kwargs=chain_type_kwargs,
verbose=False
)
return dbqa
llm = build_llm(ANY LLM)
qa_prompt = set_prompt(mistral_prompt)
vectordb = FAISS.load_local(cfg.DB_FAISS_PATH, bge_embeddings)
vectordb = vectordb.as_retriever(search_kwargs={'k': cfg.VECTOR_COUNT, 'score_treshold': cfg.SCORE_TRESHOLD}, search_type="similarity")
dbqa = build_retrieval_qaa(llm, qa_prompt, vectordb)
dbqa("Was bedeutet IPv6 für die Software-Entwicklung?") # Gives me a response
```
The prompt looks like this:
```
mistral_prompt = """
<s> [INST] Du bist RagBot, ein hilfsbereiter Assistent. Antworte nur auf Deutsch. Verwende die folgenden Kontextinformationen, um die Frage am Ende zu beantworten. Wenn du die Antwort nicht kennst, sag einfach, dass du es nicht weisst. Versuche nicht eine Antwort zu erfinden.
###Chat History###: {chat_history}
###Kontext###: {context}
###Frage###: {question}
Antwort: [/INST]
"""
```
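One possible shape for this — a hedged sketch with stub translation functions standing in for googletrans, not tested against the setup above — is to wrap the chain call in plain helper functions, translating once on the way in and once on the way out:

```python
# Tiny stand-in "translator": a lookup table instead of googletrans,
# purely so the example runs standalone. Swap in a real translator in practice.
DE_EN = {"Hallo, wer bist du?": "Hello, who are you?"}
EN_DE = {"I am a helpful assistant": "Ich bin ein hilfreicher Assistent"}

def translate_de_en(text: str) -> str:
    return DE_EN.get(text, text)

def translate_en_de(text: str) -> str:
    return EN_DE.get(text, text)

def stub_chain(inputs: dict) -> dict:
    # Stand-in for dbqa(...): always answers in English.
    return {"result": "I am a helpful assistant"}

def ask(question_de: str) -> str:
    question_en = translate_de_en(question_de)   # DE -> EN before the chain
    result = stub_chain({"query": question_en})
    return translate_en_de(result["result"])     # EN -> DE after the chain

print(ask("Hallo, wer bist du?"))  # → Ich bin ein hilfreicher Assistent
```

The same two helpers would also need to be applied to the retrieved context and the stored chat history before they are formatted into the prompt, if those are kept in German.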
So what do I have to change here, to first translate the user query and the prompt from DE to EN, and afterwards the Model response from EN to DE? Specifically I have problems to translate the provided context, chat history and question. | Translate User Query and Model Response in RetrievalQA Chain | https://api.github.com/repos/langchain-ai/langchain/issues/15686/comments | 1 | 2024-01-08T07:34:14Z | 2024-04-15T16:37:21Z | https://github.com/langchain-ai/langchain/issues/15686 | 2,069,885,942 | 15,686 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.pgvector import DistanceStrategy
os.environ['OPENAI_API_KEY'] = "mykey"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host="192.168.xxx.xx",
port=5432,
database="xxxxx",
user="xxxxxxxxx",
password=quote_plus("xxxxxr"),
)
vectordb = PGVector(embedding_function=embeddings,
collection_name="tmp06",
connection_string=CONNECTION_STRING,
distance_strategy=DistanceStrategy.COSINE,
)
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 100
retriever = vectordb.as_retriever()
memory = ConversationTokenBufferMemory(
llm=llm,
max_token_limit=int(memory_token_limit),
memory_key="chat_history",
return_messages=True
)
qa = ConversationalRetrievalChain.from_llm(
llm,
retriever=retriever,
memory=memory,
verbose=True,
)
chat_history = []
while True:
memory.load_memory_variables({})
question = input('ask:')
result = qa.run({'question': question, 'chat_history': chat_history})
print(result)
chat_history.append([f'User: {question}', f'Ai: {result}'])
print(chat_history)
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
st_history = ' '.join(map(str, chat_history))
    res = embeddings.embed_query(st_history)  # <-- the vector I want to store
print(f'ok: {res[:4]}...')
if question.lower() == 'bye':
break
How can I store 'res' in a vector database, and run a vector retrieval query for the best match every time there's a new input, to achieve long-term memory for OpenAI responses? Please also help me check whether there are any errors in this code.
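For context, LangChain vector stores generally expose `add_texts(...)` to write and `similarity_search(query, k=...)` to read, so the history string itself can be stored and the embedding handled internally (those method names come from the generic VectorStore interface — worth verifying against your PGVector version). The store-then-retrieve-best-match loop can be sketched with a tiny in-memory stand-in:

```python
class TinyMemoryStore:
    """In-memory stand-in for a vector store holding past conversation turns."""
    def __init__(self):
        self.texts = []

    def add_texts(self, texts):
        self.texts.extend(texts)

    def similarity_search(self, query, k=1):
        # Toy similarity: word overlap instead of real embeddings.
        q = set(query.lower().split())
        scored = sorted(
            self.texts,
            key=lambda t: len(q & set(t.lower().split())),
            reverse=True,
        )
        return scored[:k]


store = TinyMemoryStore()
store.add_texts(["User: what is pgvector? Ai: a Postgres extension for vectors"])
store.add_texts(["User: hello Ai: hi there"])

# Before each new question, pull the most relevant past turn as extra context:
context = store.similarity_search("remind me what is pgvector?", k=1)
print(context[0])
```

With the real PGVector store, `vectordb.add_texts([st_history])` after each turn and `vectordb.similarity_search(question, k=1)` before each call would play the same roles, with the embedding computed by the configured embedding function rather than passing `res` in by hand.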
### Suggestion:
How can I store 'res' in a vector database, and run a vector retrieval query for the best match every time there's a new input, to achieve long-term memory for OpenAI responses? Please also help me check whether there are any errors in this code. | Issue: <How can I store 'res' in a vector database, and have a vector retrieval query for the best solution every time there's an input, to achieve long-term memory for OpenAI responses? Please help me modify this string: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15685/comments | 2 | 2024-01-08T07:33:59Z | 2024-04-15T16:20:22Z | https://github.com/langchain-ai/langchain/issues/15685 | 2,069,885,649 | 15,685
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
User: Help me reset my password
Agent: Please provide your account number
User: My account is Axxx
Agent: SMS verification code has been sent, please provide SMS verification code
User: 091839
Agent: Account password has been reset to 123456
The agent is responsible for resetting the user's password. In this example, the agent needs to communicate back and forth with the user: the user provides an account so a verification code can be sent, then provides the verification code so the password can be reset.
I added 4 tools, but whenever I ask any question, every tool gets used once, which is not what I expected. I want the appropriate tool to be used depending on the specific situation.
```
> Entering new AgentExecutor chain...
{
"action": "ResetPasswordAskTool",
"action_input": "Axxx"
}
Observation: 请提供下您的账号: (Please provide your account number:)
Thought:/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
{
"action": "ReceiveUserAccountTool",
"action_input": "Axxx"
}
Observation: 已经接收到您的账号,您提供的账号为:Axxx (Your account has been received; the account you provided is: Axxx)
Thought:/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
{
"action": "SendSmsTool",
"action_input": "Axxx"
}
Observation: 短信验证码已发出,请查看手机收到的重置密码的短信验证码,并提供给我。 (The SMS verification code has been sent; please check the password-reset code received on your phone and provide it to me.)
Thought:/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
{
"action": "ResetPasswordTool",
"action_input": "123456"
}
Observation: 密码已经重置为:123321 (The password has been reset to: 123321)
Thought:/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
{
"action": "Final Answer",
"action_input": "密码已经重置为:123321"
}
> Finished chain.
intermediate_steps= [(AgentAction(tool='ResetPasswordAskTool', tool_input='Axxx', log='{\n "action": "ResetPasswordAskTool",\n "action_input": "Axxx"\n}'), '请提供下您的账号:'), (AgentAction(tool='ReceiveUserAccountTool', tool_input='Axxx', log='{\n "action": "ReceiveUserAccountTool",\n "action_input": "Axxx"\n}'), '已经接收到您的账号,您提供的账号为:Axxx'), (AgentAction(tool='SendSmsTool', tool_input='Axxx', log='{\n "action": "SendSmsTool",\n "action_input": "Axxx"\n}'), '短信验证码已发出,请查看手机收到的重置密码的短信验证码,并提供给我。'), (AgentAction(tool='ResetPasswordTool', tool_input='123456', log='{\n "action": "ResetPasswordTool",\n "action_input": "123456"\n}'), '密码已经重置为:123321')]
response output= 密码已经重置为:123321 (The password has been reset to: 123321)
```
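One pattern worth considering (a general sketch, not LangChain-specific advice): keep the multi-step state machine outside the agent, and have each step hand its question straight back to the user, so the model never tries to run all four tools in a single pass. Roughly:

```python
class PasswordResetFlow:
    """Explicit state machine for the account -> SMS code -> reset sequence."""
    def __init__(self):
        self.state = "ASK_ACCOUNT"
        self.account = None

    def step(self, user_input: str) -> str:
        if self.state == "ASK_ACCOUNT":
            self.account = user_input
            self.state = "ASK_CODE"
            return f"SMS code sent for account {self.account}; please provide it."
        if self.state == "ASK_CODE":
            self.state = "DONE"
            return "Password has been reset to 123456."
        return "Flow already finished."


flow = PasswordResetFlow()
print(flow.step("Axxx"))    # asks for the SMS code
print(flow.step("091839"))  # completes the reset
```

In LangChain terms this roughly maps to giving each tool `return_direct=True` so control returns to the user after each step, with the flow state kept in conversation memory between turns (`return_direct` is a real option on tools; how well it composes with a structured-chat agent should be verified in your version).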
### Suggestion:
_No response_ | How to use tools for tasks that are dependent on each other | https://api.github.com/repos/langchain-ai/langchain/issues/15684/comments | 1 | 2024-01-08T07:14:13Z | 2024-04-15T16:15:21Z | https://github.com/langchain-ai/langchain/issues/15684 | 2,069,861,793 | 15,684 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.1.0
Python 3.10.12
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.agents.agent_toolkits import O365Toolkit
otoolkit = O365Toolkit()
o365_tools = otoolkit.get_tools()
tools.append(o365_tools)
from langchain_experimental.openai_assistant import OpenAIAssistantRunnable
agent = OpenAIAssistantRunnable.create_assistant(
name="My assistant",
    # instructions="""You are an admin agent, tasked with the following jobs:
    # 1. Read and post messages on Microsoft 365 Outlook"""
tools=tools,
model="gpt-4-1106-preview",
as_agent=True
)
from langchain.agents.agent import AgentExecutor
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
-------------------------------------------------------------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[13], line 3
1 from langchain.agents.agent import AgentExecutor
----> 3 agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:891, in AgentExecutor.from_agent_and_tools(cls, agent, tools, callbacks, **kwargs)
882 @classmethod
883 def from_agent_and_tools(
884 cls,
(...)
888 **kwargs: Any,
889 ) -> AgentExecutor:
890 """Create from agent and tools."""
--> 891 return cls(
892 agent=agent,
893 tools=tools,
894 callbacks=callbacks,
895 **kwargs,
896 )
File ~/smith/lib/python3.10/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs)
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
File ~/smith/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File ~/smith/lib/python3.10/site-packages/pydantic/main.py:1102, in pydantic.main.validate_model()
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:916, in AgentExecutor.validate_return_direct_tool(cls, values)
914 """Validate that tools are compatible with agent."""
915 agent = values["agent"]
--> 916 tools = values["tools"]
917 if isinstance(agent, BaseMultiActionAgent):
918 for tool in tools:
KeyError: 'tools'
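One detail worth double-checking in the snippet above (an observation about the code as pasted, not a confirmed root cause): `tools.append(o365_tools)` puts the whole list of O365 tools into `tools` as a single nested element, whereas `extend` flattens it into individual tool entries. The difference:

```python
o365_tools = ["tool_a", "tool_b"]  # stand-ins for the real Tool objects

tools = []
tools.append(o365_tools)
print(tools)  # → [['tool_a', 'tool_b']]  (one nested list element)

tools = []
tools.extend(o365_tools)
print(tools)  # → ['tool_a', 'tool_b']  (two tool elements)
```

A nested list inside `tools` can trip up downstream validation that expects a flat sequence of tools.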
### Expected behavior
agent_executor should get created properly, this was working a week ago. | AgentExecutor.from_agent_and_tools(agent=agent, tools=tools) -> throws KeyError. | https://api.github.com/repos/langchain-ai/langchain/issues/15679/comments | 4 | 2024-01-08T05:19:09Z | 2024-01-08T05:43:30Z | https://github.com/langchain-ai/langchain/issues/15679 | 2,069,692,507 | 15,679 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I created an app using AzureOpenAI, and initially, the import statement worked fine:
```
from langchain.chat_models import AzureChatOpenAI
```
My original version details were:
```
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
openai==1.6.1
```
Later, I upgraded to:
```
langchain==0.0.354
langchain-community==0.0.9
langchain-core==0.1.7
langchain-experimental==0.0.47
langchain-openai==0.0.2
openai==1.6.1
```
The upgrade led to a deprecation warning for `AzureChatOpenAI`. The suggestion was to use `langchain_openai.AzureChatOpenAI`, but trying to import it gave a `ModuleNotFoundError`. After some trial and error, I found that installing `langchain_openai` separately fixed the issue. Now, I can import `AzureOpenAI`, `AzureOpenAIEmbeddings`, and `AzureChatOpenAI`.
### Idea or request for content:
Despite my research, I couldn't find documentation mentioning the need to install `langchain_openai` separately, which wasted a lot of time and created unnecessary confusion. Sharing this issue here, hope it helps others facing a similar problem. Please add this to the documentation | class `AzureChatOpenAI` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use langchain_openai.AzureChatOpenAI instead. | https://api.github.com/repos/langchain-ai/langchain/issues/15674/comments | 2 | 2024-01-08T03:59:37Z | 2024-04-16T16:14:59Z | https://github.com/langchain-ai/langchain/issues/15674 | 2,069,592,782 | 15,674 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi, I am trying to use ConversationalRetrievalChain with Azure Cognitive Search as the retriever, with streaming enabled. The code does not produce output in a streaming manner. I would like to know whether LangChain supports such a feature when combining Azure Cognitive Search with an LLM.
The code snippet I used is as below.
# Code Snippet
def search_docs_chain_with_memory_streaming(
search_index_name=os.getenv("AZURE_COGNITIVE_SEARCH_INDEX_NAME"),
question_list=[],
answer_list=[],
):
code = detect(question)
language_name = map_language_code_to_name(code)
embeddings = OpenAIEmbeddings(
deployment=oaienvs.OPENAI_EMBEDDING_DEPLOYMENT_NAME,
model=oaienvs.OPENAI_EMBEDDING_MODEL_NAME,
openai_api_base=os.environ["OPENAI_API_BASE"],
openai_api_type=os.environ["OPENAI_API_TYPE"],
)
memory = ConversationBufferMemory(memory_key="chat_history", output_key="answer")
acs = AzureSearch(
azure_search_endpoint=os.getenv("AZURE_SEARCH_SERVICE_ENDPOINT"),
azure_search_key=os.getenv("AZURE_COGNITIVE_SEARCH_API_KEY"),
index_name=search_index_name,
search_type="similarity",
semantic_configuration_name="default",
embedding_function=embeddings.embed_query,
)
retriever = acs.as_retriever()
retriever.search_kwargs = {"score_threshold": 0.8} # {'k':1}
print("language_name-----", language_name)
hcp_conv_template = (
get_prompt(workflows, "retrievalchain_hcp_conv_template1", "system_prompt", "v0")
+ language_name +
get_prompt(workflows, "retrievalchain_hcp_conv_template2", "system_prompt", "v0")
)
CONDENSE_QUESTION_PROMPT = get_prompt(workflows, "retrievalchain_condense_question_prompt", "system_prompt", "v0")
prompt = PromptTemplate(
input_variables=["question"], template=CONDENSE_QUESTION_PROMPT
)
SYSTEM_MSG2 = get_prompt(workflows, "retrievalchain_system_msg_template", "system_prompt", "v0")
messages = [
SystemMessagePromptTemplate.from_template(SYSTEM_MSG2),
HumanMessagePromptTemplate.from_template(hcp_conv_template),
]
qa_prompt = ChatPromptTemplate.from_messages(messages)
llm = AzureChatOpenAI(
deployment_name=oaienvs.OPENAI_CHAT_MODEL_DEPLOYMENT_NAME, temperature=0.7, max_retries=4,
#callbacks=[streaming_cb],
streaming=True
#callback_manager=CallbackManager([MyCustomHandler()])
)
qa_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
get_chat_history=lambda o: o,
memory=memory,
condense_question_prompt=prompt,
return_source_documents=True,
verbose=True,
#callback_manager=convo_cb_manager,
#condense_question_llm = llm_condense_ques,
combine_docs_chain_kwargs={"prompt": qa_prompt},
)
if len(question_list) == 0:
question = question + ". Give the answer only in " + language_name + "."
for i in range(len(question_list)):
qa_chain.memory.save_context(
inputs={"question": question_list[i]}, outputs={"answer": answer_list[i]}
)
#return qa_chain.stream({"question": question, "chat_history": []})
return qa_chain
I have also tried the different callback handlers and invoke methods mentioned in https://gist.github.com/jvelezmagic/03ddf4c452d011aae36b2a0f73d72f68.
Kindly suggest if there is any workaround for this.
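For reference, the general token-streaming pattern used with callback handlers can be illustrated independently of Azure Cognitive Search: the LLM pushes tokens into a queue from one thread while the caller drains it. A stdlib-only sketch of that shape follows — the closest real equivalent in LangChain is an async iterator callback handler such as `AsyncIteratorCallbackHandler`, and whether ConversationalRetrievalChain actually forwards tokens to it depends on the LLM wrapper's `streaming=True` support in your version (both points worth verifying):

```python
import queue
import threading

class QueueCallbackHandler:
    """Minimal stand-in for a streaming callback handler:
    on_llm_new_token pushes each token into a queue as it arrives."""
    def __init__(self):
        self.q = queue.Queue()

    def on_llm_new_token(self, token):
        self.q.put(token)

    def on_llm_end(self):
        self.q.put(None)  # sentinel: stream finished

def fake_llm(handler):
    # Stand-in for the chain run that emits tokens while generating.
    for token in ["Hel", "lo", " wor", "ld"]:
        handler.on_llm_new_token(token)
    handler.on_llm_end()

handler = QueueCallbackHandler()
threading.Thread(target=fake_llm, args=(handler,)).start()

chunks = []
while (tok := handler.q.get()) is not None:
    chunks.append(tok)
print("".join(chunks))  # → Hello world
```

In a FastAPI app, the drain loop would typically live in an async generator feeding a StreamingResponse while the chain runs concurrently.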
### Motivation
The motivation is to stream the LLM response using LangChain and Azure Cognitive Search for a RAG use case.
### Your contribution
I have attached the code and the support links in the description. | Support for ConversationalRetrievalChain with Azure Cognitive Search as retriever and Azure Open AI as LLM for Streaming Output | https://api.github.com/repos/langchain-ai/langchain/issues/15673/comments | 2 | 2024-01-08T03:42:19Z | 2024-04-15T16:44:18Z | https://github.com/langchain-ai/langchain/issues/15673 | 2,069,572,435 | 15,673 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.


When I deploy the server, it reports the error shown above.
How can I fix it? Thank you!
Here's my code:
```python
PROMPT_TEST = """Answer the question based only on the following context:
Based on the previous {history}
Question: {question}
Following: {affection}
In total, you could select one of the above strategies or mix them.
{format_instructions}
"""
```
```python
chain_with_history_stream = RunnableWithMessageHistory(
    {
        "question": itemgetter("question"),
        "affection": RunnablePassthrough()
    }
    | PROMPT_TEST | llm,
    lambda session_id: MyRedisChatMessageHistory(session_id, url=REDIS_URL),
    input_messages_key="question",
    history_messages_key="history",
    verbose=True,
    max_message_history=30,
)
```
## Error feedback:
```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/runpy.py", line 289, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/xDAN-Dreamy-Chat/app/server.py", line 254, in <module>
    {
TypeError: unsupported operand type(s) for |: 'dict' and 'str'
```
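The TypeError itself is plain Python: a raw `str` cannot participate in an LCEL pipe, so `{...} | PROMPT_TEST` fails before LangChain is even involved. That can be reproduced outside LangChain:

```python
# Reproducing the error outside LangChain: dict has no "|" overload for str.
try:
    {"question": "hi"} | "some template string"
except TypeError as e:
    print(e)  # → unsupported operand type(s) for |: 'dict' and 'str'
```

In the snippet above, `PROMPT_TEST` is a raw string, so the pipe hits exactly this. Wrapping the template in a prompt object first — e.g. `prompt = ChatPromptTemplate.from_template(PROMPT_TEST)` (a real constructor in recent LangChain versions; verify the exact import for yours) — gives the pipe a Runnable to compose with instead.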
### Suggestion:
_No response_ | How to manager the new Variables:TypeError: unsupported operand type(s) for |: 'dict' and 'str' | https://api.github.com/repos/langchain-ai/langchain/issues/15672/comments | 5 | 2024-01-08T01:57:00Z | 2024-04-15T16:25:16Z | https://github.com/langchain-ai/langchain/issues/15672 | 2,069,449,221 | 15,672 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi, I am encountering this error when trying to import anything from `langchain.embeddings` on an Amazon Linux AMI with Python 3.9 and `langchain==0.0.350`:
```python
Traceback (most recent call last):
File "/home/ec2-user/app/search/./app.py", line 9, in <module>
from search import make_chain, postprocess
File "/home/ec2-user/app/search/search.py", line 6, in <module>
from langchain.embeddings import HuggingFaceInstructEmbeddings
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain/embeddings/__init__.py", line 62, in <module>
from langchain.embeddings.openai import OpenAIEmbeddings
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 1, in <module>
from langchain_community.embeddings.openai import (
ImportError: cannot import name '_is_openai_v1' from 'langchain_community.embeddings.openai' (/home/ec2-user/.local/lib/python3.9/site-packages/langchain_community/embeddings/openai.py)
```
The error currently occurs when calling
```python
from langchain.embeddings import HuggingFaceInstructEmbeddings
```
My requirements.txt file looks like this:
```
fastapi==0.105.0
lancedb==0.3.4
langchain==0.0.350
langserve==0.0.36
numpy==1.26.2
pandas==2.1.4
Requests==2.31.0
uvicorn==0.24.0.post1
```
I should note that I've tried reinstalling langchain, openai and transfomers. I've also tried python 3.10 and got the same error.
I should also note that none of my modules are called openai.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Call `from langchain.embeddings import HuggingFaceInstructEmbeddings` or any of the embeddings modules
### Expected behavior
Should be able to import without errors. | ImportError: cannot import name '_is_openai_v1' | https://api.github.com/repos/langchain-ai/langchain/issues/15671/comments | 3 | 2024-01-08T01:46:22Z | 2024-01-08T15:49:42Z | https://github.com/langchain-ai/langchain/issues/15671 | 2,069,437,208 | 15,671 |
[
"langchain-ai",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.prompts import PromptTemplate
from langchain.llms import VertexAI
import vertexai
class bcolors:
SERVER = '\033[92m'
CLIENT = '\033[93m'
ENDC = '\033[0m'
source_response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(
name="source",
description="source used to answer the user's question, should be a website.",
),
]
found_response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(
name="found",
description="whether the model could find the proper answers or not.",
),
]
source_output_parser = StructuredOutputParser.from_response_schemas(source_response_schemas)
found_output_parser = StructuredOutputParser.from_response_schemas(found_response_schemas)
format_instructions = source_output_parser.get_format_instructions()
found_checker = found_output_parser.get_format_instructions()
prompt = PromptTemplate(
template="answer the users question as best as possible.\n{found_cheker}\n{format_instructions}\n{question}",
input_variables=["question"],
partial_variables={"found_checker": found_checker, "format_instructions": format_instructions},
)
vertexai.init(project="my_project_id", location="us-central1")
model = VertexAI(model_name='text-bison@001', max_output_tokens=512, temperature=0.2)
chain = prompt | model | found_output_parser | source_output_parser
while 1:
message = input(bcolors.CLIENT + "Ask to the Cooking Assistant --->> " + bcolors.ENDC)
for s in chain.stream({"question": message}):
print(bcolors.SERVER + "<<<<<<< Cooking Assistant >>>>>>", str(s) + bcolors.ENDC)
```
This code returns the following error:
```
KeyError: "Input to PromptTemplate is missing variable 'found_cheker'. Expected: ['found_cheker', 'question'] Received: ['question']"
```
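One thing that jumps out in the snippet (an observation, not a verified diagnosis): the template string interpolates `{found_cheker}` while `partial_variables` supplies the key `found_checker` — the spellings differ by one letter, and `found_cheker` is exactly the variable the KeyError names. The failure mode is easy to reproduce with plain string formatting:

```python
# The template asks for {found_cheker}; we only supply found_checker.
template = "{found_cheker} / {question}"
try:
    template.format(found_checker="instructions", question="hi")
except KeyError as e:
    print(e)  # → 'found_cheker'
```

Renaming one side so both read `found_checker` should let the PromptTemplate validation pass.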
### Expected behavior
I expect the model responses would be something like,
```
{'answer': 'proper answer', 'found': True, 'source': 'the source found.'}
``` | multiple ResponseSchema | https://api.github.com/repos/langchain-ai/langchain/issues/15670/comments | 3 | 2024-01-08T01:02:36Z | 2024-01-16T00:48:55Z | https://github.com/langchain-ai/langchain/issues/15670 | 2,069,393,611 | 15,670 |
[
"langchain-ai",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(
name="found check",
description="boolean value (True or False) whether the data found from the reference or not.",
),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
qa_prompt = PromptTemplate(
input_variables=[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
],
partial_variables={"format_instructions": format_instructions},
)
rag_chain = (
RunnablePassthrough.assign(
context=contextualized_question | temp_retriever | format_docs
)
| qa_prompt
| llm
| remove_prefix
)
```
### Expected behavior
I expect that I could use something like ChatPromptTemplate.from_messages and response_schemas at a same time to return specific value with the conversation history based prompting. | Adding response_schemas to ChatPromptTemplate.from_messages prompt design | https://api.github.com/repos/langchain-ai/langchain/issues/15669/comments | 2 | 2024-01-07T23:58:01Z | 2024-01-08T00:59:54Z | https://github.com/langchain-ai/langchain/issues/15669 | 2,069,357,139 | 15,669 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain v0.0.354, Python v3.11, Chroma v0.4.22, Lark v1.1.8
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def load_self_query_retriever(persist_dir: str, docs: list, metadata_field_info: list, document_content_description = "Information about various documents, the date they are up to date with and where they were sourced from."):
llm = ChatOpenAI(temperature=0)
vectorstore = None
try:
vectorstore = Chroma(persist_directory=persist_dir, embedding_function=get_embedding_function())
except:
vectorstore = Chroma.from_documents(docs, get_embedding_function(), persist_directory=persist_dir)
return SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
)
metadata_field_info = [
AttributeInfo(name="source",description="The document this chunk is from.",type="string",),
AttributeInfo(name="origin",description="The origin the document came from. Bancworks is the higher priority.",type="string",),
AttributeInfo(name="date_day",description="The day the document was uploaded.",type="integer",),
AttributeInfo(name="date_uploaded",description="The month year the document is current to.",type="integer",)
]
self_query_retriever = load_self_query_retriever("storage/deploy/chroma-db-self-query", bancworks_docs, metadata_field_info)
```
The following error is thrown:
```python
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 110
76 return SelfQueryRetriever.from_llm(
77 llm,
78 vectorstore,
79 document_content_description,
80 metadata_field_info,
81 )
83 metadata_field_info = [
84 AttributeInfo(name="source",description="The document this chunk is from.",type="string",),
85 AttributeInfo(name="origin",description="The origin the document came from. Comes from either scraped websites like TheKinection.org, Kinecta.org or database files like Bancworks. Bancworks is the higher priority.",type="string",),
(...)
107 # ),
108 ]
--> 110 self_query_retriever = load_self_query_retriever("storage/deploy/chroma-db-self-query", bancworks_docs, metadata_field_info)
113 # parent_retriever = load_parent_retriever("full_docs", "storage/deploy/chroma-db-parent")
114
115 # current_place = 0
(...)
127 # retriever.add_documents(bancworks_docs)
128 # retriever.add_documents(bancworks_docs)
Cell In[1], line 76, in load_self_query_retriever(persist_dir, docs, metadata_field_info, document_content_description)
73 llm = ChatOpenAI(temperature=0)
74 vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings(), persist_directory=persist_dir)
---> 76 return SelfQueryRetriever.from_llm(
77 llm,
78 vectorstore,
79 document_content_description,
80 metadata_field_info,
81 )
File /etc/system/kernel/.venv/lib64/python3.11/site-packages/langchain/retrievers/self_query/base.py:225, in SelfQueryRetriever.from_llm(cls, llm, vectorstore, document_contents, metadata_field_info, structured_query_translator, chain_kwargs, enable_limit, use_original_query, **kwargs)
218 if (
219 "allowed_operators" not in chain_kwargs
220 and structured_query_translator.allowed_operators is not None
221 ):
222 chain_kwargs[
223 "allowed_operators"
224 ] = structured_query_translator.allowed_operators
--> 225 query_constructor = load_query_constructor_runnable(
226 llm,
227 document_contents,
228 metadata_field_info,
229 enable_limit=enable_limit,
230 **chain_kwargs,
231 )
232 return cls(
233 query_constructor=query_constructor,
234 vectorstore=vectorstore,
(...)
237 **kwargs,
238 )
File /etc/system/kernel/.venv/lib64/python3.11/site-packages/langchain/chains/query_constructor/base.py:357, in load_query_constructor_runnable(llm, document_contents, attribute_info, examples, allowed_comparators, allowed_operators, enable_limit, schema_prompt, fix_invalid, **kwargs)
353 for ainfo in attribute_info:
354 allowed_attributes.append(
355 ainfo.name if isinstance(ainfo, AttributeInfo) else ainfo["name"]
356 )
--> 357 output_parser = StructuredQueryOutputParser.from_components(
358 allowed_comparators=allowed_comparators,
359 allowed_operators=allowed_operators,
360 allowed_attributes=allowed_attributes,
361 fix_invalid=fix_invalid,
362 )
363 return prompt | llm | output_parser
File /etc/system/kernel/.venv/lib64/python3.11/site-packages/langchain/chains/query_constructor/base.py:99, in StructuredQueryOutputParser.from_components(cls, allowed_comparators, allowed_operators, allowed_attributes, fix_invalid)
96 return fixed
98 else:
---> 99 ast_parse = get_parser(
100 allowed_comparators=allowed_comparators,
101 allowed_operators=allowed_operators,
102 allowed_attributes=allowed_attributes,
103 ).parse
104 return cls(ast_parse=ast_parse)
File /etc/system/kernel/.venv/lib64/python3.11/site-packages/langchain/chains/query_constructor/parser.py:174, in get_parser(allowed_comparators, allowed_operators, allowed_attributes)
172 # QueryTransformer is None when Lark cannot be imported.
173 if QueryTransformer is None:
--> 174 raise ImportError(
175 "Cannot import lark, please install it with 'pip install lark'."
176 )
177 transformer = QueryTransformer(
178 allowed_comparators=allowed_comparators,
179 allowed_operators=allowed_operators,
180 allowed_attributes=allowed_attributes,
181 )
182 return Lark(GRAMMAR, parser="lalr", transformer=transformer, start="program")
ImportError: Cannot import lark, please install it with 'pip install lark'.
```
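When `pip install lark` has been run but the ImportError persists, a frequent cause — an assumption worth ruling out, not a confirmed diagnosis for this report — is that pip installed the package into a different interpreter or virtualenv than the one executing the script. A quick check from inside the failing environment:

```python
import importlib.util
import sys

print("interpreter:", sys.executable)

spec = importlib.util.find_spec("lark")
if spec is None:
    print("lark is NOT importable from this interpreter")
else:
    print("lark found at:", spec.origin)
```

If the interpreter path differs from the one pip reports (`pip -V`), installing with `python -m pip install lark` using that exact interpreter usually resolves the mismatch.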
### Expected behavior
Be able to instantiate SelfQueryRetriever.from_llm successfully | SelfQueryRetriever.from_llm raises following issue: ImportError: Cannot import lark, please install it with 'pip install lark'. | https://api.github.com/repos/langchain-ai/langchain/issues/15668/comments | 8 | 2024-01-07T23:44:54Z | 2024-05-15T04:41:38Z | https://github.com/langchain-ai/langchain/issues/15668 | 2,069,348,971 | 15,668 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be helpful if I could make a RAG chain output whether or not it found the answer in the reference, as a boolean value.
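As far as I know nothing like this ships in LangChain today; as a purely illustrative sketch (the helper name and the token-overlap heuristic are both my own, not a real grounding check), such a boolean flag could be computed post hoc:

```python
def found_in_reference(answer: str, reference: str, threshold: float = 0.5) -> bool:
    """Naive heuristic: True if enough of the answer's words appear in the reference."""
    answer_words = {w.lower().strip(".,!?") for w in answer.split()}
    if not answer_words:
        return False
    reference_words = {w.lower().strip(".,!?") for w in reference.split()}
    overlap = len(answer_words & reference_words) / len(answer_words)
    return overlap >= threshold

print(found_in_reference("Paris is the capital", "The capital of France is Paris"))    # True
print(found_in_reference("Berlin has nice weather", "The capital of France is Paris"))  # False
```

A production version would more likely ask the model itself to judge groundedness; this only shows the boolean-output shape the feature request describes.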
### Motivation
From personal ideation.
### Your contribution
N/A | found checker for RAG chain | https://api.github.com/repos/langchain-ai/langchain/issues/15667/comments | 2 | 2024-01-07T23:32:22Z | 2024-07-12T16:03:13Z | https://github.com/langchain-ai/langchain/issues/15667 | 2,069,343,504 | 15,667 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
After upgrading to langchain 0.1.0, I received deprecation warnings and updated my imports to langchain_community, which cleared that error. I then received deprecation warnings about `__call__` being replaced by `invoke`:
The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
I switched to the invoke method on the call but still get some of the same deprecation warnings. Not sure how to fix this or if it's a bug. Code:
```python
qachain = RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever(search_kwargs={"k": args.top_matches}))
res = (qachain.invoke({"query": args.question}))
```
How do I fix this?
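One stdlib-only way to track down which call still emits the warning (no LangChain-specific APIs assumed) is to capture warnings together with their origin, so the offending file and line are visible:

```python
import warnings

def capture_warnings(fn, *args, **kwargs):
    """Run fn and return (result, list of 'file:line message' strings per warning)."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = fn(*args, **kwargs)
    return result, [f"{w.filename}:{w.lineno} {w.message}" for w in caught]

def legacy_call():
    warnings.warn("__call__ was deprecated; use invoke instead", DeprecationWarning)
    return "ok"

result, emitted = capture_warnings(legacy_call)
print(result)      # ok
print(emitted[0])  # file:line of the deprecated call site
```

Alternatively, `warnings.simplefilter("error")` turns warnings into exceptions, so the full traceback points at whatever internal call is still using the deprecated path.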
### Suggestion:
_No response_ | Issue: __call__ was deprecated use invoke instead warning persists after switching to invoke | https://api.github.com/repos/langchain-ai/langchain/issues/15665/comments | 2 | 2024-01-07T21:49:55Z | 2024-05-31T15:02:56Z | https://github.com/langchain-ai/langchain/issues/15665 | 2,069,304,783 | 15,665 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello LangChain community,
We're always happy to see more folks getting involved in contributing to the LangChain codebase.
This is a good first issue if you want to learn more about how to set up
for development in the LangChain codebase.
## Goal
Your contribution will make it easier for users to use integrations with the newest LangChain syntax
## Context
As you may have noticed, we’ve recently gone to LangChain 0.1. As part of this, we want to update integration pages to be consistent with new methods. These largely include: (a) new methods for invoking integrations and chains (`invoke`, `stream`), (b) new methods for creating chains (LCEL, `create_xyz_..`).
There are a lot of integrations, so we’d love community help! This is a great way to get started contributing to the library as it will make you familiar with best practices and various integrations.
## Set up for development
There are lots of integration notebooks in https://github.com/langchain-ai/langchain/tree/master/docs/docs/integrations. After making changes there, you should run `make format` from the root LangChain directory to run our formatter.
## Shall you accept
Shall you accept this challenge, please claim one (and only one) of the modules from the list
below as one that you will be working on, and respond to this issue.
Once you've made the required code changes, open a PR and link to this issue.
## Acceptance Criteria
- Uses new methods for calling chains (`invoke`, `stream`, etc)
- Uses LCEL where appropriate
- Follows the format outlined below
## Examples
We've gotten started with some examples to show how we imagine these integration pages should look like. The exact format may look different for each type of integration, so make sure to look at the type you are working on:
- LLMs:
- https://python.langchain.com/docs/integrations/llms/cohere
- Chat Models:
- https://python.langchain.com/docs/integrations/chat/cohere
- Vectorstores:
- https://python.langchain.com/docs/integrations/vectorstores/faiss
- Retrievers:
- https://python.langchain.com/docs/integrations/retrievers/tavily
- https://python.langchain.com/docs/integrations/retrievers/ragatouille
- Tools:
- https://python.langchain.com/docs/integrations/tools/tavily_search
- Toolkits:
- https://python.langchain.com/docs/integrations/toolkits/gmail
- Memory:
- https://python.langchain.com/docs/integrations/memory/sql_chat_message_history
## Your contribution
Please sign up by responding to this issue and including the name of the module.
### Suggestion:
_No response_ | For New Contributors: Update Integration Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15664/comments | 30 | 2024-01-07T21:22:46Z | 2024-02-12T05:19:32Z | https://github.com/langchain-ai/langchain/issues/15664 | 2,069,295,306 | 15,664 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.354 (also tried with 0.1.0)
Python version: 3.9.18
yfinance version: 0.2.35
OS: Windows 10
### Who can help?
@hwchase17 , @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just executing the bottom of this page (the tool solely): https://python.langchain.com/docs/integrations/tools/yahoo_finance_news
**Running this code returned the `KeyError: 'description'` error:**

```python
from langchain.tools.yahoo_finance_news import YahooFinanceNewsTool

tool = YahooFinanceNewsTool()
res = tool.run("AAPL")
print(res)
```

Updating langchain to the newest version didn't change anything for me.
I'm also using a Poetry-installed file with a clean, fresh environment; same error.
### Expected behavior
To do exactly whats written in the docs and to not drop an error | using YahooFinanceNewsTool() results to KeyError: 'description' | https://api.github.com/repos/langchain-ai/langchain/issues/15656/comments | 1 | 2024-01-07T13:52:58Z | 2024-04-14T16:16:15Z | https://github.com/langchain-ai/langchain/issues/15656 | 2,069,139,043 | 15,656 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to use ChatOpenAI as a tool. I need to pass the agent's chat_history or other context into the tool, but a tool generally only accepts a string input, so how do I pass in other parameters?
### Suggestion:
none | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15654/comments | 1 | 2024-01-07T10:35:43Z | 2024-01-07T10:55:01Z | https://github.com/langchain-ai/langchain/issues/15654 | 2,069,076,913 | 15,654 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
When clicking on Redis and then trying to redirect to GitHub to see the implementation, the page is not found
from this [integration's page](https://integrations.langchain.com/memory)

Error :

### Idea or request for content:
I would like to know where this has been implemented, so I can fix the issue and raise a PR.
Happy to help the community | DOC: Integration re-direct to github page not found | https://api.github.com/repos/langchain-ai/langchain/issues/15651/comments | 4 | 2024-01-07T07:32:23Z | 2024-04-14T16:16:47Z | https://github.com/langchain-ai/langchain/issues/15651 | 2,069,024,318 | 15,651 |
[
"langchain-ai",
"langchain"
] | I'm trying to create a simple test that can:
- use Ollama as the model
- use the agent with my custom tools to enrich the output
- history to store the conversation history
Based on examples, the code should look like this:
```
const llm = new ChatOllama(...);
const tools = [...];
const executor = await initializeAgentExecutorWithOptions(tools, llm, ...);
```
the compiler does not like the `llm` parameter because
```
Argument of type 'ChatOllama' is not assignable to parameter of type 'BaseLanguageModelInterface<any, BaseLanguageModelCallOptions>'
```
and this is the same for OpenAI llm as well.
I don't see this `BaseLanguageModelCallOptions` interface being used anywhere in the code. Is this the right way to use it? | Creating a conversation agent with tools and history for Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/15650/comments | 2 | 2024-01-07T06:16:24Z | 2024-01-07T15:12:27Z | https://github.com/langchain-ai/langchain/issues/15650 | 2,069,007,389 | 15,650 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I run my LangChain code I get the warning `D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn(`. How do I resolve it?
### Suggestion:
_No response_ | Issue: how to resolve this warning "My code has a warning "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn("" | https://api.github.com/repos/langchain-ai/langchain/issues/15647/comments | 3 | 2024-01-07T00:49:44Z | 2024-06-17T11:24:20Z | https://github.com/langchain-ai/langchain/issues/15647 | 2,068,909,302 | 15,647 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I had no issues running the langchain code before, but when I moved the callback_handler position, this warning appeared: "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn("
### Suggestion:
_No response_ | Issue: My code has a warning "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn(" | https://api.github.com/repos/langchain-ai/langchain/issues/15646/comments | 1 | 2024-01-07T00:42:11Z | 2024-01-07T00:48:07Z | https://github.com/langchain-ai/langchain/issues/15646 | 2,068,907,422 | 15,646 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.354, Python 3.11
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I embedded 1000 or so documents and did a vector similarity search, which came back with a lot of good results. But the get_relevant_documents call returned nothing, so the LLM got nothing either.
My retriever is:
- ParentDocumentRetriever with a parent_splitter and child_splitter
- Parent splits at 2000 tokens. Child splits at 400.
```python
def load_chroma_db(collection_name: str):
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
vectorstore = None
try:
vectorstore = Chroma(persist_directory="storage/deploy/chroma-db", embedding_function=get_embedding_function())
print("Loaded existing vector store")
except:
print("Creating new vector store")
vectorstore = Chroma(
collection_name=collection_name,
embedding_function=get_embedding_function(),
persist_directory="storage/deploy/chroma-db"
)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
return retriever
retriever = load_chroma_db("full_docs")
retriever.vectorstore.similarity_search_with_score("How do I make a zelle transaction?")
```
This returns
```python
[(Document(page_content='Basic Introduction to Zelle® What is Zelle®? Zelle® is a fast, safe, and easy way for members to send money directly between most bank or credit union accounts in the U.S. These Person- to- Person transactions typically occur within minutes. With just an email address or U.S. mobile phone number, members can send money to friends, family, and people they know and trust, regardless of where they', metadata={'date_day': 27, 'date_month': 4, 'date_year': 2023, 'doc_id': '22bfa535-8bc7-4a97-8270-d84b77ba81b0', 'source': 'storage/Bancworks/Zelle & CST FAQs - Internal Use.pdf.txt'}),
0.24197007715702057),
(Document(page_content='Basic Introduction to Zelle® ......................................... 2 Zelle® Transaction Limits / Tiers ............................. 2 Enrollment / Eligible Accounts ................................ 3 Sending / Receiving Transactions ........................... 4 Disputes / Fraud / Scams ......................................... 5 Customer Service Tool (CST)', metadata={'date_day': 27, 'date_month': 4, 'date_year': 2023, 'doc_id': 'eb27b502-c37c-4462-9d8b-488b53c3aa11', 'source': 'storage/Bancworks/Zelle & CST FAQs - Internal Use.pdf.txt'}),
0.24453413486480713),
(Document(page_content='Step 1: Find Zelle in the main menu of the Kinecta mobile banking app. Step 2: Enroll with a U.S. mobile number or email address and select a checking account. Step 3: Start using Zelle. Talking Points: • Zelle is a fast, safe and easy way to send money directly between almost any checking or savings accounts in the U.S., typically within minutes. • With just an email address or U.S. mobile phone', metadata={'date_day': 28, 'date_month': 4, 'date_year': 2023, 'doc_id': '161170fc-8871-412d-a0e7-47e4b1b3d889', 'source': 'storage/Bancworks/Zelle - MarketGram - 20230502.pdf.txt'}),
0.2447606921195984),
(Document(page_content='• Zelle is a fast, safe and easy way to send money directly between almost any checking or savings accounts in the U.S., typically within minutes. • With just an email address or U.S. mobile phone number, send money to people you trust, regardless of where they bank. • Transactions between enrolled consumers typically occur in minutes and generally do not incur transaction fees. • Send, split or', metadata={'date_day': 28, 'date_month': 4, 'date_year': 2023, 'doc_id': '161170fc-8871-412d-a0e7-47e4b1b3d889', 'source': 'storage/Bancworks/Zelle - MarketGram - 20230502.pdf.txt'}),
0.2502959370613098)]
```
If I do the following call:
```python
retriever.get_relevant_documents("How do I make a zelle transaction?", k=4)
```
I get nothing returned.
```python
[]
```
### Expected behavior
Parent documents should be returned based on the child embeddings found. | ChromaDB ParentDocumentRetriever.get_relevant_documents not returning docs despite similarity_search returning matching docs | https://api.github.com/repos/langchain-ai/langchain/issues/15644/comments | 4 | 2024-01-06T22:51:01Z | 2024-01-07T00:56:13Z | https://github.com/langchain-ai/langchain/issues/15644 | 2,068,873,967 | 15,644 |
[
"langchain-ai",
"langchain"
] | ### System Info
Using...
langchain==0.0.353
langchain-core==0.1.4
Seems to have broken from yesterday's merges?
```
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
--
2319 | File "/root/.local/lib/python3.9/site-packages/langchain/chains/__init__.py", line 56, in <module>
2320 | from langchain.chains.openai_functions import (
2321 | File "/root/.local/lib/python3.9/site-packages/langchain/chains/openai_functions/__init__.py", line 1, in <module>
2322 | from langchain.chains.openai_functions.base import (
2323 | File "/root/.local/lib/python3.9/site-packages/langchain/chains/openai_functions/base.py", line 32, in <module>
2324 | from langchain.utils.openai_functions import convert_pydantic_to_openai_function
2325 | File "/root/.local/lib/python3.9/site-packages/langchain/utils/openai_functions.py", line 1, in <module>
2326 | from langchain_community.utils.openai_functions import (
2327 | File "/root/.local/lib/python3.9/site-packages/langchain_community/utils/openai_functions.py", line 3, in <module>
2328 | from langchain_core.utils.function_calling import (
2329 | ModuleNotFoundError: No module named 'langchain_core.utils.function_calling'
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's triggered, in our case, when we import `StuffDocumentsChain`
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
### Expected behavior
No error! | Broken imports | https://api.github.com/repos/langchain-ai/langchain/issues/15643/comments | 2 | 2024-01-06T21:23:27Z | 2024-01-06T21:45:16Z | https://github.com/langchain-ai/langchain/issues/15643 | 2,068,840,009 | 15,643 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain
### Who can help?
LangChain with Gemini Pro
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
stuff_chain = load_qa_chain(model, chain_type="stuff", prompt=prompt)
question = "content pls?"
stuff_answer = stuff_chain(
    {"input_documents": pages[1:], "question": question}, return_only_outputs=True
)
```
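Whether this is a client-side or proxy issue depends on whatever is serving localhost:36027, which is outside this report. As a generic, hedged workaround for transient read timeouts (not a Gemini- or LangChain-specific fix), a stdlib retry-with-backoff wrapper sometimes helps:

```python
import time

def retry(fn, attempts=3, base_delay=1.0, exceptions=(Exception,)):
    """Call fn, retrying with exponential backoff on the given exceptions."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo with a function that times out twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("read timed out")
    return "done"

print(retry(flaky, base_delay=0.01, exceptions=(TimeoutError,)))  # done
```

The chain call itself would be wrapped like `retry(lambda: stuff_chain({...}), exceptions=(ReadTimeout,))`, with `ReadTimeout` imported from whichever HTTP client actually raises it here (an assumption; check the traceback).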
### Expected behavior
ReadTimeout: HTTPConnectionPool(host='localhost', port=36027): Read timed out. (read timeout=60.0) | ReadTimeout with Arabic pdf files | https://api.github.com/repos/langchain-ai/langchain/issues/15639/comments | 3 | 2024-01-06T19:35:55Z | 2024-04-13T16:12:05Z | https://github.com/langchain-ai/langchain/issues/15639 | 2,068,795,849 | 15,639 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have the following ChromaDB setup:
```python
def load_chroma_db(collection_name: str):
    parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
    child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
    vectorstore = None
    try:
        vectorstore = Chroma(persist_directory="storage/deploy/chroma-db", embedding_function=get_embedding_function())
        print("Loaded existing vector store")
    except:
        print("Creating new vector store")
        vectorstore = Chroma(
            collection_name=collection_name,
            embedding_function=get_embedding_function(),
            persist_directory="storage/deploy/chroma-db"
        )
    store = InMemoryStore()
    retriever = ParentDocumentRetriever(
        vectorstore=vectorstore,
        docstore=store,
        child_splitter=child_splitter,
        parent_splitter=parent_splitter
    )
    return retriever
```
The issue is that if I add a bunch of documents to the retriever, memory can eventually run out and crash the system. Is there a way to keep this out of RAM instead? Or am I misunderstanding the usage of this.
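For anyone else hitting this: the docstore only needs the key-value `mget`/`mset` shape that `InMemoryStore` exposes (method names assumed from LangChain's base store interface; verify against your version, and check `langchain.storage` first, since persistent stores may already ship there). A minimal stdlib-only sketch of a disk-backed store with that shape:

```python
import pickle
import tempfile
from pathlib import Path

class FileBackedStore:
    """Minimal disk-backed key-value store mimicking InMemoryStore's mget/mset."""
    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def mset(self, key_value_pairs):
        for key, value in key_value_pairs:
            (self.root / key).write_bytes(pickle.dumps(value))

    def mget(self, keys):
        results = []
        for key in keys:
            path = self.root / key
            results.append(pickle.loads(path.read_bytes()) if path.exists() else None)
        return results

store = FileBackedStore(tempfile.mkdtemp())
store.mset([("doc-1", {"page_content": "hello"})])
print(store.mget(["doc-1", "missing"]))  # [{'page_content': 'hello'}, None]
```

Each parent document then lives on disk instead of in RAM; only the child embeddings stay in Chroma.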
### Suggestion:
Is there a non-inmemory docstore that can be used in the ParentDocumentRetriever or does it not make sense in the use case. | Issue: What docstore to use in ChromaDB that isn't in memory? | https://api.github.com/repos/langchain-ai/langchain/issues/15633/comments | 5 | 2024-01-06T10:53:10Z | 2024-03-07T10:29:16Z | https://github.com/langchain-ai/langchain/issues/15633 | 2,068,532,355 | 15,633 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/integrations/chat/fireworks
Hi, I'm new Langchain with Fireworks.
I run this code in document 'ChatFireworks' and got an issue.
Environment : python 3.11, Window10
```python
# Create a simple chain with memory
chain = (
    RunnablePassthrough.assign(
        history=memory.load_memory_variables | (lambda x: x["history"])
    )
    | prompt
    | llm.bind(stop=["\n\n"])
)
```

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[17], line 3
      1 chain = (
      2     RunnablePassthrough.assign(
----> 3         history=memory.load_memory_variables | (lambda x: x["history"])
      4     )
      5     | prompt
      6     | llm.bind(stop=["\n\n"])
      7 )

TypeError: unsupported operand type(s) for |: 'method' and 'function'
```
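The TypeError itself is generic Python: `|` only works when an operand's class overloads `__or__`, which plain functions and bound methods do not. A stdlib-only toy showing the failure and the wrapper pattern (in LangChain, `RunnableLambda` plays the wrapper role; that name is an assumption to check against your version):

```python
class Pipeable:
    """Tiny stand-in for a runnable that supports | composition."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        other_fn = other.fn if isinstance(other, Pipeable) else other
        return Pipeable(lambda x: other_fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

def load_memory(_):
    return {"history": ["hi"]}

try:
    load_memory | (lambda x: x["history"])   # plain function | function
except TypeError as e:
    print(e)                                 # unsupported operand type(s) for |

chain = Pipeable(load_memory) | (lambda x: x["history"])
print(chain.invoke(None))                    # ['hi']
```

So wrapping `memory.load_memory_variables` in the runnable wrapper before the first `|` should make the documented snippet compose.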
### Idea or request for content:
TypeError: unsupported operand type(s) for |: 'method' and 'function' | DOC: langchain with Fireworks ai | https://api.github.com/repos/langchain-ai/langchain/issues/15632/comments | 4 | 2024-01-06T10:45:41Z | 2024-04-13T16:16:17Z | https://github.com/langchain-ai/langchain/issues/15632 | 2,068,529,844 | 15,632
[
"langchain-ai",
"langchain"
] | ### System Info
`langchain==0.1.0`
`langchain-community==0.0.9`
`langchain-core==0.1.7`
`linux 20.04`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Following the documentation https://python.langchain.com/docs/modules/agents/how_to/custom_agent
### Expected behavior
Should output something similar to this
```
{'input': 'How many letters in the word educa',
'output': 'There are 5 letters in the word "educa".'}
```
Instead got an error when ran `agent_executor.invoke({"input": "How many letters in the word educa"})`
```
NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}
``` | 'Unrecognized request argument supplied: functions' error when executing agent | following documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15628/comments | 2 | 2024-01-06T06:53:43Z | 2024-01-06T07:02:47Z | https://github.com/langchain-ai/langchain/issues/15628 | 2,068,438,255 | 15,628 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11, Langchain 0.0.354, ChromaDB v0.4.22
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.text_splitter import MarkdownTextSplitter, RecursiveCharacterTextSplitter
from langchain.document_loaders import DirectoryLoader
from langchain.storage import InMemoryStore
from langchain.retrievers import ParentDocumentRetriever
from langchain.vectorstores import Chroma
import chromadb
def load_chroma_db(collection_name: str):
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
vectorstore = Chroma(
collection_name=collection_name,
embedding_function=get_embedding_function(),
persist_directory="storage/deploy/chroma-db"
)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
return retriever
retriever = load_chroma_db("full_docs")
retriever.add_documents(bancworks_docs)
```
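Not in the original report, but worth ruling out: this AttributeError often means a local file or folder named `chromadb` is shadowing the installed package (an assumption, not a confirmed diagnosis). A stdlib-only check of what the import would resolve to:

```python
import importlib.util

def resolve_module(name: str):
    """Return the file a module would be imported from, or None if not installed."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# If this prints a path inside your project rather than site-packages,
# a local chromadb.py file or chromadb/ folder is shadowing the real package.
print(resolve_module("chromadb"))
print(resolve_module("json"))  # sanity check against a stdlib module
```

If the path looks right, a version mismatch between `chromadb` and the LangChain wrapper would be the next thing to check.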
### Expected behavior
Should be able to load ChromaDB and persist it. | AttributeError: module 'chromadb' has no attribute 'config' | https://api.github.com/repos/langchain-ai/langchain/issues/15616/comments | 9 | 2024-01-06T00:06:53Z | 2024-02-23T13:36:44Z | https://github.com/langchain-ai/langchain/issues/15616 | 2,068,219,804 | 15,616 |