issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | The retry decorator for ChatOpenAI is hardcoded [here](https://github.com/hwchase17/langchain/blob/d54c88aa2140f27c36fa18375f942e5b239799ee/langchain/chat_models/openai.py#L39)
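For illustration, the requested hook could look something like this (a hypothetical sketch of the proposed API — `completion_with_retry`, the decorator shape, and the fixed retry count are all assumptions for illustration, not current langchain code):

```python
# Hypothetical sketch: let callers swap in their own retry decorator,
# falling back to a simple default when none is supplied.
def default_retry_decorator(func):
    def wrapper(*args, **kwargs):
        last_exc = None
        for _ in range(3):  # stands in for the hardcoded retry policy
            try:
                return func(*args, **kwargs)
            except RuntimeError as exc:
                last_exc = exc
        raise last_exc
    return wrapper

def completion_with_retry(call, retry_decorator=None):
    decorator = retry_decorator or default_retry_decorator
    return decorator(call)()

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 2:
        raise RuntimeError("transient error")
    return "ok"

print(completion_with_retry(flaky))  # ok
```

A user could then pass, say, a tenacity-based decorator with their own backoff policy instead of the hardcoded default.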
Allow the user to supply a custom retry_decorator. | Allow custom retry_decorator to be passed to the LLM | https://api.github.com/repos/langchain-ai/langchain/issues/3109/comments | 3 | 2023-04-18T19:45:57Z | 2023-12-14T16:09:08Z | https://github.com/langchain-ai/langchain/issues/3109 | 1,673,734,340 | 3,109 |
[
"hwchase17",
"langchain"
] | langchain Version: 0.0.143
SHA: aad0a498ac693acd304cf66e16a6430f5c0410a8
---
In [1]: import numexpr
In [2]: numexpr.__version__
Out[2]: '2.8.4'
-----
```python
llm_math.run("what is the common denominator of 2 and 5")
```
Stack trace:
> Entering new LLMMathChain chain...
what is the common denominator of 2 and 5
```text
LCM(2, 5)
```
...numexpr.evaluate("LCM(2, 5)")...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File ~/src/langchain/langchain/chains/llm_math/base.py:60, in LLMMathChain._evaluate_expression(self, expression)
58 local_dict = {"pi": math.pi, "e": math.e}
59 output = str(
---> 60 numexpr.evaluate(
61 expression.strip(),
62 global_dict={}, # restrict access to globals
63 local_dict=local_dict, # add common mathematical functions
64 )
65 )
66 except Exception as e:
File ~/.pyenv/versions/3.10.2/envs/langchain_3_10/lib/python3.10/site-packages/numexpr/necompiler.py:817, in evaluate(ex, local_dict, global_dict, out, order, casting, **kwargs)
816 if expr_key not in _names_cache:
--> 817 _names_cache[expr_key] = getExprNames(ex, context)
818 names, ex_uses_vml = _names_cache[expr_key]
File ~/.pyenv/versions/3.10.2/envs/langchain_3_10/lib/python3.10/site-packages/numexpr/necompiler.py:704, in getExprNames(text, context)
703 def getExprNames(text, context):
--> 704 ex = stringToExpression(text, {}, context)
705 ast = expressionToAST(ex)
File ~/.pyenv/versions/3.10.2/envs/langchain_3_10/lib/python3.10/site-packages/numexpr/necompiler.py:289, in stringToExpression(s, types, context)
288 # now build the expression
--> 289 ex = eval(c, names)
290 if expressions.isConstant(ex):
File <expr>:1
TypeError: 'VariableNode' object is not callable
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[11], line 1
----> 1 llm_math.run("what is the common denominator of 2 and 5")
File ~/src/langchain/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
211 if len(args) != 1:
212 raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
216 return self(kwargs)[self.output_keys[0]]
File ~/src/langchain/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~/src/langchain/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File ~/src/langchain/langchain/chains/llm_math/base.py:131, in LLMMathChain._call(self, inputs)
127 self.callback_manager.on_text(inputs[self.input_key], verbose=self.verbose)
128 llm_output = llm_executor.predict(
129 question=inputs[self.input_key], stop=["```output"]
130 )
--> 131 return self._process_llm_result(llm_output)
File ~/src/langchain/langchain/chains/llm_math/base.py:78, in LLMMathChain._process_llm_result(self, llm_output)
76 if text_match:
77 expression = text_match.group(1)
---> 78 output = self._evaluate_expression(expression)
79 self.callback_manager.on_text("\nAnswer: ", verbose=self.verbose)
80 self.callback_manager.on_text(output, color="yellow", verbose=self.verbose)
File ~/src/langchain/langchain/chains/llm_math/base.py:67, in LLMMathChain._evaluate_expression(self, expression)
59 output = str(
60 numexpr.evaluate(
61 expression.strip(),
(...)
64 )
65 )
66 except Exception as e:
---> 67 raise ValueError(f"{e}. Please try again with a valid numerical expression")
69 # Remove any leading and trailing brackets from the output
70 return re.sub(r"^\[|\]$", "", output)
ValueError: 'VariableNode' object is not callable. Please try again with a valid numerical expression
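For context, the underlying failure is that numexpr has no `LCM` function — it parses `LCM` as a bare variable node, and calling it raises the `TypeError` above. One possible mitigation (a workaround sketch, not the library's actual fix) is to fall back to a restricted `eval` with a few `math` helpers when numexpr rejects the expression; note `math.lcm` requires Python 3.9+:

```python
import math
import re

def evaluate_expression(expression: str) -> str:
    # Helpers the LLM tends to emit; numexpr itself cannot call these.
    local_dict = {"pi": math.pi, "e": math.e, "LCM": math.lcm, "GCD": math.gcd}
    try:
        import numexpr
        output = str(
            numexpr.evaluate(expression.strip(), global_dict={}, local_dict=local_dict)
        )
    except Exception:
        # Fallback: restricted eval with only the whitelisted names above.
        output = str(eval(expression.strip(), {"__builtins__": {}}, local_dict))
    # Strip leading/trailing brackets, mirroring the chain's post-processing.
    return re.sub(r"^\[|\]$", "", output)

print(evaluate_expression("LCM(2, 5)"))  # 10
print(evaluate_expression("2 + 3"))  # 5
```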
| Encountering exceptions when using LLMathChain on master | https://api.github.com/repos/langchain-ai/langchain/issues/3108/comments | 7 | 2023-04-18T19:44:09Z | 2023-10-02T18:52:56Z | https://github.com/langchain-ai/langchain/issues/3108 | 1,673,732,281 | 3,108 |
[
"hwchase17",
"langchain"
] | I have been trying to add memory to my `create_pandas_dataframe_agent` agent and ran into some issues.
I created the agent like this
```python
agent = create_pandas_dataframe_agent(
    llm=llm,
    df=df,
    prefix=prefix,
    suffix=suffix,
    max_iterations=4,
    input_variables=["df", "chat_history", "input", "agent_scratchpad"],
)
```
and ran into
```
Traceback (most recent call last):
File "/path/projects/test/langchain/main.py", line 42, in <module>
a = agent.run("This is a test")
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 106, in __call__
inputs = self.prep_inputs(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 185, in prep_inputs
raise ValueError(
ValueError: A single string input was passed in, but this chain expects multiple inputs ({'input', 'chat_history'}). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})
```
I was able to fix it by modifying the `create_pandas_dataframe_agent` to accept the memory object and then passing that along to the `AgentExecutor` like so:
```python
return AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=verbose,
    return_intermediate_steps=return_intermediate_steps,
    max_iterations=max_iterations,
    max_execution_time=max_execution_time,
    early_stopping_method=early_stopping_method,
    memory=memory,
)
```
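For context, the ValueError above comes from the chain's input preparation: a bare string is only accepted when exactly one input key is expected, so once memory adds `chat_history` a dict is required. A simplified, library-free sketch of that dispatch (an approximation based on the traceback, not langchain's exact code):

```python
def prep_inputs(inputs, input_keys):
    # A bare string can only fill a single expected key; once memory adds
    # a second key like "chat_history", a dict must be passed instead.
    if isinstance(inputs, str):
        if len(input_keys) != 1:
            raise ValueError(
                "A single string input was passed in, but this chain "
                f"expects multiple inputs ({set(input_keys)})."
            )
        return {input_keys[0]: inputs}
    return inputs

print(prep_inputs("This is a test", ["input"]))
print(prep_inputs({"input": "This is a test", "chat_history": []},
                  ["input", "chat_history"]))
```

So calling the chain with a dict (or letting the memory object supply the extra key) avoids the error.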
Not sure what I did wrong or if I am misunderstanding something in general, maybe this is just the current behavior and adding memory would be a feature request? | Getting ConversationBufferMemory to work with create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/3106/comments | 26 | 2023-04-18T19:20:33Z | 2024-06-30T16:02:47Z | https://github.com/langchain-ai/langchain/issues/3106 | 1,673,703,559 | 3,106 |
[
"hwchase17",
"langchain"
] | Is there any ETA on this new LLM integration? | Is Google PaLM integration in the pipeline? | https://api.github.com/repos/langchain-ai/langchain/issues/3101/comments | 4 | 2023-04-18T17:30:34Z | 2023-09-26T16:08:00Z | https://github.com/langchain-ai/langchain/issues/3101 | 1,673,559,016 | 3,101 |
[
"hwchase17",
"langchain"
] | Model page here: https://huggingface.co/Writer/camel-5b-hf
| Add bindings for Camel model API | https://api.github.com/repos/langchain-ai/langchain/issues/3099/comments | 1 | 2023-04-18T17:18:55Z | 2023-04-21T01:07:06Z | https://github.com/langchain-ai/langchain/issues/3099 | 1,673,543,750 | 3,099 |
[
"hwchase17",
"langchain"
] | Glad to see in #2859 @hwchase17 added a `TimeWeightedVectorStoreRetriever`.
I'm creating a game, so I want `last_accessed_at` to be able to represent things like the number of rounds, turns, and so on.
I can have it done in one or two days. Is there anyone who wants to review it?
---
I'm reproducing the Generative Agent article, repo: [ofey404/WalkingShadows](https://github.com/ofey404/WalkingShadows)
And I'd like to create a more generic `TimeWeightedVectorStoreRetriever`. Currently it's based on datetime like this:
```python
expected_score = (
    1.0 - time_weighted_retriever.decay_rate
) ** expected_hours_passed + vector_salience
```
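A generic version could parameterize the clock instead of assuming wall time — any monotonically increasing tick (rounds, turns, seconds) would work. A minimal sketch of the idea (the names here are made up, not the shipped retriever):

```python
from dataclasses import dataclass

@dataclass
class GenericTimeWeight:
    """Recency scoring over an arbitrary monotonically increasing clock."""
    decay_rate: float = 0.01

    def score(self, current_tick: float, last_accessed_tick: float,
              vector_salience: float) -> float:
        ticks_passed = current_tick - last_accessed_tick
        return (1.0 - self.decay_rate) ** ticks_passed + vector_salience

w = GenericTimeWeight(decay_rate=0.5)
# After 2 game rounds, the recency term decays from 1.0 to 0.25.
print(w.score(current_tick=10, last_accessed_tick=8, vector_salience=0.0))  # 0.25
```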
In my case `last_accessed_at` can be the number of rounds. | [feat] I want to contribute a more generic `TimeWeightedVectorStoreRetriever` | https://api.github.com/repos/langchain-ai/langchain/issues/3098/comments | 1 | 2023-04-18T17:00:09Z | 2023-09-10T16:31:11Z | https://github.com/langchain-ai/langchain/issues/3098 | 1,673,518,661 | 3,098 |
[
"hwchase17",
"langchain"
] | Trying to import langchain 0.0.128 and up in AWS Lambda (using the Serverless Framework) fails with the traceback below.
I suspect that this is the PR that causes the issue.
Maybe the `__version__` line should be wrapped in a try/except in case the code is run in an environment where package metadata is not available, as happens with serverless-python-requirements.
https://github.com/hwchase17/langchain/pull/2221
```
[ERROR] PackageNotFoundError: No package metadata was found for langchain
Traceback (most recent call last):
File "/var/task/serverless_sdk/__init__.py", line 144, in wrapped_handler
return user_handler(event, context)
File "/var/task/s_event_webhook.py", line 25, in error_handler
raise e
File "/var/task/s_event_webhook.py", line 20, in <module>
user_handler = serverless_sdk.get_user_handler('endpoints.event_webhook.handler')
File "/var/task/serverless_sdk/__init__.py", line 56, in get_user_handler
user_module = import_module(user_module_name)
File "/var/lang/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/var/task/endpoints/event_webhook.py", line 8, in <module>
from chatlib.agent import handle_event
File "/var/task/chatlib/agent.py", line 7, in <module>
from langchain import LLMChain
File "/var/task/langchain/__init__.py", line 58, in <module>
__version__ = metadata.version(__package__)
File "/var/lang/lib/python3.10/importlib/metadata/__init__.py", line 996, in version
return distribution(distribution_name).version
File "/var/lang/lib/python3.10/importlib/metadata/__init__.py", line 969, in distribution
return Distribution.from_name(distribution_name)
File "/var/lang/lib/python3.10/importlib/metadata/__init__.py", line 548, in from_name
raise PackageNotFoundError(name)
``` | Langchain not working on Lambda from 0.0.128 | https://api.github.com/repos/langchain-ai/langchain/issues/3097/comments | 1 | 2023-04-18T16:48:58Z | 2023-04-19T00:38:20Z | https://github.com/langchain-ai/langchain/issues/3097 | 1,673,503,749 | 3,097 |
[
"hwchase17",
"langchain"
] | I was wondering how to use the `return_intermediate_steps` flag for agent executors. This is the current approach I found to be working:
OK, I've dug a little deeper, and it seems like setting `return_intermediate_steps=True` when creating the agent with `initialize_agent` works. Only when using a memory, you need to set `memory.output_key = "output"`, otherwise it will error when trying to save the context.
I had to do a minor modification in https://github.com/hwchase17/langchain/blob/894c272a562471aadc1eb48e4a2992923533dea0/langchain/memory/chat_memory.py#L32-L36
because when using agents the outputs can be lists, which would error when saving the context.
If I modify it like this:
```python
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
    """Save context from this conversation to buffer."""
    input_str, output_str = self._get_input_output(inputs, outputs)
    if not isinstance(input_str, list):
        input_str = [input_str]
    if not isinstance(output_str, list):
        output_str = [output_str]
    for input in input_str:
        self.chat_memory.add_user_message(input)
    for output in output_str:
        self.chat_memory.add_ai_message(output)
```
Then even saving the context with memory works for the agent (you can also load an initial context from a dict).
```python
import os

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.callbacks import get_openai_callback
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory
from langchain.schema import messages_from_dict
from langchain.utilities import GoogleSearchAPIWrapper
llm = ChatOpenAI(
    temperature=0,
    openai_api_key=OPENAI_KEY,
    model_name="gpt-3.5-turbo",
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)

memory = ConversationSummaryBufferMemory(
    return_messages=True, llm=llm, max_token_limit=150, memory_key="chat_history"
)
# set the output key so that memory doesn't error on save
memory.output_key = "output"
# You can input a previously saved agent state, like this:
state = [
    {"type": "ai", "data": {"content": "Nice to meet you, Tim!", "additional_kwargs": {}}}
]

search = GoogleSearchAPIWrapper(
    google_api_key=GOOGLE_API_KEY, google_cse_id=my_cse_id
)
tools = [
    Tool(
        name="Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or the current state of the world",
    ),
]
memory.chat_memory.messages = messages_from_dict(state)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    return_intermediate_steps=True,
)
MESSAGE = "What is currently the most popular web browser?"
with get_openai_callback() as cb:
    out = agent_chain(inputs=[MESSAGE])

# The output dict will also contain an "intermediate_steps" key.
```
example output:
```python
out = {
...,
"intermediate_steps: [(AgentAction(tool='Current Search', tool_input='current most popular web browser', log='Do I need to use a tool? Yes\nAction: Current Search\nAction Input: current most popular web browser'), "Zooming into the internet browser market shares on different platforms, Chrome continues to dominate as the most popular browser on desktops with a market share\xa0... Feb 21, 2023 ... The most popular current browsers are Google Chrome, Apple's Safari, Microsoft Edge, and Firefox. Historically one of the large players in the\xa0... The usage share of web browsers is the portion, often expressed as a percentage, of visitors to a group of web sites that use a particular web browser. This graph shows the market share of browsers worldwide from Mar 2022 - Mar ... 56% 70% Chrome Safari Edge Firefox Samsung Internet Opera UC Browser Android\xa0... Feb 11, 2021 ... Google Chrome, then, is by far the most used browser, accounting for more than half of all web traffic, followed by Safari in a distant second\xa0... Here we examine the top five browsers in the US, in order of popularity. ... basically a pinned tab of recent sites that syncs between the desktop and\xa0... Google Chrome is the most popular and widely-used desktop web browser by far. ... This browser's current market share is slightly less than it was at this\xa0... Mar 15, 2023 ... Firefox; Google Chrome; Microsoft Edge; Apple Safari; Opera; Brave; Vivaldi; DuckDuckgo; Chromium; Epic. Comparison of Best Browser\xa0... Web browser, cookie & cache settings. HealthCare.gov is compatible with most popular web browsing software. This includes the most recent and commonly used\xa0... Browser Statistics. ❮ Home Next ❯. W3Schools' famous ... The Most Popular Browsers ... W3Schools' statistics may not be relevant to your web site.")]
...
```
This is most definitely not the right way to do this but also I'm not sure if there is a correct way yet :D
What I think would be really cool is to have something like a callback_manager also for agent actions.
That way you could develop applications using agents with immediate feedback while the agent is executed. | Usage of `return_intermediate_step` on Agents, and agent step callbacks | https://api.github.com/repos/langchain-ai/langchain/issues/3091/comments | 4 | 2023-04-18T14:09:22Z | 2023-09-20T11:26:27Z | https://github.com/langchain-ai/langchain/issues/3091 | 1,673,220,342 | 3,091 |
[
"hwchase17",
"langchain"
] | Got a loop when asking "What's the best BBQ in Kansas City?". Adding "say you cannot answer if you don't know the answer" to the prompt did not stop the loop.
The loop only stopped after blowing up the context length.
On the other hand, if using OpenAI old models, it worked fine . Using SerpAPIWrapper | Agent SELF_ASK_WITH_SEARCH does not work with ChatOpenAI models | https://api.github.com/repos/langchain-ai/langchain/issues/3090/comments | 5 | 2023-04-18T13:26:25Z | 2023-10-02T16:08:52Z | https://github.com/langchain-ai/langchain/issues/3090 | 1,673,135,511 | 3,090 |
[
"hwchase17",
"langchain"
] | After reviewing the work done on https://github.com/hwchase17/langchain/pull/2859 and its accompanying examples, I propose creating Generative Characters as a set of langchain components. These components would include Memory, Chain, and Agent Classes.
- Memory: This includes the ability to retrieve documents from VectorStore using TimeWeightedVectorStoreRetriever, calculate their score, summarize them, add memory and fetch memory.
- Chain: This involves generating reactions and dialogue responses.
- Agent: I'm not entirely sure about this one. Since the chain can generate reactions, it may be able to use tools as well.
I would like to work on this. Any suggestions or help would be greatly appreciated.
@vowelparrot | [feat] Create Memory, Chain and Agent Classes for Generative Characters | https://api.github.com/repos/langchain-ai/langchain/issues/3087/comments | 3 | 2023-04-18T12:15:46Z | 2023-09-18T16:15:33Z | https://github.com/langchain-ai/langchain/issues/3087 | 1,672,996,991 | 3,087 |
[
"hwchase17",
"langchain"
] | Sitemap data ingestion is a super powerful tool and I love that you already have it built-in. However, sitemaps are potentially huge, covering hundreds or even thousands of sub-sites.
If one starts to crawl through the sitemap of a large website, there is little information on how the progress is going.
Therefore, I suggest adding a `tqdm` progressbar in the async web base loader to give the user some estimate.
While we're at it, we could also add retry logic, because on long runs there is a higher risk of running into anti-scraping policies and forced timeouts or disconnections.
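The combined idea — a progress bar over the gathered fetch tasks plus simple retry — could be sketched like this (an illustration of the approach, not the PR's actual code; the helper names are made up, and the stub fetcher stands in for real HTTP):

```python
import asyncio

async def fetch_with_retry(url, fetch, retries=3, delay=0.0):
    # Retry transient failures a few times before giving up.
    for attempt in range(retries):
        try:
            return await fetch(url)
        except Exception:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(delay)

async def fetch_all(urls, fetch):
    tasks = [fetch_with_retry(u, fetch) for u in urls]
    try:
        # tqdm's asyncio helper renders a progress bar as tasks complete.
        from tqdm.asyncio import tqdm_asyncio
        return await tqdm_asyncio.gather(*tasks, desc="Fetching pages")
    except ImportError:
        return await asyncio.gather(*tasks)

# Demo with a stub fetcher instead of real HTTP requests.
async def fake_fetch(url):
    return f"<html>{url}</html>"

print(asyncio.run(fetch_all(["page-a", "page-b"], fake_fetch)))
```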
See below screenshot for my implementation. Code change in the linked [PR](https://github.com/hwchase17/langchain/pull/3131).
 | Add tqdm progress bar to base web base loader | https://api.github.com/repos/langchain-ai/langchain/issues/3083/comments | 1 | 2023-04-18T10:58:48Z | 2023-04-23T02:19:39Z | https://github.com/langchain-ai/langchain/issues/3083 | 1,672,867,984 | 3,083 |
[
"hwchase17",
"langchain"
] | null | Need a Simple Example or method to get stream response of ConversationChain | https://api.github.com/repos/langchain-ai/langchain/issues/3080/comments | 2 | 2023-04-18T10:04:21Z | 2023-09-10T16:31:16Z | https://github.com/langchain-ai/langchain/issues/3080 | 1,672,777,542 | 3,080 |
[
"hwchase17",
"langchain"
] | ## Motivation
Right now, `HuggingFaceEmbeddings` doesn't support loading an embedding model's weights from a local cache; it downloads the weights every time. Fixing this would be low-hanging fruit: allow the user to pass their cache directory.
## Suggestion
The only change is a few lines in `__init__()`:
```python
class HuggingFaceEmbeddings(BaseModel, Embeddings):
    """Wrapper around sentence_transformers embedding models.

    To use, you should have the ``sentence_transformers`` python package installed.

    Example:
        .. code-block:: python

            from langchain.embeddings import HuggingFaceEmbeddings

            model_name = "sentence-transformers/all-mpnet-base-v2"
            hf = HuggingFaceEmbeddings(model_name=model_name)
    """

    client: Any  #: :meta private:
    model_name: str = DEFAULT_MODEL_NAME
    """Model name to use."""

    def __init__(self, cache_folder=None, **kwargs: Any):
        """Initialize the sentence_transformer."""
        super().__init__(**kwargs)
        try:
            import sentence_transformers

            self.client = sentence_transformers.SentenceTransformer(
                model_name_or_path=self.model_name, cache_folder=cache_folder
            )
        except ImportError:
            raise ValueError(
                "Could not import sentence_transformers python package. "
                "Please install it with `pip install sentence_transformers`."
            )

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Compute doc embeddings using a HuggingFace transformer model.

        Args:
            texts: The list of texts to embed.

        Returns:
            List of embeddings, one for each text.
        """
        texts = list(map(lambda x: x.replace("\n", " "), texts))
        embeddings = self.client.encode(texts)
        return embeddings.tolist()

    def embed_query(self, text: str) -> List[float]:
        """Compute query embeddings using a HuggingFace transformer model.

        Args:
            text: The text to embed.

        Returns:
            Embeddings for the text.
        """
        text = text.replace("\n", " ")
        embedding = self.client.encode(text)
        return embedding.tolist()
``` | Feature Request: Allow initializing HuggingFaceEmbeddings from the cached weight | https://api.github.com/repos/langchain-ai/langchain/issues/3079/comments | 9 | 2023-04-18T09:43:38Z | 2024-02-13T16:17:08Z | https://github.com/langchain-ai/langchain/issues/3079 | 1,672,736,711 | 3,079 |
[
"hwchase17",
"langchain"
] | I'm facing a weird issue with the `ConversationBufferWindowMemory`
Running `memory.load_memory_variables({})` prints:
```
{'chat_history': [HumanMessage(content='Hi my name is Ismail', additional_kwargs={}), AIMessage(content='Hello Ismail! How can I assist you today?', additional_kwargs={})]}
```
The error I get after sending a second message to the chain is:
```
> Entering new ConversationalRetrievalChain chain...
[2023-04-18 10:34:52,512] ERROR in app: Exception on /api/v1/chat [POST]
Traceback (most recent call last):
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 2528, in wsgi_app
response = self.full_dispatch_request()
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/Users/homanp/Projects/ad-gpt/app.py", line 46, in chat
result = chain({"question": message, "chat_history": []})
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 71, in _call
chat_history_str = get_chat_history(inputs["chat_history"])
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 25, in _get_chat_history
human = "Human: " + human_s
TypeError: can only concatenate str (not "tuple") to str
```
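For context, the failing helper only knows how to format plain strings or `(human, ai)` tuples, while `return_messages=True` makes the memory yield message objects. `ConversationalRetrievalChain.from_llm` accepts a `get_chat_history` callable, so one workaround (a sketch under that assumption, with a stand-in message class for the demo) is a formatter that handles both shapes:

```python
def get_chat_history(history) -> str:
    # Accept both (human, ai) tuples and message objects with .content.
    lines = []
    for entry in history:
        if isinstance(entry, tuple):
            human, ai = entry
            lines.append(f"Human: {human}")
            lines.append(f"Assistant: {ai}")
        else:
            role = type(entry).__name__.replace("Message", "")
            lines.append(f"{role}: {entry.content}")
    return "\n".join(lines)

class HumanMessage:
    # Stand-in for langchain.schema.HumanMessage, for the demo only.
    def __init__(self, content):
        self.content = content

print(get_chat_history([
    ("Hi my name is Ismail", "Hello Ismail! How can I assist you today?"),
    HumanMessage("What did I say my name was?"),
]))
```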
Current implementation:
```python
memory = ConversationBufferWindowMemory(memory_key="chat_history", k=2, return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    model,
    memory=memory,
    verbose=True,
    retriever=retriever,
    qa_prompt=QA_PROMPT,
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,
)
``` | Error `can only concatenate str (not "tuple") to str` when using `ConversationBufferWindowMemory` | https://api.github.com/repos/langchain-ai/langchain/issues/3077/comments | 11 | 2023-04-18T08:38:57Z | 2023-11-13T16:10:00Z | https://github.com/langchain-ai/langchain/issues/3077 | 1,672,633,625 | 3,077 |
[
"hwchase17",
"langchain"
] | I notice they use different APIs, but what's the difference between these two?
Question Answering:
docs = docsearch.get_relevant_documents(query)
Question Answering with Sources:
docs = docsearch.similarity_search(query) | Difference between "Question Answering with Sources" and "Question Answering" | https://api.github.com/repos/langchain-ai/langchain/issues/3073/comments | 8 | 2023-04-18T08:05:55Z | 2023-10-12T16:10:19Z | https://github.com/langchain-ai/langchain/issues/3073 | 1,672,578,778 | 3,073 |
[
"hwchase17",
"langchain"
] | https://github.com/hwchase17/langchain/blob/894c272a562471aadc1eb48e4a2992923533dea0/langchain/memory/summary_buffer.py#L57-L70
The `ConversationSummaryBufferMemory` class in `langchain/memory/summary_buffer.py` currently prunes chat_memory's messages using the `List.pop()` method (line 66). This approach works as expected for the in-memory implementation `ChatMessageHistory`, where messages are stored as a plain Python list.
However, this method of pruning is not suitable for implementations where `messages` is a computed property backed by an external store, such as `DynamoDBChatMessageHistory` or `RedisChatMessageHistory`. In these cases, the current implementation fails to prune messages as intended.
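A unified pruning hook on the base history class could look like this (purely illustrative — the class and method names below are made up; a Redis- or DynamoDB-backed subclass would override the hook with a store-native operation such as `LPOP`):

```python
class InMemoryHistory:
    """Default in-memory history with an overridable pruning hook."""

    def __init__(self):
        self.messages = []

    def add_message(self, message):
        self.messages.append(message)

    def pop_oldest(self):
        # In-memory default: drop from the front of the Python list.
        # External-store subclasses would override this method.
        return self.messages.pop(0)

h = InMemoryHistory()
h.add_message("first")
h.add_message("second")
print(h.pop_oldest())  # first
print(h.messages)  # ['second']
```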
To address this issue, we may need to modify the ```BaseChatMessageHistory``` class to provide a unified interface for pruning messages, which can then be overridden as needed by specific implementations. | Issue with ConversationSummaryBufferMemory pruning messages for non-in-memory chat message histories | https://api.github.com/repos/langchain-ai/langchain/issues/3072/comments | 6 | 2023-04-18T07:48:11Z | 2024-05-20T08:06:21Z | https://github.com/langchain-ai/langchain/issues/3072 | 1,672,549,574 | 3,072 |
[
"hwchase17",
"langchain"
] | I'm testing out the tutorial code for Agents:
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
```
And so far it generates the result:
```text
> Entering new AgentExecutor chain...
I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: High: 60.8ºf @3:10 PM Low: 48.2ºf @2:05 AM Approx.
Thought: I need to convert the temperature to a number
Action: Calculator
Action Input: 60.8
```
But it then raises an error and doesn't calculate 60.8^0.023:
```text
raise ValueError(f"unknown format from LLM: {llm_output}")
ValueError: unknown format from LLM: This is not a math problem and cannot be solved using the numexpr library.
```
What's the reason behind this error? | llm-math raising an issue | https://api.github.com/repos/langchain-ai/langchain/issues/3071/comments | 16 | 2023-04-18T06:44:50Z | 2023-11-16T05:52:22Z | https://github.com/langchain-ai/langchain/issues/3071 | 1,672,458,216 | 3,071 |
[
"hwchase17",
"langchain"
] | https://github.com/hwchase17/langchain/blob/894c272a562471aadc1eb48e4a2992923533dea0/langchain/document_loaders/git.py#L8 | Once we Clone the Repo using the Git Document loader. How we can auth the Private Repos and How we can chunk the code files into meaning full code and create Embeddings? | https://api.github.com/repos/langchain-ai/langchain/issues/3069/comments | 1 | 2023-04-18T06:08:55Z | 2023-09-10T16:31:22Z | https://github.com/langchain-ai/langchain/issues/3069 | 1,672,416,914 | 3,069 |
[
"hwchase17",
"langchain"
] | Axios is at v1.3.5; why does langchain pin the dependency to major version 0?
It is set to `"axios": "^0.26.0"`.
Do we want `"axios": ">=0.26.0"`?
Does the whole world need to downgrade in order to use Langchain?
Or is this just me and my setup is screwed up somehow. I don't see anyone else making noise about it, so i'm a little concerned I have something wrong with what i'm working on.

| Axios dependency forcing a downgrade on nextJS build. | https://api.github.com/repos/langchain-ai/langchain/issues/3065/comments | 5 | 2023-04-18T05:38:52Z | 2023-09-26T16:08:10Z | https://github.com/langchain-ai/langchain/issues/3065 | 1,672,386,014 | 3,065 |
[
"hwchase17",
"langchain"
] | Hey folks. I am experimenting with OpenAPI agents and the most recent [Spotify API](https://github.com/sonallux/spotify-web-api/releases). The API defines the endpoint `/me/top/{type}`. _Type_ can be, for example, `tracks`. A GET to `/me/top/tracks` will return the top tracks for the user making the request.
The planned actions coming out of the LLM, if you ask it to list your favorite tracks, will correctly include `GET /me/top/tracks`. In [planner.py](https://github.com/hwchase17/langchain/blob/577ec92f16813565d788da03f6ce830f4657c7b0/langchain/agents/agent_toolkits/openapi/planner.py#L225) there is a validation check that verifies whether the suggested endpoint exists. But it compares `GET /me/top/tracks` with `GET /me/top/{type}`, which causes an error: `ValueError: GET /me/top/tracks endpoint does not exist`.
A change to `reduce_openapi_spec` or `planner.py` would fix it.
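The fix could normalize templated paths before comparing — e.g. turn each `{param}` segment into a wildcard (a sketch of the idea, not the merged change):

```python
import re

def endpoint_matches(template: str, path: str) -> bool:
    # "/me/top/{type}" -> "^/me/top/[^/]+$", then match the concrete path.
    pattern = "^" + re.sub(r"\{[^/]+\}", "[^/]+", template) + "$"
    return re.match(pattern, path) is not None

print(endpoint_matches("/me/top/{type}", "/me/top/tracks"))  # True
print(endpoint_matches("/me/top/{type}", "/me/top/tracks/extra"))  # False
```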
| Validation check in planner.py not working as intended? | https://api.github.com/repos/langchain-ai/langchain/issues/3064/comments | 6 | 2023-04-18T05:12:32Z | 2023-09-27T16:08:06Z | https://github.com/langchain-ai/langchain/issues/3064 | 1,672,364,479 | 3,064 |
[
"hwchase17",
"langchain"
] | ## Problem
The current `DirectoryLoader` class relies on the python `glob` and `rglob` utilities to load the filepaths. These utilities in python don't support advanced file patterns, for example specifying files with multiple extensions. For example, consider a sample directory with these files.
```bash
- a.py
- b.js
- c.json
- d.yml
```
Currently, there is no way to load only the files with `.py` or `.yml` extension.
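For illustration, the desired multi-pattern selection can be emulated today with one glob per pattern and a union of the results (a stdlib-only workaround sketch, not a langchain API — `wcmatch` would handle this natively):

```python
import tempfile
from pathlib import Path

def load_paths(root, include):
    # Run one rglob per pattern and union the matches.
    seen = set()
    for pattern in include:
        for p in Path(root).rglob(pattern):
            seen.add(p.name)
    return sorted(seen)

with tempfile.TemporaryDirectory() as d:
    for name in ["a.py", "b.js", "c.json", "d.yml"]:
        (Path(d) / name).touch()
    print(load_paths(d, ["*.py", "*.yml"]))  # ['a.py', 'd.yml']
```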
## Proposed Solution
### Preferred
Include the [wcmatch](https://github.com/facelessuser/wcmatch) library as a dependency that replaces the built-in glob and rglob, and supports all Unix-style options for specifying file patterns. For example, with `wcmatch`, users can pass a pattern like `['*.py', '*.yml']` to include files with either a `.py` or `.yml` extension.
### Alternate
Add an `include` or `exclude` list to the `DirectoryLoader` interface, so that users can specify the file patterns to include or exclude. | DirectoryLoader doesn't support including unix file patterns | https://api.github.com/repos/langchain-ai/langchain/issues/3062/comments | 3 | 2023-04-18T05:00:24Z | 2023-09-18T16:19:42Z | https://github.com/langchain-ai/langchain/issues/3062 | 1,672,352,495 | 3,062 |
[
"hwchase17",
"langchain"
] | Sometimes the LLM response (generated code) is missing the closing ticks "```", which causes the text parsing to fail with `not enough values to unpack`.
I suggest simplifying the `_, action, _ = text.split("```")` unpacking to indexing into the result of `split`, so a missing closing fence doesn't raise.
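A sketch of what a more forgiving parse could look like — it extracts the JSON blob even when the closing ticks are missing. This is illustrative only, not the actual `output_parser.py` code, and it assumes the action is a single JSON object:

```python
import json
import re

def parse_action(text: str):
    """Extract the action JSON blob even if the closing ``` fence is missing."""
    # Match an opening fence, then greedily capture from the first "{" to the
    # last "}", so a missing closing fence does not break parsing.
    match = re.search(r"```(?:json)?\s*(\{.*\})", text, re.DOTALL)
    if match is None:
        raise ValueError(f"Could not parse LLM output: {text}")
    blob = json.loads(match.group(1))
    return blob["action"], blob["action_input"]
```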
Error message below
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\chat\output_parser.py", line 17, in parse
_, action, _ = text.split("```")
ValueError: not enough values to unpack (expected 3, got 2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\open_source_contrib\test.py", line 67, in <module>
agent_msg = agent.run(prompt_template)
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\chains\base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\chains\base.py", line 116, in __call__
raise e
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\chains\base.py", line 113, in __call__
outputs = self._call(inputs)
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\agent.py", line 792, in _call
next_step_output = self._take_next_step(
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\agent.py", line 672, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\agent.py", line 385, in plan
return self.output_parser.parse(full_output)
File "E:\open_source_contrib\langchain-venv\lib\site-packages\langchain\agents\chat\output_parser.py", line 23, in parse
raise ValueError(f"Could not parse LLM output: {text}")
ValueError: Could not parse LLM output: Question: How do I put the given data into a pandas dataframe and save it into a csv file at the specified path?
Thought: I need to use the Python REPL tool to import pandas, create a dataframe with the given data, and then use the to_csv method to save it to the specified file path.
Action:
```
{
"action": "Python REPL",
"action_input": "import pandas as pd\n\n# create dataframe\ndata = {\n 'Quarter': ['Q4-2021', 'Q1-2022', 'Q2-2022', 'Q3-2022', 'Q4-2022'],\n 'EPS attributable to common stockholders, diluted (GAAP)': [1.07, 0.95, 0.76, 0.95, 1.07],\n 'EPS attributable to common stockholders, diluted (non-GAAP)': [1.19, 1.05, 0.85, 1.05, 1.19]\n}\ndf = pd.DataFrame(data)\n\n# save to csv\ndf.to_csv('E:\\\\open_source_contrib\\\\output\\\\agent_output.xlsx', index=False)"
}
(langchain-venv) PS E:\open_source_contrib>
``` | Error when parsing code from LLM response ValueError: Could not parse LLM output: | https://api.github.com/repos/langchain-ai/langchain/issues/3057/comments | 1 | 2023-04-18T04:13:20Z | 2023-04-24T04:19:22Z | https://github.com/langchain-ai/langchain/issues/3057 | 1,672,318,279 | 3,057 |
[
"hwchase17",
"langchain"
] | Hi fellas.
Langchain is awesome. I have an agent that I created for an app, and it will be interacted with via an API. The agent of course needs to run asynchronously. I can run it without any issues synchronously, but with agent.arun(inputs) I cannot connect with OpenAI. It throws the "Error communicating with OpenAI" error.
Previously I followed this notebook, with my own amendments for the task at hand: https://python.langchain.com/en/latest/modules/agents/agents/custom_llm_chat_agent.html
and just changed the final part from agent.run to agent.arun according to the blogpost.
But clearly this wasn't enough, and I went on YouTube and came across this video: https://youtu.be/eAikW9o1Ros
I followed what he was doing for a simple agent and customized mine by creating this function:
```python
async def async_agent_executor(inputs):
    manager = CallbackManager([StdOutCallbackHandler()])
    llm = ChatOpenAI(temperature=0, callback_manager=manager)
    llm_chain = LLMChain(llm=llm, prompt=custom_prompt, callback_manager=manager)
    async_tools = load_tools(["serpapi"], llm=llm, callback_manager=manager)
    agent = LLMSingleActionAgent(
        llm_chain=llm_chain,
        output_parser=output_parser,
        stop=["\nObservation:"],
        allowed_tools=tool_names,
        callback_manager=manager,
    )
    agent_executor = AgentExecutor.from_agent_and_tools(
        agent=agent, tools=async_tools, verbose=True, callback_manager=manager
    )
    return await agent_executor.arun(inputs)
```
I don't have much experience working with async either. Can you please help me figure out why it can connect with OpenAI in serial execution but not in async? Thank you. | Error communicating with OpenAI when running agent in async | https://api.github.com/repos/langchain-ai/langchain/issues/3056/comments | 7 | 2023-04-18T04:11:58Z | 2023-11-20T16:07:27Z | https://github.com/langchain-ai/langchain/issues/3056 | 1,672,317,378 | 3,056 |
[
"hwchase17",
"langchain"
] | Conservation instead of conversation. PR pending. Putting in this issue to link. | Spelling Error in ConstitutionalAI Chain Prompt | https://api.github.com/repos/langchain-ai/langchain/issues/3048/comments | 0 | 2023-04-18T03:28:10Z | 2023-04-19T02:45:06Z | https://github.com/langchain-ai/langchain/issues/3048 | 1,672,287,885 | 3,048 |
[
"hwchase17",
"langchain"
] |
```python
retriever = PineconeHybridSearchRetriever(embeddings=embeddings, index=index, tokenizer=CharacterTextSplitter)
result = retriever.get_relevant_documents(given_str)
```
gives `TypeError: __init__() got an unexpected keyword argument 'padding'`.
With `bm25_encoder`:
```python
retriever = PineconeHybridSearchRetriever(embeddings=embeddings, index=index, tokenizer=CharacterTextSplitter, sparse_encoder=bm25_encoder)
```
it gives:
```
ValidationError: 1 validation error for PineconeHybridSearchRetriever
sparse_encoder
  extra fields not permitted (type=value_error.extra)
```
| pinecone_hybrid_search doesn't work by following the documents. | https://api.github.com/repos/langchain-ai/langchain/issues/3043/comments | 2 | 2023-04-18T01:27:31Z | 2023-09-18T16:19:47Z | https://github.com/langchain-ai/langchain/issues/3043 | 1,672,192,416 | 3,043 |
[
"hwchase17",
"langchain"
] | The error is random, it only occurs sometimes.
`loader = YoutubeLoader.from_youtube_url(vidurl, add_video_info=True, language=lang)` | YoutubeLoader : Error: Exception while accessing title of https://youtube.com/watch?v=XXX. Please file a bug report at https://github.com/pytube/pytube | https://api.github.com/repos/langchain-ai/langchain/issues/3040/comments | 8 | 2023-04-17T23:22:49Z | 2023-09-27T16:08:12Z | https://github.com/langchain-ai/langchain/issues/3040 | 1,672,104,671 | 3,040 |
[
"hwchase17",
"langchain"
] | When using an agent to call a tool, the LLM sometimes returns an Action and a Final Answer in the same response, which causes the tool not to run. It is recommended to add appropriate prompt wording after the "Final Answer: The final answer to the original input question" line of the prompt template to avoid this situation. | Some situations cause the tool to not work | https://api.github.com/repos/langchain-ai/langchain/issues/3037/comments | 2 | 2023-04-17T21:59:53Z | 2023-09-10T16:31:37Z | https://github.com/langchain-ai/langchain/issues/3037 | 1,672,028,543 | 3,037 |
[
"hwchase17",
"langchain"
] | I'd like to be able to run a query via SQLDatabaseSequentialChain or SQLDatabaseChain involving multiple tables living in multiple different schemas, but it seems that as it is, the code is set up to only allow and look through just the one schema provided. | Unable to use multiple schemas in SQLDatabase | https://api.github.com/repos/langchain-ai/langchain/issues/3036/comments | 18 | 2023-04-17T21:40:01Z | 2024-07-11T11:30:14Z | https://github.com/langchain-ai/langchain/issues/3036 | 1,672,009,290 | 3,036 |
[
"hwchase17",
"langchain"
] | I'm running the openai "todo" manifest and swagger.
After 2023-04-16, I get the following error when parsing the response:
```ValueError: Could not parse LLM output: `I need to check the TODO Plugin API to see if it can help me answer this question.```
Input question was
```agent_chain.run("Do I have a todo to check my mailbox?")```
The LLM response was:
```
I need to check the TODO Plugin API to see if it can help me answer this question.
Action: todo
```
The response doesn't have any Action Input section! Fix is incoming. | TODO action example from openai fails | https://api.github.com/repos/langchain-ai/langchain/issues/3035/comments | 1 | 2023-04-17T20:49:58Z | 2023-09-10T16:31:42Z | https://github.com/langchain-ai/langchain/issues/3035 | 1,671,945,585 | 3,035 |
[
"hwchase17",
"langchain"
] | Greetings,
The following code:
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.chat_models import AzureChatOpenAI
db = SQLDatabase.from_uri(connection_string2)
toolkit = SQLDatabaseToolkit(db=db)
agent_executor = create_sql_agent(
llm=AzureChatOpenAI(deployment_name="gpt-4-32k", model_name="gpt-4-32k",temperature=0.0),
toolkit=toolkit,
verbose=True
)
agent_executor.run("Tell me about this database")
```
I get the error in `query_checker_sql_db`
```
Thought:The TITLE column seems to be related to the topics in the CONTENT table. I should query this column to get the topics.
Action: query_checker_sql_db
Action Input: SELECT TOP 10 TITLE FROM CONTENT
Traceback (most recent call last):
File "/code/confluence_test.py", line 57, in <module>
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 792, in _call
next_step_output = self._take_next_step(
File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 695, in _take_next_step
observation = tool.run(
File "/usr/local/lib/python3.9/site-packages/langchain/tools/base.py", line 73, in run
raise e
File "/usr/local/lib/python3.9/site-packages/langchain/tools/base.py", line 70, in run
observation = self._run(tool_input)
File "/code/sql_database/tool.py", line 125, in _run
return self.llm_chain.predict(query=query, dialect=self.db.dialect)
File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 151, in predict
return self(kwargs)[self.output_key]
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 57, in _call
return self.apply([inputs])[0]
File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 118, in apply
response = self.generate(input_list)
File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 107, in generate_prompt
return self.generate(prompt_strings, stop=stop)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 140, in generate
raise e
File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 137, in generate
output = self._generate(prompts, stop=stop)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 290, in _generate
response = completion_with_retry(self, prompt=_prompts, **params)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 99, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in wrapped_f
return self(f, *args, **kw)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 406, in __call__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 351, in iter
return fut.result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 409, in __call__
result = fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 97, in _completion_with_retry
return llm.client.create(**kwargs)
File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/completion.py", line 25, in create
File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/abstract/engine_api_resource.py", line 149, in create
File "/usr/local/lib/python3.9/site-packages/openai-0.27.2-py3.9.egg/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
```
because it seems that the llm is still defaulting to `llm=OpenAI(cache=None, verbose=False...)`
as seen in this values output
from SQLDatabaseToolkit
```
--------------------------
--------------------------
memory=None callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f982250> verbose=False prompt=PromptTemplate(input_variables=['query', 'dialect'], output_parser=None, partial_variables={}, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', template_format='f-string', validate_template=True) llm=OpenAI(cache=None, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f982250>, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.0, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key=None, openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False) output_key='text'
--------------------------
--------------------------
```
from create_sql_agent
```
--------------------------
--------------------------
memory=None callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f97f7f0> verbose=False prompt=PromptTemplate(input_variables=['query', 'dialect'], output_parser=None, partial_variables={}, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', template_format='f-string', validate_template=True) llm=OpenAI(cache=None, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x7fff4f97f7f0>, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.0, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key=None, openai_api_base=None, openai_organization=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False) output_key='text'
--------------------------
--------------------------
```
| SQLToolKit not passing correct llm to llm_chain with AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/3031/comments | 2 | 2023-04-17T20:17:06Z | 2023-09-10T16:31:47Z | https://github.com/langchain-ai/langchain/issues/3031 | 1,671,905,614 | 3,031 |
[
"hwchase17",
"langchain"
] | I have generated the Chroma DB from a single file ( basically lots of questions and answers in one text file ), sometimes when I do
```
db.similarity_search("some question", k=4)
```
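When a broad query returns many documents and the concatenated text risks blowing past the model's context window, one option is to cap how much retrieved text gets stuffed into the next prompt. A rough sketch with a character budget (`cap_context` and the budget value are made up; a tokenizer-based budget would be more precise):

```python
def cap_context(docs, max_chars=2000):
    """Keep the top-ranked documents whole, in order, until a character
    budget is hit; everything past the budget is dropped."""
    kept, used = [], 0
    for doc in docs:
        if used + len(doc) > max_chars:
            break
        kept.append(doc)
        used += len(doc)
    return kept
```

Something like `cap_context([d.page_content for d in docs])` could then be applied to the `similarity_search` results before building the prompt.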
And if the question is too broad, it will return a **LOT** of results. Since I'm using the result in the next LLM query (prompt template), I often hit the "maximum context length is 4097 tokens" error. How do I deal with this? | Limit the db.similarity_search("some question", k=4) output. | https://api.github.com/repos/langchain-ai/langchain/issues/3029/comments | 4 | 2023-04-17T20:09:20Z | 2023-04-18T11:08:20Z | https://github.com/langchain-ai/langchain/issues/3029 | 1,671,895,690 | 3,029 |
[
"hwchase17",
"langchain"
] | Hello everyone, I got this error. I have already added the poppler path to the system PATH. Has anyone else run into the same issue?
```
index = VectorstoreIndexCreator().from_loaders(loaders)
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
/usr/local/lib/python3.9/dist-packages/pdf2image/pdf2image.py in pdfinfo_from_path(pdf_path, userpw, ownerpw, poppler_path, rawdates, timeout)
    567         env["LD_LIBRARY_PATH"] = poppler_path + ":" + env.get("LD_LIBRARY_PATH", "")
--> 568         proc = Popen(command, env=env, stdout=PIPE, stderr=PIPE)
```
| Error index = VectorstoreIndexCreator().from_loaders(loaders) | https://api.github.com/repos/langchain-ai/langchain/issues/3025/comments | 1 | 2023-04-17T16:57:00Z | 2023-09-10T16:31:52Z | https://github.com/langchain-ai/langchain/issues/3025 | 1,671,601,909 | 3,025 |
[
"hwchase17",
"langchain"
] | Hi,
When I am trying to index the documents using chromadb, I am getting the following error. When I looked into it, I understood it is a compatibility issue, but I couldn't find exactly which packages hnswlib is compatible with.
ImportError: /anaconda3/envs/myenv/lib/python3.9/site-packages/hnswlib.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv
| Import error and undefined symbol | https://api.github.com/repos/langchain-ai/langchain/issues/3017/comments | 6 | 2023-04-17T13:51:59Z | 2023-11-16T16:08:31Z | https://github.com/langchain-ai/langchain/issues/3017 | 1,671,234,135 | 3,017 |
[
"hwchase17",
"langchain"
] | Sometimes it is quite expensive to crawl all the URLs. Is it possible to save the Documents and reload them later?
For example:
```
loader = GitbookLoader("https://docs.gitbook.com")
page_data = loader.load()
```
Then save the page_data to gitbook.json, could be in the format of
```
[
{
"page_content": "...",
"metadata": {"source": "...", "title": "..."}
}
]
```
Next time, when I want to re-split the document or rebuild embeddings, I can do:
```
documents = JsonLoader("gitbook.json")
```
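In the meantime the round-trip can be done with a few lines of stdlib code. This sketch uses a stand-in `Document` dataclass (LangChain's own `Document` class could be substituted):

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Document:
    page_content: str
    metadata: dict

def save_documents(docs, path):
    """Serialize documents to a JSON file in the format sketched above."""
    with open(path, "w") as f:
        json.dump([asdict(d) for d in docs], f, ensure_ascii=False, indent=2)

def load_documents(path):
    """Reload documents previously written by save_documents."""
    with open(path) as f:
        return [Document(**record) for record in json.load(f)]
```

After crawling once, `save_documents(page_data, "gitbook.json")` would persist the pages, and `load_documents("gitbook.json")` could restore them later for re-splitting or re-embedding without crawling again.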
It would be great if both the `Save` function and `JsonLoader` can be developed. | Is there any quick way to save generated Documents and reload it? | https://api.github.com/repos/langchain-ai/langchain/issues/3016/comments | 3 | 2023-04-17T12:33:05Z | 2023-09-28T16:08:05Z | https://github.com/langchain-ai/langchain/issues/3016 | 1,671,070,412 | 3,016 |
[
"hwchase17",
"langchain"
] | I am facing a problem when trying to use the Chroma vector store with a persisted index. I have already loaded a document, created embeddings for it, and saved those embeddings in Chroma. The script ran perfectly with LLM and also created the necessary files in the persistence directory (.chroma\index). The files include:
chroma-collections.parquet
chroma-embeddings.parquet
id_to_uuid_3508d87c-12d1-4bbe-ae7f-69a0ec3c6616.pkl
index_3508d87c-12d1-4bbe-ae7f-69a0ec3c6616.bin
index_metadata_3508d87c-12d1-4bbe-ae7f-69a0ec3c6616.pkl
uuid_to_id_3508d87c-12d1-4bbe-ae7f-69a0ec3c6616.pkl
However, when I try to initialize the Chroma instance using the persist_directory to utilize the previously saved embeddings, I encounter a NoIndexException error, stating "Index not found, please create an instance before querying".
Here is a snippet of the code I am using in a Jupyter notebook:
```
# Section 1
import os
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain
# Load environment variables
%reload_ext dotenv
%dotenv info.env
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Section 2 - Initialize Chroma without an embedding function
persist_directory = '.chroma\\index'
db = Chroma(persist_directory=persist_directory)
# Section 3
# Load chat model and question answering chain
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=.5, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="stuff")
# Section 4
# Run the chain on a sample query
query = "The Question - Can you also cite the information you give after your answer?"
docs = db.similarity_search(query)
response = chain.run(input_documents=docs, question=query)
print(response)
```
Please help me understand what might be causing this problem and suggest possible solutions. Additionally, I am curious if these pre-existing embeddings could be reused without incurring the same cost for generating Ada embeddings again, as the documents I am working with have lots of pages. Thanks in advance! | "NoIndexException: Index not found when initializing Chroma from a persisted directory" | https://api.github.com/repos/langchain-ai/langchain/issues/3011/comments | 38 | 2023-04-17T10:21:07Z | 2023-10-25T16:09:22Z | https://github.com/langchain-ai/langchain/issues/3011 | 1,670,863,970 | 3,011 |
[
"hwchase17",
"langchain"
] | Hello,
I have an instance of Chatbot UI configured with the ChatGPT API.
How do I integrate LangChain so that it allows me to upload documents, which ChatGPT will then have to read and use in the conversation?
The memory functionality would also be useful to integrate.
Thanks in advance to all. | chatbot ui integration | https://api.github.com/repos/langchain-ai/langchain/issues/3008/comments | 1 | 2023-04-17T09:24:15Z | 2023-09-10T16:31:57Z | https://github.com/langchain-ai/langchain/issues/3008 | 1,670,767,000 | 3,008 |
[
"hwchase17",
"langchain"
] | I was trying to override the OpenAIEmbeddings class with some customized implementation and got this:
```
In [1]: from langchain.embeddings.openai import OpenAIEmbeddings
In [2]: class O(OpenAIEmbeddings):
...: pass
...:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-bc8ad24584c2> in <module>
----> 1 class O(OpenAIEmbeddings):
2 pass
3
~/opt/miniconda3/lib/python3.9/site-packages/pydantic/main.cpython-39-darwin.so in pydantic.main.ModelMetaclass.__new__()
~/opt/miniconda3/lib/python3.9/site-packages/pydantic/utils.cpython-39-darwin.so in pydantic.utils.smart_deepcopy()
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/opt/miniconda3/lib/python3.9/copy.py in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y
232 d[dict] = _deepcopy_dict
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/opt/miniconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
268 if state is not None:
269 if deep:
--> 270 state = deepcopy(state, memo)
271 if state is not None:
272 if hasattr(y, '__setstate__'):
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/opt/miniconda3/lib/python3.9/copy.py in _deepcopy_tuple(x, memo, deepcopy)
208
209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 210 y = [deepcopy(a, memo) for a in x]
211 # We're not going to put the tuple in the memo, but it's still important we
212 # check for it, in case the tuple contains recursive mutable structures.
~/opt/miniconda3/lib/python3.9/copy.py in <listcomp>(.0)
208
209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 210 y = [deepcopy(a, memo) for a in x]
211 # We're not going to put the tuple in the memo, but it's still important we
212 # check for it, in case the tuple contains recursive mutable structures.
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/opt/miniconda3/lib/python3.9/copy.py in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y
232 d[dict] = _deepcopy_dict
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/opt/miniconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
262 if deep and args:
263 args = (deepcopy(arg, memo) for arg in args)
--> 264 y = func(*args)
265 if deep:
266 memo[id(x)] = y
~/opt/miniconda3/lib/python3.9/copy.py in <genexpr>(.0)
261 deep = memo is not None
262 if deep and args:
--> 263 args = (deepcopy(arg, memo) for arg in args)
264 y = func(*args)
265 if deep:
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
~/opt/miniconda3/lib/python3.9/copy.py in _deepcopy_tuple(x, memo, deepcopy)
208
209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 210 y = [deepcopy(a, memo) for a in x]
211 # We're not going to put the tuple in the memo, but it's still important we
212 # check for it, in case the tuple contains recursive mutable structures.
~/opt/miniconda3/lib/python3.9/copy.py in <listcomp>(.0)
208
209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 210 y = [deepcopy(a, memo) for a in x]
211 # We're not going to put the tuple in the memo, but it's still important we
212 # check for it, in case the tuple contains recursive mutable structures.
~/opt/miniconda3/lib/python3.9/copy.py in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
173
174 # If is its own copy, don't memoize.
~/opt/miniconda3/lib/python3.9/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
262 if deep and args:
263 args = (deepcopy(arg, memo) for arg in args)
--> 264 y = func(*args)
265 if deep:
266 memo[id(x)] = y
~/opt/miniconda3/lib/python3.9/typing.py in inner(*args, **kwds)
266 except TypeError:
267 pass # All real errors (not unhashable args) are raised below.
--> 268 return func(*args, **kwds)
269 return inner
270
~/opt/miniconda3/lib/python3.9/typing.py in __getitem__(self, params)
901 return self.copy_with((p, _TypingEllipsis))
902 msg = "Tuple[t0, t1, ...]: each t must be a type."
--> 903 params = tuple(_type_check(p, msg) for p in params)
904 return self.copy_with(params)
905
~/opt/miniconda3/lib/python3.9/typing.py in <genexpr>(.0)
901 return self.copy_with((p, _TypingEllipsis))
902 msg = "Tuple[t0, t1, ...]: each t must be a type."
--> 903 params = tuple(_type_check(p, msg) for p in params)
904 return self.copy_with(params)
905
~/opt/miniconda3/lib/python3.9/typing.py in _type_check(arg, msg, is_argument)
155 return arg
156 if not callable(arg):
--> 157 raise TypeError(f"{msg} Got {arg!r:.100}.")
158 return arg
159
TypeError: Tuple[t0, t1, ...]: each t must be a type. Got ().
```
I checked the code, and it seems not much of it is related to tuples. Any clues about how this happened? | OpenAIEmbeddings can't be inherited | https://api.github.com/repos/langchain-ai/langchain/issues/3007/comments | 1 | 2023-04-17T09:08:50Z | 2023-09-10T16:32:02Z | https://github.com/langchain-ai/langchain/issues/3007 | 1,670,741,139 | 3,007 |
[
"hwchase17",
"langchain"
] | Hello.
I am trying the Time Weighted VectorStore Retriever example,
but I get the following error:
`ImportError: cannot import name 'TimeWeightedVectorStoreRetriever' from 'langchain.retrievers' (/usr/local/lib/python3.9/dist-packages/langchain/retrievers/__init__.py)`
The version of langchain is 0.0.141; I think this version does not include TimeWeightedVectorStoreRetriever. Does anyone know how to solve this problem?
/usr/local/lib/python3.9/dist-packages/langchain/retrievers/ | TimeWeightedVectorStoreRetriever not found | https://api.github.com/repos/langchain-ai/langchain/issues/3006/comments | 1 | 2023-04-17T08:11:24Z | 2023-04-18T04:56:47Z | https://github.com/langchain-ai/langchain/issues/3006 | 1,670,649,685 | 3,006 |
[
"hwchase17",
"langchain"
] | I am getting this error whenever the request takes longer than 60 seconds. I tried passing timeout=120 seconds to ChatOpenAI().
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).`
What is the reason for this issue and how can I rectify it?
| Frequent request timed out error | https://api.github.com/repos/langchain-ai/langchain/issues/3005/comments | 38 | 2023-04-17T07:28:20Z | 2024-01-08T16:22:54Z | https://github.com/langchain-ai/langchain/issues/3005 | 1,670,584,519 | 3,005 |
[
"hwchase17",
"langchain"
] | Hi,
I am using DirectoryLoader as the document loader, and for some CSV files I am getting the error below:
ValueError: Invalid file union\Book1.csv. The FileType.UNK file type is not supported in partition.
Can anyone please suggest how to fix this? I would be thankful to you.
Thank You | ValueError: Invalid file union\Book1.csv. The FileType.UNK file type is not supported in partition. | https://api.github.com/repos/langchain-ai/langchain/issues/3002/comments | 4 | 2023-04-17T06:06:29Z | 2023-05-02T19:25:04Z | https://github.com/langchain-ai/langchain/issues/3002 | 1,670,480,210 | 3,002 |
[
"hwchase17",
"langchain"
] | The CSV/Pandas DataFrame agent actually replies to questions irrelevant to the data. This can be easily resolved by including an extra line in the prompt instructing it not to reply to questions irrelevant to the dataframe. | CSV/Pandas Dataframe agent replying to questions irrelevant to data | https://api.github.com/repos/langchain-ai/langchain/issues/3000/comments | 4 | 2023-04-17T04:57:28Z | 2023-09-18T16:19:52Z | https://github.com/langchain-ai/langchain/issues/3000 | 1,670,406,409 | 3,000 |
[
"hwchase17",
"langchain"
] | While trying to figure out how to save persistent memory, I've come across what I believe to be an error in the docs:
Running the example verbatim produces an error.
[Source](https://python.langchain.com/en/latest/modules/memory/types/entity_summary_memory.html#using-in-a-chain)
```
from langchain.chains import ConversationChain
from langchain.memory import ConversationEntityMemory
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from pydantic import BaseModel
from typing import List, Dict, Any
conversation = ConversationChain(
llm=llm,
verbose=True,
prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
memory=ConversationEntityMemory(llm=llm)
)
conversation.predict(input="Deven & Sam are working on a hackathon project")
conversation.memory.store
```
conversation.memory.store does not exist.
Are the docs incorrect? Happy to contribute toward a fix, just looking for some direction on what the correct implementation should be in this use case. | ConversationChain class variable missing when running example from the docs? | https://api.github.com/repos/langchain-ai/langchain/issues/2997/comments | 1 | 2023-04-17T02:08:33Z | 2023-09-10T16:32:12Z | https://github.com/langchain-ai/langchain/issues/2997 | 1,670,278,618 | 2,997
[
"hwchase17",
"langchain"
] | Hello everyone! I am new to using langchain and I am currently facing an issue with the figma loader. I have followed the steps outlined in the documentation, but I am receiving a TypeError with the following message:
`TypeError: expected string or bytes-like object`
The error occurs when I try to create a vector store index using the code `index = VectorstoreIndexCreator().from_loaders([figma_loader])`. I have been stuck on this for a while and would really appreciate any help that I can get.
`----> 2 index = VectorstoreIndexCreator().from_loaders([figma_loader])
3 figma_doc_retriever = index.vectorstore.as_retriever()
11 frames
[/usr/lib/python3.9/http/client.py](https://localhost:8080/#) in putheader(self, header, *values)
1260 values[i] = str(one_value).encode('ascii')
1261
-> 1262 if _is_illegal_header_value(values[i]):
1263 raise ValueError('Invalid header value %r' % (values[i],))
1264
TypeError: expected string or bytes-like object`
I have already checked the repository's issue tracker but haven't found any solutions that address my specific problem. I have also provided relevant code snippets and steps I have taken so far.
Thank you in advance for your help! I am looking forward to hearing from you soon. | Vector Store Creator from Figma loader throws an error | https://api.github.com/repos/langchain-ai/langchain/issues/2996/comments | 2 | 2023-04-17T02:02:23Z | 2023-06-22T05:40:00Z | https://github.com/langchain-ai/langchain/issues/2996 | 1,670,271,781 | 2,996 |
[
"hwchase17",
"langchain"
] | Hello all,
I've been encountering an issue while trying to install the dependencies using ```poetry install -E all``` command. I am currently working on the latest commit (a9310a3e) in my development environment. Here is the error message I receive:
```
RuntimeError
Unable to find installation candidates for torch (1.13.1)
at /opt/homebrew/Cellar/poetry/1.4.2/libexec/lib/python3.11/site-packages/poetry/installation/chooser.py:109 in choose_for
105│
106│ links.append(link)
107│
108│ if not links:
→ 109│ raise RuntimeError(f"Unable to find installation candidates for {package}")
110│
111│ # Get the best link
112│ chosen = max(links, key=lambda link: self._sort_key(package, link))
113│
```
Has anyone else experienced this issue, and if so, have you found any solutions or workarounds? Any help or suggestions would be greatly appreciated.
Thank you! | Unable to find installation candidates for torch (1.13.1) | https://api.github.com/repos/langchain-ai/langchain/issues/2991/comments | 6 | 2023-04-16T21:17:07Z | 2024-01-05T23:40:21Z | https://github.com/langchain-ai/langchain/issues/2991 | 1,670,143,578 | 2,991 |
[
"hwchase17",
"langchain"
] | How can I access the data of a website using 'API TOKEN' and use that data in langchain for custom purpose? | How to access data of a website using API token | https://api.github.com/repos/langchain-ai/langchain/issues/2990/comments | 1 | 2023-04-16T19:50:22Z | 2023-09-10T16:32:17Z | https://github.com/langchain-ai/langchain/issues/2990 | 1,670,111,690 | 2,990 |
[
"hwchase17",
"langchain"
] | The code is simple:
```
def load_vector_memory_from_dir(dir_path):
from langchain.document_loaders import DirectoryLoader
loader = DirectoryLoader(dir_path)
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
return FAISS.from_documents(texts, OpenAIEmbeddings())
def get_answer_from_vector_memory(vector_memory, query):
from langchain.agents.agent_toolkits import (
create_vectorstore_agent,
VectorStoreToolkit,
VectorStoreInfo,
)
vectorstore_info = VectorStoreInfo(
name="software_requirement_specification",
description="software requirement specification and other things you want to know",
vectorstore=vector_memory
)
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
agent_executor = create_vectorstore_agent(
llm=ChatOpenAI(temperature=0),
toolkit=toolkit,
verbose=True
)
answer = agent_executor.run(query)
return answer
def get_answer_from_vector_memory_and_web(text):
pass
if __name__ == "__main__":
vector_store = load_vector_memory_from_dir("../../docs")
get_answer_from_vector_memory(vector_store, "What will be changed in the next version?")
```
got error
```openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6542 tokens (6286 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.```
I searched for this issue, but all the answers say you should add an argument to RetrievalQAChain etc. to reduce the prompt length. I'm using agents (to combine tools with doc QA), so there's no argument for me to change. | Got prompt token length error when using agent | https://api.github.com/repos/langchain-ai/langchain/issues/2988/comments | 1 | 2023-04-16T17:27:10Z | 2023-09-10T16:32:24Z | https://github.com/langchain-ai/langchain/issues/2988 | 1,670,056,700 | 2,988
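For the token-limit issue above, chains that expose a retriever can usually be tamed with `vectorstore.as_retriever(search_kwargs={"k": 2})` or a `max_tokens_limit` argument, but the vectorstore agent toolkit may not expose such a knob (assumption; check your version). A generic pre-trimming sketch, using a rough 4-characters-per-token estimate:

```python
def trim_docs_to_budget(texts, budget_tokens, est_tokens=lambda t: max(1, len(t) // 4)):
    """Keep whole documents, in order, until a rough token budget is hit."""
    kept, used = [], 0
    for text in texts:
        cost = est_tokens(text)
        if used + cost > budget_tokens:
            break
        kept.append(text)
        used += cost
    return kept

docs = ["a" * 400, "b" * 400, "c" * 400]  # roughly 100 estimated tokens each
print(len(trim_docs_to_budget(docs, budget_tokens=250)))  # keeps the first two
```

For exact counts you would swap the estimator for a real tokenizer (e.g. tiktoken) matched to your model.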
[
"hwchase17",
"langchain"
] | I'm using these code :
llm=ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
I run this line several times
agent_chain.run(input="Hi, how are you today ?")
then these errors show up:
[agents/conversational/base.py](https://localhost:8080/#) in _extract_tool_and_input(self, llm_output)
83 match = re.search(regex, llm_output)
84 if not match:
---> 85 raise ValueError(f"Could not parse LLM output: `{llm_output}`") | Bug : could not parse LLM output: `{llm_output}`") when I run the same question several times | https://api.github.com/repos/langchain-ai/langchain/issues/2985/comments | 2 | 2023-04-16T16:45:42Z | 2023-09-10T16:32:28Z | https://github.com/langchain-ai/langchain/issues/2985 | 1,670,041,682 | 2,985 |
[
"hwchase17",
"langchain"
] | Hi,
I'm running the official Docker image from Chroma and using it via the REST API (I need it in server mode for persistent storage in a production deployment).
When inserting documents (I'm loading PDFs) I'm getting
`chromadb.api.models.Collection No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction`
even though I'm passing OpenAIEmbeddings() as the embedding parameter:
```
embeddings = OpenAIEmbeddings()
chroma_settings = Settings(
chroma_api_impl="rest",
chroma_server_host="localhost",
chroma_server_http_port=8000,
anonymized_telemetry=False,
)
loader = PyPDFLoader(pdf_url)
pages = loader.load_and_split()
Chroma.from_documents(
documents=pages, embedding=embeddings, client_settings=chroma_settings
)
```
| embedding function not passed properly to Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/2982/comments | 17 | 2023-04-16T15:46:43Z | 2024-05-11T14:34:48Z | https://github.com/langchain-ai/langchain/issues/2982 | 1,670,023,228 | 2,982 |
[
"hwchase17",
"langchain"
] | This is actually half an issue, half an open discussion topic.
Following #2898, I tried the offline [LLAMA model](https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html) with the same agent, and the result is somewhat interesting:
Given the same prompt:
```
Answer the following questions as best you can. You have access to the following tools:
Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Calculator: Useful for when you need to answer questions about math.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Google Search, Calculator]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: Who is Leo DiCaprio's current girlfriend? What is her current age raised to the 0.43 power?
Thought:
```
The reply from LlamaCpp, using prompttemplate is:
```
Action: Use Google Search
Action Input: type in "Leo DiCaprio\'s girlfriend"
Observation: xxx
...
```
You see, the model is able to perform some "reasoning" from the prompt, and the response it generates, although not strictly consistent with what ChatGPT or GPT-4 does, is also correct in some sense. However, in its response, "Action" is **Use Google Search** rather than **Google Search**. That is not a big deal as natural language, but it does pose problems when the agent uses a regex (or, more generally, a rule-based method) to select among different tools.
I am thinking about how langchain could better support smaller, offline models (not restricted to llama.cpp or GPT4All) that might not be able to provide GPT-4-consistent, but still human-acceptable, responses. I came up with 2 options:
1. working on the regexes and making them generalize as much as possible over the input diversity, as long as the meaning is correct. Although it might end up again in a "human engineered" dilemma.
2. using some more generalized methods, like those of "sentiment classification". I.e., use the LLM itself to classify which tool to use for the next step, rather than using a regex matcher.
Any ideas ? | agent with LLAMA or GPT4All | https://api.github.com/repos/langchain-ai/langchain/issues/2980/comments | 6 | 2023-04-16T13:03:52Z | 2023-11-28T16:12:05Z | https://github.com/langchain-ai/langchain/issues/2980 | 1,669,962,934 | 2,980 |
[
"hwchase17",
"langchain"
] | I am trying to create embeddings of a CSV file of around 137 MB, which has both numerical and text columns (6 in total), using the following code:
`from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path=path, encoding="utf-8")
data = loader.load()
embeddings = CohereEmbeddings(model="multilingual-22-12", cohere_api_key= cohere_api_key)
doc_result = embeddings.embed_documents([data])`
**The above gives the following error:**
`TypeError Traceback (most recent call last)
[<ipython-input-13-6789a7d649fe>](https://localhost:8080/#) in <cell line: 1>()
----> 1 doc_result = embeddings.embed_documents([data])
15 frames
[/usr/lib/python3.9/json/encoder.py](https://localhost:8080/#) in default(self, o)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
181
**TypeError: Object of type Document is not JSON serializable**`
I am trying to figure out a solution, but it seems hard. Please help me in this regard; I will be very thankful. | Cohere's multilingual model does not creating embeddings of CSV | https://api.github.com/repos/langchain-ai/langchain/issues/2979/comments | 1 | 2023-04-16T12:54:00Z | 2023-09-10T16:32:33Z | https://github.com/langchain-ai/langchain/issues/2979 | 1,669,957,240 | 2,979
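The `Object of type Document is not JSON serializable` error above suggests Document objects, not strings, are reaching the API call; `embed_documents` expects a list of strings. A sketch with a stand-in Document class (so it runs without langchain):

```python
class Document:  # stand-in for langchain.schema.Document
    def __init__(self, page_content, metadata=None):
        self.page_content = page_content
        self.metadata = metadata or {}

def to_texts(docs):
    """Pull the plain strings out of Document objects before embedding."""
    return [doc.page_content for doc in docs]

data = [Document("row 1 text"), Document("row 2 text")]
texts = to_texts(data)
print(texts)
# doc_result = embeddings.embed_documents(texts)  # pass strings, not Documents
```

Note the original snippet also wraps the list in another list (`[data]`), which should be dropped as well.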
[
"hwchase17",
"langchain"
I'm using this output parser, but when the agent passes its output to the action step, I'm getting a parsing error.
<img width="1441" alt="image" src="https://user-images.githubusercontent.com/19322429/232312291-41effe19-0c95-4bb3-8528-ffe9f6571f89.png">
Here is the parser:
```python
class CustomOutputParser(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        print(f"Input to parse function: {llm_output}")  # Print the input
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action: (.*?)[\n]*Action Input:([\s\S]*?)(?=\n\n|$)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2).strip()
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
``` | Parsing issue when using Python client | https://api.github.com/repos/langchain-ai/langchain/issues/2978/comments | 2 | 2023-04-16T12:44:04Z | 2023-09-10T16:32:38Z | https://github.com/langchain-ai/langchain/issues/2978 | 1,669,945,651 | 2,978
[
"hwchase17",
"langchain"
] | Is there any docs or related issues for caching the Azure chat open ai responses i cannot find one. | Caching for chat based models. | https://api.github.com/repos/langchain-ai/langchain/issues/2976/comments | 8 | 2023-04-16T12:27:30Z | 2023-12-21T16:08:39Z | https://github.com/langchain-ai/langchain/issues/2976 | 1,669,937,971 | 2,976 |
[
"hwchase17",
"langchain"
] | Hi,
I'm using RetrievalQA.from_chain_type to query a local index.
I'm using a custom prompt as input (query and context).
Is there a way to log or inspect the actual prompt that is sent to the OpenAI API, including the query and the context?
Also:
How do I control the number of documents the retriever returns?
Is there an option to see the relevance score of each doc in the source_documents returned by the query?
Thanks
Dror | Question: RetrievalQA.from_chain_type - logging the full prompt | https://api.github.com/repos/langchain-ai/langchain/issues/2975/comments | 7 | 2023-04-16T12:21:39Z | 2024-02-08T09:46:58Z | https://github.com/langchain-ai/langchain/issues/2975 | 1,669,936,294 | 2,975 |
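For the questions above, the usual knobs are `vectorstore.as_retriever(search_kwargs={"k": 4})` for the document count and `similarity_search_with_score` for per-document scores (verify against your installed version). For inspecting the prompt, one low-tech option is to render the template yourself before the chain does; a sketch of roughly what a stuff-style chain sends:

```python
TEMPLATE = (
    "Use the following context to answer the question.\n\n"
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)

def build_prompt(docs, question, template=TEMPLATE):
    """Render the prompt the way a stuff-style chain would, for logging."""
    context = "\n\n".join(docs)
    prompt = template.format(context=context, question=question)
    print(prompt)  # log/inspect before the API call
    return prompt

prompt = build_prompt(["doc one", "doc two"], "What changed?")
```

With real retrieved documents you would join their `page_content` fields; the template here is a placeholder for your custom prompt.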
[
"hwchase17",
"langchain"
] | Consider rewriting with PyPDF2. | pypdf has compatible problems with pdf files which contain complex encodings | https://api.github.com/repos/langchain-ai/langchain/issues/2973/comments | 2 | 2023-04-16T11:10:01Z | 2023-09-10T16:32:43Z | https://github.com/langchain-ai/langchain/issues/2973 | 1,669,870,714 | 2,973
[
"hwchase17",
"langchain"
] | Would be really nice to have support for Googles Vertex AI Matching Engine as a Vector Store:
[Google Cloud Vector Store](https://cloud.google.com/vertex-ai/docs/matching-engine/overview?hl=en)
I'm currently building an AI application with langchain agents using Google Cloud as my backend.
So I'm trying not to use too many third-party services, to keep everything as tidy as possible.
Using a Google solution for the vector store would be a huge plus.
| Support for Vertex AI Matching Engine as a Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/2971/comments | 2 | 2023-04-16T10:34:51Z | 2023-09-25T16:08:43Z | https://github.com/langchain-ai/langchain/issues/2971 | 1,669,850,988 | 2,971 |
[
"hwchase17",
"langchain"
] | When you set a max_iterations on a tool agent and it is down to its last iteration, it doesn't make sense for it to try to use a tool. Using a tool would require another iteration, which will be blocked.
There should be some way for the agent to realize it's out of iterations and just return a `Final answer` with whatever information it's managed to get. | Tool agents should not try to use a tool on their last iteration | https://api.github.com/repos/langchain-ai/langchain/issues/2970/comments | 6 | 2023-04-16T10:25:52Z | 2023-12-06T17:46:55Z | https://github.com/langchain-ai/langchain/issues/2970 | 1,669,841,936 | 2,970
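Related to the request above: `AgentExecutor` accepts `early_stopping_method="generate"`, which asks the LLM for one final answer instead of attempting another tool call (worth verifying against your version). The control flow it implies can be sketched as:

```python
def run_with_budget(steps, max_iterations, generate_final):
    """Mimic early_stopping_method='generate': on the last allowed
    iteration, skip tool use and force a best-effort final answer."""
    notes = []
    for i, step in enumerate(steps):
        if i >= max_iterations - 1 and step["type"] == "tool":
            return generate_final(notes)
        if step["type"] == "final":
            return step["answer"]
        notes.append(step["observation"])
    return generate_final(notes)

steps = [
    {"type": "tool", "observation": "Eden Polani, 19"},
    {"type": "tool", "observation": "19 ** 0.43 is about 3.55"},
    {"type": "final", "answer": "About 3.55"},
]
print(run_with_budget(steps, max_iterations=2,
                      generate_final=lambda n: "Best effort: " + "; ".join(n)))
```

With a generous budget the normal final answer is returned; with a tight one, the accumulated observations are summarized instead of being silently dropped.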
[
"hwchase17",
"langchain"
] | Using CHAT_CONVERSATIONAL_REACT_DESCRIPTION, the agent does not chain together tools: it runs only the first iteration, then stops. The other AgentTypes work as expected.
```
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = AzureChatOpenAI(
temperature=0,
deployment_name="gpt4",
model_name="gpt-4")
tools = load_tools(["google-search", "requests_all", "llm-math", "wolfram-alpha", "wikipedia", "pal-math"], llm=llm)
agent_zero_shot = initialize_agent(
tools,
llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
memory=memory,
verbose=True)
response = agent_zero_shot.run(input="Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
print(response)
# returns: Eden Polani's age raised to the 0.43 power is approximately 3.55.
# agent_conversational = initialize_agent(
# tools,
# llm,
# agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
# memory=memory, # I think it ignores this
# verbose=True)
# response = agent_conversational.run(input="Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
# print(response)
# returns: Finished chain. Leonardo DiCaprio's most recent girlfriend is rumored to be Eden Polani, who is 19 years old. To calculate her age raised to the 0.43 power, I'll need to use a calculator.
```
CONVERSATIONAL_REACT_DESCRIPTION works as expected. However, given that I'm using Azure and GPT-4, I only have the chat interface. | CHAT_CONVERSATIONAL_REACT_DESCRIPTION vs CONVERSATIONAL_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/2968/comments | 15 | 2023-04-16T09:03:51Z | 2024-02-15T16:12:05Z | https://github.com/langchain-ai/langchain/issues/2968 | 1,669,796,108 | 2,968
[
"hwchase17",
"langchain"
] | I tried creating a pandas dataframe agent (using create_pandas_dataframe_agent) with ChatOpenAI (gpt-3.5-turbo) as the LLM, but langchain isn't able to parse the LLM's output code. Of course, when I use the davinci model it works.
### This is the code:
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
agent = create_csv_agent(openaichat, 'fishfry-locations.csv', verbose=True)
x = agent.run("How many rows for church?")
### This is the output and error
> Entering new AgentExecutor chain...
Thought: We need to filter the dataframe to only include rows where the venue_type is "Church" and then count the number of rows.
Action: python_repl_ast
Action Input:
```
len(df[df['venue_type'] == 'Church'])
```
Observation: invalid syntax (<unknown>, line 1)
Thought:I need to fix the syntax error by adding a closing parenthesis at the end of the input.
Action: python_repl_ast
Action Input:
```
len(df[df['venue_type'] == 'Church'])
```
Observation: invalid syntax (<unknown>, line 1)
Thought:
> Finished chain. | ChatOpenai (gpt3-turbo) isn't compatible with create_pandas_dataframe_agent, create_csv_agent etc | https://api.github.com/repos/langchain-ai/langchain/issues/2967/comments | 3 | 2023-04-16T08:21:47Z | 2023-09-18T16:19:58Z | https://github.com/langchain-ai/langchain/issues/2967 | 1,669,768,015 | 2,967 |
[
"hwchase17",
"langchain"
] | As you have now created a specific dialect PR (https://github.com/hwchase17/langchain/pull/2748),
you had better remove these lines, or make them applicable only if the dialect is SQLite;
most of the dialects don't support this:
https://github.com/hwchase17/langchain/blob/b634489b2e8951b880c2ec467cdcf00f11830705/langchain/sql_database.py#L218-L219
PS: I have to set this value to None once I instantiate SQLDatabase, otherwise I run into trouble. (I'm using the ibm_db_sa dialect, and it works like a charm with e.g. ChatGPT.)
I also think there are some related tickets around this.
PS: I set my schema in the connection string,
but I still have to set it in the model (otherwise it cannot find my included tables),
but this might be another problem.
Thanks
| SQLDatabase : Remove set search_path (or rewrite it) | https://api.github.com/repos/langchain-ai/langchain/issues/2951/comments | 9 | 2023-04-15T20:09:12Z | 2023-12-06T18:20:32Z | https://github.com/langchain-ai/langchain/issues/2951 | 1,669,555,201 | 2,951 |
[
"hwchase17",
"langchain"
] | Streaming is supported by llama-cpp-python and works in Jupyter notebooks outside langchain code, but I can't get it to work with langchain. I didn't see any code for streaming in llms/llamacpp.py. I tried calling self.callback_manager.on_llm_new_token() but nothing worked. | LlamaCpp model needs streaming support | https://api.github.com/repos/langchain-ai/langchain/issues/2948/comments | 2 | 2023-04-15T19:22:53Z | 2023-09-10T16:32:53Z | https://github.com/langchain-ai/langchain/issues/2948 | 1,669,542,731 | 2,948
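On the streaming request above: later llama.cpp integrations grew a `streaming=True` flag plus callback handlers such as `StreamingStdOutCallbackHandler` (an assumption to check against your version). The callback pattern the issue asks for boils down to:

```python
def stream_completion(token_iter, on_new_token):
    """Forward each generated token to a callback, then return the full text."""
    pieces = []
    for token in token_iter:
        on_new_token(token)  # e.g. self.callback_manager.on_llm_new_token(token)
        pieces.append(token)
    return "".join(pieces)

seen = []
text = stream_completion(iter(["Hel", "lo", "!"]), seen.append)
print(text)  # full completion
print(seen)  # tokens observed by the callback, in order
```

The LLM wrapper's `_call` would iterate the model's token stream in place of the list here, invoking the callback per token before assembling the final string.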
[
"hwchase17",
"langchain"
] | After ingesting some markdown files using a slightly modified version of the question-answering over docs example, I ran the qa.py script as it was in the example
```
# qa.py
import faiss
from langchain import OpenAI, HuggingFaceHub, LLMChain
from langchain.chains import VectorDBQAWithSourcesChain
import pickle
import argparse
parser = argparse.ArgumentParser(description='Ask a question to the notion DB.')
parser.add_argument('question', type=str, help='The question to ask the notion DB')
args = parser.parse_args()
# Load the LangChain.
index = faiss.read_index("docs.index")
with open("faiss_store.pkl", "rb") as f:
store = pickle.load(f)
store.index = index
chain = VectorDBQAWithSourcesChain.from_llm(llm=OpenAI(temperature=0), vectorstore=store)
result = chain({"question": args.question})
print(f"Answer: {result['answer']}")
```
Only to get this cryptic error
```
Traceback (most recent call last):
File "C:\Users\ahmad\OneDrive\Desktop\Coding\LANGCHAINSSSSSS\notion-qa\qa.py", line 22, in <module>
result = chain({"question": args.question})
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\base.py", line 146, in __call__
raise e
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\base.py", line 142, in __call__
outputs = self._call(inputs)
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\qa_with_sources\base.py", line 97, in _call
answer, _ = self.combine_document_chain.combine_docs(docs, **inputs)
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\combine_documents\map_reduce.py", line 150, in combine_docs
num_tokens = length_func(result_docs, **kwargs)
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\combine_documents\stuff.py", line 77, in prompt_length
inputs = self._get_inputs(docs, **kwargs)
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\combine_documents\stuff.py", line 64, in _get_inputs
document_info = {
File "C:\Users\ahmad\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\combine_documents\stuff.py", line 65, in <dictcomp>
k: base_info[k] for k in self.document_prompt.input_variables
KeyError: 'source'
```
Here is the code I used for ingesting
```
"""This is the logic for ingesting Notion data into LangChain."""
from pathlib import Path
from langchain.text_splitter import CharacterTextSplitter
import faiss
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
import pickle
import time
from tqdm import tqdm
# Here we load in the data in the format that Notion exports it in.
folder = list(Path("Notion_DB/").glob("**/*.md"))
files = []
sources = []
for myFile in folder:
with open(myFile, 'r', encoding='utf-8') as f:
print(myFile.name)
files.append(f.read())
sources.append(myFile)
# Here we split the documents, as needed, into smaller chunks.
# We do this due to the context limits of the LLMs.
text_splitter = CharacterTextSplitter(chunk_size=800, separator="\n")
docs = []
metadatas = []
for i, f in enumerate(files):
splits = text_splitter.split_text(f)
docs.extend(splits)
metadatas.extend([{"source": sources[i]}] * len(splits))
# Add each element in docs into FAISS store, keeping a delay between inserting elements so we don't exceed rate limit
store = None
for (index, chunk) in tqdm(enumerate(docs)):
if index == 0:
store = FAISS.from_texts([chunk], OpenAIEmbeddings())
else:
time.sleep(1) # wait for a second to not exceed any rate limits
store.add_texts([chunk])
# print('finished with index '+index.__str__())
print('Done yayy!')
# # Here we create a vector store from the documents and save it to disk.
faiss.write_index(store.index, "docs.index")
store.index = None
with open("faiss_store.pkl", "wb") as f:
pickle.dump(store, f)
```
| Question Answering over Docs giving cryptic error upon query | https://api.github.com/repos/langchain-ai/langchain/issues/2944/comments | 2 | 2023-04-15T15:38:36Z | 2023-09-10T16:32:58Z | https://github.com/langchain-ai/langchain/issues/2944 | 1,669,458,405 | 2,944 |
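The `KeyError: 'source'` above usually means chunks were stored without the `source` metadata that the with-sources chain expects; `FAISS.from_texts` accepts a parallel `metadatas=` list. A sketch of keeping chunks and sources aligned (the splitter here is a stand-in):

```python
def split_with_sources(files, sources, split):
    """Split each file and emit a parallel metadata list with its source."""
    docs, metadatas = [], []
    for text, source in zip(files, sources):
        chunks = split(text)
        docs.extend(chunks)
        metadatas.extend([{"source": str(source)}] * len(chunks))
    return docs, metadatas

docs, metadatas = split_with_sources(
    ["aaa bbb", "ccc"], ["a.md", "b.md"], split=lambda t: t.split()
)
print(docs)
print(metadatas)
# store = FAISS.from_texts(docs, OpenAIEmbeddings(), metadatas=metadatas)
```

In the ingest script above, the per-chunk `store.add_texts([chunk])` calls drop the `metadatas` list that was built earlier, which is why the sources chain later cannot find a `source` key.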
[
"hwchase17",
"langchain"
] | For example, let's say I have a big txt file (a WhatsApp chat export). Now when I'm storing it as embeddings in the vector store, I think the source_document is set as `<name_of_file>.txt`, which is fine. But what I want is to attribute a finer-grained source: say, the person(s) who said this particular keyword, the datetime, and so on.
Is this currently supported in Langchain? | Is there a way we can pass in a custom source into vector store? | https://api.github.com/repos/langchain-ai/langchain/issues/2941/comments | 4 | 2023-04-15T14:12:30Z | 2023-09-10T16:33:03Z | https://github.com/langchain-ai/langchain/issues/2941 | 1,669,421,607 | 2,941 |
[
"hwchase17",
"langchain"
] | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, exact matches are needed with OpenAPI Planner to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| Allow OpenAPI planner to respect URLs with placeholders | https://api.github.com/repos/langchain-ai/langchain/issues/2938/comments | 1 | 2023-04-15T13:54:15Z | 2023-10-12T23:20:34Z | https://github.com/langchain-ai/langchain/issues/2938 | 1,669,406,711 | 2,938 |
[
"hwchase17",
"langchain"
] | Hey, I'm adding a tool for Jira JQL. I ran into something weird and am not sure what's wrong or how to debug it; could anyone help?
the Action Input for the action taken is `summary ~ "add support"`
but the actual instruction passed into _run of my tool is `summary ~ "add support`, missing the closing double quotes.
<img width="1740" alt="Screenshot 2023-04-15 at 9 52 11 pm" src="https://user-images.githubusercontent.com/32046231/232217825-e1f4c62f-998c-4b2d-8890-97a726bfc84d.png">
| agent._extract_tool_and_input removes double quote at the end of action input. | https://api.github.com/repos/langchain-ai/langchain/issues/2936/comments | 2 | 2023-04-15T11:52:24Z | 2023-09-10T16:33:14Z | https://github.com/langchain-ai/langchain/issues/2936 | 1,669,327,179 | 2,936 |
[
"hwchase17",
"langchain"
] | Just installed LangChain and followed the tutorials without a problem till I reached the agents part.
The following modules are not recognized.
```
from langchain.agents import initialize_agent
from langchain.agents import AgentType
```
I tried running langchain in Python 3.7, 3.8.11, 3.9, and 3.10 because other people suggested changing versions. | ModuleNotFoundError: No module named 'langchain.agents' | https://api.github.com/repos/langchain-ai/langchain/issues/2935/comments | 1 | 2023-04-15T10:14:16Z | 2023-04-15T10:34:48Z | https://github.com/langchain-ai/langchain/issues/2935 | 1,669,296,610 | 2,935
[
"hwchase17",
"langchain"
] | Hello devs,
I don't see json_agent_executor executing right. For my simple requirement, it's not able to give the desired output.
I have 5 users in users.json:
`[
{
"username": "john_doe",
"email": "john.doe@example.com"
},
{
"username": "jane_doe",
"email": "jane.doe@example.com"
},
{
"username": "mark_smith",
"email": "mark.smith@example.com"
},
{
"username": "sarah_jones",
"email": "sarah.jones@example.com"
},
{
"username": "david_wilson",
"email": "david.wilson@example.com"
}
]
`
I am using the below code..
*************************************************************
`import os
import json
from langchain.agents import (
create_json_agent,
AgentExecutor
)
from langchain.agents.agent_toolkits import JsonToolkit
from langchain.chains import LLMChain
from langchain.llms.openai import OpenAI
from langchain.requests import TextRequestsWrapper
from langchain.tools.json.tool import JsonSpec
with open("/content/sample_data/users.json") as f:
data = json.load(f)
json_spec = JsonSpec(dict_=data, max_value_length=4000)
json_toolkit = JsonToolkit(spec=json_spec)
json_agent_executor = create_json_agent(
llm=OpenAI(temperature=0),
toolkit=json_toolkit,
verbose=True
)
json_agent_executor.run("What is email id of sarah_jones")`
*********************************************************************
The agent is unable to find some basic stuff. This is the output:
***************************************************************************
Entering new AgentExecutor chain...
Action: json_spec_list_keys
Action Input: data
Observation: ['username']
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought: I should look at the keys of the username key
Action: json_spec_list_keys
Action Input: data["username"]
Observation: ValueError('Value at path `data["username"]` is not a dict, get the value directly.')
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought: I should look at the keys of the username key
Action: json_spec_list_keys
Action Input: data["username"]
Observation: ValueError('Value at path `data["username"]` is not a dict, get the value directly.')
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought: I should look at the keys of the username key
Action: json_spec_list_keys
Action Input: data["username"]
Observation: ValueError('Value at path `data["username"]` is not a dict, get the value directly.')
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought: I should look at the keys of the username key
Action: json_spec_list_keys
Action Input: data["username"]
Observation: ValueError('Value at path `data["username"]` is not a dict, get the value directly.')
Thought: I should look at the value of the username key
Action: json_spec_get_value
Action Input: data["username"]
Observation: email
Thought: I should look at the value of the email key
Action: json_spec_get_value
Action Input: data["username"]["email"]
Observation: TypeError('string indices must be integers')
Thought:
> Finished chain.
Agent stopped due to iteration limit or time limit.`
| json_agent_executor unable to perform some basic stuff | https://api.github.com/repos/langchain-ai/langchain/issues/2931/comments | 8 | 2023-04-15T07:35:01Z | 2024-03-25T07:06:03Z | https://github.com/langchain-ai/langchain/issues/2931 | 1,669,216,123 | 2,931 |
[
"hwchase17",
"langchain"
] | In "combine_docs" in "MapReduceDocumentsChain" class in "langchain/chains/combine_documents/map_reduce.py"
num_tokens is defaulted to 3000 not flexible to the model type. | AnalyzeDocumentChain cannot work with "text-ada-001" model or any 2k tokens model | https://api.github.com/repos/langchain-ai/langchain/issues/2930/comments | 3 | 2023-04-15T06:34:23Z | 2023-09-10T16:33:20Z | https://github.com/langchain-ai/langchain/issues/2930 | 1,669,191,327 | 2,930 |
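The hardcoded 3000 reported above could instead be derived from the model's context window. A minimal sketch of that idea (the function name and the rough 2,049-token window for `text-ada-001` are illustrative assumptions, not LangChain API):

```python
# Hedged sketch: compute the reduce-step token budget from the model's
# context window instead of a fixed 3000, reserving room for the prompt
# template and the completion.
def reduce_token_budget(context_window: int, prompt_tokens: int, max_output_tokens: int) -> int:
    """Tokens available for the combined documents in the reduce step."""
    return context_window - prompt_tokens - max_output_tokens

# A ~2k-token model such as text-ada-001 can never fit a 3000-token budget.
print(reduce_token_budget(2049, 200, 256))
```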
[
"hwchase17",
"langchain"
] | `UnicodeEncodeError Traceback (most recent call last)
Cell In[13], line 11
2 tools = [
3 Tool(
4 name="Intermediate Answer",
(...)
7 )
8 ]
10 self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
---> 11 self_ask_with_search.run("How do I get into an Ivy league college?")
File C:\Python311\Lib\site-packages\langchain\chains\base.py:213, in Chain.run(self, *args, **kwargs)
211 if len(args) != 1:
212 raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
216 return self(kwargs)[self.output_keys[0]]
File C:\Python311\Lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File C:\Python311\Lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File C:\Python311\Lib\site-packages\langchain\agents\agent.py:499, in _call(self, inputs)
494 """Validate that appropriate tools are passed in."""
495 pass
497 @classmethod
498 def from_llm_and_tools(
--> 499 cls,
500 llm: BaseLanguageModel,
501 tools: Sequence[BaseTool],
502 callback_manager: Optional[BaseCallbackManager] = None,
503 **kwargs: Any,
504 ) -> Agent:
505 """Construct an agent from an LLM and tools."""
506 cls._validate_tools(tools)
File C:\Python311\Lib\site-packages\langchain\agents\agent.py:409, in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
399 def plan(
400 self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
401 ) -> Union[AgentAction, AgentFinish]:
402 """Given input, decided what to do.
403
404 Args:
405 intermediate_steps: Steps the LLM has taken to date,
406 along with observations
407 **kwargs: User inputs.
408
--> 409 Returns:
410 Action specifying what tool to use.
411 """
412 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
413 action = self._get_next_action(full_inputs)
File C:\Python311\Lib\site-packages\langchain\agents\agent.py:105, in plan(self, intermediate_steps, **kwargs)
97 else:
98 raise ValueError(
99 f"Got unsupported early_stopping_method `{early_stopping_method}`"
100 )
102 @classmethod
103 def from_llm_and_tools(
104 cls,
--> 105 llm: BaseLanguageModel,
106 tools: Sequence[BaseTool],
107 callback_manager: Optional[BaseCallbackManager] = None,
108 **kwargs: Any,
109 ) -> BaseSingleActionAgent:
110 raise NotImplementedError
112 @property
113 def _agent_type(self) -> str:
File C:\Python311\Lib\site-packages\langchain\agents\agent.py:71, in _get_next_action(self, full_inputs)
62 @abstractmethod
63 async def aplan(
64 self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
65 ) -> Union[AgentAction, AgentFinish]:
66 """Given input, decided what to do.
67
68 Args:
69 intermediate_steps: Steps the LLM has taken to date,
70 along with observations
---> 71 **kwargs: User inputs.
72
73 Returns:
74 Action specifying what tool to use.
75 """
File C:\Python311\Lib\site-packages\langchain\chains\llm.py:151, in LLMChain.predict(self, **kwargs)
137 def predict(self, **kwargs: Any) -> str:
138 """Format prompt with kwargs and pass to LLM.
139
140 Args:
(...)
149 completion = llm.predict(adjective="funny")
150 """
--> 151 return self(kwargs)[self.output_key]
File C:\Python311\Lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File C:\Python311\Lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File C:\Python311\Lib\site-packages\langchain\chains\llm.py:57, in LLMChain._call(self, inputs)
56 def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
---> 57 return self.apply([inputs])[0]
File C:\Python311\Lib\site-packages\langchain\chains\llm.py:118, in LLMChain.apply(self, input_list)
116 def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]:
117 """Utilize the LLM generate method for speed gains."""
--> 118 response = self.generate(input_list)
119 return self.create_outputs(response)
File C:\Python311\Lib\site-packages\langchain\chains\llm.py:62, in LLMChain.generate(self, input_list)
60 """Generate LLM result from inputs."""
61 prompts, stop = self.prep_prompts(input_list)
---> 62 return self.llm.generate_prompt(prompts, stop)
File C:\Python311\Lib\site-packages\langchain\llms\base.py:107, in BaseLLM.generate_prompt(self, prompts, stop)
103 def generate_prompt(
104 self, prompts: List[PromptValue], stop: Optional[List[str]] = None
105 ) -> LLMResult:
106 prompt_strings = [p.to_string() for p in prompts]
--> 107 return self.generate(prompt_strings, stop=stop)
File C:\Python311\Lib\site-packages\langchain\llms\base.py:140, in BaseLLM.generate(self, prompts, stop)
138 except (KeyboardInterrupt, Exception) as e:
139 self.callback_manager.on_llm_error(e, verbose=self.verbose)
--> 140 raise e
141 self.callback_manager.on_llm_end(output, verbose=self.verbose)
142 return output
File C:\Python311\Lib\site-packages\langchain\llms\base.py:137, in BaseLLM.generate(self, prompts, stop)
133 self.callback_manager.on_llm_start(
134 {"name": self.__class__.__name__}, prompts, verbose=self.verbose
135 )
136 try:
--> 137 output = self._generate(prompts, stop=stop)
138 except (KeyboardInterrupt, Exception) as e:
139 self.callback_manager.on_llm_error(e, verbose=self.verbose)
File C:\Python311\Lib\site-packages\langchain\llms\base.py:324, in LLM._generate(self, prompts, stop)
322 generations = []
323 for prompt in prompts:
--> 324 text = self._call(prompt, stop=stop)
325 generations.append([Generation(text=text)])
326 return LLMResult(generations=generations)
File C:\Python311\Lib\site-packages\langchain\llms\anthropic.py:146, in _call(self, prompt, stop)
130 def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
131 r"""Call out to Anthropic's completion endpoint.
132
133 Args:
134 prompt: The prompt to pass into the model.
135 stop: Optional list of stop words to use when generating.
136
137 Returns:
138 The string generated by the model.
139
140 Example:
141 .. code-block:: python
142
143 prompt = "What are the biggest risks facing humanity?"
144 prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
145 response = model(prompt)
--> 146
147 """
148 stop = self._get_anthropic_stop(stop)
149 if self.streaming:
File C:\Python311\Lib\site-packages\anthropic\api.py:239, in Client.completion(self, **kwargs)
238 def completion(self, **kwargs) -> dict:
--> 239 return self._request_as_json(
240 "post",
241 "/v1/complete",
242 params=kwargs,
243 )
File C:\Python311\Lib\site-packages\anthropic\api.py:198, in Client._request_as_json(self, *args, **kwargs)
197 def _request_as_json(self, *args, **kwargs) -> dict:
--> 198 result = self._request_raw(*args, **kwargs)
199 content = result.content.decode("utf-8")
200 json_body = json.loads(content)
File C:\Python311\Lib\site-packages\anthropic\api.py:117, in Client._request_raw(self, method, path, params, headers, request_timeout)
109 def _request_raw(
110 self,
111 method: str,
(...)
115 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
116 ) -> requests.Response:
--> 117 request = self._request_params(headers, method, params, path, request_timeout)
118 result = self._session.request(
119 request.method,
120 request.url,
(...)
124 timeout=request.timeout,
125 )
127 if result.status_code != 200:
File C:\Python311\Lib\site-packages\anthropic\api.py:85, in Client._request_params(self, headers, method, params, path, request_timeout)
79 del params["disable_checks"]
80 else:
81 # NOTE: disabling_checks can lead to very poor sampling quality from our API.
82 # _Please_ read the docs on "Claude instructions when using the API" before disabling this.
83 # Also note, future versions of the API will enforce these as hard constraints automatically,
84 # so please consider these SDK-side checks as things you'll need to handle regardless.
---> 85 _validate_request(params)
86 data = None
87 if params:
File C:\Python311\Lib\site-packages\anthropic\api.py:273, in _validate_request(params)
271 if prompt.endswith(" "):
272 raise ApiException(f"Prompt must not end with a space character")
--> 273 _validate_prompt_length(params)
File C:\Python311\Lib\site-packages\anthropic\api.py:279, in _validate_prompt_length(params)
277 prompt: str = params["prompt"]
278 try:
--> 279 prompt_tokens = tokenizer.count_tokens(prompt)
280 max_tokens_to_sample: int = params["max_tokens_to_sample"]
281 token_limit = 9 * 1024
File C:\Python311\Lib\site-packages\anthropic\tokenizer.py:52, in count_tokens(text)
51 def count_tokens(text: str) -> int:
---> 52 tokenizer = get_tokenizer()
53 encoded_text = tokenizer.encode(text)
54 return len(encoded_text.ids)
File C:\Python311\Lib\site-packages\anthropic\tokenizer.py:36, in get_tokenizer()
34 if not claude_tokenizer:
35 try:
---> 36 tokenizer_data = _get_cached_tokenizer_file_as_str()
37 except httpx.HTTPError as e:
38 raise TokenizerException(f'Failed to download tokenizer: {e}')
File C:\Python311\Lib\site-packages\anthropic\tokenizer.py:26, in _get_cached_tokenizer_file_as_str()
24 response.raise_for_status()
25 with open(tokenizer_file, 'w') as f:
---> 26 f.write(response.text)
28 with open(tokenizer_file, 'r') as f:
29 return f.read()
File C:\Python311\Lib\encodings\cp1252.py:19, in IncrementalEncoder.encode(self, input, final)
18 def encode(self, input, final=False):
---> 19 return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u0100' in position 2452: character maps to <undefined>` | Error when generating text with the Anthropic LLM | https://api.github.com/repos/langchain-ai/langchain/issues/2929/comments | 1 | 2023-04-15T05:33:09Z | 2023-08-06T18:52:23Z | https://github.com/langchain-ai/langchain/issues/2929 | 1,669,165,015 | 2,929 |
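The traceback above ends in `cp1252.py`, which suggests the tokenizer cache file is written with the Windows locale codec. A minimal sketch of the likely fix, passing an explicit encoding to `open()` (this only illustrates the encoding issue; it is not a patch to the `anthropic` package):

```python
# Reproduce-and-fix sketch: a string containing '\u0100' cannot be written
# under cp1252, but round-trips cleanly when the file is opened as UTF-8.
import os
import tempfile

text = "tokenizer data with \u0100 inside"
path = os.path.join(tempfile.mkdtemp(), "claude_tokenizer.json")

with open(path, "w", encoding="utf-8") as f:  # explicit UTF-8 avoids the error
    f.write(text)

with open(path, "r", encoding="utf-8") as f:
    roundtrip = f.read()
print(roundtrip == text)
```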
[
"hwchase17",
"langchain"
] | I hope that langchain can support dolly-v2 which is generated by Databricks employees and released under a permissive license (CC-BY-SA). | Will it support Dolly-V2? | https://api.github.com/repos/langchain-ai/langchain/issues/2928/comments | 4 | 2023-04-15T05:21:27Z | 2023-05-02T17:46:43Z | https://github.com/langchain-ai/langchain/issues/2928 | 1,669,159,656 | 2,928 |
[
"hwchase17",
"langchain"
] | This is the mypy response for the following code:
```
ChatOpenAI(
model_name=args.model_name,
temperature=args.temperature,
)
```
I see in the code that `ChatOpenAI` has a `client` variable that is marked private in the comments.
Any remediation? | Mypy: Missing named argument "client" for "ChatOpenAI" | https://api.github.com/repos/langchain-ai/langchain/issues/2925/comments | 11 | 2023-04-15T03:14:17Z | 2024-04-18T20:03:39Z | https://github.com/langchain-ai/langchain/issues/2925 | 1,669,123,006 | 2,925 |
[
"hwchase17",
"langchain"
] | The OpenSearch documentation notes that you may use a boolean filter for ANN search: https://opensearch.org/docs/latest/search-plugins/knn/filter-search-knn/#boolean-filter-with-ann-search
It would be nice to allow passing in a boolean filter to the OpenSearch vector store `similarity_search` function. | OpenSearch: allow boolean filter search for ANN | https://api.github.com/repos/langchain-ai/langchain/issues/2924/comments | 1 | 2023-04-15T02:26:24Z | 2023-04-18T03:26:28Z | https://github.com/langchain-ai/langchain/issues/2924 | 1,669,111,473 | 2,924 |
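For reference, the boolean-filter shape from the linked OpenSearch docs looks roughly like the query below; a `similarity_search(..., boolean_filter=...)` parameter could forward such a dict. The field names, the `vector_field` key, and the query vector here are illustrative assumptions, not part of langchain:

```python
# Sketch of an ANN query wrapped in a Boolean filter, per the OpenSearch docs.
boolean_filter = {
    "bool": {
        "must": [{"term": {"color": "red"}}],  # illustrative metadata filter
    }
}

ann_query = {
    "size": 4,
    "query": {
        "bool": {
            "filter": boolean_filter,
            "must": [
                {"knn": {"vector_field": {"vector": [0.1, 0.2, 0.3], "k": 4}}}
            ],
        }
    },
}
print(sorted(ann_query["query"]["bool"].keys()))
```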
[
"hwchase17",
"langchain"
] | Hello everyone,
I'm working on an implementation that combines GPT Index (LlamaIndex) and LangChain.
I'm trying to set a custom prompt that adds additional context. [Based on the documentation](https://python.langchain.com/en/latest/reference/modules/chains.html?highlight=langchain.chains.llm.LLMChain#langchain.chains.LLMChain) I'm trying to run this code.
langchain 0.0.139
llama-index 0.5.15
```
template = """Pretend you are Steve Jobs. Answer with motivational content. Steve: How I can help you today?. Person: I want some motivation. Steve: You are amazing you can create any type of business you want.
Person: {question}?
Steve:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm=OpenAI(temperature=0)
print(type(llm))
llm = LLMChain(prompt=prompt, llm=llm)
print(type(llm))
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = create_llama_chat_agent(
toolkit,
llm,
memory=memory,
verbose=True
)
```
But I'm getting the below error:
```
ValidationError: 1 validation error for LLMChain
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, generate_prompt (type=type_error)
```
Thanks for your help. | Can't instantiate langchain.chains.LLMChain to create_llama_chat_agent (Setting custom prompt) | https://api.github.com/repos/langchain-ai/langchain/issues/2922/comments | 7 | 2023-04-15T00:05:05Z | 2023-12-27T16:08:04Z | https://github.com/langchain-ai/langchain/issues/2922 | 1,669,064,432 | 2,922 |
[
"hwchase17",
"langchain"
[From the notebook](https://github.com/hwchase17/langchain/blob/master/docs/modules/models/llms/examples/streaming_llm.ipynb): "LangChain provides streaming support for LLMs. Currently, we support streaming for the OpenAI, ChatOpenAI, and Anthropic implementations, but streaming support for other LLM implementations is on the roadmap."
I am more interested in using the commercially usable open-source LLMs available on Hugging Face, such as Dolly V2. I am wondering whether LangChain has plans to include streaming support for Hugging Face LLMs in its roadmap. Additionally, is there any timeline for its integration? Thank you. | streaming support for LLM, from huggingface | https://api.github.com/repos/langchain-ai/langchain/issues/2918/comments | 15 | 2023-04-14T22:32:37Z | 2024-08-02T11:45:30Z | https://github.com/langchain-ai/langchain/issues/2918 | 1,669,020,416 | 2,918 |
[
"hwchase17",
"langchain"
] | SQLDatabaseToolkit is not currently working.
See the errors attached.
This is the code that creates the errors:
```
llm = AzureChatOpenAI(deployment_name="gpt-4",temperature=0, max_tokens=500)
db = SQLDatabase.from_uri(db_url)
toolkit = SQLDatabaseToolkit(db=db)
agent_executor = create_sql_agent(llm=llm,toolkit=toolkit,verbose=True)
```
<img width="572" alt="Screenshot 2023-04-14 154708" src="https://user-images.githubusercontent.com/2685728/232151658-bf3c188c-0ae2-4bff-93fc-e553123c7d0e.png">
And if if I add the llm parameter to the toolkit:
```
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm,toolkit=toolkit,verbose=True)
```
this is the error
<img width="472" alt="Screenshot 2023-04-14 154906" src="https://user-images.githubusercontent.com/2685728/232151917-252168d0-6d4c-443e-8cfe-b08604b8c4b0.png">
| SQLDatabaseToolkit not working | https://api.github.com/repos/langchain-ai/langchain/issues/2914/comments | 10 | 2023-04-14T20:52:02Z | 2023-09-21T17:07:42Z | https://github.com/langchain-ai/langchain/issues/2914 | 1,668,936,333 | 2,914 |
[
"hwchase17",
"langchain"
] | those files may be `node_modules` or `.pycache` files or sensitive env files, all of which should be ignored by default | Ignore files from `.gitignore` in Git loader | https://api.github.com/repos/langchain-ai/langchain/issues/2905/comments | 0 | 2023-04-14T17:08:38Z | 2023-04-14T22:02:23Z | https://github.com/langchain-ai/langchain/issues/2905 | 1,668,624,936 | 2,905 |
[
"hwchase17",
"langchain"
] | I encountered a bug when using PromptLayerOpenAI. The code works as intended only when `model_name` parameter is set to `text-davinci-003`. When a different model is specified, an error message is returned.
This works:
```python
chain = load_qa_chain(PromptLayerOpenAI(
temperature=0,
model_name="text-davinci-003",
pl_tags=["tag1", "tag2"]
), chain_type="stuff", memory=memory, prompt=prompt)
```
This does not work:
```python
chain = load_qa_chain(PromptLayerOpenAI(
temperature=0,
model_name="gpt-3.5-turbo", # <== cause of error
pl_tags=["jwheeler", "contractqa"]
), chain_type="stuff", memory=memory, prompt=prompt)
```
The error message:
```bash
openai.error.InvalidRequestError: Unrecognized request argument supplied: pl_tags
```
| PromptLayerOpenAI throws an error when any model other than `text-davinci-003` is passed to the `model_name` parameter | https://api.github.com/repos/langchain-ai/langchain/issues/2903/comments | 3 | 2023-04-14T16:13:25Z | 2023-11-14T16:09:34Z | https://github.com/langchain-ai/langchain/issues/2903 | 1,668,543,593 | 2,903 |
[
"hwchase17",
"langchain"
] | When use agent to answer question **"Who is Leo DiCaprio's current girlfriend? What is her current age raised to the 0.43 power?"**
I saw openAI gives the following initial reply:
```
I should use Google Search to find out who is Leo DiCaprio's current girlfriend. For the second part of the question, I should use the calculator to calculate her age raised to the 0.43 power.
Action 1: Google Search
Action 1 Input: "Leo DiCaprio current girlfriend"
```
Instead of **"Action"** and **"Action Input"** keywords, we have **"Action 1"** and **"Action 1 Input"** instead.
The regex in langchain/agents/mrkl/base.py:
**regex = r"Action: (.*?)[\n]*Action Input:[\s]*(.*)"**
should be changed to
**regex = r"Action.*?: (.*?)[\n]*Action.*? Input:[\s]*(.*)"**
to avoid the "tool not found" error. | regex in langchain/agents/mrkl/base.py | https://api.github.com/repos/langchain-ai/langchain/issues/2898/comments | 4 | 2023-04-14T15:15:34Z | 2023-09-18T16:20:03Z | https://github.com/langchain-ai/langchain/issues/2898 | 1,668,448,881 | 2,898 |
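The proposed change can be sanity-checked directly against the reply quoted in the issue above:

```python
import re

# Current pattern from langchain/agents/mrkl/base.py, and the proposed,
# more tolerant one that accepts numbered actions like "Action 1:".
old = r"Action: (.*?)[\n]*Action Input:[\s]*(.*)"
new = r"Action.*?: (.*?)[\n]*Action.*? Input:[\s]*(.*)"

reply = 'Action 1: Google Search\nAction 1 Input: "Leo DiCaprio current girlfriend"'

print(re.search(old, reply))  # None: this is why the tool lookup fails
match = re.search(new, reply)
print(match.group(1), "|", match.group(2))
```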
[
"hwchase17",
"langchain"
] | Could not parse LLM output: I'm not familiar with "bla". Would you like me to search for more information on it?
```
Traceback (most recent call last):
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/chat/base.py", line 50, in _extract_tool_and_input
_, action, _ = text.split("```")
ValueError: not enough values to unpack (expected 3, got 1)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Caskroom/miniconda/base/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/Caskroom/miniconda/base/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/Users/admin/Projects/kbgpt/mrkl_chat.py", line 37, in <module>
raise e
File "/Users/admin/Projects/kbgpt/mrkl_chat.py", line 35, in <module>
mrkl.run(txt)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 812, in _call
next_step_output = self._take_next_step(
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 692, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 403, in plan
action = self._get_next_action(full_inputs)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 365, in _get_next_action
parsed_output = self._extract_tool_and_input(full_output)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/chat/base.py", line 55, in _extract_tool_and_input
raise ValueError(f"Could not parse LLM output: {text}")
ValueError: Could not parse LLM output: I'm not sure what you mean by "bla". Can you please provide more context or clarify your question?
``` | ChatAgent gets "Would you like me to search for more information on it?" instead of Action: or FinalAnswer: | https://api.github.com/repos/langchain-ai/langchain/issues/2896/comments | 1 | 2023-04-14T14:51:47Z | 2023-09-10T16:33:29Z | https://github.com/langchain-ai/langchain/issues/2896 | 1,668,400,538 | 2,896 |
[
"hwchase17",
"langchain"
] | We're working on an implementation for a vector store using the GCP Matching Engine.
We'll be contributing the implementation.
If you have any questions or suggestions please contact me (@tomaspiaggio) or @scafati98. | GCP Matching Engine as Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/2892/comments | 5 | 2023-04-14T13:58:38Z | 2023-08-07T23:53:24Z | https://github.com/langchain-ai/langchain/issues/2892 | 1,668,302,654 | 2,892 |
[
"hwchase17",
"langchain"
] | LangChain can do REST with OpenAPI, but what about GraphQL (GQL)? Is that even possible? | How GraphQL? | https://api.github.com/repos/langchain-ai/langchain/issues/2891/comments | 8 | 2023-04-14T13:58:14Z | 2023-10-30T16:07:48Z | https://github.com/langchain-ai/langchain/issues/2891 | 1,668,301,661 | 2,891 |
[
"hwchase17",
"langchain"
] | I guess it just needs to return the text when it can't parse the action as triple-backtick-wrapped JSON?
```python
from langchain import LLMMathChain, OpenAI
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis
from config import *
llm = ChatOpenAI(temperature=0, verbose=True)
llm1 = OpenAI(temperature=0)
llm_math_chain = LLMMathChain(llm=llm1, verbose=True)
rds = Redis.from_existing_index(
redis_url=REDIS_URL,
index_name=CUSTOMER_SERVICE_INDEX,
embedding=OpenAIEmbeddings(),
).as_retriever(k=1)
tools = [
Tool(
name="Search",
func=lambda x: "\n\n".join(d.page_content for d in rds.get_relevant_documents(query=x)),
description="useful for when you need to answer questions. the input to this should be a single search term.",
)
]
mrkl = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
while True:
txt = input("Enter a question: ")
mrkl.run(txt)
```
```
Traceback (most recent call last):
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/chat/base.py", line 50, in _extract_tool_and_input
_, action, _ = text.split("```")
ValueError: not enough values to unpack (expected 3, got 1)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Caskroom/miniconda/base/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/Caskroom/miniconda/base/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/admin/.vscode/extensions/ms-python.python-2023.6.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/Users/admin/Projects/kbgpt/mrkl_chat.py", line 30, in <module>
mrkl.run(txt)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 812, in _call
next_step_output = self._take_next_step(
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 692, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 403, in plan
action = self._get_next_action(full_inputs)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 365, in _get_next_action
parsed_output = self._extract_tool_and_input(full_output)
File "/Users/admin/Library/Caches/pypoetry/virtualenvs/kbgpt-I7QBBX8f-py3.10/lib/python3.10/site-packages/langchain/agents/chat/base.py", line 55, in _extract_tool_and_input
raise ValueError(f"Could not parse LLM output: {text}")
ValueError: Could not parse LLM output: I was not able to find the answer. Maybe there is no public information available on HDFC's current market cap.
``` | ValueError when it can not find an answer in the MRKL chat agent. | https://api.github.com/repos/langchain-ai/langchain/issues/2890/comments | 0 | 2023-04-14T13:51:06Z | 2023-04-14T14:49:16Z | https://github.com/langchain-ai/langchain/issues/2890 | 1,668,289,225 | 2,890 |
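A minimal sketch of the fallback suggested in the issue above (names are illustrative; the real method lives on `ChatAgent` in `langchain/agents/chat/base.py`): if the output contains no triple-backtick-fenced action block, return the raw text as a final answer instead of raising.

```python
# Hedged sketch: tolerate LLM replies that lack a fenced action block.
FENCE = "`" * 3  # triple backtick, built indirectly for readability here

def extract_tool_and_input(text: str):
    if text.count(FENCE) >= 2:
        _, action, _ = text.split(FENCE, 2)
        return "action", action.strip()
    # Fallback: no fenced block -> treat the raw text as the final answer.
    return "Final Answer", text.strip()

reply = "I was not able to find the answer."
print(extract_tool_and_input(reply))
fenced = "Thought\n" + FENCE + '\n{"action": "Search"}\n' + FENCE
print(extract_tool_and_input(fenced))
```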
[
"hwchase17",
"langchain"
] | I have been working with [BunJS](https://bun.sh) runtime and decided to try langchain with it.
I also noted in documentation that there are some supported runtimes...
It seems that it is not fully compatible with Bun... It imports, instantiates the model, but doesn't execute it.
Am I doing something wrong?
```javascript
import { OpenAI } from "langchain/llms/openai";
console.log("imported");
const model = new OpenAI({ openAIApiKey: "sk-...", temperature: 0.7 });
console.log("model created")
const res = await model.call(
"What would be a good company name a company that makes colorful socks?"
);
console.log(res);
```
Running the example:
```shell
% bun index.ts
imported
model created
38 | const PQueue = "default" in PQueueMod ? PQueueMod.default : PQueueMod;
39 | this.queue = new PQueue({ concurrency: this.maxConcurrency });
40 | }
41 | // eslint-disable-next-line @typescript-eslint/no-explicit-any
42 | call(callable, ...args) {
43 | return this.queue.add(() => pRetry(() => callable(...args).catch((error) => {
^
TypeError: undefined is not a function (near '...this.queue.add...')
at call (/Users/luismal/Projects/bunjs+langchainjs/node_modules/langchain/dist/util/async_caller.js:43:15)
at /Users/luismal/Projects/bunjs+langchainjs/node_modules/langchain/dist/llms/openai.js:312:15
at completionWithRetry (/Users/luismal/Projects/bunjs+langchainjs/node_modules/langchain/dist/llms/openai.js:300:30)
at /Users/luismal/Projects/bunjs+langchainjs/node_modules/langchain/dist/llms/openai.js:270:24
at _generate (/Users/luismal/Projects/bunjs+langchainjs/node_modules/langchain/dist/llms/openai.js:204:20)
at /Users/luismal/Projects/bunjs+langchainjs/node_modules/langchain/dist/llms/base.js:43:27
``` | [Feature Request] BunJs Support | https://api.github.com/repos/langchain-ai/langchain/issues/2888/comments | 2 | 2023-04-14T12:26:34Z | 2023-04-14T17:29:44Z | https://github.com/langchain-ai/langchain/issues/2888 | 1,668,159,936 | 2,888 |
[
"hwchase17",
"langchain"
] | I have fine-tuned the OpenAI curie model on sample text data, and I used that model like this:
```python
llm = OpenAI(
    temperature=0.7,
    openai_api_key='sk-...',  # key redacted
    model_name="curie:ft-personal-2023-03-31-05-59-15"  # instead of e.g. "text-davinci-003"
)
```
After running the script I am getting this error:
```
ValueError: Unknown model: curie:ft-personal-2023-03-31-05-59-15. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-completion, gpt-4-0314-completion, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-completion, gpt-4-32k-0314-completion, gpt-3.5-turbo, gpt-3.5-turbo-0301, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, text-davinci-003, text-davinci-002, code-davinci-002
```
I have given the correct name of the fine-tuned model. What is the issue? Can anyone help me solve this? | About fine tune model | https://api.github.com/repos/langchain-ai/langchain/issues/2887/comments | 2 | 2023-04-14T10:54:55Z | 2023-05-23T18:18:05Z | https://github.com/langchain-ai/langchain/issues/2887 | 1,668,028,067 | 2,887
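The lookup fails because LangChain's context-size table only knows the base model names, not fine-tuned ones. One possible workaround (a sketch, not LangChain's own API; the helper name is mine) is to strip the fine-tune suffix before doing any model-name lookup:

```python
def base_model_name(model_name: str) -> str:
    """Reduce a fine-tuned OpenAI model name such as
    'curie:ft-personal-2023-03-31-05-59-15' to its base model 'curie'."""
    # Fine-tuned names have the form "<base>:ft-<org>-<date>"
    return model_name.split(":")[0] if ":ft-" in model_name else model_name

print(base_model_name("curie:ft-personal-2023-03-31-05-59-15"))  # -> curie
```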
[
"hwchase17",
"langchain"
Hi,
I am getting this error. It looks like a common problem; can anyone help?
Traceback:
File "D:\mk\python\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "D:\mk\python\ready cody\Zkoušení\CHAT_WITH_DATA\main.py", line 65, in <module>
docs = vectorstore(user_input)
^^^^^^^^^^^^^^^^^^^^^^^ | TypeError: 'FAISS' object is not callable | https://api.github.com/repos/langchain-ai/langchain/issues/2881/comments | 3 | 2023-04-14T06:05:13Z | 2023-09-10T16:33:34Z | https://github.com/langchain-ai/langchain/issues/2881 | 1,667,566,366 | 2,881 |
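The traceback above shows the vector store being called like a function. A FAISS store is not callable; you query it through a method such as `similarity_search`. A minimal sketch of the calling convention, using a stand-in class since a real store needs an index (the method name follows LangChain's API, but the stub itself is illustrative):

```python
class FakeVectorStore:
    """Stand-in for a FAISS vector store; only illustrates the call pattern."""

    def similarity_search(self, query: str, k: int = 4):
        return [f"doc {i} matching {query!r}" for i in range(k)]


vectorstore = FakeVectorStore()
# docs = vectorstore(user_input)  # TypeError: 'FAISS' object is not callable
docs = vectorstore.similarity_search("user question", k=2)  # correct pattern
print(len(docs))  # -> 2
```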
[
"hwchase17",
"langchain"
```python
import os
import time
import gptcache
from gptcache.processor.pre import get_prompt
from langchain.cache import GPTCache, SQLiteCache
from gptcache.manager import get_data_manager, CacheBase, VectorBase
from gptcache import Cache
from gptcache.embedding import Onnx
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation
from langchain.llms import OpenAI
import langchain
import openai
from decouple import config

os.environ["OPENAI_API_KEY"] = config("OPENAI_API_KEY")
openai.api_base = config("OPENAI_API_BASE")

llm = OpenAI(model_name="text-davinci-002", n=1, best_of=1)

i = 0
file_prefix = "data_map"
llm_cache = Cache()


def init_gptcache_map(cache_obj: gptcache.Cache):
    global i
    cache_path = f"{file_prefix}_{i}.txt"
    onnx = Onnx()
    cache_base = CacheBase("sqlite")
    vector_base = VectorBase("faiss", dimension=onnx.dimension)
    data_manager = get_data_manager(cache_base, vector_base, max_size=10, clean_size=2)
    cache_obj.init(
        pre_embedding_func=get_prompt,
        embedding_func=onnx.to_embeddings,
        data_manager=data_manager,
        similarity_evaluation=SearchDistanceEvaluation(),
    )
    i += 1


langchain.llm_cache = GPTCache(init_gptcache_map)

llm("Tell me a joke")
```
error:
```
Traceback (most recent call last):
  File "D:\chat-main\tt.py", line 43, in <module>
    llm("Tell me a joke")
  File "D:\chat-main\venv\Lib\site-packages\langchain\llms\base.py", line 246, in __call__
    return self.generate([prompt], stop=stop).generations[0][0].text
  File "D:\chat-main\venv\Lib\site-packages\langchain\llms\base.py", line 161, in generate
    llm_output = update_cache(
  File "D:\chat-main\venv\Lib\site-packages\langchain\llms\base.py", line 51, in update_cache
    langchain.llm_cache.update(prompt, llm_string, result)
  File "D:\chat-main\venv\Lib\site-packages\langchain\cache.py", line 255, in update
    return adapt(
  File "D:\chat-main\venv\Lib\site-packages\gptcache\adapter\adapter.py", line 22, in adapt
    embedding_data = time_cal(
  File "D:\chat-main\venv\Lib\site-packages\gptcache\__init__.py", line 25, in inner
    res = func(*args, **kwargs)
  File "D:\chat-main\venv\Lib\site-packages\gptcache\embedding\onnx.py", line 58, in to_embeddings
    ort_outputs = self.ort_session.run(None, ort_inputs)
  File "D:\Program Files (x86)\Python311\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(int32)) , expected: (tensor(int64))
``` | GPTCache similarity caching code example encountered an error during execution. | https://api.github.com/repos/langchain-ai/langchain/issues/2879/comments | 9 | 2023-04-14T05:54:25Z | 2024-07-01T08:03:52Z | https://github.com/langchain-ai/langchain/issues/2879 | 1,667,553,784 | 2,879
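The ONNX runtime above complains that the tokenizer produced `int32` tensors where the model graph expects `int64`. Until the installed GPTCache/onnxruntime versions agree, one hedged workaround is to upcast the input tensors before `ort_session.run` (the input names below are illustrative, not taken from the traceback):

```python
import numpy as np

# Example of what a tokenizer might hand to onnxruntime (int32 tensors).
ort_inputs = {
    "input_ids": np.array([[101, 2023, 102]], dtype=np.int32),
    "attention_mask": np.ones((1, 3), dtype=np.int32),
}

# Upcast every int32 tensor to the int64 the ONNX graph expects.
ort_inputs = {name: arr.astype(np.int64) if arr.dtype == np.int32 else arr
              for name, arr in ort_inputs.items()}

print(all(arr.dtype == np.int64 for arr in ort_inputs.values()))  # -> True
```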
[
"hwchase17",
"langchain"
] | The `RecursiveTextSplitter` creates a list of strings.
The `CharacterTextSplitter` creates a list of `langchain.schema.Document`
The `Pinecone.from_documents()` loader seems to expect a list of `langchain.schema.Document`
As such, if you try to feed it a "documents" object created by the RecursiveTextSplitter, you get this error:
```
--> 181 texts = [d.page_content for d in documents]
AttributeError: 'str' object has no attribute 'page_content'
```
This is a bug in the RecursiveTextSplitter, right? | RecursiveTextSplitter creates a list of strings that don't play well with Pinecone.from_documents() | https://api.github.com/repos/langchain-ai/langchain/issues/2877/comments | 2 | 2023-04-14T05:39:17Z | 2023-09-10T16:33:39Z | https://github.com/langchain-ai/langchain/issues/2877 | 1,667,541,958 | 2,877
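If you do need to feed the string output of one splitter into an API that expects `Document` objects, wrapping is a one-liner. A sketch using a minimal stand-in class (the real one is `langchain.schema.Document`):

```python
from dataclasses import dataclass, field


@dataclass
class Document:  # minimal stand-in for langchain.schema.Document
    page_content: str
    metadata: dict = field(default_factory=dict)


chunks = ["first chunk", "second chunk"]           # e.g. output of split_text()
docs = [Document(page_content=c) for c in chunks]  # what from_documents expects
print(docs[0].page_content)  # -> first chunk
```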
[
"hwchase17",
"langchain"
] | In Agents -> loading.py on line 40 there is a redundant piece of code.
```
if config_type not in AGENT_TO_CLASS:
raise ValueError(f"Loading {config_type} agent not supported")
``` | Redundant piece of code | https://api.github.com/repos/langchain-ai/langchain/issues/2874/comments | 2 | 2023-04-14T05:28:42Z | 2023-09-10T16:33:44Z | https://github.com/langchain-ai/langchain/issues/2874 | 1,667,533,910 | 2,874
[
"hwchase17",
"langchain"
] | Here's what I tried:
```python
import os
os.environ["COHERE_API_KEY"] = ""

from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms import Cohere
from langchain.agents import AgentExecutor

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
toolkit = SQLDatabaseToolkit(db=db)

agent_executor = create_sql_agent(
    llm=Cohere(temperature=0, model="xlarge"),
    toolkit=toolkit,
    verbose=True,
)

agent_executor.run("Give me the most popular artist and the dollar amount the customers spent on this artist")
```
The error I received:
```
  File "/usr/local/lib/python3.9/site-packages/langchain/tools/sql_database/tool.py", line 85, in <lambda>
    llm=OpenAI(temperature=0),
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for OpenAI
__root__
  Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)
``` | sqlagent doesn't work when using Cohere LLM | https://api.github.com/repos/langchain-ai/langchain/issues/2866/comments | 5 | 2023-04-14T04:15:56Z | 2023-10-09T16:08:38Z | https://github.com/langchain-ai/langchain/issues/2866 | 1,667,483,180 | 2,866
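The traceback above points at `tools/sql_database/tool.py`, where the toolkit's tools fall back to constructing `OpenAI(temperature=0)` by default, hence the `OPENAI_API_KEY` error even though Cohere is the chosen LLM. A likely fix is to pass the LLM to the toolkit explicitly, assuming the installed version of `SQLDatabaseToolkit` accepts an `llm` argument (an untested sketch, not a confirmed API):

```python
llm = Cohere(temperature=0, model="xlarge")
toolkit = SQLDatabaseToolkit(db=db, llm=llm)  # avoid the OpenAI default
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
```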
| sqlagent doesn't work when using Cohere LLM | https://api.github.com/repos/langchain-ai/langchain/issues/2866/comments | 5 | 2023-04-14T04:15:56Z | 2023-10-09T16:08:38Z | https://github.com/langchain-ai/langchain/issues/2866 | 1,667,483,180 | 2,866 |
[
"hwchase17",
"langchain"
] | While LangChain has already explored [using Hugging Face Datasets to evaluate models](https://python.langchain.com/en/latest/use_cases/evaluation/huggingface_datasets.html), it would be great to see loaders for [HuggingFace Datasets](https://huggingface.co/datasets).
I see several benefits to creating a loader for [streaming-enabled](https://huggingface.co/docs/datasets/stream) HuggingFace datasets:
**1. Integration with Hugging Face models:** Hugging Face datasets are designed to work seamlessly with Hugging Face models, such as Transformers and Tokenizers. This means that you can easily use streaming datasets to provide context for your LangChain-powered LLMs or other Hugging Face models.
**2. Customization:** Hugging Face datasets provide a flexible and customizable way to process and transform data. You can apply custom functions or transformations to the prompts as they are streamed. For example, you can preprocess the prompts by removing stop words or punctuation, or you can extract features from the prompts using a feature extraction model.
**3. Compatibility with different data formats:** Hugging Face datasets support a wide range of data formats, including CSV, JSON, and Parquet. This means that you can easily stream prompts from different sources and formats.
**4. Dynamic updating:** Streaming datasets can be updated in real-time, which can enable you to add new prompts or remove outdated prompts from the dataset without having to reload the entire dataset.
**5. Real-time processing:** Streaming datasets can enable real-time processing of user prompts, which can be useful in applications that require fast response times. | Dataset Loaders: HuggingFace | https://api.github.com/repos/langchain-ai/langchain/issues/2864/comments | 3 | 2023-04-14T03:24:28Z | 2024-07-10T11:27:30Z | https://github.com/langchain-ai/langchain/issues/2864 | 1,667,448,793 | 2,864 |
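As a sketch of how such a loader could look, the snippet below lazily wraps streamed records as documents. The stream is faked with a generator here so the example is self-contained; in practice `records` would come from `datasets.load_dataset(..., streaming=True)`. The `Document` stand-in and the helper name are my own, not an existing API:

```python
from dataclasses import dataclass, field
from itertools import islice


@dataclass
class Document:  # stand-in for langchain.schema.Document
    page_content: str
    metadata: dict = field(default_factory=dict)


def stream_documents(records, text_key="text", limit=None):
    """Lazily wrap streamed dataset records as Documents."""
    docs = (Document(page_content=r[text_key],
                     metadata={k: v for k, v in r.items() if k != text_key})
            for r in records)
    return islice(docs, limit) if limit is not None else docs


fake_stream = ({"text": f"prompt {i}", "label": i} for i in range(1000))
docs = list(stream_documents(fake_stream, limit=3))
print(docs[0].page_content)  # -> prompt 0
```

Because the wrapping is a generator, only the requested records are ever pulled from the stream, which matches the real-time and dynamic-updating points above.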
[
"hwchase17",
"langchain"
When I try to read all the sheets from an `.xlsx` file and pass the result to `create_pandas_dataframe_agent`, it raises an error.

```python
from langchain.agents import create_pandas_dataframe_agent

df = pd.read_excel('data.xlsx', sheet_name=None)

agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
``` | Pandas Dataframe Agent Issue with Multiple sheets of xlsx file | https://api.github.com/repos/langchain-ai/langchain/issues/2862/comments | 2 | 2023-04-14T03:05:12Z | 2023-09-10T16:33:54Z | https://github.com/langchain-ai/langchain/issues/2862 | 1,667,437,084 | 2,862
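A likely cause of the issue above: with `sheet_name=None`, `read_excel` returns a dict of DataFrames, while the agent expects a single DataFrame. One hedged workaround is to concatenate the sheets first (simulated here with an in-memory dict instead of a real file):

```python
import pandas as pd

# What pd.read_excel('data.xlsx', sheet_name=None) returns: {sheet_name: DataFrame}
sheets = {"Sheet1": pd.DataFrame({"a": [1, 2]}),
          "Sheet2": pd.DataFrame({"a": [3]})}

# Collapse the dict into one DataFrame the agent can work with.
df = pd.concat(sheets.values(), ignore_index=True)
print(len(df))  # -> 3
```

This assumes the sheets share a schema; otherwise pick one sheet by name instead of concatenating.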
[
"hwchase17",
"langchain"
I am using DirectoryLoader to load all the PDFs in my data folder.

```python
from langchain.document_loaders import DirectoryLoader

loader = DirectoryLoader("data", glob="**/*.pdf")
documents = loader.load()
print(documents)
```

This throws an error, while loading txt files works fine. | Loading Multiple PDF error | https://api.github.com/repos/langchain-ai/langchain/issues/2860/comments | 13 | 2023-04-14T01:53:48Z | 2023-09-28T16:08:16Z | https://github.com/langchain-ai/langchain/issues/2860 | 1,667,385,606 | 2,860
This throw error while when I load txt files this is working fine. | Loading Multiple PDF error | https://api.github.com/repos/langchain-ai/langchain/issues/2860/comments | 13 | 2023-04-14T01:53:48Z | 2023-09-28T16:08:16Z | https://github.com/langchain-ai/langchain/issues/2860 | 1,667,385,606 | 2,860 |
[
"hwchase17",
"langchain"
] | Running the code below produces the following error: `document_variable_name summaries was not found in llm_chain input_variables: ['name'] (type=value_error)`
Any ideas?
Code:
```python
def use_prompt(self, template: str, variables=List[str], verbose: bool = False):
prompt_template = PromptTemplate(
template=template,
input_variables=variables,
)
self.chain = load_qa_with_sources_chain(
llm=self.llm,
prompt=prompt_template,
verbose=verbose,
)
use_prompt(template="Only answer the question 'What is my name?' by replaying with only the name. My name is {name}", variables=["name"])
``` | Trying to pass custom prompt in load_qa_with_sources_chain results in error | https://api.github.com/repos/langchain-ai/langchain/issues/2858/comments | 11 | 2023-04-13T23:16:01Z | 2024-06-10T16:06:30Z | https://github.com/langchain-ai/langchain/issues/2858 | 1,667,267,927 | 2,858 |
[
"hwchase17",
"langchain"
The terminal tool is not executing commands.
my code:
```
tools = load_tools(["llm-math","wikipedia","terminal"], llm=test)
agent = initialize_agent(tools,
test,
agent="zero-shot-react-description",
verbose=True)
```
output:
```
Action: Terminal
Action Input: ls
Observation:
doc.txt downloads myscript.sh test
Thought: I can list all the files
Final Answer:
doc.txt downloads myscript.sh test
> Finished chain.
doc.txt downloads myscript.sh test
```
It is hallucinating and not really executing the `ls` command.
I modified the `BashProcess().run` function to print something when it is executed, and confirmed that the agent is not executing it. | terminal tool is not executing commands | https://api.github.com/repos/langchain-ai/langchain/issues/2857/comments | 1 | 2023-04-13T22:27:19Z | 2023-09-15T22:12:50Z | https://github.com/langchain-ai/langchain/issues/2857 | 1,667,214,225 | 2,857
[
"hwchase17",
"langchain"
] | We should implement all abstract methods in VectorStore so that users can use weaviate as the vector store for any use case.
Context:
https://github.com/hwchase17/langchain/blob/763f87953686a69897d1f4d2260388b88eb8d670/langchain/vectorstores/base.py#L104-L113 | Implement from_documents class method in weaviate VectorStore | https://api.github.com/repos/langchain-ai/langchain/issues/2855/comments | 12 | 2023-04-13T21:11:00Z | 2023-06-08T12:35:52Z | https://github.com/langchain-ai/langchain/issues/2855 | 1,667,134,280 | 2,855 |
[
"hwchase17",
"langchain"
] | This is related to AzureOpenAI call.
import os
import tiktoken
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import AzureOpenAI
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxxxxxx.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "xxxx"
embeddings = OpenAIEmbeddings(model="SimilarityCurie001-AzureDeploymentName")
text = "This is a test document."
query_result = embeddings.embed_query(text)
Getting error on the execution of 'query_result = embeddings.embed_query(text)' line.
MODEL_TO_ENCODING variable is having all the encoding mapping against the real names of the models.
but we specify AzureDeploymentName of the the model in embeddings = OpenAIEmbeddings(model="SimilarityCurie001-AzureDeploymentName").
and the look up fails.
| 'Could not automatically map SimilarityCurie001 to a tokeniser. Please use `tiktok.get_encoding` to explicitly get the tokeniser you expect.' | https://api.github.com/repos/langchain-ai/langchain/issues/2854/comments | 15 | 2023-04-13T21:08:19Z | 2023-09-29T16:08:41Z | https://github.com/langchain-ai/langchain/issues/2854 | 1,667,130,746 | 2,854 |
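Until the library resolves Azure deployments itself, one hedged workaround is to map deployment names back to the real model names before any tokeniser lookup. The mapping below is an assumption about which base model the deployment wraps; adjust it to your deployments:

```python
# Assumed mapping: Azure deployment name -> underlying OpenAI model name
DEPLOYMENT_TO_MODEL = {
    "SimilarityCurie001-AzureDeploymentName": "text-similarity-curie-001",
}


def resolve_model_name(name: str) -> str:
    """Return the real model name for MODEL_TO_ENCODING / tiktoken lookups."""
    return DEPLOYMENT_TO_MODEL.get(name, name)


print(resolve_model_name("SimilarityCurie001-AzureDeploymentName"))
```

Alternatively, `tiktoken.get_encoding(...)` can be used directly, as the error message itself suggests.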
[
"hwchase17",
"langchain"
] | Hello, I came across a problem when using "similarity_search_with_score".
According to the [doc](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html?highlight=FAISS.from_documents#faiss), it should return "not only the documents but also the similarity score of the query to them".
`docs_and_scores = db.similarity_search_with_score(query)`
However, I noticed the scores for the top-5 docs are: [0.40305698, 0.43590686, 0.4464777, 0.46140206, 0.46226424], which are not sorted in a descending order.
Did anyone have the same problem?
| The scores returned by 'similarity_search_with_score' are NOT in descending order | https://api.github.com/repos/langchain-ai/langchain/issues/2845/comments | 8 | 2023-04-13T17:51:39Z | 2024-02-21T17:00:01Z | https://github.com/langchain-ai/langchain/issues/2845 | 1,666,877,498 | 2,845 |
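Worth noting for the issue above: `similarity_search_with_score` on a FAISS store typically returns a distance (L2 by default), not a similarity, so smaller is better and ascending order is expected. A quick check on the reported scores:

```python
scores = [0.40305698, 0.43590686, 0.4464777, 0.46140206, 0.46226424]

# If these are distances, the best (closest) match comes first,
# so ascending order is already the correct ordering.
print(scores == sorted(scores))  # -> True
```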
[
"hwchase17",
"langchain"
] | Adds Annoy index as VectorStore: https://github.com/spotify/annoy
Annoy might be useful in situations where a "read only" vector store is required/sufficient.
context: https://discord.com/channels/1038097195422978059/1051632794427723827/1096089994168377354 | Add Annoy as VectorStore | https://api.github.com/repos/langchain-ai/langchain/issues/2842/comments | 0 | 2023-04-13T17:10:45Z | 2023-04-16T20:44:06Z | https://github.com/langchain-ai/langchain/issues/2842 | 1,666,809,978 | 2,842 |
[
"hwchase17",
"langchain"
] | When using ZERO_SHOT_REACT_DESCRIPTION agent type with ChatOpenAI as LLM using 'gpt-3.5-turbo' model and other tools are available like "Google Search", the agent goes into a weird train of thoughts because it deems the answer is "too easy" So in the end it gives the wrong "Final Answer". See screenshot below

To reproduce, you need to use model 'gpt-3.5-turbo' and ChatOpenAI as the llm for the agent
```
chat = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')
llm = OpenAI(temperature=0)
tools = load_tools(["google-search", "llm-math"], llm=llm)
agent = initialize_agent(tools, chat, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("what is 2+2")
```
| BUG - Agent goes into weird train of thoughts when asked with "too easy" question | https://api.github.com/repos/langchain-ai/langchain/issues/2840/comments | 2 | 2023-04-13T16:44:22Z | 2023-04-13T22:24:53Z | https://github.com/langchain-ai/langchain/issues/2840 | 1,666,765,856 | 2,840 |
[
"hwchase17",
"langchain"
] | ### Description
`qdrant.add_texts` always failed
### Steps to repreduce
Try add texts to qdrant like this :
```python
import qdrant_client
client = qdrant_client.QdrantClient("localhost", port=6333)
qdrant = Qdrant(
client=client, collection_name=COLLECTION_NAME,
embedding_function=embeddings.embed_documents
)
...
qdrant.add_texts( texts = [doc.page_content for doc in docs], metadatas = [doc.metadata for doc in docs])
```
and it will come out with error:
```
Traceback (most recent call last):
File "build_vector_db.py", line 50, in <module>
qdrant.add_texts( texts = [doc.page_content for doc in docs], metadatas = [doc.metadata for doc in docs])
File "/usr/local/lib/python3.8/dist-packages/langchain/vectorstores/qdrant.py", line 81, in add_texts
points=rest.Batch(
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 732 validation errors for Batch
vectors -> 0 -> 0
value is not a valid float (type=type_error.float)
vectors -> 0 -> 1
value is not a valid float (type=type_error.float)
vectors -> 0 -> 2
value is not a valid float (type=type_error.float)
vectors -> 0 -> 3
value is not a valid float (type=type_error.float)
....
```
I think we should update this line (keeping the same logic as `from_texts`, which works fine):
``` python
vectors=[self.embedding_function(text) for text in texts],
```
to
``` python
vectors=self.embedding_function(texts)
```
I will make a PR for it
| Fix "validation errors for Batch" when call qdrant.add_texts | https://api.github.com/repos/langchain-ai/langchain/issues/2837/comments | 2 | 2023-04-13T16:03:16Z | 2024-01-30T11:48:16Z | https://github.com/langchain-ai/langchain/issues/2837 | 1,666,708,124 | 2,837 |
[
"hwchase17",
"langchain"
] | I was trying to use MarkdownTextSplitter to translate a document and maintain formatting, but I noticed that the splitter removed formatting from the markdown when splitting it.
As an example, the following markdown example when split with chunk_size=200 removes the "## " from the features line, as well as the line breaks preceding and following that line.
```markdown
# Dillinger
- Type some Markdown on the left
- See HTML in the right
- ✨Magic ✨
## Features
- Import a HTML file and watch it magically convert to Markdown
- Drag and drop images (requires your Dropbox account be linked)
- Import and save files from GitHub, Dropbox, Google Drive and One Drive
--
```
When split using this code:
```python
markdown_splitter = MarkdownTextSplitter(chunk_size=200, chunk_overlap=0)
docs = markdown_splitter.create_documents([markdown_document])
for doc in docs:
print(doc.page_content)
```
The output becomes:
```markdown
# Dillinger
- Type some Markdown on the left
- See HTML in the right
- ✨Magic ✨
Features
- Import a HTML file and watch it magically convert to Markdown
- Drag and drop images (requires your Dropbox account be linked)
- Import and save files from GitHub, Dropbox, Google Drive and One Drive
--
```
The formatting and line breaks around the "Features" line are removed. Expected behavior would be that each split doc, when combined, would be the original text.
Solution would be to never have formatting and line breaks removed, or, add the removed prefix/suffix in metadata or other keys so they could be used to re-construct the document with intact formatting.
[Full code example](https://gist.github.com/vbelius/993e3031dc825aa7a9c7b38af54de4d2)
```bash
~: pip show langchain
Name: langchain
Version: 0.0.138
``` | MarkdownTextSplitter removes formatting and line breaks | https://api.github.com/repos/langchain-ai/langchain/issues/2836/comments | 19 | 2023-04-13T15:45:30Z | 2023-10-18T16:09:03Z | https://github.com/langchain-ai/langchain/issues/2836 | 1,666,679,061 | 2,836 |