issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | I tried creating a pandas dataframe agent (using create_pandas_dataframe_agent) with ChatOpenAI from PromptLayer, or otherwise the plain version (ChatOpenAI), as the LLM! But LangChain isn't able to parse the LLM's output code. If I modify the regular expression manually, it works (but it fails again if the code is a single line, etc.). Basically, the regex and parsing logic for extracting Action and Action Input for chat models needs to be changed. This probably means a lot of other existing chains with ChatOpenAI as the LLM are broken too!
Code:
```
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
df_agent = create_pandas_dataframe_agent(chat, haplo_insuf, verbose=True)
df_agent.run("find the maximum number of employees working in any company based on the range. a-b means minimum a, maximum b!")
```
Output:
```
Entering new AgentExecutor chain...
Thought: We need to extract the minimum and maximum number of employees from the "df" dataframe and find the maximum value.
Action: python_repl_ast
Action Input:```max_emp = df['# Employees'].str.split('-', expand=True).astype(int).max(axis=1)
max_emp.max()```
Observation: invalid syntax (<unknown>, line 2)
Thought:There seems to be a syntax error in the code. Let me fix it.
Action: python_repl_ast
Action Input:```max_emp = df['# Employees'].str.split('-', expand=True).astype(int).max(axis=1)
max_emp.max()```
Observation: invalid syntax (<unknown>, line 2)
Thought:...
``` | ChatOpenAI isn't compatible with create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/1931/comments | 10 | 2023-03-23T15:52:32Z | 2023-07-06T19:34:35Z | https://github.com/langchain-ai/langchain/issues/1931 | 1,637,808,698 | 1,931 |
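A likely culprit, given the output above, is the chat model wrapping its Action Input in markdown code fences, which the Python REPL tool then receives verbatim. A hypothetical preprocessing step (an illustration only, not LangChain's actual parser) could strip the fences before execution:

```python
import re

# Hypothetical sketch: normalize an LLM "Action Input" that arrives wrapped
# in markdown code fences before handing it to the Python REPL tool.
def strip_code_fences(action_input: str) -> str:
    # Capture everything between ``` or ```python fences; fall back to the
    # raw string when no fences are present (e.g. single-line inputs).
    match = re.search(r"```(?:python)?\s*(.*?)```", action_input, re.DOTALL)
    code = match.group(1) if match else action_input
    return code.strip()

raw = "```max_emp = df['# Employees'].str.split('-', expand=True)```"
print(strip_code_fences(raw))  # -> max_emp = df['# Employees'].str.split('-', expand=True)
```

A robust fix would live in the agent's output parser itself; this only illustrates the normalization the issue is asking for.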
[
"langchain-ai",
"langchain"
] | After sending several requests to OpenAI, it always encounters request timeouts, accompanied by long periods of waiting.
Env:
OS: Ubuntu 22
Python: 3.10
langchain: 0.0.117
## Request time out
```shell
WARNING:/home/soda/.local/lib/python3.10/site-packages/langchain/chat_models/openai.py:Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out.
```

## TimeoutError from None
```shell
answer, _ = await self.combine_documents_chain.acombine_docs(docs, question=question)
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 97, in acombine_docs
return await self.llm_chain.apredict(**inputs), {}
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 167, in apredict
return (await self.acall(kwargs))[self.output_key]
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 154, in acall
raise e
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 148, in acall
outputs = await self._acall(inputs)
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 135, in _acall
return (await self.aapply([inputs]))[0]
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 123, in aapply
response = await self.agenerate(input_list)
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 67, in agenerate
return await self.llm.agenerate_prompt(prompts, stop)
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 103, in agenerate_prompt
raise e
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 97, in agenerate_prompt
output = await self.agenerate(prompt_messages, stop=stop)
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 62, in agenerate
results = [await self._agenerate(m, stop=stop) for m in messages]
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 62, in <listcomp>
results = [await self._agenerate(m, stop=stop) for m in messages]
File "/home/soda/.local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 286, in _agenerate
async for stream_resp in await acompletion_with_retry(
File "/home/soda/.local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 230, in <genexpr>
return (
File "/home/soda/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 319, in wrap_resp
async for r in resp:
File "/home/soda/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 633, in <genexpr>
return (
File "/home/soda/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 114, in parse_stream_async
async for line in rbody:
File "/home/soda/.local/lib/python3.10/site-packages/aiohttp/streams.py", line 35, in __anext__
rv = await self.read_func()
File "/home/soda/.local/lib/python3.10/site-packages/aiohttp/streams.py", line 311, in readline
return await self.readuntil()
File "/home/soda/.local/lib/python3.10/site-packages/aiohttp/streams.py", line 343, in readuntil
await self._wait("readuntil")
File "/home/soda/.local/lib/python3.10/site-packages/aiohttp/streams.py", line 303, in _wait
with self._timer:
File "/home/soda/.local/lib/python3.10/site-packages/aiohttp/helpers.py", line 721, in __exit__
raise asyncio.TimeoutError from None
```
Have other people encountered similar errors?
| Frequent Request timed out | https://api.github.com/repos/langchain-ai/langchain/issues/1929/comments | 5 | 2023-03-23T14:44:31Z | 2023-06-07T03:02:27Z | https://github.com/langchain-ai/langchain/issues/1929 | 1,637,686,943 | 1,929 |
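The long waits in reports like this typically come from retry backoff stacked on top of a long per-request timeout. Independent of LangChain's internals, the pattern looks like the following stand-in sketch, where `flaky` simulates the OpenAI call; bounding the number of attempts (and, where the model class supports it, lowering the per-request timeout) keeps worst-case latency predictable:

```python
import time

# A stand-in sketch of bounded retry-with-backoff around a flaky API call,
# similar in spirit to LangChain's *_with_retry helpers.
def with_retry(call, max_attempts=3, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except TimeoutError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

attempts = []
def flaky():
    # Simulated OpenAI request: times out twice, then succeeds.
    attempts.append(1)
    if len(attempts) < 3:
        raise TimeoutError("Request timed out.")
    return "ok"

result = with_retry(flaky)
print(result)  # -> ok
```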
[
"langchain-ai",
"langchain"
] | Right now to create an agent that has memory, we do
`memory = ConversationBufferMemory(memory_key=str('chat_history'))`
Then pass it to the agent
`agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)`
However, the memory is lost if the server restarts. I learned that memory can be dumped as JSON, but to avoid losing messages we would have to dump to JSON every time a message comes in, which is not a good solution.
Can we add support for storing conversation buffer memory in redis and when constructing the agent pass the redis instance to the agent executor?
Let me know if there is an existing solution for this. Appreciate the help!
| Support Save and Retrieve Memory from Redis | https://api.github.com/repos/langchain-ai/langchain/issues/1927/comments | 2 | 2023-03-23T13:47:03Z | 2023-04-07T19:55:27Z | https://github.com/langchain-ai/langchain/issues/1927 | 1,637,579,089 | 1,927 |
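Depending on your LangChain version, a built-in `RedisChatMessageHistory` may already exist to back `ConversationBufferMemory`; if not, the underlying pattern is just serializing messages under a session key. A minimal, hypothetical sketch, with an in-memory dict standing in for a Redis connection (swap in `redis.Redis().set`/`.get` in practice) and plain dicts standing in for LangChain's message objects:

```python
import json

store = {}  # stand-in for a Redis connection

def save_memory(session_id, messages):
    # Persist the whole buffer under the session key.
    store[session_id] = json.dumps(messages)

def load_memory(session_id):
    # Restore the buffer on process restart; empty list if nothing saved.
    raw = store.get(session_id)
    return json.loads(raw) if raw else []

save_memory("user-42", [{"role": "human", "content": "hi"},
                        {"role": "ai", "content": "hello!"}])
restored = load_memory("user-42")
print(restored)
```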
[
"langchain-ai",
"langchain"
] | Hi,
I am getting an "InvalidRequestError: Resource not found" error while using a chat model with the VectorDBQA chain.
This issue started after the v0.0.120 release; I was not getting it with the previous version I used, v0.0.118.
I am using a chat model from Azure through AzureChatOpenAI, and embeddings through an Azure model.
| Getting an error InvalidRequestError: Resource not found in VectorDBQA | https://api.github.com/repos/langchain-ai/langchain/issues/1923/comments | 10 | 2023-03-23T11:37:04Z | 2023-11-03T07:22:09Z | https://github.com/langchain-ai/langchain/issues/1923 | 1,637,360,666 | 1,923 |
[
"langchain-ai",
"langchain"
] | Have we considered using Ruff in this repo instead of flake8? Ruff is a lot faster and used by many popular Python repos.
https://github.com/charliermarsh/ruff | Ruff instead of flake8? | https://api.github.com/repos/langchain-ai/langchain/issues/1919/comments | 2 | 2023-03-23T06:36:12Z | 2023-03-23T06:51:45Z | https://github.com/langchain-ai/langchain/issues/1919 | 1,636,937,677 | 1,919 |
[
"langchain-ai",
"langchain"
] | Add optional parameters (like a `siterestrict`) to GoogleSearchAPIWrapper.
https://developers.google.com/custom-search/v1/site_restricted_api
| Support for Google's Custom Search Site Restricted JSON API | https://api.github.com/repos/langchain-ai/langchain/issues/1915/comments | 0 | 2023-03-23T04:53:09Z | 2023-03-28T04:24:27Z | https://github.com/langchain-ai/langchain/issues/1915 | 1,636,847,361 | 1,915 |
[
"langchain-ai",
"langchain"
] | How can I add a JSON example to a prompt template? Currently, it errors.
Here is one example prompt:
human_template = """Summarize user's order into the json format keys:"name","size", "topping", "ice", "sugar", "special_instruction".
Here are two examples of the order JSON object:
{
"name": "Jasmine Green Tea/Milk Tea",
"quantity": 2,
"size": "Large",
"ice": "25%",
"sugar": "100%",
"toppings": "Pearl, Green Bean, Grass Jelly"
},
{
"name": "Popcorn Chicken",
"quantity": 1,
"special_instruction": "Make it spicy"
}
{extra_store_instruction}
"""
I will get this error:
File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain/prompts/chat.py", line 67, in from_template
prompt = PromptTemplate.from_template(template)
File "/home/ubuntu/.local/lib/python3.10/site-packages/langchain/prompts/prompt.py", line 130, in from_template
return cls(input_variables=list(sorted(input_variables)), template=template)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for PromptTemplate
__root__
Invalid format specifier (type=value_error)
| how to escape {} for the PromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/1914/comments | 9 | 2023-03-23T04:16:24Z | 2024-05-10T06:08:49Z | https://github.com/langchain-ai/langchain/issues/1914 | 1,636,824,214 | 1,914 |
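For reference on the brace issue above: `PromptTemplate` uses Python `str.format`-style interpolation, so literal `{`/`}` in a template must be escaped by doubling them (`{{` and `}}`). Plain `str.format` (used here so the snippet is self-contained) shows the same behavior:

```python
# Literal braces in a format template must be doubled ({{ and }});
# single braces are treated as placeholders and trigger errors like
# "Invalid format specifier".
template = (
    'Here is an example of the order JSON object:\n'
    '{{\n'
    '  "name": "Jasmine Green Tea",\n'
    '  "quantity": 2\n'
    '}}\n'
    '{extra_store_instruction}\n'
)

rendered = template.format(extra_store_instruction="Make it spicy.")
print(rendered)
```

The doubled braces render as single braces in the output, while `{extra_store_instruction}` is filled in normally.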
[
"langchain-ai",
"langchain"
] | I just upgraded to the most recent version of langchain (0.0.119) and I get the following error when I try to import langchain in python.
Here's the error message:
>
--> 111 class SelfHostedHuggingFaceLLM(SelfHostedPipeline, BaseModel):
112 Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
113
114 Supported hardware includes auto-launched instances on AWS, GCP, Azure,
149 model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)
150 152 model_id: str = DEFAULT_MODEL_ID
File ~/.local/lib/python3.9/site-packages/pydantic/main.py:254, in pydantic.main.ModelMetaclass.__new__()
File ~/.local/lib/python3.9/site-packages/pydantic/utils.py:144, in pydantic.utils.validate_field_name()
NameError: Field name "inference_fn" shadows a BaseModel attribute; use a different field name with "alias='inference_fn'"
>
Any help will be appreciated.
Thanks in advance
| import langchain (0.0.119) does not work | https://api.github.com/repos/langchain-ai/langchain/issues/1913/comments | 4 | 2023-03-23T03:40:53Z | 2023-09-25T16:15:39Z | https://github.com/langchain-ai/langchain/issues/1913 | 1,636,797,392 | 1,913 |
[
"langchain-ai",
"langchain"
] | Code to create a ConstitutionalChain from an LLM:
```
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
chain: LLMChain,
critique_prompt: BasePromptTemplate = CRITIQUE_PROMPT,
revision_prompt: BasePromptTemplate = REVISION_PROMPT,
**kwargs: Any,
) -> "ConstitutionalChain":
"""Create a chain from an LLM."""
critique_chain = LLMChain(llm=llm, prompt=critique_prompt)
revision_chain = LLMChain(llm=llm, prompt=revision_prompt)
return cls(
chain=chain,
critique_chain=critique_chain,
revision_chain=revision_chain,
**kwargs,
)
```
It requires an LLMChain.
SimpleSequentialChain doesn't inherit from LLMChain. It inherits from Chain.
```
class SimpleSequentialChain(Chain, BaseModel):
"""Simple chain where the outputs of one step feed directly into next."""
chains: List[Chain]
strip_outputs: bool = False
input_key: str = "input" #: :meta private:
output_key: str = "output" #: :meta private:
```
Therefore we cannot pass a SimpleSequentialChain to the ConstitutionalChain.
This disrupts the workflow where we create a pipeline and want to pass it to a constitutional chain for checks.
I would be happy to submit a PR to fix this : )
Presumably there should be no issue as a ConstitutionalChain shouldn't require anything more than a run method from the input chain, as it contains it's own LLM and prompts (in the form of principles). | ConstitutionalChain cannot be composed with SimpleSequentialChain | https://api.github.com/repos/langchain-ai/langchain/issues/1904/comments | 4 | 2023-03-23T00:35:35Z | 2024-01-10T19:17:00Z | https://github.com/langchain-ai/langchain/issues/1904 | 1,636,674,948 | 1,904 |
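A toy illustration (hypothetical classes, not LangChain's actual ones) of the loosening suggested above: if the wrapper only calls the inner chain's `run` method, typing that field as the base `Chain`, or simply duck-typing it, would admit `SimpleSequentialChain` and other compositions:

```python
# Hypothetical names for illustration only.
class AnyChain:  # stands in for any Chain subclass, e.g. SimpleSequentialChain
    def run(self, text):
        return text.upper()

class ConstitutionalWrapper:
    """Wraps any chain-like object; requires only a run() method."""
    def __init__(self, chain):
        self.chain = chain  # no LLMChain requirement

    def run(self, text):
        draft = self.chain.run(text)
        # critique and revision steps would go here
        return draft

out = ConstitutionalWrapper(AnyChain()).run("hello")
print(out)  # -> HELLO
```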
[
"langchain-ai",
"langchain"
] | When we use `OpenSearchVectorSearch` to index documents into OpenSearch, there is no way to choose the index name.
There is an `index_name` argument on `OpenSearchVectorSearch`; however, it is ignored when we call `OpenSearchVectorSearch.from_documents()`. In that method the index name is set to a random UUID ([see here](https://github.com/hwchase17/langchain/blob/f155d9d3ec194146c28710f43c5cd9da3f709447/langchain/vectorstores/opensearch_vector_search.py#L366)).
| OpenSearch index_name is not deterministic | https://api.github.com/repos/langchain-ai/langchain/issues/1900/comments | 1 | 2023-03-22T22:59:59Z | 2023-03-23T10:10:01Z | https://github.com/langchain-ai/langchain/issues/1900 | 1,636,610,281 | 1,900 |
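A sketch of the behavior being asked for (a hypothetical helper, not the library's code): respect an explicit `index_name` and fall back to a random UUID-based name only when none is given:

```python
import uuid

def resolve_index_name(index_name=None):
    # Honor the caller's choice; generate a random name only as a fallback.
    return index_name if index_name is not None else uuid.uuid4().hex

print(resolve_index_name("my-documents"))  # -> my-documents
print(len(resolve_index_name()))           # 32 (random hex name)
```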
[
"langchain-ai",
"langchain"
] | It seems this can already calculate costs, can it throw errors/warnings if usage exceeds a certain pre-configured amount? That way people won't accidentally spend $500 if they miscalculate, etc. | Add a way to configure soft/hard caps on $ spent on APIs | https://api.github.com/repos/langchain-ai/langchain/issues/1899/comments | 2 | 2023-03-22T22:13:30Z | 2023-09-10T16:40:53Z | https://github.com/langchain-ai/langchain/issues/1899 | 1,636,565,917 | 1,899 |
[
"langchain-ai",
"langchain"
] | I'm on Windows 10 and one of the examples from the Documentation [Load a prompt template from LangChainHub](https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html#:~:text=such%20as%20Mako.-,Load%20a%20prompt%20template%20from%20LangChainHub,-%23) has the following code:
```python
from langchain.prompts import load_prompt
prompt = load_prompt("lc://prompts/conversation/prompt.json")
prompt.format(history="", input="What is 1 + 1?")
```
Which in Windows produces the following error:
```console
(venv) PS D:\Usuarios\Dev01\Documents\GitHub\almohada\chat> python prompts.py
Traceback (most recent call last):
File "D:\Usuarios\Dev01\Documents\GitHub\almohada\chat\prompts.py", line 58, in <module>
prompt = load_prompt("lc://prompts/conversation/prompt.json")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Usuarios\Dev01\Documents\GitHub\almohada\chat\venv\Lib\site-packages\langchain\prompts\loading.py", line 120, in load_prompt
if hub_result := try_load_from_hub(
^^^^^^^^^^^^^^^^^^
File "D:\Usuarios\Dev01\Documents\GitHub\almohada\chat\venv\Lib\site-packages\langchain\utilities\loading.py", line 44, in try_load_from_hub
raise ValueError(f"Could not find file at {full_url}")
ValueError: Could not find file at https://raw.githubusercontent.com/hwchase17/langchain-hub/master/prompts\conversation\prompt.json
```
This is caused by line 41 in `utilities/loading.py`
```python
full_url = urljoin(URL_BASE.format(ref=ref), str(remote_path))
```
I have not tested this on another OS, but in my mind the fix is the following:
```python
def try_load_from_hub(
path: Union[str, Path],
loader: Callable[[str], T],
valid_prefix: str,
valid_suffixes: Set[str],
**kwargs: Any,
) -> Optional[T]:
"""Load configuration from hub. Returns None if path is not a hub path."""
if not isinstance(path, str) or not (match := HUB_PATH_RE.match(path)):
return None
ref, remote_path_str = match.groups()
ref = ref[1:] if ref else DEFAULT_REF
remote_path = Path(remote_path_str)
if remote_path.parts[0] != valid_prefix:
return None
if remote_path.suffix[1:] not in valid_suffixes:
raise ValueError("Unsupported file type.")
full_url = urljoin(URL_BASE.format(ref=ref), remote_path_str) # here, instead of stringifying the remote_path just use the one extracted in the line above
r = requests.get(full_url, timeout=5)
if r.status_code != 200:
raise ValueError(f"Could not find file at {full_url}")
with tempfile.TemporaryDirectory() as tmpdirname:
file = Path(tmpdirname) / remote_path.name
with open(file, "wb") as f:
f.write(r.content)
return loader(str(file), **kwargs)
```
I'd appreciate feedback on the solution. Since I can't test on all possible environments, I'm not 100% sure about it. | Error try_load_from_hub could not find file | https://api.github.com/repos/langchain-ai/langchain/issues/1897/comments | 4 | 2023-03-22T20:51:53Z | 2023-08-14T16:58:59Z | https://github.com/langchain-ai/langchain/issues/1897 | 1,636,480,289 | 1,897 |
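The bug above can be reproduced on any OS with `PureWindowsPath`: stringifying the `Path` produces backslashes, which `urljoin` leaves in the URL. Keeping the originally matched string, as the proposed fix does, avoids this (the base URL here mirrors the one in the traceback):

```python
from pathlib import PureWindowsPath
from urllib.parse import urljoin

base = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/"
remote_path_str = "prompts/conversation/prompt.json"

# str(Path(...)) on Windows yields backslashes, which break the GitHub URL.
broken = urljoin(base, str(PureWindowsPath(remote_path_str)))
# Using the original matched string keeps forward slashes intact.
fixed = urljoin(base, remote_path_str)

print(broken)
print(fixed)
```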
[
"langchain-ai",
"langchain"
] | ImportError: cannot import name 'BaseOutputParser' from 'langchain.output_parsers' | Import Error | https://api.github.com/repos/langchain-ai/langchain/issues/1896/comments | 1 | 2023-03-22T18:58:13Z | 2023-03-23T14:24:41Z | https://github.com/langchain-ai/langchain/issues/1896 | 1,636,338,302 | 1,896 |
[
"langchain-ai",
"langchain"
] | With a Postgres database, I've found I need to add the schema to [line 69](https://github.com/hwchase17/langchain/blob/ce5d97bcb3e263f6aa69da6c334e35e20bf4db11/langchain/sql_database.py#L69):
`self._metadata.reflect(bind=self._engine, schema=self._schema)` | SQLDatabase does not handle schema correctly | https://api.github.com/repos/langchain-ai/langchain/issues/1894/comments | 1 | 2023-03-22T17:34:41Z | 2023-08-21T16:08:15Z | https://github.com/langchain-ai/langchain/issues/1894 | 1,636,218,153 | 1,894 |
[
"langchain-ai",
"langchain"
] | This is the message

| pip install -U langchain is the best thing that you can do before you start your day | https://api.github.com/repos/langchain-ai/langchain/issues/1892/comments | 7 | 2023-03-22T16:30:44Z | 2024-03-17T09:26:09Z | https://github.com/langchain-ai/langchain/issues/1892 | 1,636,122,407 | 1,892 |
[
"langchain-ai",
"langchain"
] | null | Support for Google BARD API? | https://api.github.com/repos/langchain-ai/langchain/issues/1889/comments | 17 | 2023-03-22T15:46:42Z | 2024-08-09T09:57:05Z | https://github.com/langchain-ai/langchain/issues/1889 | 1,636,046,362 | 1,889 |
[
"langchain-ai",
"langchain"
] | It looks like Llama uses an unsupported embedding scheme:
https://nn.labml.ai/transformers/rope/index.html
I'm opening this thread so we can have a conversation about how to support these embeddings within langchain. I'm happy to help, but my knowledge is limited. | Support for Rotary Embeddings for Llama | https://api.github.com/repos/langchain-ai/langchain/issues/1885/comments | 3 | 2023-03-22T15:04:00Z | 2023-09-18T16:22:55Z | https://github.com/langchain-ai/langchain/issues/1885 | 1,635,952,857 | 1,885 |
[
"langchain-ai",
"langchain"
] | ### Describe the bug
in `WandbCallbackHandler.on_chain_start` (row 533 in wandb_callback.py), the name of the input key is hardcoded set to "input" and the name of the output key is hard-coded "output", so if the chain uses any other name for the input key, this results in `KyeError`.
<!--- A minimal code snippet between the quotes below -->
```python
```
<!--- A full traceback of the exception in the quotes below -->
```shell
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1448, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "use_wandb.py", line 21, in <module>
res = intenter.get_intent(IntenterInput(query="hi there"))
File "intenter.py", line 183, in get_intent
result = self.full_chain(inputs.dict())
File ".venv/lib/python3.9/site-packages/langchain/chains/base.py", line 107, in __call__
self.callback_manager.on_chain_start(
File "venv/lib/python3.9/site-packages/langchain/callbacks/base.py", line 184, in on_chain_start
handler.on_chain_start(serialized, inputs, **kwargs)
File ".venv/lib/python3.9/site-packages/langchain/callbacks/wandb_callback.py", line 533, in on_chain_start
chain_input = inputs["input"]
KeyError: 'input'
python-BaseException
```
### Additional Files
_No response_
### Environment
WandB version: wandb==0.14.0
OS: macOS big sur
Python version: 3.9.9
Versions of relevant libraries: langchain==0.0.118
### Additional Context
_No response_ | Bug in WandbCallbackHandler | https://api.github.com/repos/langchain-ai/langchain/issues/1884/comments | 3 | 2023-03-22T13:15:40Z | 2023-09-10T16:40:58Z | https://github.com/langchain-ai/langchain/issues/1884 | 1,635,746,539 | 1,884 |
[
"langchain-ai",
"langchain"
] | Hi,
I am a bit confused as to what is the best approach to implement the "chatting with a document store". There seem to be two approaches to do this:
- ChatVectorDBChain -- https://github.com/mayooear/gpt4-pdf-chatbot-langchain
- agent + ConversationBufferMemory -- https://github.com/jerryjliu/llama_index/blob/main/examples/chatbot/Chatbot_SEC.ipynb
Is there an advantage to using ChatVectorDBChain?
Thank you very much!
| ChatVectorDBChain vs. agent + ConversationBufferMemory for chat | https://api.github.com/repos/langchain-ai/langchain/issues/1883/comments | 5 | 2023-03-22T12:41:20Z | 2023-10-29T16:07:28Z | https://github.com/langchain-ai/langchain/issues/1883 | 1,635,694,115 | 1,883 |
[
"langchain-ai",
"langchain"
] | ```
Traceback (most recent call last):
File "/Users/xingfanxia/projects/notion-qa/qa.py", line 25, in <module>
result = chain({"question": args.question})
File "/opt/homebrew/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/opt/homebrew/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/opt/homebrew/lib/python3.10/site-packages/langchain/chains/qa_with_sources/base.py", line 118, in _call
answer, _ = self.combine_documents_chain.combine_docs(docs, **inputs)
File "/opt/homebrew/lib/python3.10/site-packages/langchain/chains/combine_documents/map_reduce.py", line 143, in combine_docs
return self._process_results(results, docs, token_max, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/langchain/chains/combine_documents/map_reduce.py", line 173, in _process_results
num_tokens = length_func(result_docs, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 83, in prompt_length
return self.llm_chain.llm.get_num_tokens(prompt)
File "/opt/homebrew/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 331, in get_num_tokens
enc = tiktoken.encoding_for_model(self.model_name)
File "/opt/homebrew/lib/python3.10/site-packages/tiktoken/model.py", line 51, in encoding_for_model
raise KeyError(
KeyError: 'Could not automatically map gpt-3.5-turbo to a tokeniser. Please use `tiktoken.get_encoding` to explicitly get the tokeniser you expect.'
``` | Tiktoken version is too old for `gpt-3.5-turbo` | https://api.github.com/repos/langchain-ai/langchain/issues/1881/comments | 11 | 2023-03-22T09:23:39Z | 2023-10-18T05:27:58Z | https://github.com/langchain-ai/langchain/issues/1881 | 1,635,371,810 | 1,881 |
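Newer tiktoken releases map `gpt-3.5-turbo` to the `cl100k_base` encoding, so upgrading (`pip install -U tiktoken langchain`) usually resolves this KeyError. As a stopgap, the encoding can be fetched explicitly; this sketch is guarded so it degrades gracefully when tiktoken isn't installed:

```python
try:
    import tiktoken
    try:
        enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    except KeyError:
        # Older tiktoken releases don't know this model name;
        # gpt-3.5-turbo uses the cl100k_base encoding.
        enc = tiktoken.get_encoding("cl100k_base")
    token_count = len(enc.encode("hello world"))
except ImportError:
    token_count = None  # tiktoken not installed in this environment

print(token_count)
```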
[
"langchain-ai",
"langchain"
] | As of the latest release, this is not accepted.
```
/opt/homebrew/lib/python3.10/site-packages/langchain/llms/openai.py:169: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
```
And will throw error when asked | Is it possible to use `gpt-3.5-turbo` for `VectorDBQAWithSourcesChain` | https://api.github.com/repos/langchain-ai/langchain/issues/1880/comments | 4 | 2023-03-22T09:01:34Z | 2023-03-24T04:55:20Z | https://github.com/langchain-ai/langchain/issues/1880 | 1,635,332,994 | 1,880 |
[
"langchain-ai",
"langchain"
] | Love Langchain library, so obsessed with it lately!
I've been using ChatVectorDBChain, which retrieves answers from a Pinecone vectorstore, and it's been working very well.
But one thing I noticed is that the normal `ConversationChain` accepts a `memory` argument, which provides a nice user experience because it remembers the discussed entities.
Question: can we add a `memory` argument to `ChatVectorDBChain`? If it already exists, could you tell me whether the code below is the right way to use it?
Thanks again so much!!😊
```
from langchain.chains import ChatVectorDBChain
from langchain.memory import ConversationEntityMemory
chat_with_sources = ChatVectorDBChain.from_llm(
llm=llm,
chain_type="stuff",
vectorstore=vectorstore,
return_source_documents=True
#memory=ConversationEntityMemory(llm=llm, k=5)
)
``` | Entity memory + ChatVectorDB ? | https://api.github.com/repos/langchain-ai/langchain/issues/1876/comments | 6 | 2023-03-22T06:25:42Z | 2023-09-20T18:16:25Z | https://github.com/langchain-ai/langchain/issues/1876 | 1,635,138,622 | 1,876 |
[
"langchain-ai",
"langchain"
] | We should add human input as a tool. A human is AGI and can step in when the model is confused or lost, or needs some help. | Use human input as a tool | https://api.github.com/repos/langchain-ai/langchain/issues/1871/comments | 7 | 2023-03-22T00:23:26Z | 2024-07-26T22:26:55Z | https://github.com/langchain-ai/langchain/issues/1871 | 1,634,874,206 | 1,871 |
[
"langchain-ai",
"langchain"
] | If I have a context (document) to answer questions from, is there a way to send multiple questions to OpenAI in one API call using LangChain? I would like to get a list back where each item corresponds to the answer for each question.
Thanks,
Ravi | Sending multiple questions in one API call and get responses for each | https://api.github.com/repos/langchain-ai/langchain/issues/1870/comments | 2 | 2023-03-22T00:08:09Z | 2023-09-18T16:22:59Z | https://github.com/langchain-ai/langchain/issues/1870 | 1,634,864,043 | 1,870 |
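One workaround, absent built-in support: pack the questions into a single prompt and split the numbered answers back into a list. In this sketch, `fake_completion` stands in for the real LLM response, and the numbering convention is an assumption, not a guarantee about model output:

```python
import re

questions = ["What is the refund policy?", "What is the warranty period?"]

# Ask the model to answer all questions in one completion, numbered.
prompt = (
    "Using the context, answer each question. "
    "Number your answers 1., 2., ... one per line.\n"
    + "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
)

fake_completion = "1. Refunds within 30 days.\n2. Two years."

def split_numbered_answers(text: str) -> list:
    # Split on leading "N." markers at the start of each line.
    parts = re.split(r"^\s*\d+\.\s*", text, flags=re.MULTILINE)
    return [p.strip() for p in parts if p.strip()]

answers = split_numbered_answers(fake_completion)
print(answers)  # -> ['Refunds within 30 days.', 'Two years.']
```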
[
"langchain-ai",
"langchain"
] | Are you considering adding the ability to use ensembling during document search? | Use ensemble of indicies on document search | https://api.github.com/repos/langchain-ai/langchain/issues/1868/comments | 1 | 2023-03-21T22:25:49Z | 2023-08-21T16:08:19Z | https://github.com/langchain-ai/langchain/issues/1868 | 1,634,779,399 | 1,868 |
[
"langchain-ai",
"langchain"
] | The current constructor for ElasticVectorSearch does not support connecting to Elastic Cloud.
```
>>> db = ElasticVectorSearch.from_documents(docs, embeddings,
... cloud_id = CLOUD_ID,
... basic_auth=(USER_NAME, PASSWORD))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
File "c:\Users\bcollingsworth\code\langchain_testing\.venv\lib\site-packages\langchain\vectorstores\base.py", line 113, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "c:\Users\bcollingsworth\code\langchain_testing\.venv\lib\site-packages\langchain\vectorstores\elastic_vector_search.py", line 161, in from_texts
elasticsearch_url = get_from_dict_or_env(
File "c:\Users\bcollingsworth\code\langchain_testing\.venv\lib\site-packages\langchain\utils.py", line 17, in get_from_dict_or_env
raise ValueError(
ValueError: Did not find elasticsearch_url, please add an environment variable `ELASTICSEARCH_URL` which contains it, or pass `elasticsearch_url` as a named parameter.
```
It looks like the current implementation would only support an instance of Elasticsearch on localhost without security enabled, which is not recommended.
I can connect with the elasticsearch python client like this:
```
from elasticsearch import Elasticsearch
es = Elasticsearch(cloud_id=CLOUD_ID, http_auth=(USER_NAME, PASSWORD))
es.info()
```
See the documentation at [Connecting to Elastic Cloud](https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/connecting.html#connect-ec)
An alternative constructor for ElasticVectorSearch that takes an Elasticsearch client instance, an index name, and an embedding object would allow the user to deal with all possible authentication setups before using LangChain.
| ElasticVectorSearch constructor to support Elastic Cloud | https://api.github.com/repos/langchain-ai/langchain/issues/1865/comments | 6 | 2023-03-21T20:56:51Z | 2023-09-19T15:17:29Z | https://github.com/langchain-ai/langchain/issues/1865 | 1,634,676,623 | 1,865 |
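A toy sketch (hypothetical class names, not the library's API) of the alternative constructor proposed above: accept a pre-configured client so the caller settles authentication (`cloud_id`, `basic_auth`, API keys) before the vector store is built:

```python
class VectorSearchStore:
    """Sketch of a vector store that takes a ready-made search client."""
    def __init__(self, client, index_name, embedding):
        self.client = client
        self.index_name = index_name
        self.embedding = embedding

    @classmethod
    def from_client(cls, client, index_name, embedding):
        # The caller has already handled connection and auth on `client`.
        return cls(client, index_name, embedding)

class FakeElasticsearchClient:  # stand-in for elasticsearch.Elasticsearch
    pass

store = VectorSearchStore.from_client(FakeElasticsearchClient(), "docs", embedding=None)
print(store.index_name)  # -> docs
```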
[
"langchain-ai",
"langchain"
] | How can we load an xlsx file directly in LangChain, just like the CSV loader? I could not find it in the documentation | Xlsx loader | https://api.github.com/repos/langchain-ai/langchain/issues/1862/comments | 7 | 2023-03-21T17:25:01Z | 2023-12-02T16:10:07Z | https://github.com/langchain-ai/langchain/issues/1862 | 1,634,398,916 | 1,862 |
[
"langchain-ai",
"langchain"
] | Hey all, Gor from [Aim](http://github.com/aimhubio/aim) here 👋
First of all thanks for this awesome project!
After playing with LangChain, I found that Aim // LangChain integration can be extremely helpful with tracing and deep exploration of prompts... at scale... with just a few steps. ⬇️
## How can Aim trace and enhance prompts observability?
### Prompts comparison
It would be great to be able to compare different prompt templates.
Here is an example view where it is possible to explore and compare outputs of different prompts:

Querying, grouping, coloring included!
### Prompts exploration
We are also planning to integrate [ICE](https://github.com/oughtinc/ice) (or an alternative) for an upgraded experience to analyze an individual session:

---
The integration might be helpful both for prompt engineering and prompt monitoring in production.
Please consider this as an early iteration. Thoughts and feedback are highly appreciated. 🙌
I believe this will be a great addition to both of the tools and will be beneficial for the community!
Would be super happy to tell more about our roadmap and learn if there is a space for integrations and collaboration. Happy to jump on a quick call or open a shared channel on slack/discord.
<details>
<summary>
Prompts tracking
</summary>
</br>
Aim provides a generic interface to track and store strings with an additional context.
And it is super easy to integrate with any python script or tool.
It is as simple as calling:
```py
aim_run.track(input_text, name="inputs", context={'function': ..., 'kwargs': ..., 'tags': ...})
```
...or both inputs and outputs:
```py
aim_run.track([input_text, output_text], name="inputs", context={'function': ..., 'kwargs': ..., 'tags': ...})
```
...or inputs, outputs and intermediate results:
```py
aim_run.track([input_text, intermediate_res_1, intermediate_res_2, output_text], name="inputs", context={'function': ..., 'kwargs': ..., 'tags': ...})
```
...or even arbitrary py objects:
```py
aim_run.track({
"prompt": input_text,
"intermediate_res_1": Image(intermediate_res_1), # tracking images
... # tracking any other python objects
}], name="inputs", context={'function': ..., 'kwargs': ..., 'tags': ...})
```
</details>
<details>
<summary>
More details
</summary>
</br>
Aim is fully open source, it tracks and stores data at a local .aim repository.
Aim has builtin tools - explorers, which enable in-depth exploration and comparison of any type of metadata.
With the upcoming releases it is also going to enable building fully customizable dashboards. Particularly targeting prompt engineering use-cases.
Aim repo: http://github.com/aimhubio/aim
Aim docs: http://aimstack.readthedocs.io
</details>
@hwchase17
| Prompts tracing and visualization with Aim (Aim // LangChain integration) | https://api.github.com/repos/langchain-ai/langchain/issues/1861/comments | 1 | 2023-03-21T17:11:07Z | 2023-09-18T16:23:05Z | https://github.com/langchain-ai/langchain/issues/1861 | 1,634,374,448 | 1,861 |
[
"langchain-ai",
"langchain"
] | I am trying to follow the [MRKL chat example](https://langchain.readthedocs.io/en/latest/modules/agents/implementations/mrkl_chat.html) but with AzureOpenAI (`text-davinci-003`) and AzureChatOpenAI (`gpt-3.5-turbo`). However, I am running into this error:
```
ValueError: `stop` found in both the input and default params.
```
I have confirmed that AzureOpenAI and AzureChatOpenAI work independently from the agent:
<img width="992" alt="image" src="https://user-images.githubusercontent.com/404062/226631921-e1c85c76-0bdf-4415-9cae-7149eced7f0f.png">
I am using langchain version 0.0.117.
Seems related to this WIP PR: https://github.com/hwchase17/langchain/pull/1817 | ValueError when using Azure and chat-zero-shot-react-description agent | https://api.github.com/repos/langchain-ai/langchain/issues/1852/comments | 3 | 2023-03-21T14:10:21Z | 2023-09-26T16:14:15Z | https://github.com/langchain-ai/langchain/issues/1852 | 1,634,012,102 | 1,852 |
[
"langchain-ai",
"langchain"
] | The role attribute at line 616 of openai.py, under langchain's llms package, should expose an external entry point or parameter so that it can be modified.
<img width="1146" alt="image" src="https://user-images.githubusercontent.com/29686094/226600002-ba0a89fd-c65a-4d8c-92a0-73e7a44dfdb9.png">
Here are the role parameter values used when calling the model from the OpenAI website.
<img width="239" alt="1774af7245061cda5320201e6529484e" src="https://user-images.githubusercontent.com/29686094/226600059-a6ae29d8-ac78-47ad-9542-9bf60cacd018.png">
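Until such a parameter is exposed, one workaround is to build the ChatCompletion `messages` payload yourself so the system role can be set freely. The helper below is purely illustrative (it is not part of langchain); the resulting list is what would be passed to `openai.ChatCompletion.create(...)`:

```python
# Hypothetical helper (not langchain API): build an OpenAI ChatCompletion
# `messages` payload with an explicit, caller-controlled system role.
def build_messages(system_prompt: str, user_prompt: str) -> list:
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

payload = build_messages("You are a helpful assistant.", "Hello!")
```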
| I would like to provide an entry for role=system | https://api.github.com/repos/langchain-ai/langchain/issues/1848/comments | 4 | 2023-03-21T12:02:49Z | 2023-09-18T16:23:10Z | https://github.com/langchain-ai/langchain/issues/1848 | 1,633,770,590 | 1,848 |
[
"langchain-ai",
"langchain"
] | ```python
import os
import sys
os.environ["OPENAI_API_KEY"] = "..."
from langchain.agents import Tool, load_tools
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.agents import initialize_agent
tool_names = []
tool_names.append("python_repl")
llm=OpenAI(temperature=0.1,model_name="gpt-3.5-turbo")
tools = load_tools(tool_names=tool_names, llm=llm)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = initialize_agent(tools, llm, agent="conversational-react-description", verbose=True, memory=memory)
default_arg = "Hi, how may I help you?"
if len(sys.argv) < 2:
    arg = default_arg
else:
    arg = sys.argv[1]

while True:
    agent_chain.run(input=arg)
    arg = input("\nHi, how may I help you?\n")
```
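A likely explanation (my assumption, based on how the Python REPL tool worked around this version): the tool `exec`s the command and returns only what was written to stdout, so a bare expression like `5 * 6` produces an empty observation. A simplified reproduction of that behavior, not langchain's exact code:

```python
import sys
from io import StringIO

def repl_run(command: str) -> str:
    """Simplified stand-in for the Python REPL tool: exec and capture stdout."""
    old_stdout = sys.stdout
    sys.stdout = buffer = StringIO()
    try:
        exec(command, {})
    finally:
        sys.stdout = old_stdout
    return buffer.getvalue()

print(repr(repl_run("5 * 6")))         # '' -> empty observation
print(repr(repl_run("print(5 * 6)")))  # '30\n' -> non-empty observation
```

So phrasing the task so the agent prints the result (e.g. "compute 5 * 6 using python and print the result") may avoid the empty observation.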
I run
```shell
python3 main_21_03.py "compute 5 * 6 using python"
```
And I get a loop where the agent is running the right command, but its observation is empty:
```shell
compute 5 * 6 using Python
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Python REPL
Action Input: 5 * 6
Observation:
Thought:Do I need to use a tool? Yes
Action: Python REPL
Action Input: 5 * 6
Observation:
[etc...]
``` | Empty observation when using python_repl | https://api.github.com/repos/langchain-ai/langchain/issues/1846/comments | 2 | 2023-03-21T09:21:58Z | 2023-10-30T16:08:08Z | https://github.com/langchain-ai/langchain/issues/1846 | 1,633,506,456 | 1,846 |
[
"langchain-ai",
"langchain"
] |
```
agent = create_csv_agent(llm, 'titanic.csv', verbose=True)
json_agent_executor = create_json_agent(llm, toolkit=json_toolkit, verbose=True)
openapi_agent_executor = create_openapi_agent(
llm=OpenAI(temperature=0),
toolkit=openapi_toolkit,
verbose=True
)
agent_executor = create_sql_agent(
llm=OpenAI(temperature=0),
toolkit=toolkit,
verbose=True
)
agent_executor = create_vectorstore_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
```
```
agent_executor = create_python_agent(llm, tool=PythonREPLTool(),verbose=True)
```
If we make the second parameter a toolkit, then the Python agent will be consistent with the other agents.
Was there any extra consideration for this API during its design? | make the `create_python_agent` API more consistency with other same API | https://api.github.com/repos/langchain-ai/langchain/issues/1845/comments | 2 | 2023-03-21T06:44:54Z | 2023-09-10T16:41:13Z | https://github.com/langchain-ai/langchain/issues/1845 | 1,633,317,706 | 1,845 |
[
"langchain-ai",
"langchain"
] | Using `VectorDBQAWithSourcesChain` with arun, facing below issue
`ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources'].` | Facing issue when using arun with VectorDBQAWithSourcesChain chain | https://api.github.com/repos/langchain-ai/langchain/issues/1844/comments | 19 | 2023-03-21T05:59:09Z | 2023-10-19T16:09:34Z | https://github.com/langchain-ai/langchain/issues/1844 | 1,633,279,135 | 1,844 |
[
"langchain-ai",
"langchain"
] | Hey there, just asking on what the progress is on the "Custom Agent Class" for langchain? 🙂
https://langchain.readthedocs.io/en/latest/modules/agents/examples/custom_agent.html#custom-agent-class | Progress on Custom Agent Class? | https://api.github.com/repos/langchain-ai/langchain/issues/1840/comments | 6 | 2023-03-21T02:06:22Z | 2023-09-28T16:10:58Z | https://github.com/langchain-ai/langchain/issues/1840 | 1,633,115,886 | 1,840 |
[
"langchain-ai",
"langchain"
] | I have 3 pdf files in my directory and I "documentized", added metadata, split, embed and store them in pinecone, like this:
```
loader = DirectoryLoader('data/dir', glob="**/*.pdf", loader_cls=UnstructuredPDFLoader)
data = loader.load()
#I added company names explicitly for now
data[0].metadata["company"]="Apple"
data[1].metadata["company"]="Microsoft"
data[2].metadata["company"]="Tesla"
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=200)
texts = text_splitter.split_documents(data)
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
pinecone.init(
    api_key=PINECONE_API_KEY,
    environment=PINECONE_API_ENV
)
metadatas = []
for text in texts:
    metadatas.append({
        "company": text.metadata["company"]
    })
Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name, metadatas=metadatas)
```
I want to build a Q&A system, so that I will mention a company name in my query and Pinecone should look for the documents having company `A` in the metadata. Here is what I have:
```
pinecone.init(
    api_key=PINECONE_API_KEY,
    environment=PINECONE_API_ENV
)
index_name = "index"
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
docsearch = Pinecone.from_existing_index(index_name=index_name, embedding=embeddings)
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="stuff")
query = "What is the total revenue of Apple?"
docs = docsearch.similarity_search(query, include_metadata=True)
res = chain.run(input_documents=docs, question=query)
print(res)
```
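Two possible ways to restrict results by metadata (both are assumptions about the Pinecone wrapper at the time, so verify the signatures in your version): pass a metadata filter directly, e.g. `docsearch.similarity_search(query, filter={"company": "Apple"})`, or post-filter the returned documents yourself:

```python
from types import SimpleNamespace

def filter_by_company(docs, company):
    """Keep only documents whose metadata 'company' field matches."""
    return [d for d in docs if getattr(d, "metadata", {}).get("company") == company]

# Demo with stand-in documents; real code would pass the Documents
# returned by docsearch.similarity_search(query, ...).
docs = [
    SimpleNamespace(metadata={"company": "Apple"}),
    SimpleNamespace(metadata={"company": "Tesla"}),
]
apple_docs = filter_by_company(docs, "Apple")
```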
However, there are still document chunks from non-Apple documents in the output of `docs`. What am I doing wrong here and how do I utilize the information in metadata both on doc_search and chat-gpt query (If possible)? Thanks | How metadata is being used during similarity search and query? | https://api.github.com/repos/langchain-ai/langchain/issues/1838/comments | 10 | 2023-03-21T01:32:20Z | 2024-03-27T12:24:17Z | https://github.com/langchain-ai/langchain/issues/1838 | 1,633,096,854 | 1,838 |
[
"langchain-ai",
"langchain"
] | 1. Cannot initialize match chain with ChatOpenAI LLM
llm_math = LLMMathChain(llm=ChatOpenAI(temperature=0))
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[33], line 1
----> 1 llm_math = LLMMathChain(llm=ChatOpenAI(temperature=0))
File ~/anaconda3/envs/gpt_index/lib/python3.8/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LLMMathChain
llm
Can't instantiate abstract class BaseLLM with abstract methods _agenerate, _generate, _llm_type (type=type_error)
2. Works ok with OpenAI LLM
llm_math = LLMMathChain(llm=OpenAI(temperature=0))
| LLMMathChain to allow ChatOpenAI as an llm | https://api.github.com/repos/langchain-ai/langchain/issues/1834/comments | 10 | 2023-03-20T23:12:24Z | 2023-04-29T21:57:59Z | https://github.com/langchain-ai/langchain/issues/1834 | 1,633,003,060 | 1,834 |
[
"langchain-ai",
"langchain"
] | Hi! I tried implementing the docs from [here](https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html) but am running into this issue – is it due to openAI's API being down?
```
> Entering new AgentExecutor chain...
I need to find the email, summarize it, and send it to slack.
Action: Gmail: Find Email
Action Input: Find the last email I received regarding Silicon Valley BankTraceback (most recent call last):
File "/Users/joship/Desktop/gptops/ops_gpt.py", line 21, in <module>
agent.run("Summarize the last email I received regarding Silicon Valley Bank. Send the summary to slack.")
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 505, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 423, in _take_next_step
observation = tool.run(
^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/tools/base.py", line 71, in run
raise e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/tools/base.py", line 68, in run
observation = self._run(tool_input)
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/tools/zapier/tool.py", line 121, in _run
return self.api_wrapper.run_as_str(self.action_id, instructions, self.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/utilities/zapier.py", line 141, in run_as_str
data = self.run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/utilities/zapier.py", line 121, in run
response.raise_for_status()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://nla.zapier.com/api/v1/exposed/01GW01TC6DKAN899S3SGKN2MQ3/execute/?
``` | Implementing Zapier example | https://api.github.com/repos/langchain-ai/langchain/issues/1832/comments | 1 | 2023-03-20T22:29:41Z | 2023-09-10T16:41:19Z | https://github.com/langchain-ai/langchain/issues/1832 | 1,632,960,593 | 1,832 |
[
"langchain-ai",
"langchain"
] | I tried following [these docs](https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html) to import Zapier integration
```
from langchain.tools.zapier.tool import ZapierNLARunAction
from langchain.utilities.zapier import ZapierNLAWrapper
```
But I'm getting these errors:
```
Traceback (most recent call last):
  File "/Users/joship/Desktop/gptops/ops_gpt.py", line 15, in <module>
    from langchain.utilities import Zapier
ImportError: cannot import name 'Zapier' from 'langchain.utilities' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/utilities/__init__.py)
```
This is after running `pip3 install 'langchain[all]'`; is there something I'm missing when it comes to nested modules? | Importing Zapier | https://api.github.com/repos/langchain-ai/langchain/issues/1831/comments | 1 | 2023-03-20T21:50:36Z | 2023-03-20T22:28:18Z | https://github.com/langchain-ai/langchain/issues/1831 | 1,632,916,808 | 1,831 |
[
"langchain-ai",
"langchain"
] | **Issue:** When trying to read data from some URLs, I get a 403 error during load. I assume this is due to the web-server not allowing all user agents.
**Expected behavior:** It would be great if I could specify a user agent (e.g. standard browsers like Mozilla, maybe also Google bots) for making the URL requests.
**My code**
```
from langchain.document_loaders import UnstructuredURLLoader
urls = ["https://dsgvo-gesetz.de/art-1"]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
```
**Error message**
```
ValueError Traceback (most recent call last)
Cell In[62], line 1
----> 1 data = loader.load()
File /opt/conda/lib/python3.10/site-packages/langchain/document_loaders/url.py:28, in UnstructuredURLLoader.load(self)
26 docs: List[Document] = list()
27 for url in self.urls:
---> 28 elements = partition_html(url=url)
29 text = "\n\n".join([str(el) for el in elements])
30 metadata = {"source": url}
File /opt/conda/lib/python3.10/site-packages/unstructured/partition/html.py:72, in partition_html(filename, file, text, url, encoding, include_page_breaks, include_metadata, parser)
70 response = requests.get(url)
71 if not response.ok:
---> 72 raise ValueError(f"URL return an error: {response.status_code}")
74 content_type = response.headers.get("Content-Type", "")
75 if not content_type.startswith("text/html"):
ValueError: URL return an error: 403
```
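Until a user-agent option exists, one possible workaround (a sketch; the header values are illustrative) is to fetch the HTML yourself with a browser-like User-Agent and hand the text to `partition_html(text=...)` instead of `url=...`:

```python
from urllib.request import Request, urlopen

# Illustrative browser-like headers; any mainstream UA string should do.
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0 Safari/537.36"
    )
}

def fetch_html(url: str) -> str:
    """Fetch a page with a custom User-Agent.

    The returned HTML string can then be passed to
    unstructured's partition_html(text=...).
    """
    request = Request(url, headers=BROWSER_HEADERS)
    with urlopen(request, timeout=30) as response:
        return response.read().decode("utf-8", errors="replace")
```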
**for reference: URL that works without the 403 error**
```https://www.heise.de/newsticker/``` | UnstructuredURLLoader Error 403 | https://api.github.com/repos/langchain-ai/langchain/issues/1829/comments | 9 | 2023-03-20T21:26:40Z | 2023-06-19T00:47:02Z | https://github.com/langchain-ai/langchain/issues/1829 | 1,632,888,908 | 1,829 |
[
"langchain-ai",
"langchain"
] | The current documentation https://langchain.readthedocs.io/en/latest/modules/agents/getting_started.html seems to not be up to date with version 0.0.117:
```shell
UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
```
EDIT: to be more precise, this only happens if I try to change model_name to "gpt-3.5-turbo" | Documentation not up to date | https://api.github.com/repos/langchain-ai/langchain/issues/1827/comments | 12 | 2023-03-20T18:10:51Z | 2024-02-16T16:10:11Z | https://github.com/langchain-ai/langchain/issues/1827 | 1,632,628,572 | 1,827 |
[
"langchain-ai",
"langchain"
] |
I am experiencing an issue with Chroma:
`Chroma.from_texts(texts=chunks, embedding=embeddings, persist_directory=config.PERSIST_DIR, metadatas=None)`
opt/anaconda3/lib/python3.8/site-packages/langchain/vectorstores/chroma.py", line 27, in <listcomp>
(Document(page_content=result[0], metadata=result[1]), result[2])
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Document
metadata
none is not an allowed value (type=type_error.none.not_allowed)
Thanks for all the help! | Validation error- Metadata should not be empty or None | https://api.github.com/repos/langchain-ai/langchain/issues/1825/comments | 2 | 2023-03-20T17:50:04Z | 2023-09-10T16:41:24Z | https://github.com/langchain-ai/langchain/issues/1825 | 1,632,592,766 | 1,825 |
[
"langchain-ai",
"langchain"
] | If I'm reading correctly, this is the function to add_texts to Chroma
```
    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts (Iterable[str]): Texts to add to the vectorstore.
            metadatas (Optional[List[dict]], optional): Optional list of metadatas.
            ids (Optional[List[str]], optional): Optional list of IDs.

        Returns:
            List[str]: List of IDs of the added texts.
        """
        # TODO: Handle the case where the user doesn't provide ids on the Collection
        if ids is None:
            ids = [str(uuid.uuid1()) for _ in texts]
        embeddings = None
        if self._embedding_function is not None:
            embeddings = self._embedding_function.embed_documents(list(texts))
        self._collection.add(
            metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids
        )
        return ids
```
It does not seem to check if the texts are already inside the database. This means it's very easy to duplicate work when running indexing jobs incrementally. What's more, the Chroma class from langchain.vectorstores does not seem to expose functions to see if some text is already inside the vector store.
What's the preferred way of dealing with this? I can of course set up a separate db that keeps track of hashes of text inside the Chromadb, but this seems unnecessarily clunky and something that you'd expect the db to do for you.
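One workaround sketch until the store handles this itself: derive deterministic IDs from the text content, so the same chunk always maps to the same ID and already-ingested chunks can be detected (or skipped) before re-embedding. Whether duplicate IDs are rejected, deduplicated, or upserted depends on the Chroma version, so treat this as an illustration:

```python
import hashlib

def deterministic_id(text: str) -> str:
    """Same text -> same ID, so duplicates are detectable before re-embedding."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

texts = ["chunk one", "chunk two", "chunk one"]
ids = [deterministic_id(t) for t in texts]
# The repeated chunk maps to a repeated ID; before calling
# vectorstore.add_texts(texts, ids=ids) you could check
# collection.get(ids=...) and skip chunks that already exist.
```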
| Avoiding recomputation of embeddings with Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/1824/comments | 9 | 2023-03-20T17:48:30Z | 2023-09-28T16:11:03Z | https://github.com/langchain-ai/langchain/issues/1824 | 1,632,590,839 | 1,824 |
[
"langchain-ai",
"langchain"
] | Hi,
I would like to contribute to LangChain, need to know if our feature is relevant as part of LangChain.
Would like check with you in private. How we can share our idea?
Moshe | New feature: Contribution | https://api.github.com/repos/langchain-ai/langchain/issues/1823/comments | 1 | 2023-03-20T17:03:16Z | 2023-09-10T16:41:29Z | https://github.com/langchain-ai/langchain/issues/1823 | 1,632,522,870 | 1,823 |
[
"langchain-ai",
"langchain"
] | Creating and using AzureChatOpenAI directly works fine, but crashing through ChatVectorDBChain with "ValueError: Should always be something for OpenAI."
Example:
```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import ChatVectorDBChain
from langchain.document_loaders import PagedPDFSplitter
from langchain.chat_models import AzureChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate
from langchain.schema import AIMessage, HumanMessage, SystemMessage
system_template="""Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}"""
messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{question}")
]
prompt = ChatPromptTemplate.from_messages(messages)
loader = PagedPDFSplitter("myfile.pdf")
text_splitter = CharacterTextSplitter(chunk_size=700, chunk_overlap=50)
documents = loader.load_and_split(text_splitter)
embeddings = OpenAIEmbeddings(chunk_size=1)
vectorstore = FAISS.from_documents(documents, embeddings)
qa = ChatVectorDBChain.from_llm(AzureChatOpenAI(temperature=0, deployment_name='gpt-35-turbo'), vectorstore, qa_prompt=prompt, top_k_docs_for_context=2)
## All good to here, but following line crashes:
result = qa({"question": "what is section 2 about?", "chat_history": []})
```
Crashes with:
ValueError: Should always be something for OpenAI.
This works fine on ChatOpenAI but not AzureChatOpenAI. | Creating and using AzureChatOpenAI directly works fine, but crashing through ChatVectorDBChain with "ValueError: Should always be something for OpenAI." | https://api.github.com/repos/langchain-ai/langchain/issues/1822/comments | 2 | 2023-03-20T16:27:16Z | 2023-03-20T16:47:11Z | https://github.com/langchain-ai/langchain/issues/1822 | 1,632,466,855 | 1,822 |
[
"langchain-ai",
"langchain"
] | Hey team,
i have created an API in the below format
```py
@app.post("/get_answer")
async def get_answer(user_query):
    return vector_db.qa(user_query)
```
And I call this function for 5 user queries, but it is only able to process the first query; the other queries hit issues from OpenAI. Is calling the OpenAI API in parallel not allowed at all?
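Parallel calls are allowed; what usually serializes them is running a blocking function inside an async endpoint, which blocks the event loop. A sketch of one fix (assumptions: Python 3.9+ for `asyncio.to_thread`; the stand-in `qa_fn` plays the role of the blocking `vector_db.qa`):

```python
import asyncio

async def get_answer_async(qa_fn, user_query: str):
    # Run the blocking qa function in a worker thread so the event loop
    # can service other requests concurrently.
    return await asyncio.to_thread(qa_fn, user_query)

async def get_answers(qa_fn, queries):
    # Fan out all queries at once; gather preserves input order.
    return await asyncio.gather(*(get_answer_async(qa_fn, q) for q in queries))

# Demo with a stand-in qa function:
results = asyncio.run(get_answers(lambda q: q.upper(), ["a", "b", "c"]))
```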
| async call for question answering | https://api.github.com/repos/langchain-ai/langchain/issues/1816/comments | 1 | 2023-03-20T13:59:50Z | 2023-09-10T16:41:33Z | https://github.com/langchain-ai/langchain/issues/1816 | 1,632,175,081 | 1,816 |
[
"langchain-ai",
"langchain"
] | During tracing of ChatVectorDBChain, source_documents is shown correctly, but when I click Explore, source_documents suddenly becomes [object Object],[object Object].
Here is an example:

and then after clicking on explore button:

| During the tracing of ChatVectorDBChain, even though it shows source_documents but when clicked on Explore, source_documents suddenly becomes [object Object],[object Object] | https://api.github.com/repos/langchain-ai/langchain/issues/1815/comments | 3 | 2023-03-20T10:58:37Z | 2023-09-10T16:41:40Z | https://github.com/langchain-ai/langchain/issues/1815 | 1,631,865,900 | 1,815 |
[
"langchain-ai",
"langchain"
] | OpenAI has been fairly unstable keeping up with load after back-to-back releases, and it tends to fail on some requests.
If we're embedding a big document in chunks, the process tends to fail at some point, and Pinecone.from_texts() has no exception handling, so when one chunk fails the rest of the document is wasted.
<img width="641" alt="image" src="https://user-images.githubusercontent.com/64721638/226257394-76ba84f7-2801-49e9-9261-b4a4550a4660.png">
- we should use exponential retry or better exception handling so that the whole document does not fail.
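A minimal retry wrapper along those lines (a sketch, not langchain code; real code might split texts into batches and wrap each batch so one failure does not lose the whole document):

```python
import time

def with_retries(fn, *args, max_attempts=5, base_delay=1.0, **kwargs):
    """Call fn, retrying with exponential backoff on any exception."""
    for attempt in range(max_attempts):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Demo with a stub that fails twice and then succeeds:
calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_retries(flaky, base_delay=0)
```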
please let me know if I can work on the issue and submit a PR | Need better exception Handling at Pinecone.from_texts() | https://api.github.com/repos/langchain-ai/langchain/issues/1811/comments | 1 | 2023-03-20T05:48:55Z | 2023-09-10T16:41:44Z | https://github.com/langchain-ai/langchain/issues/1811 | 1,631,430,634 | 1,811 |
[
"langchain-ai",
"langchain"
] | Error Message:

Usage:
I am currently using **AzureChatOpenAI** with the below parameters

But it is not taking the value from parameter and throwing error. | AzureChatOpenAI failed to accept openai_api_base, throwing error | https://api.github.com/repos/langchain-ai/langchain/issues/1810/comments | 1 | 2023-03-20T05:44:48Z | 2023-09-10T16:41:49Z | https://github.com/langchain-ai/langchain/issues/1810 | 1,631,427,537 | 1,810 |
[
"langchain-ai",
"langchain"
] | While trying to rebuild chat_pdf based on mayo's example,
I noticed that the Pinecone vector store doesn't return similar docs when performing a similarity_search.
I tested it against other vector providers like FAISS and Chroma; in both cases everything works.
Here is the [code link](https://github.com/butzhang/simple_chat_pdf/blob/main/simple_chat_pdf/components/question_handler.py#L28-L30)
Steps to reproduce.
```py
from simple_chat_pdf.components.question_handler import QuestionHandler

question = 'what is this legal case about'
r = QuestionHandler().get_answer(question=question, chat_history=[])
```
You can switch between different vector providers and see that only the Pinecone provider fails to find similar docs.
You will need to switch to your own OpenAI API key because mine might expire.
| issue with pinecone similarity_search function | https://api.github.com/repos/langchain-ai/langchain/issues/1809/comments | 2 | 2023-03-20T05:13:20Z | 2023-10-27T16:09:24Z | https://github.com/langchain-ai/langchain/issues/1809 | 1,631,399,350 | 1,809 |
[
"langchain-ai",
"langchain"
] | Lots of customers are asking if langchain has a document loader for Azure Blob Storage, like the ones for AWS S3 or GCS. As you know, Microsoft is a big partner of OpenAI, so there is a real need for a native document loader for Azure Blob Storage. We would be very happy to see this feature ASAP. | Document loader for Azure Blob storage | https://api.github.com/repos/langchain-ai/langchain/issues/1805/comments | 3 | 2023-03-20T02:39:16Z | 2023-03-27T15:17:18Z | https://github.com/langchain-ai/langchain/issues/1805 | 1,631,276,872 | 1,805 |
[
"langchain-ai",
"langchain"
] | `poetry install -E all` fails with Poetry >=1.4.0 due to upstream incompatibility between `poetry>=1.4.0` and `pydata_sphinx_theme`.
This is a tracking issue. I've already created an issue upstream here: https://github.com/pydata/pydata-sphinx-theme/issues/1253 | Poetry 1.4.0 installation fails | https://api.github.com/repos/langchain-ai/langchain/issues/1801/comments | 2 | 2023-03-19T23:42:55Z | 2023-09-12T21:30:13Z | https://github.com/langchain-ai/langchain/issues/1801 | 1,631,163,256 | 1,801 |
[
"langchain-ai",
"langchain"
] | Hi, does anyone know how to override the prompt template of ConversationChain? I am creating a custom prompt template that takes in an additional input variable
```
PROMPT_TEMPLATE = """ {my_info}
{history}
Human: {input}
AI:"""
PROMPT = PromptTemplate(
input_variables=["history", "input", "my_info"], template=PROMPT_TEMPLATE
)
conversation_chain = ConversationChain(
prompt=PROMPT,
llm=OpenAI(temperature=0.7),
verbose=True,
memory=ConversationBufferMemory()
)
```
but got the following error:
```
Got unexpected prompt input variables. The prompt expects ['history', 'input', 'my_info'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
```
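One workaround (an assumption about the API at the time: `PromptTemplate` supports `partial_variables`) is to bake `my_info` into the prompt ahead of time, e.g. `PromptTemplate(input_variables=["history", "input"], template=PROMPT_TEMPLATE, partial_variables={"my_info": my_info})`, so the chain only sees the two variables it validates against. A langchain-free sketch of the idea:

```python
TEMPLATE = "{my_info}\n{history}\nHuman: {input}\nAI:"

def make_prompt_template(my_info: str) -> str:
    """Pre-fill {my_info} so only {history} and {input} remain for the chain."""
    return TEMPLATE.replace("{my_info}", my_info)

prompt_text = make_prompt_template("The user's name is Ada.")
# prompt_text now only needs 'history' and 'input' at run time.
```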
Is my understanding correct that currently ConversationChain can only support prompt template that takes in "history" and "input" as the input variables? | Error when overriding default prompt template of ConversationChain | https://api.github.com/repos/langchain-ai/langchain/issues/1800/comments | 27 | 2023-03-19T23:33:20Z | 2024-02-08T06:43:47Z | https://github.com/langchain-ai/langchain/issues/1800 | 1,631,160,642 | 1,800 |
[
"langchain-ai",
"langchain"
] | `ChatVectorDBChain` with `ChatOpenAI` fails with `Should always be something for OpenAI.`
It appears `_combine_llm_outputs`, added to `ChatOpenAI` in v0.0.116, is receiving an `llm_outputs` list containing a `None` value. I think this is related to streaming responses from the OpenAI API.
see https://github.com/hwchase17/langchain/pull/1785 | v0.0.116 ChatVectorDBChain with ChatOpenAI fails with `Should always be something for OpenAI.` | https://api.github.com/repos/langchain-ai/langchain/issues/1799/comments | 2 | 2023-03-19T23:18:24Z | 2023-03-20T14:51:20Z | https://github.com/langchain-ai/langchain/issues/1799 | 1,631,156,471 | 1,799 |
[
"langchain-ai",
"langchain"
] | @hwchase17 Thank you for leading work on this repo! It's very clear you've put a lot of love into this project. 🤗 ❤️
My coworker, @3coins, with whom I work daily, is a regular contributor here.
I wanted to offer you a design proposal that I think would be a great addition to LangChain. Here goes:
# Motivation
LLM providers lack a unified interface for common use-cases:
- **Specifying a model ID to a model provider**
- Users lack a unified interface for specifying the model they wish to use when instantiating a model provider. `openai` and `openai-chat` both expect the model ID in `model_name`, but `ai21` and `anthropic` expect `model`.
- **Discovering supported models**
- Users don't have any way of determining valid model IDs for a given provider. For example, a user trying to invoke ChatGPT via `OpenAI(model_name="chatgpt")` will be quite confused, and there's no clear way for a user to look up what the allowed values are.
- The only solution for this now is to look up upstream API reference documentation on what models are allowed, which is very tedious.
- **Determining prerequisites**
- Users don't have any way of determining what packages they need without looking at the source code or encountering a runtime `ImportError`, even if they have knowledge of what providers they wish to use. There's no attribute/property on any of the model providers that lists what packages must be installed before using them.
- **Determining necessary authentication**
- Similar to the use-case above, users don't have any way of knowing in advance what authentication they should supply and how to supply them before using a provider.
We have run into these pain points while [integrating LangChain with Jupyter AI](https://github.com/jupyterlab/jupyter-ai/pull/18). This proposal, if accepted, would allow applications to build on top of LangChain LLM providers much more easily.
# Proposal
I propose:
1. We formalize the notion of model IDs and provider IDs, and enforce this naming convention broadly throughout the codebase and documentation.
2. We implement a unified interface for provider interaction by expanding the definition of `BaseLanguageModel`:
```python
class EnvAuthStrategy(BaseModel):
type: Literal["env"] = 'env'
name: str
# for sagemaker_endpoint expecting creds at ~/.aws/credentials
class FileAuthStrategy(BaseModel):
type: Literal["file"] = 'file'
path: str
AuthStrategy = Union[EnvAuthStrategy, FileAuthStrategy]
class BaseLanguageModel(BaseModel, ABC):
# provider ID. this is currently bound to _llm_type()
id: str
# ID of the model to invoke by this provider
model_id: str
# List of supported models
# For registry providers, this will just be ["*"]
# See rest of the issue for explanation
models: List[str]
# List of package requirements
package_reqs: List[str]
# Authn/authz strategy
auth_strategy: AuthStrategy
...
```
Subclasses should override these fields and [make all of them constant](https://docs.pydantic.dev/usage/schema/#field-customization) with the exception of `model_id`. For example, the `cohere` provider might look something like this:
```python
class CohereAuthStrategy(EnvAuthStrategy):
name = "COHERE_API_KEY"
class Cohere(LLM, BaseModel):
id = Field("cohere", const=True)
# "medium" is just the default, still changeable at runtime
model_id = "medium"
# Cohere model provider supports any model available via
# `cohere.Client#generate()`.
# Reference: https://docs.cohere.ai/reference/generate
models: Field(["medium", "xlarge"], const=True)
package_reqs: Field(["cohere"], const=True)
auth_strategy: Field(CohereAuthStrategy, const=True)
```
This strategy also handles registry providers (see "Terminology" section below) nicely. **A model ID can be defined as any identifier for an instance of a model**. This is best illustrated with a few examples:
- `huggingface_hub`: the HF repo ID, e.g. `gpt2` or `google/flan-t5-xxl`
- `huggingface_endpoint`: the URL to the HF endpoint, e.g. `https://foo.us-west-2.aws.endpoints.huggingface.cloud/bar`
- `sagemaker_endpoint`: the endpoint name. Your region and authentication credentials should already be specified via boto3.
While the syntax of model IDs is indeed very different for registry providers, if you think about it, it still functions exactly as a model ID: it identifies one and only one model hosted elsewhere.
Because registry providers have a dynamic, unknown, and very large set of valid model IDs, they declare their supported models using a wildcard like so:
```
models: Field(["*"], const=True)
```
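To illustrate the payoff, here is a hypothetical consumer of the proposed fields (names follow the proposal above; the stand-in provider object is just for demonstration):

```python
import importlib.util
import os
from types import SimpleNamespace

def check_ready(provider):
    """Return human-readable problems blocking use of a provider,
    based on the proposed package_reqs / auth_strategy fields."""
    problems = []
    for package in provider.package_reqs:
        if importlib.util.find_spec(package) is None:
            problems.append(f"missing package: {package}")
    auth = provider.auth_strategy
    if getattr(auth, "type", None) == "env" and not os.environ.get(auth.name):
        problems.append(f"missing env var: {auth.name}")
    return problems

fake_provider = SimpleNamespace(
    package_reqs=["some_nonexistent_pkg_xyz"],
    auth_strategy=SimpleNamespace(type="env", name="FAKE_PROVIDER_API_KEY"),
)
problems = check_ready(fake_provider)  # reports at least the missing package
```

A UI (such as Jupyter AI) could surface these problems to the user before any runtime `ImportError` or authentication failure occurs.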
# Next steps
Let's first get consensus on this proposal and assign the work later. I'd be happy to kick off the PR and define the types, but I'll likely need to loop in other contributors to assist due to the scope of this proposal. There will also need to be some work done on the documentation as well.
# Appendix
## Terminology
- **Model provider / LLM provider**: A class that can provide one or more models. In LangChain, these currently inherit `BaseLanguageModel`.
- **Registry provider**: A special subset of model providers that have a dynamic number of provided models.
- These include but are not limited to: `huggingface_hub`, `huggingface_endpoint`, `sagemaker_endpoint`
- These providers are unique in the sense that they do not have a static number of models they support. New models are being uploaded to HuggingFace Hub daily.
- We call these providers "registry providers" (since they mimic the behavior of a package registry, like NPM/PyPi)
- **Model**: A model provided by a model provider (forgive the circular working definition, models are quite difficult to define strictly). For example, `text-davinci-003` is a model provided by the `openai` model provider.
- **Model provider ID**: a string identifier for a model provider. Currently this is retrieved from the `_llm_type` property.
- **Model ID**: a string identifier for a model.
| [Design Proposal] Standardized interface for LLM provider usage | https://api.github.com/repos/langchain-ai/langchain/issues/1797/comments | 3 | 2023-03-19T22:28:09Z | 2023-09-10T16:41:54Z | https://github.com/langchain-ai/langchain/issues/1797 | 1,631,141,587 | 1,797 |
[
"langchain-ai",
"langchain"
] | The current design leaves only two options for initializing `openai-python`:
1. `OPENAI_API_KEY` environment variable
2. pass in `openai_api_key`
LangChain works just fine with the other `openai-python` settings like `openai.api_type`.
Can we make it so it uses `openai.api_key` if it is not `None`?
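A sketch of the precedence I have in mind (a hypothetical helper, not actual LangChain code; `module_level_key` would be `openai.api_key` at validation time):

```python
import os

def resolve_openai_api_key(explicit_key=None, module_level_key=None):
    """Proposed lookup order: explicit argument, then openai.api_key,
    then the OPENAI_API_KEY environment variable."""
    return explicit_key or module_level_key or os.environ.get("OPENAI_API_KEY")
```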
Happy to try a PR but wanted to understand design choice first.
https://github.com/hwchase17/langchain/blob/master/langchain/llms/openai.py#LL202C9-L202C29 | OpenAI base model initialization ignores currently set openi.api_key | https://api.github.com/repos/langchain-ai/langchain/issues/1796/comments | 1 | 2023-03-19T21:45:28Z | 2023-08-24T16:14:16Z | https://github.com/langchain-ai/langchain/issues/1796 | 1,631,129,215 | 1,796 |
[
"langchain-ai",
"langchain"
] | I'm using the Q&A pipeline on a non-English language:
```
pinecone.init(
api_key=PINECONE_API_KEY, # find at app.pinecone.io
environment=PINECONE_API_ENV # next to api key in console
)
index_name = "langchain2"
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
docsearch = Pinecone.from_existing_index(index_name=index_name, embedding=embeddings)
llm = OpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="stuff")
query = "QUESTION"
docs = docsearch.similarity_search(query, include_metadata=True)
res = chain.run(input_documents=docs, question=query)
print(res)
```
It gets stuck on the `res = chain.run(input_documents=docs, question=query)` line. I've been waiting for ~20 mins already. What's the reason for that, and how can I investigate?
---------------
**UPD**
After ~30 mins I got
```
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).
```
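For what it's worth, the `Retrying` line above suggests a backoff wrapper of roughly this shape is kicking in (a generic sketch, not the actual LangChain implementation):

```python
import time

def with_retries(call, attempts=3, base_delay=2.0):
    """Retry a flaky network call with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```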
What's the reason for that and how can I fix it? | chain.run threw 403 error | https://api.github.com/repos/langchain-ai/langchain/issues/1795/comments | 6 | 2023-03-19T21:35:50Z | 2023-03-22T20:39:40Z | https://github.com/langchain-ai/langchain/issues/1795 | 1,631,126,760 | 1,795 |
[
"langchain-ai",
"langchain"
] | Within the Chroma DB, similarity_search has a default "k" of 4.
However, if there are fewer than 4 results available for the query, it will crash instead of returning whatever is available.
```similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.docstore.document.Document][[source]](https://langchain.readthedocs.io/en/latest/_modules/langchain/vectorstores/chroma.html#Chroma.similarity_search)
Run similarity search with Chroma.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text.
Return type
List[Document]
```
```
raise NotEnoughElementsException(
chromadb.errors.NotEnoughElementsException: Number of requested results 4 cannot be greater than number of elements in index 1
```
I believe a check is needed to return whatever is available whenever the results are < k.
Edit: You can set `k=1` for the search, but then you are artificially limiting yourself, without knowing what is available.
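A slightly better workaround than hard-coding `k=1` is to clamp `k` to however many items the collection currently holds before calling `similarity_search` (a sketch; obtaining the element count from chromadb is assumed):

```python
def safe_k(requested_k: int, num_elements: int) -> int:
    """Clamp the requested result count to what the index can provide."""
    return max(1, min(requested_k, num_elements))

# e.g. docs = db.similarity_search(query, k=safe_k(4, collection_count))
```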
It really should check whether the number of available results is less than k, and if so, return whatever is available instead of crashing. | [BUG] Chroma DB - similarity_search - chromadb.errors.NotEnoughElementsException | https://api.github.com/repos/langchain-ai/langchain/issues/1793/comments | 16 | 2023-03-19T21:11:37Z | 2023-09-22T16:05:46Z | https://github.com/langchain-ai/langchain/issues/1793 | 1,631,119,009 | 1793
[
"langchain-ai",
"langchain"
] | How to query from an existing index?
I filled up an index in Pinecone using:
```
docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)
```
Now, I'm creating a separate `.py` file, where I need to query the existing index. As I understand it, I need to use the following function:
```
@classmethod
def from_existing_index(
cls,
index_name: str,
embedding: Embeddings,
text_key: str = "text",
namespace: Optional[str] = None,
) -> Pinecone:
"""Load pinecone vectorstore from index name."""
try:
import pinecone
except ImportError:
raise ValueError(
"Could not import pinecone python package. "
"Please install it with `pip install pinecone-client`."
)
return cls(
pinecone.Index(index_name), embedding.embed_query, text_key, namespace
)
```
but what are the arguments there? I only know my `index_name`. What are the remaining arguments? The embeddings are the OpenAI embeddings, right?
e.g.:
```
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
```
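For `text_key`, my current guess (unverified, just from reading the signature above) is that it names the metadata field where each chunk's raw text was stored at ingest time, and the store pops it out of a match's metadata to rebuild `page_content`, conceptually:

```python
# Hypothetical illustration of what text_key might do during retrieval.
match_metadata = {"text": "some chunk of the document", "page": 3}
text_key = "text"  # the default in the signature above

page_content = match_metadata.pop(text_key)
# leaving metadata={"page": 3} for the returned Document
```

And `namespace` looks like Pinecone's optional index partition, which could presumably stay `None` if none was used when upserting.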
What is the `text_key`? And what is the `namespace`? | Query on existing index | https://api.github.com/repos/langchain-ai/langchain/issues/1792/comments | 30 | 2023-03-19T20:55:14Z | 2024-07-06T10:06:53Z | https://github.com/langchain-ai/langchain/issues/1792 | 1,631,113,414 | 1792
[
"langchain-ai",
"langchain"
] | # [BUG] OpenMetero prompt token count error
The OpenMetero API returns a large JSON response, and the GPT-3 API cannot process it to return information.
## What and Why?
While running the OpenMetero chain, the OpenMetero API returns a large JSON file that is then sent to the GPT-3 API to interpret; the file is larger than the model's token limit, which causes errors.
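For illustration, a pre-filter that keeps only the fields the question needs would shrink the payload dramatically before it ever reaches the LLM (the field names below are hypothetical, loosely based on the shape of an hourly weather response):

```python
import json

def shrink_weather_json(raw: str, keep=("time", "temperature_2m")) -> str:
    """Drop every hourly field except the ones we actually need."""
    data = json.loads(raw)
    hourly = data.get("hourly", {})
    data["hourly"] = {k: v for k, v in hourly.items() if k in keep}
    return json.dumps(data)
```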
## Suggestions?
Add another step that parses the JSON using a JSON chain before it reaches the LLM. | [BUG] OpenMetero prompt token count error | https://api.github.com/repos/langchain-ai/langchain/issues/1790/comments | 1 | 2023-03-19T18:40:49Z | 2023-08-24T16:14:21Z | https://github.com/langchain-ai/langchain/issues/1790 | 1,631,067,675 | 1790
[
"langchain-ai",
"langchain"
] | Is there any chance of adding support for self-hosted models like ChatGLM or other transformer models?
I tried to use Runhouse with the “ChatGLM-6B” model, but it is not working. | Add support for selfhost models like ChatGLM or transformer models | https://api.github.com/repos/langchain-ai/langchain/issues/1780/comments | 7 | 2023-03-19T16:33:47Z | 2023-09-19T23:30:45Z | https://github.com/langchain-ai/langchain/issues/1780 | 1,631,012,518 | 1780
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/3701b2901e76f2f97239c2152a6a7d01754fb666/langchain/chains/question_answering/map_rerank_prompt.py#L6
This regex does not handle the case where the answer text contains something like `\n Helpful Score: 100`: parsing fails even though the score is 100.
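To make the failure concrete (assuming the current pattern is something like `(.*?)\nScore: (.*)`; the actual pattern lives at the link above), a quick check:

```python
import re

output = "This film is a classic.\n Helpful Score: 100"

current = r"(.*?)\nScore: (.*)"          # assumed current pattern
proposed = r"(.*?)\n((.?)*)Score: (.*)"  # pattern suggested in this issue

print(re.search(current, output))            # None -> parsing fails
print(re.search(proposed, output).group(4))  # '100'
```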
It should be something like `(.*?)\n((.?)*)Score: (.*)` to handle this case. | Fix regex for map_rerank_prompt.py | https://api.github.com/repos/langchain-ai/langchain/issues/1779/comments | 2 | 2023-03-19T15:26:29Z | 2023-09-10T16:41:59Z | https://github.com/langchain-ai/langchain/issues/1779 | 1,630,987,862 | 1779
[
"langchain-ai",
"langchain"
] | It would be great to see LangChain integrate with Stanford's Alpaca 7B model, a fine-tuned LLaMA (see https://github.com/hwchase17/langchain/issues/1473).
Stanford created an AI able to generate outputs that were largely on par with OpenAI’s `text-davinci-003` and regularly better than GPT-3 — all for a fraction of the computing power and price.
Stanford's Alpaca is a language model that was fine-tuned from Meta's LLaMA with 52,000 instructions generated by GPT-3.5[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/). The researchers used AI-generated instructions to train Alpaca 7B, which exhibits many GPT-3.5-like behaviors[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/). In a blind test using input from the Self-Instruct Evaluation Set, both models performed comparably[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/)[[2]](https://github.com/tatsu-lab/stanford_alpaca). However, Alpaca has problems common to other language models such as hallucinations, toxicity, and stereotyping[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/). The team is releasing an interactive demo, the training dataset, and the training code for research purposes[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/)[[4]](https://crfm.stanford.edu/2023/03/13/alpaca.html).
Alpaca is still under development and has limitations that need to be addressed. The researchers have not yet fine-tuned the model to be safe and harmless[[2]](https://github.com/tatsu-lab/stanford_alpaca). They encourage users to be cautious when interacting with Alpaca and report any concerning behavior to help improve it[[2]](https://github.com/tatsu-lab/stanford_alpaca).
LLaMA is a new open-source language model from Meta Research that performs as well as closed-source models. Stanford's Alpaca is a fine-tuned version of LLaMA that can respond to instructions like ChatGPT[[3]](https://replicate.com/blog/replicate-alpaca). It functions more like a fancy version of autocomplete than a conversational bot[[3]](https://replicate.com/blog/replicate-alpaca).
The researchers are releasing their findings about an instruction-following language model dubbed Alpaca. They trained the Alpaca model on 52K instruction-following demonstrations generated in the style of self-instruct using `text-davinci-003`. On the self-instruct evaluation set, Alpaca shows many behaviors similar to OpenAI’s `text-davinci-003` but is also surprisingly small and easy/cheap to reproduce[[4]](https://crfm.stanford.edu/2023/03/13/alpaca.html). | Alpaca (Fine-tuned LLaMA) | https://api.github.com/repos/langchain-ai/langchain/issues/1777/comments | 23 | 2023-03-19T15:16:09Z | 2023-09-22T16:50:52Z | https://github.com/langchain-ai/langchain/issues/1777 | 1,630,984,304 | 1,777 |
[
"langchain-ai",
"langchain"
] | This issue may sound silly, I apologize for not being able to find the answer (and for my poor English).
For example, I want my chatbot to only use paths from a list I give. I tried to include the following content in the suffix:
```py
"""
omit...
You should only use the following file paths:
{path_list}
omit...
"""
```
But how can I add `path_list` to `input_variables`? I noticed that the `create_prompt` method has the following statement:
```py
if input_variables is None:
input_variables = ["input", "chat_history", "agent_scratchpad"]
```
So I tried adding `input_variables` in `agent_kwargs`:
```py
agent = initialize_agent(
tools,
llm,
agent="conversational-react-description",
verbose=True,
memory=memory,
return_intermediate_steps=True,
agent_kwargs={
'prefix': CHAT_BOT_PREFIX,
'format_instructions': CHAT_BOT_FORMAT_INSTRUCTIONS,
'suffix': CHAT_BOT_SUFFIX,
'input_variables': ["input", "chat_history", "agent_scratchpad", "path_list"], # here
},
)
```
This does create a prompt in the agent with a `path_list` variable in it. But how can I pass the value of path_list?
The way I thought of was to pass it together with `input`.
```py
result = agent({"input": str_input, "path_list": path_list_str})
```
But this doesn't work. It raises the following error:
```text
ValueError: One input key expected got ['path_list', 'input']
```
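One workaround I'm considering (not an official API, just string pre-processing) is to bake `path_list` into the suffix before the agent is created, so the template only keeps the three variables the agent expects:

```python
path_list = ["/data/report.pdf", "/data/notes.txt"]  # hypothetical paths

SUFFIX_TEMPLATE = (
    "You should only use the following file paths:\n"
    "{path_list}\n"
    "{chat_history}\n"
    "Question: {input}\n"
    "{agent_scratchpad}"
)

# str.replace (rather than str.format) leaves the agent's own
# {input}/{chat_history}/{agent_scratchpad} placeholders untouched.
CHAT_BOT_SUFFIX = SUFFIX_TEMPLATE.replace("{path_list}", "\n".join(path_list))
```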
I don't know if there's another way or if I'm missing something. | How to use self defined variable in an Agent? | https://api.github.com/repos/langchain-ai/langchain/issues/1774/comments | 5 | 2023-03-19T11:03:28Z | 2023-03-24T23:38:32Z | https://github.com/langchain-ai/langchain/issues/1774 | 1,630,889,769 | 1,774 |
[
"langchain-ai",
"langchain"
] | When using the chat application, I encountered an error message stating "openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens" when I asked a question like "Did he mention Stephen Breyer?".

| openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens | https://api.github.com/repos/langchain-ai/langchain/issues/1767/comments | 11 | 2023-03-19T01:53:30Z | 2023-11-01T16:08:00Z | https://github.com/langchain-ai/langchain/issues/1767 | 1,630,737,686 | 1,767 |
[
"langchain-ai",
"langchain"
] | It seems that #1578 adds support for SQLAlchemy v2 but the [poetry lock file](https://github.com/hwchase17/langchain/blob/8685d53adcdd0310e76349ecb4e2b87f980c4673/poetry.lock#L6211) is still at 1.4.46. | Update poetry lock to allow SQLAlchemy v2 | https://api.github.com/repos/langchain-ai/langchain/issues/1766/comments | 8 | 2023-03-19T01:48:23Z | 2023-04-25T04:10:57Z | https://github.com/langchain-ai/langchain/issues/1766 | 1,630,736,276 | 1,766 |
[
"langchain-ai",
"langchain"
] | If I use langchain for question answering on a single document with any LLM, is there a way to also extract the corresponding character offsets (begin and end) of the response in the original document?
Thanks,
Ravi. | Character offsets for the response/Answer | https://api.github.com/repos/langchain-ai/langchain/issues/1763/comments | 1 | 2023-03-18T21:44:36Z | 2023-08-24T16:14:26Z | https://github.com/langchain-ai/langchain/issues/1763 | 1,630,620,637 | 1,763 |
[
"langchain-ai",
"langchain"
] | ## Summary
This occurs in the [Chat Langchain Demo](https://github.com/hwchase17/chat-langchain) when upgrading from 0.0.105 -> 0.0.106; I get
`ERROR:root:'OpenAIEmbeddings' object has no attribute 'max_retries'`
This occurs AFTER the user sends their first message; the error instantly comes back in the logs. The error is thrown from this try/except block: https://github.com/hwchase17/chat-langchain/blob/ba456378e04125ccbdc7715f5be17114df2ee2e1/main.py#L68
My suspicion is that it originates within the `get_chain` method, as the ChatResponses are really just validation and don't touch embeddings. It seems like the issue may be deeper.
## Expected Behavior (v0.0.105):

## Actual Behavior (v0.0.106):

## Frequency:
Whenever creating a USER message in chat-langchain when upgrading past v0.0.105
## Environment:
https://github.com/hwchase17/chat-langchain
## Demo
https://user-images.githubusercontent.com/40816745/226128476-9d6dfae8-2d4f-438d-b9f7-c7daf39c8646.mp4 | ERROR:root:'OpenAIEmbeddings' object has no attribute 'max_retries' - VERSION 0.0.106 and up | https://api.github.com/repos/langchain-ai/langchain/issues/1759/comments | 2 | 2023-03-18T18:17:46Z | 2023-09-18T16:23:15Z | https://github.com/langchain-ai/langchain/issues/1759 | 1,630,492,439 | 1,759 |
[
"langchain-ai",
"langchain"
] | # Quick summary
Using the `namespace` argument in the function `Pinecone.from_existing_index` has no effect. Indeed, it is passed to `pinecone.Index`, which has no `namespace` argument.
# Steps to reproduce a relevant bug
```
import pinecone
from langchain.docstore.document import Document
from langchain.vectorstores.pinecone import Pinecone
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
index = pinecone.Index("langchain-demo") # this should be a new index
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
Pinecone.from_texts(
texts,
FakeEmbeddings(),
index_name="langchain-demo",
metadatas=metadatas,
namespace="test-namespace",
)
texts = ["foo2", "bar2", "baz2"]
metadatas = [{"page": i} for i in range(len(texts))]
Pinecone.from_texts(
texts,
FakeEmbeddings(),
index_name="langchain-demo",
metadatas=metadatas,
namespace="test-namespace2",
)
# Search with namespace
docsearch = Pinecone.from_existing_index("langchain-demo",
embedding=FakeEmbeddings(),
namespace="test-namespace")
output = docsearch.similarity_search("foo", k=6)
# check that we don't get results from the other namespace
page_contents = [o.page_content for o in output]
assert set(page_contents) == set(["foo", "bar", "baz"])
```
# Fix
The `namespace` argument used in `Pinecone.from_existing_index` and `Pinecone.from_texts` should be stored as an attribute and used by default by every method. | namespace argument not taken into account when creating Pinecone index | https://api.github.com/repos/langchain-ai/langchain/issues/1756/comments | 0 | 2023-03-18T12:26:39Z | 2023-03-19T02:55:40Z | https://github.com/langchain-ai/langchain/issues/1756 | 1,630,304,145 | 1,756 |
[
"langchain-ai",
"langchain"
] | Currently, the document on [Creating a custom prompt template](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/custom_prompt_template.html) is outdated, as the code in the guide is no longer functional for the following reasons:
1. The document outlines creating a custom prompt template by inheriting from `BasePromptTemplate` with only the `format` method. However, a new required method `format_prompt` has been introduced as an interface for adapting chat-style prompt template usage.
2. Maybe, the `StringPromptTemplate` was created to absorb the change, but it is currently not exposed.
Therefore, I suggest using `StringPromptTemplate` instead of `BasePromptTemplate`, and exposing it in `langchain.prompts`.
I have created a [PR#1753](https://github.com/hwchase17/langchain/pull/1753) for this, and would appreciate it if you could review it.
Additionally, I have created another PR to slightly modify the class docstring for both `BasePromptTemplate` and `StringPromptTemplate`, as their current docstrings are outdated and require updating in relation to the issue at hand. | Document for a custom prompt template is outdated. | https://api.github.com/repos/langchain-ai/langchain/issues/1754/comments | 0 | 2023-03-18T10:40:32Z | 2023-03-19T23:51:51Z | https://github.com/langchain-ai/langchain/issues/1754 | 1,630,270,933 | 1,754 |
[
"langchain-ai",
"langchain"
] | I tried the code from the documentation here on a Jupyter notebook in VS Code: https://langchain.readthedocs.io/en/latest/modules/llms/async_llm.html?highlight=agenerate#async-api-for-llm
It worked but when I replaced:
`from langchain.llms import OpenAI`
with
`from langchain.chat_models import ChatOpenAI`
and
`OpenAI(temperature=0.9)`
with
`ChatOpenAI(temperature=0.9)`
I received an error "Got unknown type H", shown in the first error message below. The H is the first letter of the prompt and it changes when the prompt changes, so I suppose the string is being split somewhere.
I then tried to run it as a script, changing the top-level `await` to `asyncio.run(...)`, but I got the second error below.
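Digging into the traceback below, the failing frame iterates over whatever `agenerate` receives and converts each element, so a bare string yields its first character `'H'`. A minimal stand-in for that loop (hypothetical; the real converter lives in `langchain/chat_models`):

```python
def convert_message(message):
    # Stand-in for the library's message-to-dict conversion.
    if isinstance(message, dict):
        return message
    raise ValueError(f"Got unknown type {message}")

def fake_agenerate(messages):
    # Iterating a plain string walks it character by character,
    # so the first element converted is 'H'.
    return [convert_message(m) for m in messages]

try:
    fake_agenerate("Hello, how are you?")
except ValueError as exc:
    print(exc)  # Got unknown type H
```

Per the signature visible in the traceback (`messages: List[List[BaseMessage]]`), the chat model presumably wants message objects, e.g. `llm.agenerate([[HumanMessage(content="Hello, how are you?")]])`, rather than a list of strings.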
--------------
# Error with a Jupyter VScode notebook using `await`
## Code used
```
import time
import asyncio
def generate_serially():
llm = ChatOpenAI(temperature=0)
for _ in range(10):
resp = llm.generate(["Hello, how are you?"])
print(resp.generations[0][0].text)
async def async_generate(llm):
resp = await llm.agenerate(["Hello, how are you?"])
print(resp.generations[0][0].text)
async def generate_concurrently():
llm = ChatOpenAI(temperature=0)
tasks = [async_generate(llm) for _ in range(10)]
await asyncio.gather(*tasks)
s = time.perf_counter()
# # If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print("\n", f"Concurrent executed in {elapsed:0.2f} seconds.")
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print("\n",f"Serial executed in {elapsed:0.2f} seconds.")
```
## Error obtained
```
ValueError Traceback (most recent call last)
/home/username/projects/Latex_Doc_Search.ipynb Cell 35 in ()
----> 1 await generate_concurrently()
/home/username/projects/Latex_Doc_Search.ipynb Cell 35 in generate_concurrently()
17 llm = ChatOpenAI(temperature=0)
18 tasks = [async_generate(llm) for _ in range(10)]
---> 19 await asyncio.gather(*tasks)
/home/username/projects/Latex_Doc_Search.ipynb Cell 35 in async_generate(llm)
11 async def async_generate(llm):
---> 12 resp = await llm.agenerate(["Hello, how are you?"])
13 print(resp.generations[0][0].text)
File ~/anaconda3/lib/python3.9/site-packages/langchain/chat_models/base.py:57, in BaseChatModel.agenerate(self, messages, stop)
53 async def agenerate(
54 self, messages: List[List[BaseMessage]], stop: Optional[List[str]] = None
55 ) -> LLMResult:
56 """Top Level call"""
---> 57 results = [await self._agenerate(m, stop=stop) for m in messages]
58 return LLMResult(generations=[res.generations for res in results])
File ~/anaconda3/lib/python3.9/site-packages/langchain/chat_models/base.py:57, in (.0)
53 async def agenerate(
...
---> 88 raise ValueError(f"Got unknown type {message}")
89 if "name" in message.additional_kwargs:
90 message_dict["name"] = message.additional_kwargs["name"]
ValueError: Got unknown type H
```
------------------
--------------------
# Error with a script using asyncio.run(
```
ValueError Traceback (most recent call last)
[/home/username/projects/Latex_Doc_Search.ipynb](https://file+.vscode-resource.vscode-cdn.net/home/username/projects/Latex_Doc_Search.ipynb) Cell 39 in ()
[1](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=0) import nest_asyncio
[2](vscode-notebook-cell:/home//projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=1) nest_asyncio.apply()
----> [4](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=3) await generate_concurrently()
[/home/username/projects/Latex_Doc_Search.ipynb](https://file+.vscode-resource.vscode-cdn.net/home/username/projects/Latex_Doc_Search.ipynb) Cell 39 in generate_concurrently()
[17](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=16) llm = ChatOpenAI(temperature=0)
[18](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=17) tasks = [async_generate(llm) for _ in range(10)]
---> [19](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=18) await asyncio.gather(*tasks)
File [~/anaconda3/lib/python3.9/asyncio/tasks.py:328](https://file+.vscode-resource.vscode-cdn.net/home/username/projects/~/anaconda3/lib/python3.9/asyncio/tasks.py:328), in Task.__wakeup(self, future)
326 def __wakeup(self, future):
327 try:
--> 328 future.result()
329 except BaseException as exc:
330 # This may also be a cancellation.
331 self.__step(exc)
File [~/anaconda3/lib/python3.9/asyncio/tasks.py:256](https://file+.vscode-resource.vscode-cdn.net/home/username/projects/~/anaconda3/lib/python3.9/asyncio/tasks.py:256), in Task.__step(***failed resolving arguments***)
252 try:
253 if exc is None:
254 # We use the `send` method directly, because coroutines
255 # don't have `__iter__` and `__next__` methods.
...
---> 88 raise ValueError(f"Got unknown type {message}")
89 if "name" in message.additional_kwargs:
90 message_dict["name"] = message.additional_kwargs["name"]
ValueError: Got unknown type H
``` | error when using an asynchronous await with ChatGPT | https://api.github.com/repos/langchain-ai/langchain/issues/1751/comments | 6 | 2023-03-18T08:13:50Z | 2023-11-27T21:59:57Z | https://github.com/langchain-ai/langchain/issues/1751 | 1,630,226,364 | 1,751 |
[
"langchain-ai",
"langchain"
] | AzureOpenAI seems not to work for gpt-3.5-turbo due to this issue. | AzureOpenAI gpt-3.5-turbo doesn't support best_of parameter | https://api.github.com/repos/langchain-ai/langchain/issues/1747/comments | 6 | 2023-03-18T00:45:00Z | 2023-09-27T16:12:35Z | https://github.com/langchain-ai/langchain/issues/1747 | 1,630,060,590 | 1747
[
"langchain-ai",
"langchain"
] | I'm trying to follow this example: https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html?highlight=chatvectordb#chat-vector-db-with-streaming-to-stdout
and I've used PagedPDFSplitter to load a PDF.
This is how I've done it (build_vectorstore returns Chroma.from_documents(texts,embeddings))
```
documents = load_documents()
texts= split_text(documents)
vectorstore = build_vectorstore(texts)
llm = ChatOpenAI(temperature=0)
streaming_llm = ChatOpenAI(streaming=True,callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),verbose=True,temperature=0)
question_generator = LLMChain(llm=llm,prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm,chain_type="stuff",prompt=QA_PROMPT)
qa = ChatVectorDBChain(vectorstore=vectorstore,combine_docs_chain=doc_chain,question_generator=question_generator)
chat_history = []
query = "What does this document contain?"
result = qa({"question": query, "chat_history": chat_history})
```
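As a side note, a quick sanity check that a custom prompt declares the variables the stuff chain passes (as far as I can tell, the retrieved documents go into `{context}` and the query into `{question}`) might localize this kind of mismatch; a sketch:

```python
def missing_prompt_vars(template: str, expected=("context", "question")):
    """Return expected variables that the prompt template does not declare."""
    return [v for v in expected if "{" + v + "}" not in template]

# e.g. missing_prompt_vars("Answer {question} using {context}.") -> []
```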
And this is the error I'm getting:
KeyError: {'context', 'question'}
```
[91] result = qa({"question": query, "chat_history": chat_history})
[114](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=113) except (KeyboardInterrupt, Exception) as e:
[115](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=114) self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> [116](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=115) raise e
[117](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=116) self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
[118](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=117) return self.prep_outputs(inputs, outputs, return_only_outputs)
[107](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=106) self.callback_manager.on_chain_start(
[108](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=107) {"name": self.__class__.__name__},
[109](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=108) inputs,
[110](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=109) verbose=self.verbose,
[111](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=110) )
[112](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=111) try:
--> [113](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=112) outputs = self._call(inputs)
[114](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=113) except (KeyboardInterrupt, Exception) as e:
[115](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=114) self.callback_manager.on_chain_error(e, verbose=self.verbose)
...
[16](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/formatting.py?line=15) extra = set(kwargs).difference(used_args)
[17](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/formatting.py?line=16) if extra:
---> [18](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/formatting.py?line=17) raise KeyError(extra)
KeyError: {'context', 'question'}
``` | Getting KeyError: {'context', 'question'} when following Chat Vector DB with streaming example | https://api.github.com/repos/langchain-ai/langchain/issues/1745/comments | 1 | 2023-03-17T22:09:27Z | 2023-09-10T16:42:10Z | https://github.com/langchain-ai/langchain/issues/1745 | 1,629,974,694 | 1,745 |
[
"langchain-ai",
"langchain"
] | I have found that the OpenAI embeddings are decent, but they suffer when you want specific names or keywords to be included. I've found that the [sparse-dense approach](https://www.pinecone.io/learn/sparse-dense) produces better results, but this is not supported by the current implementation of the Vectorstore or the chains.
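For context, the scoring idea in the linked article is a convex combination of the two similarity signals, trading dense (semantic) relevance against sparse (keyword) relevance. A sketch (the `alpha` weighting follows the article's description; the default value here is arbitrary):

```python
def hybrid_score(dense_sim: float, sparse_sim: float, alpha: float = 0.8) -> float:
    """Weighted sparse-dense score: alpha=1.0 is purely semantic,
    alpha=0.0 is purely keyword-based."""
    return alpha * dense_sim + (1 - alpha) * sparse_sim
```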
Has anyone else implemented a workaround or is there any planned support for this? | Support for Pinecone Hybrid Search (Sparse-dense embeddings) | https://api.github.com/repos/langchain-ai/langchain/issues/1743/comments | 7 | 2023-03-17T20:02:14Z | 2023-09-25T16:16:15Z | https://github.com/langchain-ai/langchain/issues/1743 | 1,629,872,044 | 1,743 |
[
"langchain-ai",
"langchain"
] | It's really useful to move the prompts etc. out of the main codebase. Currently, from the documentation and my own testing, it seems that only chains backed by the OpenAI LLM (only `text-davinci-003`) can be serialized. However, no such support is available for Azure-based OpenAI LLMs (`text-davinci-003` and `gpt-3.5-turbo`).
Is my understanding correct? If yes, any plans on adding it soon?
| Chain Serialization Support with Azure OpenAI LLMs | https://api.github.com/repos/langchain-ai/langchain/issues/1736/comments | 2 | 2023-03-17T12:50:42Z | 2023-10-12T16:11:09Z | https://github.com/langchain-ai/langchain/issues/1736 | 1,629,261,920 | 1,736 |
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using Milvus, and for my question I'm getting 0 documents, so an index out of range error occurs.
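Until that line is made defensive, a guard on the caller's side avoids the crash (a sketch; the fallback message is arbitrary):

```python
def answer_with_guard(docs, run_chain):
    """Skip the chain entirely when retrieval returns no documents."""
    if not docs:
        return "No relevant documents were found for this question."
    return run_chain(docs)

# e.g. answer_with_guard(docs, lambda d: chain.run(input_documents=d, question=query))
```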
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | list index out of range error if similarity search gives 0 docs | https://api.github.com/repos/langchain-ai/langchain/issues/1733/comments | 7 | 2023-03-17T11:14:48Z | 2023-08-11T05:50:41Z | https://github.com/langchain-ai/langchain/issues/1733 | 1,629,133,888 | 1,733 |
[
"langchain-ai",
"langchain"
] | As most scientific papers are released not only as PDF, but also as source code (LaTeX), I propose adding a LaTeX Text Splitter.
It should split in hierarchical order (e.g. by sections first, then subsections, headings, ...) | LaTeX Text Splitter | https://api.github.com/repos/langchain-ai/langchain/issues/1731/comments | 1 | 2023-03-17T09:51:15Z | 2023-03-18T02:39:19Z | https://github.com/langchain-ai/langchain/issues/1731 | 1,629,015,269 | 1731
[
"langchain-ai",
"langchain"
] | Please allow ChatGPT Turbo's ChatOpenAI object to be passed as the LLM in the summarization chain.
_(from langchain.chat_models.openai import ChatOpenAI)_
for the following usage
```
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
``` | Include ChatGPT Turbo model in the Chain (Summarization) | https://api.github.com/repos/langchain-ai/langchain/issues/1730/comments | 5 | 2023-03-17T09:21:12Z | 2023-09-28T16:11:08Z | https://github.com/langchain-ai/langchain/issues/1730 | 1,628,973,092 | 1730
[
"langchain-ai",
"langchain"
] | Is there any way I can use the existing vector store that I already have on Pinecone? I don't want to keep regenerating the embeddings because my documents contain a lot of text.
I also want to link Pinecone and SerpAPI, so that if the answer is not available in Pinecone, I can browse the web for answers.
Is this something doable? Any resources would be really helpful. | Unable to add vector store and SerpAPI to agent | https://api.github.com/repos/langchain-ai/langchain/issues/1729/comments | 1 | 2023-03-17T07:42:56Z | 2023-08-24T16:14:35Z | https://github.com/langchain-ai/langchain/issues/1729 | 1,628,849,552 | 1,729 |
[
"langchain-ai",
"langchain"
] | ```
import os, pdb
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI, OpenAIChat
os.environ["TOKENIZERS_PARALLELISM"] = "false"
f_path = "doc.txt"
loader = TextLoader(f_path)
documents = loader.load()
text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=20)
docs = text_splitter.split_documents(documents)
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vector_db = Milvus.from_documents(
documents,
embeddings,
connection_args={"host": "localhost", "port": "19530"},
)
query = "Who is the author of this study?"
docs = vector_db.similarity_search(query, 2)
chain = load_qa_chain(
OpenAIChat(model_name="gpt-3.5-turbo", temperature=0.1), chain_type="refine"
)
print(chain.run(input_documents=docs, question=query))
```
Gives
`ValueError: Missing some input keys: {'existing_answer'}`
Not sure what's going wrong. I suppose for the `refine` chain the first call should not be looking for `existing_answer`, right? | Refine Chain Error | https://api.github.com/repos/langchain-ai/langchain/issues/1724/comments | 5 | 2023-03-17T02:43:06Z | 2023-11-01T16:08:05Z | https://github.com/langchain-ai/langchain/issues/1724 | 1,628,593,372 | 1,724
[
"langchain-ai",
"langchain"
] | the example notebook at https://github.com/hwchase17/langchain/blob/master/docs/modules/agents/agent_toolkits/sql_database.ipynb is broken at the following line.
``` python
agent_executor = create_sql_agent(
llm=OpenAI(temperature=0),
toolkit=toolkit,
verbose=True
)
```
It produces a `KeyError: 'tools'`
This error is related to another error, `TypeError("cannot pickle 'module' object")`. It appears that during the Pydantic validation process for any of the SQLDatabase tools (e.g., QuerySQLDataBaseTool), there is an attempt to deep copy the `db` attribute, which is a SQLDatabase object. The method used is `rv = reductor(4)`, followed by a call to `_reconstruct(x, memo, *rv)`. That call throws an error when creating the new SQLDatabase object because the required `engine` attribute is missing.
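The described workaround can be sketched in isolation with a toy class (the names below are illustrative stand-ins, not the actual LangChain classes):

```python
import copy

class SQLDatabaseLike:
    """Toy stand-in for a class holding an unpicklable live resource."""
    def __init__(self, engine):
        self.engine = engine  # e.g. a live engine/module object that cannot be deep-copied

    def __deepcopy__(self, memo):
        # Share the one instance instead of copying the live engine,
        # sidestepping the reductor/_reconstruct failure.
        return self

db = SQLDatabaseLike(engine=object())
print(copy.deepcopy(db) is db)  # the "copy" is the same shared object
```

Whether sharing the instance is safe depends on whether anything mutates the copied toolkit; a shallow copy would be the more cautious variant.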
A workaround is to define a `__deepcopy__()` method in SQLDatabase. If the deep copy is not necessary, the method may simply return `self` or do a shallow copy. But we need to make sure what the right approach is here. | example sql_database toolkit agent notebook throws "KeyError" in create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/1721/comments | 4 | 2023-03-17T01:16:14Z | 2023-04-12T12:41:52Z | https://github.com/langchain-ai/langchain/issues/1721 | 1,628,532,635 | 1,721
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent_toolkits/sql/toolkit.py#L38
SqlDatabaseToolkit should have a custom `llm_chain` field for the QueryCheckerTool. This is causing issues when OpenAI is not available, as the QueryCheckerTool will automatically use OpenAI. | SqlDatabaseToolkit should have custom llm_chain for QueryCheckerTool | https://api.github.com/repos/langchain-ai/langchain/issues/1719/comments | 2 | 2023-03-17T00:12:47Z | 2023-09-18T16:23:20Z | https://github.com/langchain-ai/langchain/issues/1719 | 1,628,492,807 | 1,719
[
"langchain-ai",
"langchain"
] | I noticed there is no support for stop sequences in the langchain API.
Is this some deliberate choice, or should I make a PR to add support for it? | No stop sequences supported for OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/1717/comments | 6 | 2023-03-16T20:05:04Z | 2023-09-28T16:11:13Z | https://github.com/langchain-ai/langchain/issues/1717 | 1,628,182,486 | 1,717
[
"langchain-ai",
"langchain"
] | Hello, I've noticed that after the latest commit by @MthwRobinson there are two different modules for loading Word documents; could they be unified into a single version? Also, there are two notebooks that do almost the same thing.
[docx.py](https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/document_loaders/docx.py) and [word_document.py](https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/document_loaders/word_document.py)
[microsoft_word.ipynb](https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/docs/modules/document_loaders/examples/microsoft_word.ipynb) and [word_document.ipynb](https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/docs/modules/document_loaders/examples/word_document.ipynb)
Or am I just missing something?
| Two different document loaders for Microsoft Word files | https://api.github.com/repos/langchain-ai/langchain/issues/1716/comments | 5 | 2023-03-16T18:57:09Z | 2023-05-17T16:18:06Z | https://github.com/langchain-ai/langchain/issues/1716 | 1,628,093,530 | 1,716 |
[
"langchain-ai",
"langchain"
] | I was going through [Vectorstore Agent](https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/vectorstore.html?highlight=vectorstore%20agent#vectorstore-agent) tutorial and I am facing issues with the `VectorStoreQAWithSourcesTool`.
Looking closely at the code https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/chains/qa_with_sources/base.py#L119-L120
it appears the parsing rule might be too strict to extract the list of sources. Often, when the agent is fetching information from the vectorstore, the `VectorStoreQAWithSourcesTool` output is something like `....SOURCES:\n<source1>\n<source2>...` instead of `...SOURCES: <source1>,<source2>...`.
Due to this, the `VectorStoreQAWithSourcesTool` output is broken and the agent response is impacted.
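A more permissive parse that accepts commas or newlines after `SOURCES:` is straightforward to sketch; the function below is only an illustration of the idea, not the library's code:

```python
import re

def split_answer_and_sources(text):
    """Split an LLM answer into (answer, sources), accepting sources
    separated by commas OR newlines after the 'SOURCES:' marker."""
    answer, _, tail = text.partition("SOURCES:")
    sources = [s.strip() for s in re.split(r"[,\n]+", tail) if s.strip()]
    return answer.strip(), sources

print(split_answer_and_sources("The report says X.\nSOURCES:\ndoc1.txt\ndoc2.txt"))
```

This handles both the comma-separated format the chain expects and the newline-separated format the model often emits.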
P.S. I used `Chroma` as the vectorstore db and `OpenAI(temperature=0)` as the LLM. | bug(QA with Sources): source parsing is not reliable | https://api.github.com/repos/langchain-ai/langchain/issues/1712/comments | 3 | 2023-03-16T15:47:53Z | 2023-03-31T15:41:59Z | https://github.com/langchain-ai/langchain/issues/1712 | 1,627,779,986 | 1,712 |
[
"langchain-ai",
"langchain"
] | I have been performing tests on the following dataset (https://www.kaggle.com/datasets/peopledatalabssf/free-7-million-company-dataset) for a couple of days now. Yesterday and earlier, the agent had no problems answering questions like:
- "list all companies starting with 'a'"
- "what is the company earliest founded and in which year?"
Now it picks non-existent columns for analysis or completely misunderstands the questions. Any ideas why such a performance drop happened? Is OpenAI changing the LLMs on their side? | performance of CSVAgent dropped significantly | https://api.github.com/repos/langchain-ai/langchain/issues/1710/comments | 1 | 2023-03-16T13:40:36Z | 2023-09-10T16:42:15Z | https://github.com/langchain-ai/langchain/issues/1710 | 1,627,497,863 | 1,710
[
"langchain-ai",
"langchain"
] | I've just implemented the AsyncCallbackManager and handler, and everything works fine except for the fact that I receive this warning:
> /usr/local/lib/python3.10/site-packages/langchain/agents/agent.py:456: RuntimeWarning: coroutine 'AsyncCallbackManager.on_agent_action' was never awaited
> self.callback_manager.on_agent_action(
> RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Which seems to prevent the logging I'm trying to implement with the handler.
Relevant code snippet:
https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/agents/agent.py#L456-L458
Looking at the codebase, it looks like the other callback manager calls are awaited, but not sure if I'm using the wrong executor?
https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/agents/agent.py#L384-L387
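One generic pattern for mixing sync and async callback handlers (illustrative only, not LangChain's internals) is to await the handler's result only when it is actually awaitable:

```python
import asyncio
import inspect

async def dispatch(callback_result):
    """Await the callback's result only when it is a coroutine, so sync and
    async handlers can mix without 'coroutine ... was never awaited' warnings."""
    if inspect.isawaitable(callback_result):
        return await callback_result
    return callback_result

async def on_agent_action():  # hypothetical async handler
    return "handled"

print(asyncio.run(dispatch(on_agent_action())))  # async handler result is awaited
print(asyncio.run(dispatch("plain-value")))      # plain value passes straight through
```

The warning above suggests the sync code path is calling the async manager's coroutine without such a dispatch step.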
| on_agent_action was never awaited | https://api.github.com/repos/langchain-ai/langchain/issues/1708/comments | 6 | 2023-03-16T11:50:17Z | 2023-03-20T18:33:18Z | https://github.com/langchain-ai/langchain/issues/1708 | 1,627,311,222 | 1,708 |
[
"langchain-ai",
"langchain"
] | When using `ConversationSummaryMemory` or `ConversationSummaryBufferMemory`, I sometimes want to see the details of the summarization. I would like to add a parameter for displaying this information.
This feature can be easily added by introducing a `verbose` parameter to the `SummarizerMixin` class, setting its default value to `False`, and then passing it to `LLMChain`. When setting up the memory, you can simply specify `verbose` like this:
```
memory = ConversationSummaryMemory(
llm=llm_summarization,
verbose=True,
)
```
I created this issue to determine if this feature is necessary. If it is needed, I can add the feature myself and submit a Pull Request. | Displaying details of summarization in `ConversationSummaryMemory` and `ConversationSummaryBufferMemory` | https://api.github.com/repos/langchain-ai/langchain/issues/1705/comments | 1 | 2023-03-16T06:55:24Z | 2023-09-10T16:42:20Z | https://github.com/langchain-ai/langchain/issues/1705 | 1,626,834,563 | 1,705 |
[
"langchain-ai",
"langchain"
] | Looking at the [tracing](https://langchain.readthedocs.io/en/latest/tracing.html) docs, it would be great if I could point the output at my own tracing backend using a portable/open-standard format.
As the tracing features are built out, it would be amazing if an output option was just [OTLP using the Python SDK](https://opentelemetry.io/docs/instrumentation/python/), which is very well supported by a number of different tools. | Support OpenTelemetry for Tracing | https://api.github.com/repos/langchain-ai/langchain/issues/1704/comments | 2 | 2023-03-16T04:03:41Z | 2023-09-18T16:25:28Z | https://github.com/langchain-ai/langchain/issues/1704 | 1,626,671,239 | 1,704
[
"langchain-ai",
"langchain"
] | I ask you to test the mrkl agent prompt on different LLMs. In my experience, no LLM other than the ones by OpenAI can handle the mrkl prompt. I am but one man and make mistakes. I want someone to double-check this claim. Bloom came close, but even it can't handle it. None of them can format the response up to the standard. Langchain starts to throw parsing errors. | Testing ZeroShotAgent using different LLM other than OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/1703/comments | 1 | 2023-03-16T03:51:09Z | 2023-08-24T16:14:46Z | https://github.com/langchain-ai/langchain/issues/1703 | 1,626,660,999 | 1,703
[
"langchain-ai",
"langchain"
] | Currently it takes 10-15s to get a response from OpenAI.
I am using an example similar to this: https://langchain.readthedocs.io/en/latest/modules/agents/examples/agent_vectorstore.html
| Are there any ways to increase response speed? | https://api.github.com/repos/langchain-ai/langchain/issues/1702/comments | 22 | 2023-03-15T21:59:35Z | 2024-05-13T16:07:15Z | https://github.com/langchain-ai/langchain/issues/1702 | 1,626,345,845 | 1,702 |
[
"langchain-ai",
"langchain"
] | This won't work, because your class needs Runhouse. I would like to use it on my GPU server, so it should not log in via SSH to its own machine:
```
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware="cuda",
model_reqs=["./", "torch", "transformers"],
inference_fn=inference_fn
)
```
So here cuda would not work... | No local GPU for self hosted embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/1695/comments | 3 | 2023-03-15T16:41:37Z | 2023-10-08T16:08:12Z | https://github.com/langchain-ai/langchain/issues/1695 | 1,625,882,506 | 1,695 |
[
"langchain-ai",
"langchain"
] | Sorry if this is a dumb question, but if I have a txt file with sentences separated by newlines, how do I split by newlines to generate embeddings for each sentence? | Quick question about splitter | https://api.github.com/repos/langchain-ai/langchain/issues/1694/comments | 2 | 2023-03-15T15:31:56Z | 2023-03-15T17:30:11Z | https://github.com/langchain-ai/langchain/issues/1694 | 1,625,759,946 | 1,694
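For what it's worth, `CharacterTextSplitter` accepts a `separator` argument (e.g. `CharacterTextSplitter(separator="\n")`), but if the goal is one embedding per sentence, plain Python may be enough. A minimal sketch (the function name is illustrative):

```python
def sentences_from_text(text):
    """Split newline-delimited text into non-empty, stripped sentences."""
    return [line.strip() for line in text.splitlines() if line.strip()]

sample = "First sentence.\nSecond sentence.\n\nThird sentence.\n"
sentences = sentences_from_text(sample)
print(sentences)  # one entry per sentence, blank lines dropped
```

Each resulting string can then be passed to the embedding model individually, or wrapped in a `Document` first.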
[
"langchain-ai",
"langchain"
] | I am using this example https://langchain.readthedocs.io/en/latest/modules/chat/examples/agent.html with my data
If I am not using chat (ChatOpenAI), it works without issue.
So it is probably some issue with how the output is being handled.
```
Entering new AgentExecutor chain...
DEBUG:Chroma:time to pre process our knn query: 1.430511474609375e-06
DEBUG:Chroma:time to run knn query: 0.0008881092071533203
Thought: I am not sure if I was created by AA or not. I need to use the QA System to find out.
Action: QA System
Action Input: .....
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-52-e127e5158c61>](https://localhost:8080/#) in <module>
----> 1 agent_executor.run("Are you create by OpenAI?")
8 frames
[/usr/local/lib/python3.9/dist-packages/langchain/agents/mrkl/base.py](https://localhost:8080/#) in get_action_and_input(llm_output)
44 match = re.search(regex, llm_output, re.DOTALL)
45 if not match:
---> 46 raise ValueError(f"Could not parse LLM output: `{llm_output}`")
47 action = match.group(1).strip()
48 action_input = match.group(2)
ValueError: Could not parse LLM output: `Based on the information available to me
``` | Bug with parsing | https://api.github.com/repos/langchain-ai/langchain/issues/1688/comments | 2 | 2023-03-15T11:32:57Z | 2023-09-10T16:42:25Z | https://github.com/langchain-ai/langchain/issues/1688 | 1,625,343,754 | 1,688 |
[
"langchain-ai",
"langchain"
] | With the rise of multi-modal models (GPT-4 announced today), and given how popular LangChain is among the research community, we should be ready for new modalities. Ideally, the user should be able to do something like:
```python
vllm = OpenAI(model_name='gpt-4', max_tokens=through_the_roof)
prompt = PromptTemplate(
input_variables=["user_text", "image", "prompt"],
template="{prompt} {image} Question: {user_text}")
vl_chain = VLLMChain(vllm=vllm, prompt=prompt, verbose=True)
vl_chain.predict(prompt=prompt, image=pil_image, user_text=user_text)
```
If no one is working on this currently, I'm willing to try to add this functionality in my spare time.
The uncertain part is that we don't know if GPT-4 is going to be like Flamingo (allowing you to put multiple images at specific positions and in a specific order in the document) or like BLIP-2 (cross-attention between image and text, but in no particular order). My educated guess is the former. | [Feature suggestion] Multi-modal Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/1677/comments | 1 | 2023-03-15T01:40:31Z | 2023-03-17T18:00:16Z | https://github.com/langchain-ai/langchain/issues/1677 | 1,624,571,606 | 1,677
[
"langchain-ai",
"langchain"
] | I was attempting to split this volume:
https://terrorgum.com/tfox/books/introductionto3dgameprogrammingwithdirectx12.pdf
using
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 1000,
    chunk_overlap = 10,
    length_function = len,
    separators = ["\n\n"]
)
On page 289, it enters an infinite recursive loop where it only has one split and no separators in the split. It then keeps recursively calling `self.split_text(s)` until it errors out.
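The failure mode can be modeled with a toy splitter (not the library's implementation); the fix sketched here is a hard-cut fallback so the recursion always makes progress:

```python
def naive_recursive_split(text, chunk_size, separators):
    """Toy recursive splitter with a guaranteed-progress fallback."""
    if len(text) <= chunk_size:
        return [text]
    for sep in separators:
        if sep in text:
            return [chunk
                    for part in text.split(sep)
                    for chunk in naive_recursive_split(part, chunk_size, separators)]
    # No separator matched an oversized piece: hard-cut by characters
    # instead of recursing on the same string forever.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

print(naive_recursive_split("x" * 25, chunk_size=10, separators=["\n\n"]))
```

Without the fallback branch, an oversized piece containing none of the separators would be re-split into itself indefinitely, which matches the behavior described above.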
modifying to a chunk_size of 2000 with a chunk overlap of 100 makes this go away, but it seems there should be some detection of infinite recursion happening, as the error result wasn't immediately obvious. | RecursiveCharacterTextSplitter.split_text can enter infinite recursive loop | https://api.github.com/repos/langchain-ai/langchain/issues/1663/comments | 7 | 2023-03-14T17:29:33Z | 2023-11-03T16:08:27Z | https://github.com/langchain-ai/langchain/issues/1663 | 1,623,978,467 | 1,663 |
[
"langchain-ai",
"langchain"
] | Google has just announced the PaLM API [https://developers.googleblog.com/2023/03/announcing-palm-api-and-makersuite.html](https://developers.googleblog.com/2023/03/announcing-palm-api-and-makersuite.html)!!
| Google Cloud PaLM API integration | https://api.github.com/repos/langchain-ai/langchain/issues/1661/comments | 3 | 2023-03-14T14:38:18Z | 2023-09-28T16:11:18Z | https://github.com/langchain-ai/langchain/issues/1661 | 1,623,618,445 | 1,661 |
[
"langchain-ai",
"langchain"
] | Hi. I am trying to generate a customized prompt where I need to escape the curly braces. Whenever I use curly braces in my prompt, they are treated as placeholders.
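For what it's worth: with the default f-string template format, a literal brace is usually written by doubling it (`{{` and `}}`). The sketch below checks the rule with plain `str.format`, which follows the same escaping convention:

```python
# Doubled braces render as literal braces; single braces stay placeholders.
template = 'Return the answer as JSON like {{"answer": "..."}}. Question: {question}'
rendered = template.format(question="What is LangChain?")
print(rendered)
```

The same doubled-brace template string should work inside a `PromptTemplate(template=..., input_variables=["question"])` as long as the default f-string format is in use; with `template_format="jinja2"` the escaping rules differ.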
So, can you please suggest how to escape the curly braces in the prompts? | How to escape curly-braces "{}" in a customized prompt? | https://api.github.com/repos/langchain-ai/langchain/issues/1660/comments | 9 | 2023-03-14T12:59:42Z | 2024-07-28T09:30:47Z | https://github.com/langchain-ai/langchain/issues/1660 | 1,623,430,725 | 1,660 |
[
"langchain-ai",
"langchain"
] | Now that Microsoft has released gpt-35-turbo, please can AzureOpenAI be added to chat_models?
Thanks | Add AzureOpenAI for chat_models | https://api.github.com/repos/langchain-ai/langchain/issues/1659/comments | 1 | 2023-03-14T12:03:16Z | 2023-03-20T12:41:19Z | https://github.com/langchain-ai/langchain/issues/1659 | 1,623,336,280 | 1,659 |
[
"langchain-ai",
"langchain"
] | I am having trouble using langchain with llama-index (gpt-index). I don't understand what is happening on the langchain side.
When I use OpenAIChat as the LLM, I sometimes get this error for some user queries:
```
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
ValueError: Could not parse LLM output: `Thought: Do I need to use a tool? No
```
And to make it worse, when I switch to the OpenAI LLM, the agent almost never decides to use the tool.
I am okay with either solution, but I just can't seem to fix it. What is happening?
My code:
```
from langchain.agents import ConversationalAgent, Tool, AgentExecutor
from langchain import OpenAI, LLMChain
from langchain.llms import OpenAIChat
TOOLS = [
Tool(
name = "GPT Index",
func=lambda q: str(INDEX.query(q, llm_predictor=LLM_PREDICTOR, text_qa_template=QA_PROMPT, similarity_top_k=5, response_mode="compact")),
description="useful for when you need to answer questions about weddings or marriage.",
return_direct=True
),
]
LLM=OpenAIChat(temperature=0)
prefix = """Assistant is a large language model trained by OpenAI.
Assistant is designed to support a wide range of tasks, from answering simple questions to providing detailed explanations and discussions on a wide range of topics. As a language model, Assistant can generate human-like text based on input received, and can provide natural-sounding conversation or consistent, on-topic responses.
Assistant is constantly learning and improving, and its capabilities are always evolving. It can process vast amounts of text to understand and provide accurate and helpful answers to a variety of questions. Additionally, Assistant can generate its own text based on received input, allowing it to participate in discussions on a variety of topics, or provide explanations and commentary.
Overall, Assistant is a powerful tool that can support a variety of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or want to have a conversation about a specific topic, Assistant is here to help.
TOOLS:
------
Assistant has access to the following tools."""
suffix = """Answer the questions you know to the best of your knowledge.
Begin!
User Input: {input}
{agent_scratchpad}"""
prompt = ConversationalAgent.create_prompt(
TOOLS,
prefix=prefix,
suffix=suffix,
input_variables=["input", "agent_scratchpad"]
)
llm_chain = LLMChain(llm=LLM, prompt=prompt)
agent = ConversationalAgent(llm_chain=llm_chain)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=TOOLS, verbose=True)
response = agent_executor.run(user_message)
```
| ValueError: Could not parse LLM output | https://api.github.com/repos/langchain-ai/langchain/issues/1657/comments | 13 | 2023-03-14T09:01:11Z | 2024-01-30T00:52:57Z | https://github.com/langchain-ai/langchain/issues/1657 | 1,623,023,032 | 1,657 |
[
"langchain-ai",
"langchain"
] | I am trying to follow the example from this [URL](https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html), but I am getting the above error. What might be wrong?
## Environment
OS: Windows 11
Python: 3.9
| AttributeError: 'VectorStoreIndexWrapper' object has no attribute 'similarity_search' | https://api.github.com/repos/langchain-ai/langchain/issues/1655/comments | 5 | 2023-03-14T07:04:19Z | 2023-09-29T16:09:57Z | https://github.com/langchain-ai/langchain/issues/1655 | 1,622,855,759 | 1,655 |