[ "langchain-ai", "langchain" ]
Users may [add_texts](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/pinecone.py#L58) and initialize [from_texts](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/pinecone.py#L161) to a namespaced Pinecone index. However, there is currently no way to obtain a `Pinecone` instance to a namespace in an existing Pinecone index. This prevents re-use of namespaced indices created by e.g. `from_texts(docs, ..., namespace=namespace)`.
Namespaced pinecone operations cannot be re-used
https://api.github.com/repos/langchain-ai/langchain/issues/718/comments
0
2023-01-24T14:14:53Z
2023-01-24T15:02:59Z
https://github.com/langchain-ai/langchain/issues/718
1,555,066,729
718
[ "langchain-ai", "langchain" ]
**Context**: I am working on some low-code tooling for langchain and GPT index. As part of this work, I would like to represent langchain classes as JSON, ideally with a JSON Schema to validate it. I know Pydantic's `.json()` and `.schema()` methods aim to fulfill this need.

**Problem**: I expect the following code to produce valid JSON:

```py
from langchain import OpenAI

OpenAI().json()
```

Instead, it currently raises the following error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pydantic/main.py", line 505, in pydantic.main.BaseModel.json
  File "/usr/lib/python3.9/json/__init__.py", line 234, in dumps
    return cls(
  File "/usr/lib/python3.9/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.9/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "pydantic/json.py", line 90, in pydantic.json.pydantic_encoder
TypeError: Object of type 'type' is not JSON serializable
```
Langchain classes are not pydantic JSON serializable
https://api.github.com/repos/langchain-ai/langchain/issues/717/comments
1
2023-01-24T14:12:31Z
2023-01-26T06:24:39Z
https://github.com/langchain-ai/langchain/issues/717
1,555,062,335
717
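A possible workaround for the `TypeError` above (a sketch, not langchain's or pydantic's fix): `json.dumps` accepts a `default` hook, so a fallback encoder can render class objects by name instead of raising. `fallback_encoder` and the `payload` dict below are hypothetical stand-ins.

```python
import json

def fallback_encoder(obj):
    # Render class objects (the values that trip up pydantic's encoder)
    # as their dotted name instead of raising TypeError.
    if isinstance(obj, type):
        return f"{obj.__module__}.{obj.__qualname__}"
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

# A stand-in dict mimicking model attributes that contain a class object.
payload = {"temperature": 0.7, "client": dict}
print(json.dumps(payload, default=fallback_encoder))
```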
[ "langchain-ai", "langchain" ]
**Description**: I installed via pip (`pip install langchain`), then started to run some of the examples. When I was trying the following (from https://langchain.readthedocs.io/en/latest/use_cases/question_answering.html):

```py
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
```

I received the following error:

```
ImportError: cannot import name 'load_qa_with_sources_chain' from 'langchain.chains.qa_with_sources'
```

**OS**: Ubuntu 20.04, Python 3.8
[BUG] ImportError: cannot import name 'load_qa_with_sources_chain' from 'langchain.chains.qa_with_sources'
https://api.github.com/repos/langchain-ai/langchain/issues/712/comments
2
2023-01-24T10:04:03Z
2023-01-24T10:09:51Z
https://github.com/langchain-ai/langchain/issues/712
1,554,672,212
712
[ "langchain-ai", "langchain" ]
agent_chain.run(input="Open this website http://classics.mit.edu//Homer/iliad.1.i.html and summarize it")
handle long responses to urls
https://api.github.com/repos/langchain-ai/langchain/issues/709/comments
2
2023-01-24T06:30:58Z
2023-09-18T16:25:11Z
https://github.com/langchain-ai/langchain/issues/709
1,554,401,520
709
[ "langchain-ai", "langchain" ]
If you reproduce the agents example in https://langchain.readthedocs.io/en/latest/getting_started/getting_started.html but remove the "serpapi" parameter, the agent gets into an infinite loop, repeatedly trying actions with tools it has already observed are not valid. A proposed fix is to maintain a set of all visited tools and check it before the agent settles on using a tool. I haven't seen the source code to validate the viability of this, but this outcome definitely seems not ideal. <img width="786" alt="Screenshot 2023-01-23 at 12 01 45 PM" src="https://user-images.githubusercontent.com/3188413/214102958-eec3ef46-4fa9-42cc-a43a-213966332fdd.png">
Don't try tools that have already been invalidated in the current run
https://api.github.com/repos/langchain-ai/langchain/issues/702/comments
2
2023-01-23T17:05:47Z
2023-09-10T16:46:30Z
https://github.com/langchain-ai/langchain/issues/702
1,553,450,519
702
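The visited-tools set proposed in issue 702 can be sketched in plain Python; `run_agent_step` is a hypothetical helper, not langchain's actual agent loop:

```python
def run_agent_step(action, valid_tools, invalidated):
    """Return an observation for a proposed tool action, remembering
    tools already observed to be invalid so they aren't retried."""
    tool = action["tool"]
    if tool in invalidated:
        return f"{tool} was already rejected; pick a different tool"
    if tool not in valid_tools:
        invalidated.add(tool)
        return f"{tool} is not a valid tool"
    return f"ran {tool}"

invalidated = set()
print(run_agent_step({"tool": "Wikipedia"}, {"Search"}, invalidated))
print(run_agent_step({"tool": "Wikipedia"}, {"Search"}, invalidated))
```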
[ "langchain-ai", "langchain" ]
This is a very niche problem, but when you include JSON as one of the samples in your PromptTemplate it breaks the execution. Code to replicate it:

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain import PromptTemplate, FewShotPromptTemplate
import json

prefix = """P:"""
examples = [
    {"instruction": "A", "sample": json.dumps({"a": "b"})},
    {"instruction": "B", "sample": json.dumps({"a": "b"})}
]
example_formatter_template = """{instruction} \n {sample} """
suffix = """S: {input}"""

example_prompt = PromptTemplate(
    input_variables=["instruction", "sample"],
    template=example_formatter_template,
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input"],
    example_separator="\n\n",
)

input = """TEST"""
print(few_shot_prompt.format(input=input))
```

The error is:

```
Traceback (most recent call last):
  File "main.py", line 42, in <module>
    print(few_shot_prompt.format(input=input))
  File "/home/runner/Athena/venv/lib/python3.8/site-packages/langchain/prompts/few_shot.py", line 110, in format
    return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)
  File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/string.py", line 163, in format
    return self.vformat(format_string, args, kwargs)
  File "/home/runner/Athena/venv/lib/python3.8/site-packages/langchain/formatting.py", line 29, in vformat
    return super().vformat(format_string, args, kwargs)
  File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/string.py", line 167, in vformat
    result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
  File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/string.py", line 207, in _vformat
    obj, arg_used = self.get_field(field_name, args, kwargs)
  File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/string.py", line 272, in get_field
    obj = self.get_value(first, args, kwargs)
  File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/string.py", line 229, in get_value
    return kwargs[key]
KeyError: '"a"'
```
When the example is JSON it breaks PromptTemplate
https://api.github.com/repos/langchain-ai/langchain/issues/700/comments
5
2023-01-23T12:21:25Z
2023-09-28T16:12:44Z
https://github.com/langchain-ai/langchain/issues/700
1,553,004,099
700
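The root cause in issue 700 is that `str.format`-style templates treat `{` and `}` as placeholder delimiters, so JSON inside an example is parsed as a field named `'"a"'`. One possible workaround (a sketch, not an official langchain API) is to double the braces before templating:

```python
import json

sample = json.dumps({"a": "b"})  # '{"a": "b"}'
# Doubling braces makes them literal to str.format.
escaped = sample.replace("{", "{{").replace("}", "}}")

template = "Instruction: A\nSample: " + escaped + "\nInput: {input}"
print(template.format(input="TEST"))
```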
[ "langchain-ai", "langchain" ]
null
make it easier to include few shot examples/example selectors in the zero-shot-agent workflow
https://api.github.com/repos/langchain-ai/langchain/issues/695/comments
1
2023-01-22T21:10:13Z
2023-08-24T16:19:41Z
https://github.com/langchain-ai/langchain/issues/695
1,552,286,028
695
[ "langchain-ai", "langchain" ]
Hello, I love using the stuff chain for my docs/blogs; it gives me better answers, is faster, and is cheaper. However, sometimes some questions lead to token-exceeded errors from OpenAI; I thought maybe in those cases, I could reduce the `K` value and try again. I ended up writing this for my use case:

```Python
tiktoken_encoder = tiktoken.get_encoding("gpt2")

def page_content(doc):
    return doc.page_content

def reduce_tokens_below_limit(docs, limit=3400):
    tokens = len(tiktoken_encoder.encode("".join(map(page_content, docs))))
    return docs if (tokens <= limit) else reduce_tokens_below_limit(docs[:-1])

class MyVectorDBQAWithSourcesChain(VectorDBQAWithSourcesChain):
    def _get_docs(self, inputs):
        question = inputs[self.question_key]
        docs = self.vectorstore.similarity_search(question, k=self.k)
        return reduce_tokens_below_limit(docs)
```

I used recursion because I liked the clarity of the method, and the `K` value isn't going to be higher than single-digit numbers in a practical scenario. I would love to contribute this back upstream; where do you suggest I include this, and what should I name it? I'll take care of docs and other things for this use case. :)
Handling tokens exceeding exception in Stuff Chain
https://api.github.com/repos/langchain-ai/langchain/issues/687/comments
6
2023-01-22T06:46:02Z
2023-09-27T16:15:13Z
https://github.com/langchain-ai/langchain/issues/687
1,552,022,734
687
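The trimming helper from issue 687 can be illustrated without API access; `count_tokens` below is a whitespace-based stand-in for a real tokenizer such as tiktoken, used only to keep the sketch runnable:

```python
def count_tokens(text):
    # Stand-in for a real tokenizer: counts whitespace-separated words.
    return len(text.split())

def reduce_below_limit(docs, limit):
    # Drop trailing documents until the combined count fits,
    # mirroring the recursive helper from the issue.
    total = count_tokens(" ".join(docs))
    return docs if total <= limit else reduce_below_limit(docs[:-1], limit)

docs = ["alpha beta", "gamma delta", "epsilon zeta eta"]
print(reduce_below_limit(docs, limit=5))
```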
[ "langchain-ai", "langchain" ]
null
add streaming to chains
https://api.github.com/repos/langchain-ai/langchain/issues/679/comments
4
2023-01-21T23:31:27Z
2023-11-16T16:09:12Z
https://github.com/langchain-ai/langchain/issues/679
1,551,939,855
679
[ "langchain-ai", "langchain" ]
https://github.com/hwchase17/langchain/blob/236ae93610a8538d3d0044fc29379c481acc6789/tests/integration_tests/vectorstores/test_faiss.py#L54 This test will fail because `FAISS.from_texts` will assign uuid4s as keys in its docstore, while `expected_docstore` has string numbers as keys.
test_faiss_with_metadatas: key mismatch in assert
https://api.github.com/repos/langchain-ai/langchain/issues/674/comments
0
2023-01-21T16:02:54Z
2023-01-22T00:08:16Z
https://github.com/langchain-ai/langchain/issues/674
1,551,839,786
674
[ "langchain-ai", "langchain" ]
If the model predicts a completion that begins with a stop sequence, resulting in no output, we'll get a `KeyError`:

```
Traceback (most recent call last):
  ...
  File ".../langchain/llms/openai.py", line 152, in _generate
    token_usage[_key] = response["usage"][_key]
KeyError: 'completion_tokens'
```

That's because `response` is an empty dict. You can reproduce this with the following prompt and params:

```python
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Remove all words from \"Original Text\".\n\nOriginal Text: yeah.\n\nCorrected Text:",
    temperature=0,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
```
Ungraceful handling of empty completions (i.e., completion that begins with a stop sequence)
https://api.github.com/repos/langchain-ai/langchain/issues/673/comments
1
2023-01-21T09:33:05Z
2023-01-24T08:26:27Z
https://github.com/langchain-ai/langchain/issues/673
1,551,749,091
673
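A defensive version of the usage accumulation could use `dict.get` so an empty response contributes nothing; `accumulate_usage` is a hypothetical sketch, not the actual `langchain/llms/openai.py` code:

```python
def accumulate_usage(token_usage, response,
                     keys=("completion_tokens", "prompt_tokens", "total_tokens")):
    # Tolerate responses whose "usage" dict is missing keys (or absent
    # entirely), instead of indexing unconditionally as in the traceback.
    usage = response.get("usage", {})
    for key in keys:
        if key in usage:
            token_usage[key] = token_usage.get(key, 0) + usage[key]
    return token_usage

totals = {}
accumulate_usage(totals, {"usage": {"prompt_tokens": 20, "total_tokens": 20}})
accumulate_usage(totals, {})  # empty completion: no usage dict at all
print(totals)
```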
[ "langchain-ai", "langchain" ]
I couldn't find the repo for the documentation, so opening this as an issue here.

***

On the [Getting Started page](https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html) for prompt templates, I believe the very last example

```python
print(dynamic_prompt.format(adjective=long_string))
```

should actually be

```python
print(dynamic_prompt.format(input=long_string))
```

The existing example produces `KeyError: 'input'` as expected.

***

On the [Create a custom prompt template](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/custom_prompt_template.html#id1) page, I believe the line

```python
Function Name: {kwargs["function_name"]}
```

should actually be

```python
Function Name: {kwargs["function_name"].__name__}
```

The existing example produces the prompt:

```
Given the function name and source code, generate an English language explanation of the function.
Function Name: <function get_source_code at 0x7f907bc0e0e0>
Source Code:
def get_source_code(function_name):
    # Get the source code of the function
    return inspect.getsource(function_name)
Explanation:
```

***

On the [Example Selectors](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/example_selectors.html) page, the first example does not define `example_prompt`, which is also subtly different from previous example prompts used. For user convenience, I suggest including

```python
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
```

in the code to be copy-pasted.
Code corrections in readthedocs
https://api.github.com/repos/langchain-ai/langchain/issues/670/comments
1
2023-01-21T00:45:03Z
2023-01-22T03:48:38Z
https://github.com/langchain-ai/langchain/issues/670
1,551,623,549
670
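The second correction in issue 670 hinges on the difference between interpolating a function object and its `__name__`; a minimal demonstration, with a local `get_source_code` mirroring the docs example:

```python
import inspect

def get_source_code(fn):
    # Return the source of the given function (as in the docs example).
    return inspect.getsource(fn)

# Interpolating the function object renders "<function ... at 0x...>",
# while .__name__ yields the readable identifier the docs intended.
print(f"Function Name: {get_source_code.__name__}")
```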
[ "langchain-ai", "langchain" ]
### SQLDatabaseChain

At the moment, when using the SQLDatabaseChain, if it generates a query that has several arguments it will give a positional argument limit error. For example, here is an SQL query generated in the chain:

`SELECT Product Name FROM inventory_table WHERE Expiration Date < date('now') ORDER BY Expiration Date LIMIT 10;`

It gives this error:

```
if len(args) != 1:
    raise ValueError("`run` supports only one positional argument.")
return self(args[0])[self.output_keys[0]]
```

> Also tried the SQLDatabaseSequentialChain, but it's meant to look at different tables, while this data is in one table with multiple columns. I am currently trying to split the data into different tables and use this as an alternative.

*Table info*

```
"Table 'inventory_table' has columns: index (BIGINT), id (BIGINT), Product Name (TEXT), Dosage Form (TEXT), Price (BIGINT), Stock Amount (BIGINT), Reorder Level (BIGINT), Expiration Date (TEXT)."
```
Only one argument allowed for SQLDatabaseChain.run
https://api.github.com/repos/langchain-ai/langchain/issues/669/comments
1
2023-01-21T00:20:17Z
2023-01-23T17:38:50Z
https://github.com/langchain-ai/langchain/issues/669
1,551,613,795
669
[ "langchain-ai", "langchain" ]
Hi, great lib, great work! https://langchain.readthedocs.io/en/latest/modules/memory/examples/adding_memory.html I'm considering to use langchain LLM with memory in a stateless environment, what is the best practice or any example? First thing coming to my mind is to implement a custom memory with something like Redis, Firestore, DB, mounted volume, etc. Thanks πŸš€
Memory in stateless environment
https://api.github.com/repos/langchain-ai/langchain/issues/666/comments
10
2023-01-20T16:56:04Z
2023-10-03T10:04:49Z
https://github.com/langchain-ai/langchain/issues/666
1,551,172,156
666
[ "langchain-ai", "langchain" ]
null
consistent usage of `run`, `__call__`, `predict`
https://api.github.com/repos/langchain-ai/langchain/issues/663/comments
1
2023-01-20T14:46:00Z
2023-08-24T16:19:48Z
https://github.com/langchain-ai/langchain/issues/663
1,550,967,190
663
[ "langchain-ai", "langchain" ]
When I try to replicate the code from the [documentation - Custom SQLAlchemy Schemas](https://langchain.readthedocs.io/en/latest/modules/llms/examples/llm_caching.html), there seems to be a problem with the table name. Although the custom schema says `__tablename__ = "llm_cache_fulltext"`, the table that's created seems to be named "full_llm_cache".

Code, followed by error message:

```
Base = declarative_base()

class FulltextLLMCache(Base):  # type: ignore
    """Postgres table for fulltext-indexed LLM Cache"""

    __tablename__ = "llm_cache_fulltext"
    id = Column(Integer, Sequence('cache_id'), primary_key=True)
    prompt = Column(String, nullable=False)
    llm = Column(String, nullable=False)
    idx = Column(Integer)
    response = Column(String)
    prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
    __table_args__ = (
        Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
    )

from sqlalchemy.engine import URL
url_object = "postgresql:// [db on render]"
engine = create_engine(url_object, echo=True)
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)
```

Error message:

```
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)
   ...:
2023-01-19 16:29:06,382 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-01-19 16:29:06,383 INFO sqlalchemy.engine.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
2023-01-19 16:29:06,383 INFO sqlalchemy.engine.Engine [cached since 1980s ago] {'name': 'full_llm_cache'}
```
table naming issue with SQLAlchemy
https://api.github.com/repos/langchain-ai/langchain/issues/655/comments
1
2023-01-19T15:38:13Z
2023-01-19T23:33:46Z
https://github.com/langchain-ai/langchain/issues/655
1,549,412,212
655
[ "langchain-ai", "langchain" ]
Hi! Amazing work :). I was trying to use this library to test an LLM called GPT-JT, but I got a timeout error:

> ValueError: Error raised by inference API: Model togethercomputer/GPT-JT-6B-v1 time out

I'm running this on a quite powerful server, and resources shouldn't be an issue. The code is probably not using the GPU, but I haven't found a way to make `langchain` use the GPU. Could you please tell me if this is automatic, and whether I'm missing something? Code for reproducibility:

```
from langchain import PromptTemplate, HuggingFaceHub, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="togethercomputer/GPT-JT-6B-v1"))

question = "Is an apple a fruit?"
print(llm_chain.run(question))
```
Missing GPU support? HuggingFace model too slow
https://api.github.com/repos/langchain-ai/langchain/issues/648/comments
10
2023-01-18T16:05:40Z
2023-09-29T16:10:38Z
https://github.com/langchain-ai/langchain/issues/648
1,538,271,346
648
[ "langchain-ai", "langchain" ]
Hi there! Just found out about this from [Dr Alan D. Thompson's Video](https://www.youtube.com/watch?v=wYGbY811oMo&ab_channel=DrAlanD.Thompson) (which was great!) I haven't been this excited about the state of GPT since ChatGPT was dropped. There are so many tools I wish (and am starting) to create. 😃

As one can notice with these LLMs, how you structure your inputs, and how well articulated they are, will have major impacts on your resulting outputs. And as it seems, the last thing we will need to do is just explain what it is we want, really clearly, with that notion in mind.

## One of my "Holy Grails" is - [Readme2Repo](https://github.com/fire17/Readme2Repo) [ **This could be huge** ]

Basically I want to create a new repo on GitHub with only a properly defined "end-product" version of the Readme (which was also probably generated/assisted via some prompts with an LLM beforehand). The service should load the repo from GitHub and analyze the Readme with a pre-prompt such as:

```
The following is the Readme to my project. I want you to give me a list of all of the files I will need for this project.
Then give me each and every file in a separate codeblock.
Give only the file contents; any commentary or explanations should go into the code as comments - in these comments give thorough explanations of why and how everything works.
Write tests based on the examples in the Readme; in the test code have a comment section detailing how to run all the tests and output the results.
If the tests fail, I will add them at the end of the Readme; make sure the code you provide deals with and solves all those issues.
Confirm with a short but detailed high-level explanation and summary of the project and all its capabilities, then start giving me all the files.
```

I think I can go on, fine-tuning this mega-prompt even more (also maybe pass it in chunks or ask for files separately at each step), but I hope you get the idea. With [LangChain](https://github.com/hwchase17/langchain/issues/new) as a base, this finally seems possible.

## Future features I imagine are:

- [Issue2PR](https://github.com/fire17/Issue2PR) - where for every issue I create on my own repo, a new branch is created, with modified files based on the current state of the repo and the issue given (base for the prompt directive). At the top level, this branch should run GitHub Actions to run all the tests, and if they pass - auto-submit a pull request. [ **This could be mega huge** ]
- [Repo2Repo Variations](https://github.com/fire17/Repo2Repo) - Auto-generate variations of a repo, which can include but are not limited to:
  - Optimizing Performance
  - Enhanced Stability, Durability, Dependency & Security
  - Cross-Language Variants, Cross-Platform Variants
  - Latest Modern Stack Variants (it can look for tools and platforms used in the stack, check online if there are other popular, more modern or better-supported frameworks and alternatives, consider a beneficial route, and create a branch with the changes)
  - Auto packager (to easily publish to PyPI, npm, etc.)

And the list goes on... Of course, all of this needs to be integrated with GitHub so it's seamless to the developers. Feels like the possibilities are endless; it's just insane. I hope y'all are catching my drift. Let me know with ♥️ or 👍 so we can see how many people are interested 🙏 + Share your ideas for automating repos. I hope we can find all the right people to develop this and bring it to the free open source community asap! All the best! And have a good one!

Edit: Until someone can find a more developed project that's active on this - I've made a repo for all those who are interested to pitch in their ideas 💛 [GithubGPT](https://github.com/fire17/GithubGPT)
[Feature/Project Request] Foss Readme2Repo / GithubGPT
https://api.github.com/repos/langchain-ai/langchain/issues/645/comments
2
2023-01-18T12:45:13Z
2023-09-12T06:52:17Z
https://github.com/langchain-ai/langchain/issues/645
1,537,941,195
645
[ "langchain-ai", "langchain" ]
Hey Team, I am running a recursive summarization using MapReduce Chain. I want to calculate cost of running the summarization and for that I want to know about the total token_usage. How can I achieve this functionality?
How to calculate cost of LLMchain?
https://api.github.com/repos/langchain-ai/langchain/issues/644/comments
4
2023-01-18T09:55:04Z
2023-01-24T09:16:16Z
https://github.com/langchain-ai/langchain/issues/644
1,537,706,770
644
[ "langchain-ai", "langchain" ]
null
standardize stop token and extra kwargs across all llm wrappers
https://api.github.com/repos/langchain-ai/langchain/issues/643/comments
2
2023-01-18T08:17:45Z
2023-08-24T16:19:57Z
https://github.com/langchain-ai/langchain/issues/643
1,537,577,489
643
[ "langchain-ai", "langchain" ]
I'm getting an openai `RateLimitError` when embedding my chunked texts with `"text-embedding-ada-002"`, which I have rate limited to 8 chunks of <1024 every 15 secs.

> openai.error.RateLimitError: Rate limit reached for default-global-with-image-limits in organization org-xxx on requests per min. Limit: 60.000000 / min. Current: 70.000000 / min. Contact support@openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://beta.openai.com/account/billing to add a payment method.

Every 15 seconds, I'm calling this once:

```py
for ...
    search_index.add_texts(texts=chunked[i : i + 8])
    time.sleep(15)
```

The chunks list `chunked` was created using

```py
text_splitter = NLTKTextSplitter(chunk_size=1024)
chunked = [chunk
    for source in sources
    for chunk in text_splitter.split_text(source)
]
```

Why is my request rate exceeding 70/min when I'm only embedding at ~32 chunks/min? Does each chunk take more than 1 request to process? Any way to better rate limit my embedding queries? Thanks
Rate limit error
https://api.github.com/repos/langchain-ai/langchain/issues/634/comments
13
2023-01-17T07:07:41Z
2024-07-24T15:06:46Z
https://github.com/langchain-ai/langchain/issues/634
1,535,885,020
634
[ "langchain-ai", "langchain" ]
Following on from https://github.com/hwchase17/langchain/issues/628: add a method that ensembles the results from the two selectors into a final set. That way it uses two different but complementary approaches to retrieving relevant examples.
ensemble example selector results from diff methods
https://api.github.com/repos/langchain-ai/langchain/issues/630/comments
1
2023-01-16T16:17:06Z
2023-08-24T16:20:02Z
https://github.com/langchain-ai/langchain/issues/630
1,535,184,945
630
[ "langchain-ai", "langchain" ]
add example selector that picks based on ngram overlap https://arxiv.org/abs/2212.02437
add ngram overlap example selector
https://api.github.com/repos/langchain-ai/langchain/issues/628/comments
1
2023-01-16T16:09:49Z
2023-08-24T16:20:08Z
https://github.com/langchain-ai/langchain/issues/628
1,535,169,820
628
[ "langchain-ai", "langchain" ]
When I run the minimal example provided in the [quickstart](https://langchain.readthedocs.io/en/latest/getting_started/getting_started.html#building-a-language-model-application), I get an error:

```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
text = "What would be a good company name a company that makes colorful socks?"
print(llm(text))
```

```
openai.error.InvalidRequestError: Unrecognized request argument supplied: request_timeout
```

Support for `request_timeout` was added to LangChain recently (https://github.com/hwchase17/langchain/pull/398), but the [OpenAI docs](https://beta.openai.com/docs/api-reference/completions/create) don't mention that request parameter. By default LangChain passes `'request_timeout': None`, and removing it from the request fixes the problem.
Unrecognized request argument: request_timeout
https://api.github.com/repos/langchain-ai/langchain/issues/626/comments
2
2023-01-16T06:49:59Z
2023-01-16T08:08:32Z
https://github.com/langchain-ai/langchain/issues/626
1,534,391,322
626
[ "langchain-ai", "langchain" ]
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/custom_prompt_template.html Link here is missing <img width="614" alt="Screenshot 2023-01-15 at 9 45 21 PM" src="https://user-images.githubusercontent.com/16283396/212587689-ad2800d4-3a56-463c-85d1-fd7d374bf99c.png">
Default Prompt Template examples are missing
https://api.github.com/repos/langchain-ai/langchain/issues/625/comments
1
2023-01-16T02:45:44Z
2023-08-24T16:20:12Z
https://github.com/langchain-ai/langchain/issues/625
1,534,178,183
625
[ "langchain-ai", "langchain" ]
This simple piece of code:

```
from langchain.agents import load_tools, initialize_agent
from langchain.llms import Cohere

llm = Cohere(model="xlarge", temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
```

Returns this error:

> Traceback (most recent call last):
>   File "main.py", line 41, in <module>
>     agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
>   File "/home/runner/TestLangChain/venv/lib/python3.8/site-packages/langchain/agents/loading.py", line 54, in initialize_agent
>     return AgentExecutor.from_agent_and_tools(
>   File "/home/runner/TestLangChain/venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 214, in from_agent_and_tools
>     return cls(
>   File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
> pydantic.error_wrappers.ValidationError: 1 validation error for AgentExecutor
> tools -> 1
>   cannot pickle '_queue.SimpleQueue' object (type=type_error)

If I initialize an agent with a list of tools which don't require an LLM, there is no error.
Trying to initialize an agent with Cohere and LLM-requiring tool causes a ValidationError
https://api.github.com/repos/langchain-ai/langchain/issues/617/comments
3
2023-01-14T19:13:50Z
2023-08-24T16:20:18Z
https://github.com/langchain-ai/langchain/issues/617
1,533,437,131
617
[ "langchain-ai", "langchain" ]
For example, here is my python code:

```
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name = "Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or the current state of the world"
    ),
]
memory = ConversationBufferMemory(memory_key="chat_history")
llm = OpenAI(temperature=0)
agent_chain = initialize_agent(tools, llm, agent="conversational-react-description", verbose=True, memory=memory)

print(agent_chain.run(input="hi, i am Aryan"))
print(agent_chain.run(input="what's my name?"))
```

So far it's fine: when I run this in python it remembers context. However, if I use it in Flask and send input as an API request, it doesn't remember context. **Basically, when I send input over a Flask request, the agent doesn't remember context.**
Need help to continue same chain over API Request?
https://api.github.com/repos/langchain-ai/langchain/issues/614/comments
3
2023-01-14T13:14:28Z
2023-09-18T16:25:16Z
https://github.com/langchain-ai/langchain/issues/614
1,533,248,871
614
[ "langchain-ai", "langchain" ]
Consider adding `white-space: break-spaces;` to `<pre>` tags on the docs to improve readability of code snippets, especially since there's usually a lot of text in the prompt to read through. As a side-effect, they will make pages seem longer. But that might not be a major issue. In a bit I can add this in a PR when I get a chance; but if someone has the local env setup already and can make this change to the sphinx docs, that would be great. Before: <img width="500" alt="image" src="https://user-images.githubusercontent.com/3076502/212448408-096ed88e-6bbe-4571-9dee-09b61b8d02dd.png"> After: <img width="500" alt="image" src="https://user-images.githubusercontent.com/3076502/212448362-2f4b9d98-f828-405e-8479-9c032b477c71.png">
[Docs] Wrap text in code snippets to make prompts easier to read
https://api.github.com/repos/langchain-ai/langchain/issues/613/comments
1
2023-01-14T03:19:09Z
2023-01-14T15:39:30Z
https://github.com/langchain-ai/langchain/issues/613
1,533,117,543
613
[ "langchain-ai", "langchain" ]
Summarize returns an empty result when intermediate documents contain the hashtag `#`. Otherwise, using a different prompt which doesn't generate hashtags, all is cool.

```
prompt_template = """Write a tweet from the following text: {text} """
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=PROMPT, combine_prompt=PROMPT)
chain.run(docs)
```
[Chain] Summarize returns empty result when intermediate documents contain hashtag `#`
https://api.github.com/repos/langchain-ai/langchain/issues/603/comments
4
2023-01-13T00:24:38Z
2023-01-24T08:29:01Z
https://github.com/langchain-ai/langchain/issues/603
1,531,528,097
603
[ "langchain-ai", "langchain" ]
Pinecone's vector DB allows for per-vector metadata which is filterable in `index.query()` ([pinecone docs](https://docs.pinecone.io/docs/metadata-filtering#querying-an-index-with-metadata-filters)). I suggest exposing this filter argument in `Pinecone.similarity_search()` and `Pinecone.similarity_search_with_score()`.
Add metadata filtering to Pinecone index query
https://api.github.com/repos/langchain-ai/langchain/issues/600/comments
0
2023-01-12T22:11:04Z
2023-01-13T05:15:53Z
https://github.com/langchain-ai/langchain/issues/600
1,531,431,925
600
[ "langchain-ai", "langchain" ]
I occasionally get this error when using the 'open-meteo-api' tool: openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6536 tokens (6280 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
Error when using the 'open-meteo-api' tool
https://api.github.com/repos/langchain-ai/langchain/issues/598/comments
5
2023-01-12T17:50:38Z
2023-09-25T16:20:23Z
https://github.com/langchain-ai/langchain/issues/598
1,531,137,139
598
[ "langchain-ai", "langchain" ]
What is the best way to build an agent that can read from courtlistener.com & perform inference based on it? Generally curious if it would be possible to integrate courtlistener.com's API into the Langchain ecosystem. They already have their own internal search engine accessible via API & the web, but would love to have a connector to it. https://www.courtlistener.com/help/api/rest/ https://www.courtlistener.com/api/rest/v3/
[Feature Request] Courtlistener.com API
https://api.github.com/repos/langchain-ai/langchain/issues/589/comments
2
2023-01-12T00:43:40Z
2023-08-24T16:20:22Z
https://github.com/langchain-ai/langchain/issues/589
1,529,917,685
589
[ "langchain-ai", "langchain" ]
It would be convenient if the PythonREPL returned the value of the last expression when there's no stdout. Also, improve the tool description to note that all Python outputs must use print().
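A rough sketch of the idea using `ast` — `run_and_capture` below is hypothetical, not the existing PythonREPL implementation:

```python
import ast
import contextlib
import io

def run_and_capture(code: str) -> str:
    """Run code; return stdout if any, else the value of the last expression."""
    tree = ast.parse(code)
    stdout = io.StringIO()
    env = {}
    value = None
    with contextlib.redirect_stdout(stdout):
        # If the final statement is a bare expression, evaluate it separately
        # so we can capture its value when nothing was printed.
        if tree.body and isinstance(tree.body[-1], ast.Expr):
            last = ast.Expression(tree.body.pop(-1).value)
            ast.fix_missing_locations(last)
            exec(compile(tree, "<repl>", "exec"), env)
            value = eval(compile(last, "<repl>", "eval"), env)
        else:
            exec(compile(tree, "<repl>", "exec"), env)
    out = stdout.getvalue()
    return out if out else (repr(value) if value is not None else "")
```

With this, `run_and_capture("1 + 1")` gives the agent something to observe even though the generated code never called print().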
Improve python repl tool
https://api.github.com/repos/langchain-ai/langchain/issues/586/comments
4
2023-01-11T20:44:57Z
2023-09-28T16:12:59Z
https://github.com/langchain-ai/langchain/issues/586
1,529,670,774
586
[ "langchain-ai", "langchain" ]
Hi, I'm always thankful that you opened such a great repo! I read the code of [ConversationalAgent](https://github.com/hwchase17/langchain/blob/74932f25167331101e6ca22b0612ccd57f78b81b/langchain/agents/conversational/base.py#L16) and found `human_prefix` is not used in [FORMAT_INSTRUCTIONS](https://github.com/hwchase17/langchain/blob/74932f25167331101e6ca22b0612ccd57f78b81b/langchain/agents/conversational/prompt.py#L14). So I wonder whether human_prefix is necessary or not. Thank you!
Is human_prefix necessary for ConversationalAgent?
https://api.github.com/repos/langchain-ai/langchain/issues/571/comments
1
2023-01-10T02:39:55Z
2023-08-24T16:20:28Z
https://github.com/langchain-ai/langchain/issues/571
1,526,667,087
571
[ "langchain-ai", "langchain" ]
When using a conversational agent, a correct answer from a search tool was ignored. The search was for "who is the monarch of England?" Here's the output: ``` > Entering new AgentExecutor chain... Thought: Do I need to use a tool? Yes Action: Search Action Input: monarch of England Observation: Charles III Thought: Do I need to use a tool? No AI: The current monarch of England is Queen Elizabeth II. > Finished chain. ``` Here's the chain relevant code: ``` llm = OpenAI(temperature=0) search = GoogleSearchAPIWrapper() tools = [ Tool( "Search", SerpAPIWrapper().run, "A search engine. Useful for when you need to answer questions about current events. Input should be a search query.", ), Tool( "PAL-MATH", PALChain.from_math_prompt(llm).run, "A language model that is really good at solving complex word math problems. Input should be a math problem.", ), ] memory = ConversationBufferMemory(memory_key="chat_history") chain = initialize_agent(tools, llm, agent="conversational-react-description", verbose=True, memory=memory) ```
Conversational Agent ignored correct answer from Search
https://api.github.com/repos/langchain-ai/langchain/issues/566/comments
5
2023-01-09T18:48:18Z
2023-09-25T16:20:34Z
https://github.com/langchain-ai/langchain/issues/566
1,526,129,520
566
[ "langchain-ai", "langchain" ]
I'm trying to implement an agent that has a Yelp tool to find the best restaurants in an area. I have been using LLMRequests, but it seem as though the beautiful soup output of the requests response is very noisy (see below). For something like this, should one implement a different chain (e.g. a 'Yelp' chain that integrates with Yelp API) instead? ` 'Between >>> and <<< are the raw search result text from yelp\n Extract the names of top restaurants or say "not found" if there are no results.\n >>> \n\n\nSearch Businesses In San Francisco, CA - Yelp\nYelpFor BusinessesWrite a ReviewLog InSign UpRestaurantsHome ServicesAuto ServicesMoreMoreFilters$$$$$$$$$$SuggestedOpen Now\xa0--:--WaitlistFeaturesOffering a DealOffers DeliveryOffers TakeoutGood for KidsSee allNeighborhoodsAlamo SquareAnza VistaAshbury HeightsBalboa TerraceSee allDistanceBird\'s-eye ViewDriving (5 mi.)Biking (2 mi.)Walking (1 mi.)Within 4 blocksBrowsing San Francisco, CA businessesSort:RecommendedAllPriceOpen NowWaitlist1.\xa0Bi-Rite Creamery10004Ice Cream & Frozen Yogurt$$MissionThis is a placeholderβ€œNice little shop near Mission Dolores Park! Like a 1 min walk away\n\nThey had a buncha unique flavors to choose from, I tried banana, cream brΓ»lΓ©e, honey…”\xa0more2.\xa0Brenda’s French Soul Food11895Breakfast & BrunchSouthernCajun/Creole$$TenderloinThis is a placeholderβ€œToday I go there with my coworker and then have orders foods \nAfter having the food \nWe settle bills and the server charges more than usual price \nAfter that I…”\xa0more3.\xa0Gary Danko5800American (New)FrenchWine Bars$$$$Russian HillThis is a placeholderβ€œAmazing service and food! An error was made with our reservation so we showed up and weren\'t on their books! After showing proof of the res, they offered for…”\xa0more4.\xa0Tartine Bakery8666BakeriesCafesDesserts$$MissionThis is a placeholderβ€œSo cute here! The treats were so delicious. Honestly, the brownies were so very delish! 
We had a a good variety of treats to enjoy and they were all really…”\xa0more5.\xa0Mitchells Ice Cream4623Ice Cream & Frozen YogurtCustom Cakes$Bernal HeightsThis is a placeholder6.\xa0Hog Island Oyster6782SeafoodSeafood MarketsLive/Raw Food$$EmbarcaderoThis is a placeholderβ€œGreat place. Fresh oysters ! I love the area and the view of the bay! Great food and ambiance!”\xa0more7.\xa0House of Prime Rib8300American (Traditional)SteakhousesWine Bars$$$Nob HillThis is a placeholderβ€œI\'m not going to say much but this is basically the best Prime Rib restaurant in America. It was so good that I brought my mom here from out of town. Here is…”\xa0more8.\xa0Fog Harbor Fish House8700SeafoodWine BarsCocktail Bars$$Fisherman\'s WharfThis is a placeholderOutdoor seatingFull barLive wait time: 12 - 22 minsβ€œFirst time here and I wasn\'t disappointed, the foods are yummy. The staffs are very good specially to our server Shawn.”\xa0moreFind a Table9.\xa0B Patisserie3196BakeriesPatisserie/Cake ShopMacarons$$Lower Pacific HeightsThis is a placeholderβ€œOne of my absolute favorite pastry shops in SF. Their Kouign amann are the best that I\'ve ever tasted. They have classic flavors (plain and chocolate) and also…”\xa0more10.\xa0Burma Superstar7367Burmese$$Inner RichmondThis is a placeholderβ€œGreat food. great service. \nOutdoor seating has heater when it\'s cold out. \nHighly recommend the place.”\xa0moreStart Order1234567891 of 24Can\'t find the business?Adding a business to Yelp is always free.Add businessGot search feedback? 
Help us improve.More NearbyActive LifeArts & EntertainmentAutomotiveBeauty & SpasEducationEvent Planning & ServicesFoodHealth & MedicalHome ServicesHotels & TravelLocal FlavorLocal ServicesNightlifePetsProfessional ServicesPublic Services & GovernmentRestaurantsShoppingMore categoriesAboutAbout YelpCareersPressInvestor RelationsTrust & SafetyContent GuidelinesAccessibility StatementTerms of ServicePrivacy PolicyAd ChoicesYour Privacy ChoicesDiscoverYelp Project Cost GuidesCollectionsTalkEventsYelp BlogSupportYelp MobileDevelopersRSSYelp for BusinessYelp for BusinessBusiness Owner LoginClaim your Business PageAdvertise on YelpYelp for Restaurant OwnersTable ManagementBusiness Success StoriesBusiness SupportYelp Blog for BusinessLanguagesEnglishCountriesUnited StatesAboutBlogSupportTermsPrivacy PolicyYour Privacy ChoicesCopyright Β© 2004–2023 Yelp Inc. Yelp, , and related marks are registered trademarks of Yelp.Some Data By Acxiom\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n <<<\n Extracted:`
Question on LLMRequests
https://api.github.com/repos/langchain-ai/langchain/issues/560/comments
4
2023-01-08T03:34:54Z
2023-11-07T16:09:14Z
https://github.com/langchain-ai/langchain/issues/560
1,524,303,212
560
[ "langchain-ai", "langchain" ]
should be doable with callbacks
count tokens used in chain
https://api.github.com/repos/langchain-ai/langchain/issues/558/comments
4
2023-01-07T20:44:41Z
2023-01-24T08:29:36Z
https://github.com/langchain-ai/langchain/issues/558
1,524,139,524
558
[ "langchain-ai", "langchain" ]
This line seems to be incorrect: https://github.com/hwchase17/langchain/blob/1f248c47f36af986be7ea647a6af223c9bb34b2b/langchain/llms/base.py#L112. Should be either `prompts = prompts[missing_prompt_idxs[i]]` or `prompts = missing_prompts[i]`.
Bug with LLM caching
https://api.github.com/repos/langchain-ai/langchain/issues/551/comments
1
2023-01-05T23:37:12Z
2023-01-06T15:30:12Z
https://github.com/langchain-ai/langchain/issues/551
1,521,621,356
551
[ "langchain-ai", "langchain" ]
```python from langchain.agents import load_tools, initialize_agent, get_all_tool_names from langchain.llms.huggingface_pipeline import HuggingFacePipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_id = "sberbank-ai/mGPT" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) hf = HuggingFacePipeline(pipeline=pipe) news_api_key = "oops my api key" tool_names = ["news-api", "llm-math"] tools = load_tools(tool_names, llm=hf, news_api_key=news_api_key) agent = initialize_agent(tools, hf, agent="zero-shot-react-description", verbose=True, return_intermediate_steps=True) response = agent({"input":"how many people live in Brazil?"}) print(response["intermediate_steps"]) ``` I have the code above but it doesnt work as expected, it just says: `ValueError: Could not parse LLM output: ' I know that there are more than 100,000'` What am I doing wrong?
Could not parse LLM output
https://api.github.com/repos/langchain-ai/langchain/issues/549/comments
5
2023-01-05T20:19:27Z
2023-02-04T18:47:40Z
https://github.com/langchain-ai/langchain/issues/549
1,521,353,737
549
[ "langchain-ai", "langchain" ]
Hi LangChain Team!! Thank you for this awesome library. Hands down one of the best projects I have seen so far. It would be helpful if there is a way to access provider-specific information when they're loaded as tools as part of an Agent or a Chain. For Example: LLMs After running this: `llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)` One can run this: `llm_result.llm_output` But if the above LLM was added as tools like the following, is there a way to access the `some_result.llm_output`? `tools = load_tools(["serpapi", "llm-math"], llm=llm)`
Ability to access provider-specific information from within Chains or Agents
https://api.github.com/repos/langchain-ai/langchain/issues/547/comments
5
2023-01-05T17:25:05Z
2023-09-10T16:46:40Z
https://github.com/langchain-ai/langchain/issues/547
1,521,104,086
547
[ "langchain-ai", "langchain" ]
``` ValidationError: 1 validation error for RefineDocumentsChain return_intermediate_steps extra fields not permitted (type=value_error.extra) ```
bug in new `return_intermediate_steps` arg
https://api.github.com/repos/langchain-ai/langchain/issues/545/comments
2
2023-01-05T16:02:14Z
2023-07-14T17:07:22Z
https://github.com/langchain-ai/langchain/issues/545
1,520,982,156
545
[ "langchain-ai", "langchain" ]
Something like: ```python PROMPT = PromptTemplate( input_variables=[ "name", "input", "time", ], default_values={ "name": "John", "time": lambda: str(datetime.now()), }, template=some_template, ) ```
Add option for specifying default values in PromptTemplate (maybe getters?)
https://api.github.com/repos/langchain-ai/langchain/issues/544/comments
2
2023-01-05T07:07:58Z
2023-09-10T16:46:45Z
https://github.com/langchain-ai/langchain/issues/544
1,520,209,619
544
[ "langchain-ai", "langchain" ]
Hi, I just noticed a dead link in the documentation. Description: If you are [here](https://langchain.readthedocs.io/en/latest/modules/llms.html) and would like to click on _References_ at the bottom of the page, you end up with a [404](https://langchain.readthedocs.io/reference/modules/llms.html). I think the correct link would be [this](https://langchain.readthedocs.io/en/latest/reference/modules/llms.html) one. Thought I might bring this up :)
Dead link in Modules/LLMs section of documentation
https://api.github.com/repos/langchain-ai/langchain/issues/532/comments
0
2023-01-04T11:00:43Z
2023-01-05T08:52:04Z
https://github.com/langchain-ai/langchain/issues/532
1,518,764,115
532
[ "langchain-ai", "langchain" ]
Would it be possible to have the textsplitter ensure the text it's splitting is always <= chunk_size? Seeing this issue while trying to embed large documents
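One possible post-processing step, sketched in characters for simplicity (a real implementation would measure length in tokens, so `enforce_chunk_size` here is only illustrative):

```python
def enforce_chunk_size(chunks, chunk_size):
    """Hard-split any chunk that still exceeds chunk_size after splitting."""
    result = []
    for chunk in chunks:
        # Slice oversized chunks into chunk_size pieces; short chunks pass through.
        while len(chunk) > chunk_size:
            result.append(chunk[:chunk_size])
            chunk = chunk[chunk_size:]
        if chunk:
            result.append(chunk)
    return result
```

Running this as a final pass after the splitter would guarantee nothing handed to the embedding model blows past the limit, at the cost of occasionally splitting mid-sentence.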
[Feature Request] Have textsplitter always be <= chunk_size
https://api.github.com/repos/langchain-ai/langchain/issues/528/comments
7
2023-01-04T06:35:57Z
2023-09-27T16:15:33Z
https://github.com/langchain-ai/langchain/issues/528
1,518,407,159
528
[ "langchain-ai", "langchain" ]
[Here](https://github.com/hwchase17/langchain/blob/master/langchain/llms/base.py#L70), instead of: ```python generations = [existing_prompts[i] for i in range(len(prompts))] ``` it should be ```python generations = [ existing_prompts[i] for i in range(len(prompts)) if i not in missing_prompt_idxs ] ``` @hwchase17 lemme know if I am missing something.
`langchain.llms.base.BaseLLM` has a bug
https://api.github.com/repos/langchain-ai/langchain/issues/527/comments
4
2023-01-04T05:59:06Z
2023-01-05T02:39:07Z
https://github.com/langchain-ai/langchain/issues/527
1,518,368,739
527
[ "langchain-ai", "langchain" ]
Greetings! Thanks for all the work on a great library. A suggestion I'd recommend is the addition of separate methods to the serpapi wrapper to pull values from the SerpAPI json produced for searches. There's a lot of data in there that could be used to aid downstream tasks. Here's an [example JSON](https://serpapi.com/searches/c5ce8ca22344ac58/63b3101c3517646fe749a0e0.json). In particular, I'd be interested in methods to retrieve the 'snippet' and 'url' values, or to pull the search id values for each search to enable further API calls directly from [SerpAPI's search archive](https://serpapi.com/search-archive-api).
Add methods to pull json data from SerpAPI wrapper
https://api.github.com/repos/langchain-ai/langchain/issues/522/comments
4
2023-01-03T16:37:46Z
2023-11-21T16:08:05Z
https://github.com/langchain-ai/langchain/issues/522
1,517,645,443
522
[ "langchain-ai", "langchain" ]
Hello! First of all, thank you for opening such excellent code! I am truly excited and want to contribute somehow. I have been reading the code of [openai.py](https://github.com/hwchase17/langchain/blob/master/langchain/llms/openai.py) and found `Extra.forbid` [here](https://github.com/hwchase17/langchain/blob/3efec55f939a9758682488bf23c1d7646ee35a6f/langchain/llms/openai.py#L58). However, I can initialize it like the code below. ```python OpenAI(temperatures=0, hey = "hey", hello = "how are you?") ``` The reason is that the OpenAI class sets all extra inputs as model_args [here](https://github.com/hwchase17/langchain/blob/3efec55f939a9758682488bf23c1d7646ee35a6f/langchain/llms/openai.py#L60). I assume this validation was added to accommodate future changes in the GPT API parameters, but I think it is likely to cause confusion for the reader. Therefore my suggestion is simply to do one of the following: 1. delete the `Extra.forbid` at line 58, 2. or delete the validation part, which starts from line 60, and add documentation like `if you want to set more parameters of the GPT API, please add them in model_args`. Please tell me how you feel about my suggestion. Thank you. Yongtae
Is Extra.forbid necessary?
https://api.github.com/repos/langchain-ai/langchain/issues/519/comments
10
2023-01-03T09:37:54Z
2023-01-05T02:33:09Z
https://github.com/langchain-ai/langchain/issues/519
1,517,138,568
519
[ "langchain-ai", "langchain" ]
- windows 10 - python 3.7 - langchain 0.0.27 ``` ImportError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_35084\2350772706.py in <module> ----> 1 from gpt_index import GPTTreeIndex, SimpleDirectoryReader 2 documents = SimpleDirectoryReader('data').load_data() 3 index = GPTTreeIndex(documents) c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\__init__.py in <module> 13 14 # indices ---> 15 from gpt_index.indices.keyword_table import ( 16 GPTKeywordTableIndex, 17 GPTRAKEKeywordTableIndex, c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\__init__.py in <module> 2 3 # indices ----> 4 from gpt_index.indices.keyword_table.base import GPTKeywordTableIndex 5 from gpt_index.indices.keyword_table.rake_base import GPTRAKEKeywordTableIndex 6 from gpt_index.indices.keyword_table.simple_base import GPTSimpleKeywordTableIndex c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\keyword_table\__init__.py in <module> 2 3 # indices ----> 4 from gpt_index.indices.keyword_table.base import GPTKeywordTableIndex 5 from gpt_index.indices.keyword_table.rake_base import GPTRAKEKeywordTableIndex 6 from gpt_index.indices.keyword_table.simple_base import GPTSimpleKeywordTableIndex c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\keyword_table\base.py in <module> 13 14 from gpt_index.data_structs.data_structs import KeywordTable ---> 15 from gpt_index.indices.base import DOCUMENTS_INPUT, BaseGPTIndex 16 from gpt_index.indices.keyword_table.utils import extract_keywords_given_response 17 from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\base.py in <module> 17 from gpt_index.data_structs.data_structs import IndexStruct, Node 18 from gpt_index.data_structs.struct_type import IndexStructType ---> 19 from gpt_index.indices.prompt_helper import PromptHelper 20 from gpt_index.indices.query.query_runner import QueryRunner 21 from 
gpt_index.indices.query.schema import QueryConfig, QueryMode c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\prompt_helper.py in <module> 10 from gpt_index.constants import MAX_CHUNK_OVERLAP 11 from gpt_index.data_structs.data_structs import Node ---> 12 from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor 13 from gpt_index.langchain_helpers.text_splitter import TokenTextSplitter 14 from gpt_index.prompts.base import Prompt c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\langchain_helpers\chain_wrapper.py in <module> 5 6 from langchain import Cohere, LLMChain, OpenAI ----> 7 from langchain.llms import AI21 8 from langchain.llms.base import BaseLLM 9 ImportError: cannot import name 'AI21' from 'langchain.llms' (c:\Users\faris\.conda\envs\gpt\lib\site-packages\langchain\llms\__init__.py) ```
cannot import name 'AI21' from 'langchain.llms' (...\.conda\envs\gpt\lib\site-packages\langchain\llms\__init__.py)
https://api.github.com/repos/langchain-ai/langchain/issues/510/comments
12
2023-01-02T15:52:28Z
2023-09-12T01:56:44Z
https://github.com/langchain-ai/langchain/issues/510
1,516,524,603
510
[ "langchain-ai", "langchain" ]
What is the best way to build an agent that can read from Google docs and perform inference based on it (and an input)? I see there are a lot of Search functionality, but it seems to be general search (versus search over a particular source).
Integration with Google Docs
https://api.github.com/repos/langchain-ai/langchain/issues/509/comments
3
2023-01-02T03:29:07Z
2023-08-24T16:20:44Z
https://github.com/langchain-ai/langchain/issues/509
1,515,992,797
509
[ "langchain-ai", "langchain" ]
right now, if max iterations is exceeded for an agent run, we just exit early with a constant string. a more graceful way would be allowing one final call to the llm and asking it to make a prediction with the information that currently exists
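Sketch of the control flow, with `step_fn` and `final_answer_fn` as hypothetical stand-ins for an agent step and a final best-effort LLM call:

```python
def run_agent(step_fn, max_iterations, final_answer_fn):
    """On hitting the iteration limit, ask the LLM for a best-effort answer
    instead of returning a constant string."""
    for _ in range(max_iterations):
        finished, result = step_fn()
        if finished:
            return result
    # Limit reached: one last generation using whatever the agent has so far.
    return final_answer_fn()
```

The final prompt could include the accumulated intermediate steps so the prediction is grounded in what the agent already observed.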
more graceful handling of "max_iterations" for agent
https://api.github.com/repos/langchain-ai/langchain/issues/508/comments
0
2023-01-02T03:27:38Z
2023-01-03T15:46:10Z
https://github.com/langchain-ai/langchain/issues/508
1,515,992,387
508
[ "langchain-ai", "langchain" ]
Hi, I'm trying to use the class StuffDocumentsChain but have not seen any usage example. I simply wish to reload an existing code fragment and re-shape it (iterate). When doing so from scratch it works fine, since the memory is provided to the next calls and the context is maintained, so work can be done within the same "session". The problem is when I load an existing file and try to start with it: after the response arrives properly, I get this error: raise ValueError(f"One input key expected got {prompt_input_keys}") ValueError: One input key expected got ['reshape', 'human_input'] This is my template, and I'm using {reshape} as the starting input of the file contents, but for some reason it fails; that's why I started looking into the StuffDocumentsChain. ''' {reshape} /* {history} */ /* {human_input} */ ''' I also tried to change the memory before sending, to "populate it" with pre-loaded file contents, but was not able to do so.
How to use StuffDocumentsChain / Error when using third input variable
https://api.github.com/repos/langchain-ai/langchain/issues/504/comments
3
2023-01-01T09:11:10Z
2023-09-10T16:46:50Z
https://github.com/langchain-ai/langchain/issues/504
1,515,412,705
504
[ "langchain-ai", "langchain" ]
for `load_qa_chain` we could unify the args by having a new arg name `return_steps` to replace the names `return_refine_steps` and `return_map_steps` (it would do the same thing as those existing args)
load_qa_chain return_steps arg
https://api.github.com/repos/langchain-ai/langchain/issues/499/comments
0
2022-12-30T21:30:04Z
2023-01-03T15:45:10Z
https://github.com/langchain-ai/langchain/issues/499
1,514,853,240
499
[ "langchain-ai", "langchain" ]
Hi there, I'm trying to use huggingface embeddings with elasticvectorsearch, but can't initialize `ElasticVectorSearch` class. I run elastic search locally in docker: ``` docker network create elastic docker run --name es01 --net elastic -p 9200:9200 -it docker.elastic.co/elasticsearch/elasticsearch:8.5.3 ``` Then I run the following code: ```python ... hf = HuggingFaceEmbeddings( model_name="sentence-transformers/all-mpnet-base-v2" ) elastic_vector_search = ElasticVectorSearch.from_texts( texts, hf, elasticsearch_url="https://elastic:FR1lY9OEo89NxkbWoj9Z@localhost:9200" ) ``` and get ``` elastic_transport.TlsError: TLS error caused by: TlsError(TLS error caused by: SSLError([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129))) ``` It seems that to fix this issue I'd need to provide additional params like cert, etc. but `ElasticVectorSearch` only takes a connection string. e.g. when I make a curl request: ``` curl -u elastic https://localhost:9200 ``` gives me the following ``` curl: (60) SSL certificate problem: self signed certificate in certificate chain More details here: https://curl.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. ``` when I run the same command with -k, it works ``` curl -u elastic https://localhost:9200 -k ``` it also works when I pass certificate file ``` curl --cacert http_ca.crt -u elastic https://localhost:9200 ``` and works in python if I pass cert file: ```python es_client = elasticsearch.Elasticsearch(elasticsearch_url, ca_certs="http_ca.crt") ``` Any ideas how to fix it?
ElasticVectorSearch - SSL: CERTIFICATE_VERIFY_FAILED
https://api.github.com/repos/langchain-ai/langchain/issues/498/comments
3
2022-12-30T15:03:40Z
2023-01-02T11:20:45Z
https://github.com/langchain-ai/langchain/issues/498
1,514,580,482
498
[ "langchain-ai", "langchain" ]
How is it possible to tell the `serpapi` tool to use DuckDuckGo instead of Google?
different search engine
https://api.github.com/repos/langchain-ai/langchain/issues/474/comments
8
2022-12-29T17:51:40Z
2023-01-18T07:14:40Z
https://github.com/langchain-ai/langchain/issues/474
1,513,936,430
474
[ "langchain-ai", "langchain" ]
Our chat bot would occasionally run into this error: ``` e.py", line 65, in generate new_results = self._generate(missing_prompts, stop=stop) File "/home/ubuntu/.cache/pypoetry/virtualenvs/lm-bot-w2c4PdJl-py3.8/lib/python3.8/site-packages/langchain/llms/openai.py", line 151, in _generate token_usage[_key] = response["usage"][_key] KeyError: 'completion_tokens' ```
KeyError: 'completion_tokens'
https://api.github.com/repos/langchain-ai/langchain/issues/472/comments
1
2022-12-29T16:11:06Z
2022-12-30T03:16:36Z
https://github.com/langchain-ai/langchain/issues/472
1,513,869,625
472
[ "langchain-ai", "langchain" ]
not sure what best integration looks like
integrate with https://github.com/stanford-crfm/helm
https://api.github.com/repos/langchain-ai/langchain/issues/465/comments
3
2022-12-29T04:42:06Z
2023-08-24T16:20:54Z
https://github.com/langchain-ai/langchain/issues/465
1,513,369,552
465
[ "langchain-ai", "langchain" ]
Hi, I'm trying to use an LLM chain with HuggingFaceHub (facebook/opt-1.3b) and I'm getting a time out error: `ValueError: Error raised by inference API: Model facebook/opt-1.3b time out` I would like to know if I'm doing something wrong, or if there is a real issue. This is my [colab](https://colab.research.google.com/drive/158xobl37kvMlvAvwzmSngENYm3GvrFlC?usp=sharing). Appreciate your assistance.
Getting time out while trying to use HuggingFaceHub
https://api.github.com/repos/langchain-ai/langchain/issues/461/comments
2
2022-12-28T20:55:00Z
2022-12-29T08:40:15Z
https://github.com/langchain-ai/langchain/issues/461
1,513,159,946
461
[ "langchain-ai", "langchain" ]
The API seems easy: https://wolframalpha.readthedocs.io/en/latest/?badge=latest
Wolfram Alpha connection as tool
https://api.github.com/repos/langchain-ai/langchain/issues/455/comments
6
2022-12-28T14:56:23Z
2023-09-18T17:03:37Z
https://github.com/langchain-ai/langchain/issues/455
1,512,875,310
455
[ "langchain-ai", "langchain" ]
Tool chains... so that an agent can directly funnel the return of a web request to a summarization tool. That would save a lot of context, so the agent could call something like summarizeAndAnswer(WebRequest("gpt provided url"), "gpt provided question")
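A sketch of the composition idea — both tool functions below are made-up placeholders, not real langchain tools:

```python
def chain_tools(*tool_funcs):
    """Compose tools so each one's output feeds the next, keeping the
    intermediate web page out of the agent's context window."""
    def composed(x):
        for f in tool_funcs:
            x = f(x)
        return x
    return composed

# Placeholder tools standing in for a web request and a summarizer.
fetch = lambda url: "page contents for " + url
summarize = lambda text: text[:20]
summarize_page = chain_tools(fetch, summarize)
```

The agent would then see a single `summarize_page` tool whose observation is already short.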
Let Agent use Tool chains
https://api.github.com/repos/langchain-ai/langchain/issues/451/comments
2
2022-12-28T13:57:55Z
2023-08-24T16:20:59Z
https://github.com/langchain-ai/langchain/issues/451
1,512,823,574
451
[ "langchain-ai", "langchain" ]
for refine output: `out[i]['refine_steps'][j]` for map output: `out[i]['map_steps'][j]['output_text']`
different output types from map vs refine intermediate steps
https://api.github.com/repos/langchain-ai/langchain/issues/440/comments
3
2022-12-27T20:11:03Z
2022-12-28T11:59:46Z
https://github.com/langchain-ai/langchain/issues/440
1,512,128,567
440
[ "langchain-ai", "langchain" ]
Hi @hwchase17 Thanks for making this, amazing open source project. I'm looking to make Agents more usable in production environment (eg for building a user-facing "chatbot") and some parts of the existing langchain api might need some changes to make that possible. Namely, "streaming" (some kind of) progress to the user while an agent does its thing is key for a low latency experience. Another concern is to keep track of the "lineage" (ie. which LLM prompts / API calls) of a particular output, in order to eg. selectively display it to the user or support improving the agent over time. From looking at the code of the library so far it looks to me like the best starting point to make this happen is to evolve the logger api, namely 1. Make it (optionally) local to each instance, so that multiple agents can run independently in the same process without mixing things up, see PR #437 2. Evolve the BaseLogger API to be more amenable to structured logging, eg. the `log_llm_response` method could/should have available the prompt and inputs too, otherwise we don't know what that output is for, etc Would welcome your thoughts? Thanks Nuno
Discussion: Evolving the logger api
https://api.github.com/repos/langchain-ai/langchain/issues/439/comments
4
2022-12-27T14:30:51Z
2023-09-25T16:20:44Z
https://github.com/langchain-ai/langchain/issues/439
1,511,855,972
439
[ "langchain-ai", "langchain" ]
probably involves making them NOT namedtuples (since those are immutable)
ability to change the tool descriptions after loading
https://api.github.com/repos/langchain-ai/langchain/issues/438/comments
1
2022-12-27T14:21:11Z
2023-08-24T16:21:04Z
https://github.com/langchain-ai/langchain/issues/438
1,511,848,249
438
[ "langchain-ai", "langchain" ]
I tried to do a map-reduce summary using curie instead a davinci. Code: ``` llm=OpenAI(model_name="text-curie-001",temperature=0) summary_chain = load_summarize_chain(llm, chain_type="map_reduce") summary=summary_chain.run(docs) ``` Error: ``` Token indices sequence length is longer than the specified maximum sequence length for this model (2741 > 1024). Running this sequence through the model will result in indexing errors --------------------------------------------------------------------------- InvalidRequestError Traceback (most recent call last) [<ipython-input-72-4ced0b77973e>](https://localhost:8080/#) in <module> ----> 1 summary=summary_chain.run(docs) 16 frames [/usr/local/lib/python3.8/dist-packages/openai/api_requestor.py](https://localhost:8080/#) in _interpret_response_line(self, rbody, rcode, rheaders, stream) 427 stream_error = stream and "error" in resp.data 428 if stream_error or not 200 <= rcode < 300: --> 429 raise self.handle_error_response( 430 rbody, rcode, resp.data, rheaders, stream_error=stream_error 431 ) InvalidRequestError: This model's maximum context length is 2049 tokens, however you requested 2997 tokens (2741 in your prompt; 256 for the completion). Please reduce your prompt; or completion length. ``` NB: Error doesn't occur if I use "refine" instead (though the summary does something weird and doesn't summarize the whole document)
map_reduce summarizer errors when using LLM with smaller context window
https://api.github.com/repos/langchain-ai/langchain/issues/434/comments
13
2022-12-26T20:09:12Z
2023-10-12T16:11:21Z
https://github.com/langchain-ai/langchain/issues/434
1,511,209,670
434
[ "langchain-ai", "langchain" ]
Use `logging` or another thread-safe method
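A minimal sketch of swapping the print calls for `logging` (the function name is hypothetical; handler setup in tests is only to make the example observable):

```python
import logging

logger = logging.getLogger("langchain")

def log_text(text: str, verbose: bool = False) -> None:
    # logging handlers lock around emit(), so concurrent chain runs don't
    # interleave their output the way bare print() calls can.
    if verbose:
        logger.info(text)
```

Users could then route, filter, or silence chain output with standard `logging` configuration instead of capturing stdout.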
Print statements cause langchain runs to not be thread safe
https://api.github.com/repos/langchain-ai/langchain/issues/431/comments
1
2022-12-26T18:31:12Z
2023-08-24T16:21:08Z
https://github.com/langchain-ai/langchain/issues/431
1,511,143,486
431
[ "langchain-ai", "langchain" ]
The PALChain generates code that uses math and np libraries, but those aren't imported in PythonREPL. Here is the output of a recent run: > Entering new PALChain chain... def solution(): """A particle moves so that it is at (3 sin (t/4), 3 cos (t/4)) at time t. Find the speed of the particle, measured in unit of distance per unit of time.""" x = 3 * np.sin(t / 4) y = 3 * np.cos(t / 4) dx = 3 * np.cos(t / 4) / 4 dy = -3 * np.sin(t / 4) / 4 speed = np.sqrt(dx ** 2 + dy ** 2) result = speed return result > Finished PALChain chain. ==== date/time: 2022-12-26 17:21:25.817242 ==== calc_gpt_pal math problem: A particle moves so that it is at (3 sin (t/4), 3 cos (t/4)) at time t. Find the speed of the particle, measured in unit of distance per unit of time. calc_gpt_pal answer: name 'np' is not defined
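One possible fix, sketched with a hypothetical `run_pal` stand-in for the PythonREPL: pre-seed the globals with the modules PAL-generated programs tend to assume (the try/except keeps the sketch runnable without numpy installed).

```python
import math

try:
    import numpy as np
except ImportError:  # keep the sketch runnable without numpy
    np = None

# Globals that PAL-generated programs commonly reference.
REPL_GLOBALS = {"math": math, "np": np}

def run_pal(code: str):
    """Execute a PAL program and call its solution() entry point."""
    local_vars = {}
    exec(code, dict(REPL_GLOBALS), local_vars)
    return local_vars["solution"]()
```

With `np` in the REPL globals, the generated program above would return the speed instead of `name 'np' is not defined`.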
PALChain generates code that uses math and np libraries, but those aren't imported in PythonREPL
https://api.github.com/repos/langchain-ai/langchain/issues/430/comments
5
2022-12-26T16:34:32Z
2023-09-10T16:46:56Z
https://github.com/langchain-ai/langchain/issues/430
1,511,075,608
430
[ "langchain-ai", "langchain" ]
AWS CLI has the concept of a `--dryrun` parameter that will not actually execute a command, but will show you the commands that will be issued. This is useful to prevent costly errors. It would be useful to have a similar `--dryrun` or similar parameter that could show the query plan or the number of LLM calls that would be issued. This is very useful in recursive summarization, where large input docs can be passed, and your OpenAI bill can explode :-). The expected output would be either 1) an *estimate* of the number of queries that would be issued, or 2) some execution plan. The concept of a query estimate may need to be refined b/c it's impossible to know beforehand how big the returned text will be from the LLM. So recursive summarization could never know. But it might be possible to estimate the worst case if the LLM returns the max possible tokens at each recursive summary request. Interested in ideas here!
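For the summarization case specifically, even a crude estimator would help; a sketch (the counts are rough lower bounds and ignore recursive collapse passes, which would add more calls):

```python
def estimate_summarize_calls(n_docs: int, chain_type: str = "map_reduce") -> int:
    """Rough lower bound on LLM calls a summarize chain would issue."""
    if chain_type == "map_reduce":
        return n_docs + 1  # one map call per doc, plus one combine call
    if chain_type == "refine":
        return n_docs      # initial call plus one refine call per remaining doc
    return 1               # "stuff" makes a single call
```

A `--dryrun` mode could print this estimate (plus a per-call token ceiling) before any request is sent.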
Support for dry-run to estimate LLM calls that will be made
https://api.github.com/repos/langchain-ai/langchain/issues/425/comments
1
2022-12-26T03:37:27Z
2023-08-24T16:21:19Z
https://github.com/langchain-ai/langchain/issues/425
1,510,557,474
425
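One possible shape for such a dry run, sketched with a stand-in LLM that records prompts instead of hitting the paid API. The class and function names here are hypothetical, not existing langchain API.

```python
class DryRunLLM:
    """Stand-in LLM that records every prompt instead of calling an API."""

    def __init__(self, canned_response: str = "") -> None:
        self.prompts = []
        self.canned_response = canned_response

    def __call__(self, prompt: str) -> str:
        self.prompts.append(prompt)
        return self.canned_response

def estimate_llm_calls(chain, llm: "DryRunLLM", inputs) -> int:
    """Run the chain against the recording LLM and report how many calls it made."""
    chain(llm, inputs)
    return len(llm.prompts)
```

As the issue notes, for recursive summarization this can only be an estimate or a lower bound, since the length of the real completions determines how many further calls follow.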
[ "langchain-ai", "langchain" ]
Data is in csv here: https://github.com/sylinrl/TruthfulQA/blob/main/TruthfulQA.csv
Add TruthfulQA as QA examples [Eval Branch]
https://api.github.com/repos/langchain-ai/langchain/issues/415/comments
2
2022-12-24T03:10:36Z
2023-08-24T16:21:23Z
https://github.com/langchain-ai/langchain/issues/415
1,509,918,983
415
[ "langchain-ai", "langchain" ]
We could import in these functions and integrate https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/truthfulqa.py#L253-L417
Add traditional eval measures to eval methods [Eval branch]
https://api.github.com/repos/langchain-ai/langchain/issues/413/comments
1
2022-12-24T02:29:21Z
2023-08-24T16:21:29Z
https://github.com/langchain-ai/langchain/issues/413
1,509,910,695
413
[ "langchain-ai", "langchain" ]
Hey Team,

As we can see in `https://github.com/hwchase17/langchain/blob/master/docs/explanation/combine_docs.md#refine`, there is an error in the description of the `Refine` method. The text describes the `Refine` method as having "Pros" (advantages) that include "Can pull in more relevant context, and may be less lossy than `RefineDocumentsChain`." However, this makes no sense, as the `Refine` method itself is being described, not the `RefineDocumentsChain` method.
Reference error in Documentation
https://api.github.com/repos/langchain-ai/langchain/issues/403/comments
2
2022-12-23T10:10:34Z
2022-12-23T13:54:50Z
https://github.com/langchain-ai/langchain/issues/403
1,509,152,730
403
[ "langchain-ai", "langchain" ]
Feature request: Support Reinforcement Learning with Human Feedback.

Example: https://github.com/lucidrains/PaLM-rlhf-pytorch
Feature Request: Reinforcement Learning with Human Feedback
https://api.github.com/repos/langchain-ai/langchain/issues/399/comments
5
2022-12-22T14:14:47Z
2023-09-10T16:47:00Z
https://github.com/langchain-ai/langchain/issues/399
1,507,966,977
399
[ "langchain-ai", "langchain" ]
https://github.com/hwchase17/langchain/pull/367 introduced a way to invoke `run` with `**kwargs`, making `predict` unnecessary https://github.com/hwchase17/langchain/blob/766b84a9d9155494d7cf4d46b30cf8c92e2d5788/langchain/chains/llm.py#L83
`LLMChain.predict` unnecessary
https://api.github.com/repos/langchain-ai/langchain/issues/381/comments
1
2022-12-19T07:32:40Z
2023-09-12T21:30:06Z
https://github.com/langchain-ai/langchain/issues/381
1,502,451,649
381
[ "langchain-ai", "langchain" ]
Since #372 docs on agents with memory are no longer valid
Agent + Memory Docs are outdated
https://api.github.com/repos/langchain-ai/langchain/issues/380/comments
3
2022-12-19T07:16:05Z
2022-12-20T09:37:24Z
https://github.com/langchain-ai/langchain/issues/380
1,502,436,404
380
[ "langchain-ai", "langchain" ]
as per @agola11's comment in https://github.com/hwchase17/langchain/pull/368, the naming convention for base classes is inconsistent. We should pick a convention and stick with it. For chains the base is `Chain`, for agents the base is `Agent`, and for LLMs the base is `BaseLLM` according to this PR. I want to stick with `Base` (we use it for example selector, prompt, and other things).
make base names consistent
https://api.github.com/repos/langchain-ai/langchain/issues/373/comments
1
2022-12-18T20:56:46Z
2023-09-12T21:30:05Z
https://github.com/langchain-ai/langchain/issues/373
1,502,028,640
373
[ "langchain-ai", "langchain" ]
null
implement https://arxiv.org/pdf/2211.13892.pdf example selector
https://api.github.com/repos/langchain-ai/langchain/issues/371/comments
1
2022-12-18T16:01:54Z
2022-12-19T22:09:28Z
https://github.com/langchain-ai/langchain/issues/371
1,501,952,225
371
[ "langchain-ai", "langchain" ]
```
# Load the tool configs that are needed.
from langchain import LLMMathChain, SerpAPIWrapper

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    )
]

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who won the US Open men's tennis final in 2022? What is the next prime number after his age?")
```

Produces:

```
> Entering new ZeroShotAgent chain...
Who won the US Open men's tennis final in 2022? What is the next prime number after his age?
Thought: I need to find out who won the US Open and then calculate the next prime number
Action: Search
Action Input: "US Open men's tennis final 2022"
Observation: Spain's Carlos Alcaraz, 19, defeated Casper Ruud in the 2022 US Open men's singles final to earn his first Grand Slam title.
Thought: I need to calculate the next prime number
Action: Calculator
Action Input: 19

> Entering new LLMMathChain chain...
19 + 4
Answer: 23
Traceback (most recent call last):
  File "/Users/ankushgola/Code/langchain/tests/agent_test.py", line 70, in <module>
    main()
  File "/Users/ankushgola/Code/langchain/tests/agent_test.py", line 65, in main
    agent.run("Who won the US Open men's tennis final in 2022? What is the next prime number after his age?")
  File "/Users/ankushgola/Code/langchain/langchain/chains/base.py", line 134, in run
    return self({self.input_keys[0]: text})[self.output_keys[0]]
  File "/Users/ankushgola/Code/langchain/langchain/chains/base.py", line 107, in __call__
    outputs = self._call(inputs)
  File "/Users/ankushgola/Code/langchain/langchain/agents/agent.py", line 150, in _call
    observation = chain(output.tool_input)
  File "/Users/ankushgola/Code/langchain/langchain/chains/base.py", line 134, in run
    return self({self.input_keys[0]: text})[self.output_keys[0]]
  File "/Users/ankushgola/Code/langchain/langchain/chains/base.py", line 107, in __call__
    outputs = self._call(inputs)
  File "/Users/ankushgola/Code/langchain/langchain/chains/llm_math/base.py", line 70, in _call
    raise ValueError(f"unknown format from LLM: {t}")
ValueError: unknown format from LLM:  + 4
Answer: 23
```

Very interesting that the answer is actually correct. The problem is that the output from the calculator is:

```
' + 4
Answer: 23'
```

A `ValueError` is being thrown because the output of the Calculator doesn't begin with `'Answer:'`.

Can repro with another input (also involving prime numbers):

```
agent.run("Who won the US Open men's tennis final in 2019? What is the next prime number after his age?")
```

In this case, the output from the calculator is

```
'? Answer: 37'
```

Again, interesting that the answer is actually correct.
`LLMMathChain` ValueError encountered when using `ZeroShotAgent`
https://api.github.com/repos/langchain-ai/langchain/issues/370/comments
2
2022-12-18T05:08:52Z
2022-12-29T13:20:56Z
https://github.com/langchain-ai/langchain/issues/370
1,501,750,204
370
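A sketch of a more lenient parse than the one in `llm_math/base.py`: accept `Answer:` anywhere in the completion rather than only at its start, which handles both failing outputs quoted above. The function name is illustrative.

```python
import re

def parse_math_answer(text: str) -> str:
    """Find 'Answer: X' anywhere in the LLM output instead of requiring
    the completion to begin with it."""
    match = re.search(r"Answer:\s*(.+)", text)
    if match is None:
        raise ValueError(f"unknown format from LLM: {text}")
    return match.group(1).strip()
```

With this, `' + 4 Answer: 23'` parses to `'23'` and `'? Answer: 37'` parses to `'37'`, matching the (correct) answers the model actually produced.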
[ "langchain-ai", "langchain" ]
Add support for server-sent events from the OpenAI API. Because some of these generations take a bit of time to finish I'd like to stream tokens back as they become ready. Documentation for this feature can be found [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-stream). This seems like it could require a pretty substantial refactor. LangChain would need to continuously return `LLMResult`s while maintaining a connection to the server.
Add Support For OpenAI SSE Response
https://api.github.com/repos/langchain-ai/langchain/issues/363/comments
3
2022-12-16T21:25:22Z
2023-04-11T17:34:05Z
https://github.com/langchain-ai/langchain/issues/363
1,500,884,185
363
[ "langchain-ai", "langchain" ]
https://github.com/hwchase17/langchain/blob/c1b50b7b13545b1549c051182f547cfc2f8ed0be/langchain/chains/combine_documents/map_reduce.py#L120
_collapse_docs should use self.llm_chain not self.combine_document_chain.combine_docs
https://api.github.com/repos/langchain-ai/langchain/issues/356/comments
5
2022-12-16T02:37:41Z
2022-12-16T14:42:30Z
https://github.com/langchain-ai/langchain/issues/356
1,499,469,690
356
[ "langchain-ai", "langchain" ]
Add support for running your own HF pipeline locally. This would allow you to get a lot more dynamic with what HF features and models you support since you wouldn't be beholden to what is hosted in HF hub. You could also do stuff with HF Optimum to quantize your models and stuff to get pretty fast inference even running on a laptop. Let me know if you want this code and I will clean it up.
Add Support for Running Your Own HF Pipeline Locally
https://api.github.com/repos/langchain-ai/langchain/issues/354/comments
0
2022-12-16T00:12:15Z
2022-12-17T17:24:18Z
https://github.com/langchain-ai/langchain/issues/354
1,499,319,014
354
[ "langchain-ai", "langchain" ]
Currently only done for OpenAI, but should be done for all
Add batching support to all LLMs
https://api.github.com/repos/langchain-ai/langchain/issues/348/comments
3
2022-12-15T14:54:52Z
2023-08-24T16:21:35Z
https://github.com/langchain-ai/langchain/issues/348
1,498,582,775
348
[ "langchain-ai", "langchain" ]
Hello, Could the `OpenAIEmbeddings` class support the other OpenAI embeddings, such as the text-similarity and code-search engines? Currently we can only specify the engine type (ada, babbage, curie, davinci) during instantiation and are forced to use the text-search engines. Native support for this would be appreciated instead of my workaround - happy to contribute a PR to this if needed. Thanks!
Support for other OpenAI Embeddings (text-similarity, code-search)
https://api.github.com/repos/langchain-ai/langchain/issues/342/comments
5
2022-12-15T00:18:26Z
2022-12-16T00:55:40Z
https://github.com/langchain-ai/langchain/issues/342
1,497,597,312
342
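Under the engine-naming scheme OpenAI used at the time, supporting the other families could be as simple as parameterizing the engine string rather than hardcoding the text-search pattern. The helper below is a hypothetical sketch; the exact engine names should be checked against OpenAI's embeddings documentation.

```python
def embedding_engine(model: str = "ada", task: str = "text-search",
                     role: str = "query") -> str:
    """Build an embedding engine name such as 'text-search-ada-query-001'
    or 'text-similarity-ada-001' (similarity engines take no query/doc role)."""
    if task == "text-similarity":
        return f"text-similarity-{model}-001"
    return f"{task}-{model}-{role}-001"
```

`OpenAIEmbeddings` could then expose `task` (and `role`, where applicable) alongside the existing model-size option.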
[ "langchain-ai", "langchain" ]
https://github.com/yangkevin2/emnlp22-re3-story-generation
Story generation
https://api.github.com/repos/langchain-ai/langchain/issues/341/comments
1
2022-12-14T18:24:56Z
2023-09-12T21:30:04Z
https://github.com/langchain-ai/langchain/issues/341
1,497,167,348
341
[ "langchain-ai", "langchain" ]
Some agents get stuck in looping behavior. Add a max iteration argument to avoid infinite while loops.
Add max iteration argument to Agents
https://api.github.com/repos/langchain-ai/langchain/issues/336/comments
1
2022-12-14T05:46:12Z
2023-09-12T21:30:03Z
https://github.com/langchain-ai/langchain/issues/336
1,495,748,711
336
[ "langchain-ai", "langchain" ]
null
Caching for LLMChain calls
https://api.github.com/repos/langchain-ai/langchain/issues/335/comments
1
2022-12-14T05:45:13Z
2022-12-19T14:24:25Z
https://github.com/langchain-ai/langchain/issues/335
1,495,747,358
335
[ "langchain-ai", "langchain" ]
If you have a chain that involves agents and that could have a quantitative answer, have a query which allows you to run the chain N times and take some reduce function on the result. This would pair well with caching, in case any steps of the chain are running the same inferences. It would also pair well with a max iteration loop for an agent, so that one of the agents doesn't get stuck in a while loop.
Rerun agent N times and do majority voting on output
https://api.github.com/repos/langchain-ai/langchain/issues/334/comments
1
2022-12-14T05:44:51Z
2023-09-12T21:30:02Z
https://github.com/langchain-ai/langchain/issues/334
1,495,746,810
334
[ "langchain-ai", "langchain" ]
A chain where you first call an LLM to do classification choose which subchain to call, then you call that specific subchain @sjwhitmore @johnmcdonnell were you guys going to work on this?
ForkChain
https://api.github.com/repos/langchain-ai/langchain/issues/320/comments
5
2022-12-12T03:47:12Z
2023-09-10T16:47:07Z
https://github.com/langchain-ai/langchain/issues/320
1,490,868,845
320
[ "langchain-ai", "langchain" ]
There is an open PR (https://github.com/hwchase17/langchain/pull/131) but it's not complete. Would be nice to finish it up and add it in.
Add APE
https://api.github.com/repos/langchain-ai/langchain/issues/319/comments
1
2022-12-12T03:45:37Z
2023-09-12T21:30:01Z
https://github.com/langchain-ai/langchain/issues/319
1,490,867,226
319
[ "langchain-ai", "langchain" ]
something like

```
from typing import Dict

from langchain.chains.base import Chain


class BaseWhileChain(Chain):
    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        state = self.get_initial_state(inputs)
        while True:
            outputs = self.inner_chain(state)
            if self.stopping_criteria(outputs):
                return outputs
            else:
                state = self.update_state(state, inputs)
```
WhileLoop chain
https://api.github.com/repos/langchain-ai/langchain/issues/314/comments
2
2022-12-11T22:34:58Z
2023-09-10T16:47:11Z
https://github.com/langchain-ai/langchain/issues/314
1,490,521,742
314
[ "langchain-ai", "langchain" ]
In the OpenAI llm model, frequency_penalty and presence_penalty are annotated as ints when they should allow decimals between 0.0 and 2.0 inclusive.
OpenAI llm rounds frequency & presence penalties to ints
https://api.github.com/repos/langchain-ai/langchain/issues/310/comments
3
2022-12-11T15:34:28Z
2022-12-11T20:34:04Z
https://github.com/langchain-ai/langchain/issues/310
1,490,129,581
310
[ "langchain-ai", "langchain" ]
Can we add a couple of lines here to ensure that the `len()` of the combined text will not cause a context window error? https://github.com/hwchase17/langchain/blob/9d08384d5ff3de40dac72d99efb99288e6ad4ee1/langchain/chains/combine_documents/map_reduce.py#L60

Otherwise, one sees this:

```
InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6213 tokens (5957 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```
Ensure length of the text will not cause a context window error
https://api.github.com/repos/langchain-ai/langchain/issues/301/comments
5
2022-12-10T20:06:01Z
2022-12-19T14:24:58Z
https://github.com/langchain-ai/langchain/issues/301
1,488,901,201
301
[ "langchain-ai", "langchain" ]
Missing sources on this line: https://github.com/hwchase17/langchain/blob/master/langchain/chains/qa_with_sources/map_reduce_prompt.py#L41
Missing text in map_reduce_prompt.py file
https://api.github.com/repos/langchain-ai/langchain/issues/298/comments
2
2022-12-10T15:20:22Z
2022-12-11T01:15:07Z
https://github.com/langchain-ai/langchain/issues/298
1,488,576,891
298
[ "langchain-ai", "langchain" ]
The current `Memory` objects allow a user to store the [entire conversation history](https://github.com/hwchase17/langchain/blob/master/langchain/chains/conversation/memory.py#L22), a [subset of history](https://github.com/hwchase17/langchain/blob/master/langchain/chains/conversation/memory.py#L50), and a [summary of history](https://github.com/hwchase17/langchain/blob/master/langchain/chains/conversation/memory.py#L79). Users might want to store a summary of the conversation in addition to a fixed-size history. You could imagine the following algorithm:

```
1. Check if current conversation history has reached maximum size
2. If so, generate a conversation summary.
3. Optionally, merge the prior summary with the new summary.
4. Trim the conversation history
5. Append new exchange to conversation history
```

If the user wants to maintain a fixed-size conversation history as well as a summary, a simpler approach might be to create a more general `MemoryBank` container. There would need to be validation that the memory key is unique.

```
class MemoryBank(Memory, BaseModel):
    resources: list[Memory]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Return history buffer."""
        return {r.memory_key: r.buffer for r in self.resources}

    ...
```
Support long-term and short-term memory in Conversation chain
https://api.github.com/repos/langchain-ai/langchain/issues/296/comments
2
2022-12-10T03:43:37Z
2023-08-24T16:21:39Z
https://github.com/langchain-ai/langchain/issues/296
1,487,881,928
296
[ "langchain-ai", "langchain" ]
This will allow the LLMs to hold on to previous commands.
Add support for naming the AI entity differently (other than "AI", which seems to be hardcoded)
https://api.github.com/repos/langchain-ai/langchain/issues/295/comments
5
2022-12-09T21:02:07Z
2023-08-24T16:21:44Z
https://github.com/langchain-ai/langchain/issues/295
1,487,452,383
295
[ "langchain-ai", "langchain" ]
Hey, I'm looking to create an open-source tool that can be used to have conversations about pricing out different cloud services. Does anyone want to help out? or does anyone have any thoughts?
Connect a GPT model to cloud cost calculators
https://api.github.com/repos/langchain-ai/langchain/issues/292/comments
2
2022-12-09T15:50:22Z
2023-08-24T16:21:49Z
https://github.com/langchain-ai/langchain/issues/292
1,486,981,181
292
[ "langchain-ai", "langchain" ]
https://cut-hardhat-23a.notion.site/code-for-webGPT-44485e5c97bd403ba4e1c2d5197af71d

```
from serpapi import GoogleSearch
import requests
import openai
import logging
import sys, os

openai.api_key = "YOUR OPEN_AI API KEY"

headers = {'Cache-Control': 'no-cache', 'Content-Type': 'application/json'}
params = {'token': 'YOUR BROWSERLESS API KEY'}

def scarpe_webpage(link):
    json_data = {
        'url': link,
        'elements': [{'selector': 'body'}],
    }
    response = requests.post('https://chrome.browserless.io/scrape', params=params, headers=headers, json=json_data)
    webpage_text = response.json()['data'][0]['results'][0]['text']
    return webpage_text

def summarize_webpage(question, webpage_text):
    prompt = """You are an intelligent summarization engine. Extract and summarize the most relevant information from a body of text related to a question.

Question: {}

Body of text to extract and summarize information from: {}

Relevant information:""".format(question, webpage_text[0:2500])
    completion = openai.Completion.create(engine="text-davinci-003", prompt=prompt, temperature=0.8, max_tokens=800)
    return completion.choices[0].text

def summarize_final_answer(question, summaries):
    prompt = """You are an intelligent summarization engine. Extract and summarize relevant information from the four points below to construct an answer to a question.

Question: {}

Relevant Information:
1. {}
2. {}
3. {}
4. {}""".format(question, summaries[0], summaries[1], summaries[2], summaries[3])
    completion = openai.Completion.create(engine="text-davinci-003", prompt=prompt, temperature=0.8, max_tokens=800)
    return completion.choices[0].text

def get_link(r):
    return r['link']

def get_search_results(question):
    search = GoogleSearch({
        "q": question,
        "api_key": "YOUR SERP API KEY",
        "logging": False
    })
    result = search.get_dict()
    return list(map(get_link, result['organic_results']))

def print_citations(links, summaries):
    print("Citations:")
    i = 0
    while i < 4:
        print("\n", "[{}]".format(i + 1), links[i], "\n", summaries[i], "\n")
        i += 1

def main():
    print("\nTell me about:\n")
    question = input()
    print("\n")
    sys.stdout = open(os.devnull, 'w')  # disable print
    links = get_search_results(question)
    sys.stdout = sys.__stdout__  # enable print
    webpages = list(map(scarpe_webpage, links[:4]))
    summaries = []
    for x in webpages:
        summaries.append(summarize_webpage(question, x))
    final_summary = summarize_final_answer(question, summaries)
    print("Here is the answer:", final_summary, "\n")
    print_citations(links, summaries)

if __name__ == "__main__":
    main()
```
WebGPT
https://api.github.com/repos/langchain-ai/langchain/issues/289/comments
2
2022-12-09T07:00:33Z
2023-09-26T16:18:08Z
https://github.com/langchain-ai/langchain/issues/289
1,486,171,161
289
[ "langchain-ai", "langchain" ]
We need an analogous structure to LengthBasedExampleSelector but for Memory. MemorySelector? This would provide an interface which would allow some mutation of the Memory if max context window is reached (& perhaps upon other conditions as well)
Add support for memory selection by length or other features (e.g. semantic similarity)
https://api.github.com/repos/langchain-ai/langchain/issues/287/comments
5
2022-12-08T19:51:22Z
2023-09-10T16:47:16Z
https://github.com/langchain-ai/langchain/issues/287
1,485,301,364
287
[ "langchain-ai", "langchain" ]
It would be useful in certain circumstances to be able to manually flush the memory buffer of an agent. There's no defined interface for doing it at present (though it can be accomplished by holding on to a buffer and clearing it directly). Possibly something like the following, with a `flush()` method defined on the `Memory` abstract class:

```
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, memory=memory)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent.run("....")
...
agent.flush()  # calls Memory.flush()
```

Thoughts?
Add support for memory flushing
https://api.github.com/repos/langchain-ai/langchain/issues/285/comments
10
2022-12-08T12:50:45Z
2022-12-11T16:03:33Z
https://github.com/langchain-ai/langchain/issues/285
1,484,573,489
285
[ "langchain-ai", "langchain" ]
would be cool to do a demo with clustering on the embeddings classes to cluster similar documents / document chunks
Add clustering on embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/280/comments
3
2022-12-07T17:20:40Z
2023-08-24T16:22:00Z
https://github.com/langchain-ai/langchain/issues/280
1,482,429,496
280
[ "langchain-ai", "langchain" ]
Right now, agents only take in a single string. It would be nice to let them take in multiple arguments.
allow agents to take multiple inputs
https://api.github.com/repos/langchain-ai/langchain/issues/279/comments
6
2022-12-07T16:41:22Z
2022-12-19T14:20:56Z
https://github.com/langchain-ai/langchain/issues/279
1,482,347,904
279
[ "langchain-ai", "langchain" ]
to handle cases where it does not generate an action/action input
add fix_text method to MRKL
https://api.github.com/repos/langchain-ai/langchain/issues/277/comments
1
2022-12-07T15:26:43Z
2023-09-12T21:30:00Z
https://github.com/langchain-ai/langchain/issues/277
1,482,186,449
277
[ "langchain-ai", "langchain" ]
https://arxiv.org/pdf/2207.05221.pdf

This should be a separate `LLMChain` with a prompt that people can add on at the end.
add in method for getting confidence
https://api.github.com/repos/langchain-ai/langchain/issues/266/comments
4
2022-12-05T18:55:06Z
2023-08-24T16:22:04Z
https://github.com/langchain-ai/langchain/issues/266
1,477,208,775
266