Dataset columns (type, value/length range):

- issue_owner_repo: list, lengths 2–2
- issue_body: string, lengths 0–261k
- issue_title: string, lengths 1–925
- issue_comments_url: string, lengths 56–81
- issue_comments_count: int64, 0–2.5k
- issue_created_at: string, lengths 20–20
- issue_updated_at: string, lengths 20–20
- issue_html_url: string, lengths 37–62
- issue_github_id: int64, 387k–2.46B
- issue_number: int64, 1–127k
[ "langchain-ai", "langchain" ]
Hi, I read the docs and examples, then tried to make a chatbot and a question-answering bot over docs. I wonder, is there any way to combine these two functions together? From my point of view, basically it's a chatbot which uses the memory module to carry on a conversation with users. If the user asks a question, the chatbot retrieves docs based on embeddings and gets the answer. Then I change the prompt of the conversation and add the answer to it, asking the chatbot to respond based on the memory and the answer. Will it work? Or is there another convenient way or chain to combine these two types of bots?
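The flow described above (retrieve docs for the question, then answer from memory plus the retrieved text) can be sketched without any particular library. Everything here is a hypothetical stand-in: `retrieve` fakes an embedding search with keyword overlap, and the `llm` callable fakes the model call.

```python
# Library-free sketch of a chat loop with retrieval: stand-ins, not a real LLM.
def retrieve(question, docs):
    """Hypothetical stand-in for an embedding search: naive keyword overlap."""
    scored = [(len(set(question.lower().split()) & set(d.lower().split())), d)
              for d in docs]
    return max(scored)[1]  # doc with the highest overlap score

def build_prompt(history, question, context):
    """Combine conversation memory with the retrieved context, as described above."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"Context: {context}")
    lines.append(f"Human: {question}")
    return "\n".join(lines)

def chat_turn(history, question, docs, llm):
    context = retrieve(question, docs)
    prompt = build_prompt(history, question, context)
    answer = llm(prompt)                # hypothetical LLM call
    history.append(("Human", question))
    history.append(("AI", answer))      # memory carries the turn forward
    return answer

docs = ["LangChain supports agents and chains.", "Redis can serve as a vector store."]
history = []
echo_llm = lambda prompt: "answered using: " + prompt.splitlines()[-2]  # fake LLM echoes the context line
print(chat_turn(history, "what is a vector store in Redis?", docs, echo_llm))
```

The point of the sketch is only the plumbing: memory and retrieval compose in one prompt, exactly as proposed in the question.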
Is there any way to combine chatbot and question answering over docs?
https://api.github.com/repos/langchain-ai/langchain/issues/2185/comments
10
2023-03-30T09:14:50Z
2023-10-26T00:23:23Z
https://github.com/langchain-ai/langchain/issues/2185
1,647,226,241
2,185
[ "langchain-ai", "langchain" ]
Does API Chain support the POST method? How can we call an external POST API with the LLM? Appreciate any information, thanks.
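For reference, the shape of a POST call that a custom tool could wrap might look like the sketch below. Everything here (the `post_api` helper, the stubbed transport) is hypothetical and library-free, not LangChain's APIChain itself; in real code the transport would be `urllib.request` or `requests.post`.

```python
import json

def post_api(url, payload, transport):
    """Hypothetical POST helper a custom tool could call.

    `transport(url, body, headers)` is injected so the sketch stays testable.
    """
    body = json.dumps(payload).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    status, response_body = transport(url, body, headers)
    if status != 200:
        raise RuntimeError(f"POST {url} failed with status {status}")
    return json.loads(response_body)

# Stub transport standing in for a real HTTP POST.
def fake_transport(url, body, headers):
    received = json.loads(body.decode("utf-8"))
    return 200, json.dumps({"echo": received, "url": url}).encode("utf-8")

result = post_api("https://example.invalid/api", {"q": "hello"}, fake_transport)
print(result["echo"]["q"])
```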
Question: Does API Chain support post method?
https://api.github.com/repos/langchain-ai/langchain/issues/2184/comments
12
2023-03-30T08:55:19Z
2024-05-07T14:47:12Z
https://github.com/langchain-ai/langchain/issues/2184
1,647,194,820
2,184
[ "langchain-ai", "langchain" ]
I'm wondering if we can use langchain without an LLM from openai. I've tried replacing openai with "bloom-7b1" and "flan-t5-xl" and used an agent from langchain, following [visual-chatgpt](https://github.com/microsoft/visual-chatgpt). Here is my demo:

```python
class Text2Image:
    def __init__(self, device):
        print(f"Initializing Text2Image to {device}")
        self.device = device
        self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
        self.pipe = StableDiffusionPipeline.from_pretrained(
            "/dfs/data/llmcheckpoints/stable-diffusion-v1-5", torch_dtype=self.torch_dtype)
        self.pipe.to(device)
        self.a_prompt = 'best quality, extremely detailed'
        self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
                        'fewer digits, cropped, worst quality, low quality'

    @prompts(name="Generate Image From User Input Text",
             description="useful when you want to generate an image from a user input text and save it to a file. "
                         "like: generate an image of an object or something, or generate an image that includes some objects. "
                         "The input to this tool should be a string, representing the text used to generate image. ")
    def inference(self, text):
        image_filename = os.path.join('image', f"{str(uuid.uuid4())[:8]}.png")
        prompt = text + ', ' + self.a_prompt
        image = self.pipe(prompt, negative_prompt=self.n_prompt).images[0]
        image.save(image_filename)
        print(f"\nProcessed Text2Image, Input Text: {text}, Output Image: {image_filename}")
        return image_filename
```

```python
from typing import Any, List, Mapping, Optional
from pydantic import BaseModel, Extra
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens


class CustomPipeline(LLM, BaseModel):
    model_id: str = "/dfs/data/llmcheckpoints/bloom-7b1/"

    class Config:
        """Configuration for this pydantic object."""
        extra = Extra.forbid

    def __init__(self, model_id):
        super().__init__()
        # from transformers import T5TokenizerFast, T5ForConditionalGeneration
        from transformers import AutoTokenizer, AutoModelForCausalLM
        global model, tokenizer
        # model = T5ForConditionalGeneration.from_pretrained(model_id)
        # tokenizer = T5TokenizerFast.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')
        tokenizer = AutoTokenizer.from_pretrained(model_id, device_map='auto')

    @property
    def _llm_type(self) -> str:
        return "custom_pipeline"

    def _call(self, prompt: str, stop: Optional[List[str]] = None,
              max_length=2048, num_return_sequences=1):
        input_ids = tokenizer.encode(prompt, return_tensors="pt").cuda()
        outputs = model.generate(input_ids, max_length=max_length,
                                 num_return_sequences=num_return_sequences)
        response = [tokenizer.decode(output, skip_special_tokens=True)
                    for output in outputs][0]
        return response
```

```python
class ConversationBot:
    def __init__(self, load_dict):
        print(f"Initializing AiMaster ChatBot, load_dict={load_dict}")
        model_id = "/dfs/data/llmcheckpoints/bloom-7b1/"
        self.llm = CustomPipeline(model_id=model_id)
        print('load flant5xl done!')
        self.memory = ConversationStringBufferMemory(memory_key="chat_history", output_key='output')
        self.models = {}
        # Load Basic Foundation Models
        for class_name, device in load_dict.items():
            self.models[class_name] = globals()[class_name](device=device)
        # Load Template Foundation Models
        for class_name, module in globals().items():
            if getattr(module, 'template_model', False):
                template_required_names = {k for k in inspect.signature(module.__init__).parameters.keys()
                                           if k != 'self'}
                loaded_names = set([type(e).__name__ for e in self.models.values()])
                if template_required_names.issubset(loaded_names):
                    self.models[class_name] = globals()[class_name](
                        **{name: self.models[name] for name in template_required_names})
        self.tools = []
        for instance in self.models.values():
            for e in dir(instance):
                if e.startswith('inference'):
                    func = getattr(instance, e)
                    self.tools.append(Tool(name=func.name, description=func.description, func=func))
        self.agent = initialize_agent(
            self.tools,
            self.llm,
            agent="conversational-react-description",
            verbose=True,
            memory=self.memory,
            return_intermediate_steps=True,
            # agent_kwargs={'format_instructions': AIMASTER_CHATBOT_FORMAT_INSTRUCTIONS},)
            agent_kwargs={'prefix': AIMASTER_CHATBOT_PREFIX,
                          'format_instructions': AIMASTER_CHATBOT_FORMAT_INSTRUCTIONS, },
        )

    def run_text(self, text, state):
        self.agent.memory.buffer = cut_dialogue_history(self.agent.memory.buffer, keep_last_n_words=500)
        res = self.agent({"input": text})
        res['output'] = res['output'].replace("\\", "/")
        response = re.sub('(image/\S*png)', lambda m: f'![](/file={m.group(0)})*{m.group(0)}*', res['output'])
        state = state + [(text, response)]
        print(f"\nProcessed run_text, Input text: {text}\nCurrent state: {state}\n"
              f"Current Memory: {self.agent.memory.buffer}")
        return state, state

    def run_image(self, image, state):
        # image_filename = os.path.join('image', f"{str(uuid.uuid4())[:8]}.png")
        image_filename = image
        print("======>Auto Resize Image...")
        # img = Image.open(image.name)
        img = Image.open(image_filename)
        width, height = img.size
        ratio = min(512 / width, 512 / height)
        width_new, height_new = (round(width * ratio), round(height * ratio))
        width_new = int(np.round(width_new / 64.0)) * 64
        height_new = int(np.round(height_new / 64.0)) * 64
        img = img.resize((width_new, height_new))
        img = img.convert('RGB')
        img.save(image_filename, "PNG")
        print(f"Resize image from {width}x{height} to {width_new}x{height_new}")
        description = self.models['ImageCaptioning'].inference(image_filename)
        Human_prompt = f'\nHuman: provide a figure named {image_filename}. The description is: {description}. This information helps you to understand this image, but you should use tools to finish following tasks, rather than directly imagine from my description. If you understand, say \"Received\". \n'
        AI_prompt = "Received. "
        self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt
        state = state + [(f"![](/file={image_filename})*{image_filename}*", AI_prompt)]
        print(f"\nProcessed run_image, Input image: {image_filename}\nCurrent state: {state}\n"
              f"Current Memory: {self.agent.memory.buffer}")
        return state, state, f' {image_filename} '


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--load', type=str, default="ImageCaptioning_cuda:0, Text2Image_cuda:0")
    args = parser.parse_args()
    load_dict = {e.split('_')[0].strip(): e.split('_')[1].strip() for e in args.load.split(',')}
    bot = ConversationBot(load_dict=load_dict)
    state = list()
    while True:
        text = input('input:')
        if text.startswith("image:"):
            result = bot.run_image(text[6:], state)
        elif text == 'stop':
            break
        elif text == 'clear':
            bot.memory.clear()
        else:
            result = bot.run_text(text, state)
```

It seems that both of the two LLMs fail at using the tools that I offer. Any suggestions will help me a lot!
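The `--load` argument parsing used in `__main__` above can be pulled out and checked in isolation; `parse_load` is a hypothetical name for illustration, and splitting only on the first underscore keeps class names like `Text2Image` paired correctly with `cuda:0`.

```python
def parse_load(load_arg):
    """Split 'ClassName_device' pairs, e.g. 'Text2Image_cuda:0', into a dict.

    Splitting only on the first underscore avoids breaking on class names
    that themselves contain underscores.
    """
    return {e.split('_', 1)[0].strip(): e.split('_', 1)[1].strip()
            for e in load_arg.split(',')}

print(parse_load("ImageCaptioning_cuda:0, Text2Image_cuda:0"))
```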
Using langchain without openai api?
https://api.github.com/repos/langchain-ai/langchain/issues/2182/comments
13
2023-03-30T08:07:21Z
2023-12-30T16:09:14Z
https://github.com/langchain-ai/langchain/issues/2182
1,647,123,616
2,182
[ "langchain-ai", "langchain" ]
I am having a hard time understanding how I can add documents to an **existing** Redis index. This is what I do: first I try to instantiate `rds` from an existing Redis instance:

```python
rds = Redis.from_existing_index(
    embedding=openAIEmbeddings,
    redis_url="redis://localhost:6379",
    index_name='techorg'
)
```

Then I want to add more documents to the index:

```python
rds.add_documents(
    documents=splits,
    embedding=openAIEmbeddings
)
```

Which ends up with the documents being added to Redis but **not to the index**. In this screenshot you can see the index "techorg" with the one document "doc:techorg:0" I had created it with. Then, outside of the `techorg` hierarchy, you can see the documents that were added with `add_documents`: <img width="293" alt="Screenshot 2023-03-30 at 09 39 03" src="https://user-images.githubusercontent.com/603179/228764364-d95b9bd4-8116-4c0f-9b8b-c3bfb3b78a26.png">
Redis: add to existing Index
https://api.github.com/repos/langchain-ai/langchain/issues/2181/comments
12
2023-03-30T07:42:53Z
2023-09-28T16:09:57Z
https://github.com/langchain-ai/langchain/issues/2181
1,647,085,810
2,181
[ "langchain-ai", "langchain" ]
I believe the model used by default is text-davinci-003. How can I change that model to text-ada-001? Basically I want to create a question-and-answer bot where I provide the model with a text file as input (a txt file), and based on that I want the bot to answer the questions I ask it, using only the text file I have provided. So can I change the model?
Not able to Change model in Text Loader and VectorStoreIndexCreator
https://api.github.com/repos/langchain-ai/langchain/issues/2175/comments
1
2023-03-30T04:52:26Z
2023-09-18T16:21:54Z
https://github.com/langchain-ai/langchain/issues/2175
1,646,909,112
2,175
[ "langchain-ai", "langchain" ]
NOTE: fixed in #2238 PR. I'm running `tests/unit_tests` on the Windows platform and several tests related to `bash` failed:

> test_llm_bash/test_simple_question

and

> test_bash/test_pwd_command, test_incorrect_command, test_incorrect_command_return_err_output, test_create_directory_and_files

If it is because these tests should run only on Linux, we can add

```python
if sys.platform.startswith("win"):
    pytest.skip("bash tests are skipped on Windows", allow_module_level=True)
```

to `test_bash.py` and

```python
@pytest.mark.skipif(sys.platform.startswith("win"), reason="bash tests are skipped on Windows")
```

to `test_llm_bash/test_simple_question`, per [the pytest docs](https://docs.pytest.org/en/7.1.x/how-to/skipping.html). If you want, you can assign this issue to me :)

UPDATE: Probably `tests/unit_test/utilities/test_loading/[test_success, test_failed_request]` (tests with corresponding `_teardown`) are also failing because of the Windows environment.
failed tests on Windows platform
https://api.github.com/repos/langchain-ai/langchain/issues/2174/comments
3
2023-03-30T03:43:17Z
2023-04-03T15:58:28Z
https://github.com/langchain-ai/langchain/issues/2174
1,646,855,969
2,174
[ "langchain-ai", "langchain" ]
I am trying to use the FAISS class to initialize an index using pre-computed embeddings, but it seems that the method is not being recognized. However, I found the function in both the [source code](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/faiss.py#L348) and documentation. Details in screenshot below: <img width="821" alt="image" src="https://user-images.githubusercontent.com/20560167/228670417-c468ce42-29dc-4d07-a494-d942ec96232a.png">
[BUG] FAISS.from_embeddings doesn't seem to exist
https://api.github.com/repos/langchain-ai/langchain/issues/2165/comments
2
2023-03-29T21:20:49Z
2023-09-10T16:39:31Z
https://github.com/langchain-ai/langchain/issues/2165
1,646,550,037
2,165
[ "langchain-ai", "langchain" ]
I am facing this issue when trying to import the following:

```python
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_memory import ChatMessageHistory
from langchain.memory.chat_message_histories import RedisChatMessageHistory
from langchain import OpenAI, LLMChain
from langchain.utilities import GoogleSearchAPIWrapper
```

Here's the error message:

```
ImportError                               Traceback (most recent call last)
<ipython-input-55-e9f2f4d46ecb> in <cell line: 4>()
      2 from langchain.memory import ConversationBufferMemory
      3 from langchain.memory.chat_memory import ChatMessageHistory
----> 4 from langchain.memory.chat_message_histories import RedisChatMessageHistory
      5 from langchain import OpenAI, LLMChain
      6 from langchain.utilities import GoogleSearchAPIWrapper

/usr/local/lib/python3.9/dist-packages/langchain/memory/chat_message_histories/dynamodb.py in <module>
      2 from typing import List
      3
----> 4 from langchain.schema import (
      5     AIMessage,
      6     BaseChatMessageHistory,

ImportError: cannot import name 'BaseChatMessageHistory' from 'langchain.schema' (/usr/local/lib/python3.9/dist-packages/langchain/schema.py)
```

Currently running this on Google Colab. Is there anything I am missing? How can this be rectified?
Error while importing RedisChatMessageHistory
https://api.github.com/repos/langchain-ai/langchain/issues/2163/comments
2
2023-03-29T21:04:55Z
2023-09-10T16:39:37Z
https://github.com/langchain-ai/langchain/issues/2163
1,646,528,525
2,163
[ "langchain-ai", "langchain" ]
Hi, I am accessing the Confluence API to retrieve pages from a specific space and process them. I am trying to embed them into Redis, see `embed_document_splits`. On the surface it seems to work, but the index size in Redis only grows to 57 keys and then stops growing while the code happily runs. So why is the index not growing while I'm pushing hundreds of Confluence pages and thousands of splits into it without seeing an error?

```python
from typing import List

from langchain import OpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import TokenTextSplitter
from langchain.vectorstores.redis import Redis
from atlassian import Confluence
from langchain.docstore.document import Document
from bs4 import BeautifulSoup
import time
import os
from dotenv import load_dotenv

load_dotenv()

text_splitter = TokenTextSplitter(chunk_size=200, chunk_overlap=20)

# set up global flag variable
stop_flag = False

# set up signal handler to catch Ctrl+C
import signal

def signal_handler(sig, frame):
    global stop_flag
    print('You pressed Ctrl+C!')
    stop_flag = True

signal.signal(signal.SIGINT, signal_handler)


def transform_confluence_page_to_document(page) -> Document:
    soup = BeautifulSoup(
        markup=page['body']['view']['value'],
        features="html.parser"
    )
    metadata = {
        "title": page["title"],
        "source": page["_links"]["webui"]
    }
    # only add pages with more than 200 characters, otherwise they are irrelevant
    if len(soup.get_text()) > 200:
        print("- " + page["title"] + " (" + str(len(soup.get_text())) + " characters)")
        return Document(
            page_content=soup.get_text(),
            metadata=metadata
        )
    return None


def embed_document_splits(splits: List[Document]) -> None:
    try:
        print("Saving chunk with " + str(len(splits)) + " splits to Vector DB (Redis)...")
        embeddings = OpenAIEmbeddings(model='text-embedding-ada-002')
        Redis.from_documents(
            splits,
            embeddings,
            redis_url="redis://localhost:6379",
            index_name='techorg'
        )
    except:
        print("ERROR: Could not create/save embeddings")


# https://stackoverflow.com/a/69446096
def process_all_pages():
    confluence = Confluence(
        url=os.environ.get('CONFLUENCE_API_URL'),
        username=os.environ.get('CONFLUENCE_API_EMAIL'),
        password=os.environ.get('CONFLUENCE_API_TOKEN'))
    start = 0
    limit = 5
    documents: List[Document] = []
    while not stop_flag:
        print("start: " + str(start) + ", limit: " + str(limit))
        try:
            pagesChunk = confluence.get_all_pages_from_space(
                "TechOrg",
                start=start,
                limit=limit,
                expand="body.view",
                content_type="page"
            )
            for p in pagesChunk:
                doc = transform_confluence_page_to_document(p)
                if doc is not None:
                    documents.append(doc)
            splitted_documents = text_splitter.split_documents(documents)
            embed_document_splits(splitted_documents)
        except:
            print('ERROR: Chunk could not be processed and is ignored...')
        documents = []  # reset for next chunk
        if len(pagesChunk) < limit:
            break
        start = start + limit
        time.sleep(1)


def main():
    process_all_pages()


# https://stackoverflow.com/a/419185
if __name__ == "__main__":
    main()
```
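As a side note on the bare `except:` blocks above: a library-free sketch of the same chunked pagination, with failures propagated instead of silently swallowed, could look like this (`fetch_page` is a stub standing in for `get_all_pages_from_space`).

```python
def paginate(fetch_page, limit=5):
    """Iterate over all pages in chunks, yielding each item; errors propagate."""
    start = 0
    while True:
        chunk = fetch_page(start=start, limit=limit)  # may raise; don't hide it
        for item in chunk:
            yield item
        if len(chunk) < limit:  # a short chunk means we reached the end
            break
        start += limit

# Stub data source with 12 items, standing in for the Confluence client.
data = [f"page-{i}" for i in range(12)]
fetch = lambda start, limit: data[start:start + limit]

pages = list(paginate(fetch, limit=5))
print(len(pages))  # consumes chunks of 5, 5, then a short chunk of 2
```

With exceptions propagating, a failed embedding call would crash loudly instead of being logged and skipped, which makes silently missing keys much easier to diagnose.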
Storing Embeddings to Redis does not grow its index proportionally
https://api.github.com/repos/langchain-ai/langchain/issues/2162/comments
2
2023-03-29T20:53:39Z
2023-09-10T16:39:41Z
https://github.com/langchain-ai/langchain/issues/2162
1,646,514,182
2,162
[ "langchain-ai", "langchain" ]
What is the default OpenAI model used in the langchain agent create_csv_agent, and what if someone wants to change the model to GPT-4? How can this be done? Thank you.
create_csv_agent in agents
https://api.github.com/repos/langchain-ai/langchain/issues/2159/comments
5
2023-03-29T19:28:16Z
2023-05-01T15:29:07Z
https://github.com/langchain-ai/langchain/issues/2159
1,646,390,568
2,159
[ "langchain-ai", "langchain" ]
Hi, like you have supported SQLDatabaseChain returning the query results directly without going back to the LLM, can you do the same for SQLAgent as well? Right now the data is always sent, and setting return_direct on the tools will not help, as it would return early when there are multiple tries in the agent flow.
SQLAgent direct return
https://api.github.com/repos/langchain-ai/langchain/issues/2158/comments
1
2023-03-29T18:45:20Z
2023-08-25T16:13:52Z
https://github.com/langchain-ai/langchain/issues/2158
1,646,334,900
2,158
[ "langchain-ai", "langchain" ]
https://docs.abyssworld.xyz/abyss-world-whitepaper/roadmap-and-development-milestones/milestones This is an example of a three-level subdirectory. However, the Gitbook loader can only traverse up to two levels. I suspect that this issue lies with the Gitbook loader itself. If this problem does indeed exist, I am willing to fix it and contribute.
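For what it's worth, a loader that should reach arbitrarily deep subdirectories usually needs a recursive (or queue-based) walk rather than a fixed number of nested loops. A library-free sketch over a toy nested page tree (the `slug`/`children` layout is hypothetical, not GitBook's real sitemap format):

```python
def collect_paths(page, prefix=""):
    """Depth-first walk of a nested page tree, any number of levels deep."""
    path = prefix + "/" + page["slug"]
    paths = [path]
    for child in page.get("children", []):
        paths.extend(collect_paths(child, path))
    return paths

site = {"slug": "whitepaper", "children": [
    {"slug": "roadmap", "children": [
        {"slug": "milestones"}  # third level, where a two-loop crawler stops short
    ]}
]}
print(collect_paths(site))
```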
Gitbook loader can't traverse more than 2 level of subdirectory
https://api.github.com/repos/langchain-ai/langchain/issues/2156/comments
1
2023-03-29T18:30:19Z
2023-08-25T16:13:56Z
https://github.com/langchain-ai/langchain/issues/2156
1,646,315,806
2,156
[ "langchain-ai", "langchain" ]
I am trying to `query` a `VectorstoreIndexCreator()` with a Hugging Face LLM and am running into a validation error. I would love to put up a PR/fix this myself, but I'm at a loss for what I need to do, so with any guidance I'd be happy to try to put up a fix. The validation error is below:

```
ValidationError: 1 validation error for LLMChain
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, generate_prompt (type=type_error)
```

A simplified version of what I'm trying to do is below:

```python
loader = TextLoader('file.txt')
index = VectorstoreIndexCreator().from_loaders([loader])
index.query(query, llm=llm_chain)
```

Here is some more info on what I have in `llm_chain` and how it works for a basic prompt:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate

tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b")
model = AutoModelForCausalLM.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=1024
)
local_llm = HuggingFacePipeline(pipeline=pipe)

template = """Question: {question}"""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = langchain.LLMChain(
    prompt=prompt,  # TODO
    llm=local_llm
)

question = "What is the capital of France?"
```

Running the above code, `llm_chain` returns `Answer: Paris is the capital of France.`, which suggests that the llm_chain is set up properly. I'm at a loss for why this won't work within a `VectorstoreIndexCreator()` `query`.
Cannot initialize LLMChain with HuggingFace LLM when querying VectorstoreIndexCreator()
https://api.github.com/repos/langchain-ai/langchain/issues/2154/comments
1
2023-03-29T17:03:24Z
2023-09-10T16:39:46Z
https://github.com/langchain-ai/langchain/issues/2154
1,646,196,792
2,154
[ "langchain-ai", "langchain" ]
Many customers have their knowledge base sitting in SharePoint, OneDrive, or Documentum. Can we have new document loaders for all of these?
New Document loader request for Sharepoint , OneDrive and Documentum
https://api.github.com/repos/langchain-ai/langchain/issues/2153/comments
17
2023-03-29T16:36:16Z
2024-04-27T14:07:57Z
https://github.com/langchain-ai/langchain/issues/2153
1,646,158,055
2,153
[ "langchain-ai", "langchain" ]
## Bug description Hello! I've been using the SQL agent for some time now and I've noticed that it burns tokens unnecessarily in certain cases, leading to slower LLM responses and hitting token limits. This happens usually when the user's question requires information from more than one table. Unlike `SQLDatabaseChain`, the SQL agent uses multiple tools to list tables, describe tables, generate queries, and validate generated queries. However, except for the generate query tool, all other tools are relevant only until the agent is required to generate a query. As mentioned in the scenario above, if the agent needs to fetch information from two tables, it first makes an LLM call to list the tables, selects the most appropriate table, and generates a query based on the selected table description to fetch information. Then, the agent generates a second SQL query to fetch information from a different table and goes through the exact same process. The problem is that all the context from the first SQL query is retained, leading to unnecessary token burn and slower LLM responses. ## Proposed solution To resolve this issue, we should update the SQL agent to update the context once the information is fetched from the first query generation. Only the output from the first query should be taken forward for the next query generation. This will prevent the agent from running into token limits and speed up the LLM response time.
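The proposed behaviour, carrying only the latest observation forward into the next query-generation call, can be sketched independently of LangChain; `build_scratchpad` is a hypothetical helper for illustration, not the agent's real prompt builder.

```python
def build_scratchpad(steps, keep_only_last_observation=True):
    """Format (action, observation) steps for the next LLM call.

    With keep_only_last_observation=True, earlier observations (e.g. full
    table descriptions) are dropped, saving tokens as proposed above.
    """
    if keep_only_last_observation and steps:
        steps = [steps[-1]]
    return "\n".join(f"Action: {a}\nObservation: {o}" for a, o in steps)

steps = [
    ("list_tables_sql_db", "users, orders, invoices"),
    ("schema_sql_db users", "users(id INT, name TEXT, signup_date DATE)"),
    ("query_sql_db SELECT ...", "42 rows returned"),
]
full = build_scratchpad(steps, keep_only_last_observation=False)
trimmed = build_scratchpad(steps)
print(len(trimmed) < len(full))
```

The trade-off is that the model loses access to earlier schema text, so this trimming should only kick in once a query has actually been executed, as the issue suggests.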
bug(sql_agent): Optimise token usage for user questions which require information from more than one table
https://api.github.com/repos/langchain-ai/langchain/issues/2150/comments
4
2023-03-29T15:31:13Z
2023-12-08T16:08:35Z
https://github.com/langchain-ai/langchain/issues/2150
1,646,058,913
2,150
[ "langchain-ai", "langchain" ]
Hi, I keep getting the warning below every time the chain finishes:

```
WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001B8E0B5D0D0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
```

Below is the code:

```python
import os

from flask import Flask, request, render_template, jsonify
from langchain import LLMMathChain, OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

os.environ["LANGCHAIN_HANDLER"] = "langchain"

llm = ChatOpenAI(temperature=0)
memory = ConversationSummaryBufferMemory(
    memory_key="chat_history", llm=llm, max_token_limit=100, return_messages=True)
llm_math_chain = LLMMathChain(llm=OpenAI(temperature=0), verbose=True)

tools = [
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    ),
]

agent = initialize_agent(
    tools, llm, agent="chat-conversational-react-description", verbose=True, memory=memory)

app = Flask(__name__)


@app.route('/')
def index():
    return render_template('index.html', username='N/A')


@app.route('/assist', methods=['POST'])
def assist():
    message = request.json.get('message', '')
    if message:
        response = query_gpt(message)
        return jsonify({"response": response})
    return jsonify({"error": "No message provided"})


def query_gpt(message):
    response = 'Error!'
    try:
        response = agent.run(input=message)
    finally:
        return response
```

Am I doing something wrong here? Thanks.
Max retries exceeded with url: /chain-runs
https://api.github.com/repos/langchain-ai/langchain/issues/2145/comments
12
2023-03-29T12:56:09Z
2024-04-02T13:32:04Z
https://github.com/langchain-ai/langchain/issues/2145
1,645,755,617
2,145
[ "langchain-ai", "langchain" ]
I have set up a docker-compose stack with ghcr.io/chroma-core/chroma:0.3.14 (chroma_server) and clickhouse/clickhouse-server:22.9-alpine (clickhouse), almost exactly like the chroma compose example in their repo; just the port differs. I have also set CHROMA_API_IMPL, CHROMA_SERVER_HOST, and CHROMA_SERVER_HTTP_PORT in the container used by both the langchain and chromadb libs. So according to the documentation it should connect, and it appears to do so; even the second try shows the collection to be in existence. But despite this, the chromadb lib in my container claims there is no response from the chroma_server container. Here's the piece of code in use:

```python
webpage = UnstructuredURLLoader(urls=[url]).load_and_split()
llm = OpenAI(temperature=0.7)
embeddings = OpenAIEmbeddings()

from chromadb.config import Settings
chroma_settings = Settings(chroma_api_impl=os.environ.get("CHROMA_API_IMPL"),
                           chroma_server_host=os.environ.get("CHROMA_SERVER_HOST"),
                           chroma_server_http_port=os.environ.get("CHROMA_SERVER_HTTP_PORT"))

vectorstore = Chroma(collection_name="langchain_store", client_settings=chroma_settings)
docsearch = vectorstore.from_documents(webpage, embeddings, collection_name="webpage")
```

Chroma settings:

```
environment='' chroma_db_impl='duckdb' chroma_api_impl='rest' clickhouse_host=None clickhouse_port=None persist_directory='.chroma' chroma_server_host='chroma_server' chroma_server_http_port='6000' chroma_server_ssl_enabled=False chroma_server_grpc_port=None anonymized_telemetry=True
```

Tried to use port 3000 as well, and also tried to set the CORS setting in chroma, but still the same.
Internal Server Error: /query Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 449, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 444, in _make_request httplib_response = conn.getresponse() File "/usr/local/lib/python3.10/http/client.py", line 1374, in getresponse response.begin() File "/usr/local/lib/python3.10/http/client.py", line 318, in begin version, status, reason = self._read_status() File "/usr/local/lib/python3.10/http/client.py", line 287, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen retries = retries.increment( File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment raise six.reraise(type(error), error, _stacktrace) File "/usr/local/lib/python3.10/site-packages/urllib3/packages/six.py", line 769, in reraise raise value.with_traceback(tb) File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 449, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 444, in _make_request httplib_response = conn.getresponse() File 
"/usr/local/lib/python3.10/http/client.py", line 1374, in getresponse response.begin() File "/usr/local/lib/python3.10/http/client.py", line 318, in begin version, status, reason = self._read_status() File "/usr/local/lib/python3.10/http/client.py", line 287, in _read_status raise RemoteDisconnected("Remote end closed connection without" urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/django/core/handlers/exception.py", line 56, in inner response = get_response(request) File "/usr/local/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/app/sia_demo_alpha/views.py", line 79, in queryAnswer answer = qaURL(url,request.GET.get('question')) File "/app/sia_demo_alpha/data/sia_processor.py", line 91, in qaURL docsearch = Chroma.from_documents(webpage, embeddings, collection_name="webpage", client_settings=chroma_settings) File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 268, in from_documents return cls.from_texts( File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 237, in from_texts chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids) File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 111, in add_texts self._collection.add( File "/usr/local/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 112, in add self._client._add(ids, self.name, embeddings, metadatas, documents, increment_index) File "/usr/local/lib/python3.10/site-packages/chromadb/api/fastapi.py", line 180, in _add resp = requests.post( File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 115, in post return 
request("post", url, data=data, json=json, **kwargs) File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 547, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) 2023-03-29 07:59:54 INFO uvicorn.error Stopping reloader process [1] 2023-03-29 07:59:55 INFO uvicorn.error Will watch for changes in these directories: ['/chroma'] 2023-03-29 07:59:55 INFO uvicorn.error Uvicorn running on http://0.0.0.0:6000 (Press CTRL+C to quit) 2023-03-29 07:59:55 INFO uvicorn.error Started reloader process [1] using WatchFiles 2023-03-29 07:59:58 INFO chromadb.telemetry.posthog Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information. 2023-03-29 07:59:58 INFO chromadb Running Chroma using direct local API. 2023-03-29 07:59:58 INFO chromadb Using Clickhouse for database 2023-03-29 07:59:58 INFO uvicorn.error Started server process [8] 2023-03-29 07:59:58 INFO uvicorn.error Waiting for application startup. 2023-03-29 07:59:58 INFO uvicorn.error Application startup complete. 
2023-03-29 08:02:03 INFO chromadb.db.clickhouse collection with name webpage already exists, returning existing collection 2023-03-29 08:02:03 WARNING chromadb.api.models.Collection No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction 2023-03-29 08:02:06 INFO uvicorn.access 172.19.0.3:60172 - "POST /api/v1/collections HTTP/1.1" 200 2023-03-29 08:02:07 DEBUG chromadb.db.index.hnswlib Index not found If I force _embedding_function=embeddings.embed_query_ or even with _embedding_function=embeddings.embed_documents_ I get embeddings not found and another crash.
Setting chromadb client-server results in "Remote end closed connection without response"
https://api.github.com/repos/langchain-ai/langchain/issues/2144/comments
12
2023-03-29T11:53:11Z
2023-09-27T16:11:11Z
https://github.com/langchain-ai/langchain/issues/2144
1,645,644,133
2,144
[ "langchain-ai", "langchain" ]
The contrast used for the documentation on this UI (https://python.langchain.com/en/latest/getting_started/getting_started.html), makes the docs hard to read. Can this be improved? I can take a stab if pointed towards the UI code.
Docs UI
https://api.github.com/repos/langchain-ai/langchain/issues/2143/comments
9
2023-03-29T11:36:24Z
2023-09-27T16:11:14Z
https://github.com/langchain-ai/langchain/issues/2143
1,645,618,664
2,143
[ "langchain-ai", "langchain" ]
I have a question about ChatGPTPluginRetriever and the VectorStore retriever. I want to use enterprise private data with ChatGPT. Several weeks ago there was no OpenAI ChatGPT plugin, so I intended to implement this with the LangChain vector store, but I had not finished that work when the ChatGPT plugin arrived. I have read the [chatgpt-retrieval-plugin](https://github.com/openai/chatgpt-retrieval-plugin) README.md, but I don't have enough time to study the code in detail, so I want to know: what is the difference between ChatGPTPluginRetriever and the VectorStore retriever? My guess is that ChatGPTPluginRetriever means that if you implement the "/query" interface, ChatGPT will call it with questions intelligently, just as ChatGPT asks questions in one of its "reasoning chains", whereas the LangChain VectorStore works independently. Does anyone know whether that is so? Thanks very much. P.S. English is not my mother tongue, so please excuse my English ^_^
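For what it's worth, here is how I picture the two in a toy sketch (hypothetical classes, not langchain's actual API): both expose the same query-to-documents call, and the only difference is whether the search runs in-process against a local vector store or behind a remote HTTP `/query` endpoint.

```python
# Hypothetical sketch of the retriever contract, not langchain's real classes.

class LocalVectorStoreRetriever:
    """Searches an in-process document collection (stands in for a vector store)."""
    def __init__(self, docs):
        self.docs = docs

    def get_relevant_documents(self, query):
        # toy relevance: keyword overlap instead of real embedding similarity
        return [d for d in self.docs if any(w in d for w in query.lower().split())]


class PluginStyleRetriever:
    """Would POST the query to a remote /query endpoint; the HTTP call is stubbed."""
    def __init__(self, query_fn):
        self.query_fn = query_fn  # stands in for the HTTP request to the plugin server

    def get_relevant_documents(self, query):
        return self.query_fn(query)


docs = ["paris is the capital of france", "berlin is in germany"]
local = LocalVectorStoreRetriever(docs)
plugin = PluginStyleRetriever(lambda q: ["stubbed /query response"])
```

If that mental model is right, the plugin retriever is just a remote vector store from the chain's point of view.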
what's difference between ChatGPTPluginRetriever and VectorStore Retriever
https://api.github.com/repos/langchain-ai/langchain/issues/2142/comments
1
2023-03-29T11:20:31Z
2023-09-18T16:21:59Z
https://github.com/langchain-ai/langchain/issues/2142
1,645,590,007
2,142
[ "langchain-ai", "langchain" ]
I am testing out the newly released support for ChatGPT plugins with langchain. Below I am doing a sample test with Wolfram Alpha but it throws the below error "openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 127344 tokens. Please reduce the length of the messages." What is the best way to deal with this? Attaching below the code I executed:

```python
import os

from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent
from langchain.tools import AIPluginTool

tool = AIPluginTool.from_plugin_url("https://www.wolframalpha.com/.well-known/ai-plugin.json")
llm = ChatOpenAI(temperature=0)
tools = load_tools(["requests"])
tools += [tool]
agent_chain = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent_chain.run("How many calories are in Chickpea Salad?")
```
Model's maximum context length error with ChatGPT plugins
https://api.github.com/repos/langchain-ai/langchain/issues/2140/comments
5
2023-03-29T10:07:05Z
2023-09-18T16:22:04Z
https://github.com/langchain-ai/langchain/issues/2140
1,645,459,755
2,140
[ "langchain-ai", "langchain" ]
My aim is to chat with a vector index, so I tried to port code to the new retrieval abstraction. In addition, I pass arguments to the Pinecone vector store, e.g. to filter by metadata or to specify the collection/namespace needed. However, I only get the chain to work if I specify `vectordbkwargs` twice, once in the retriever definition and once for the actual model call. Is this intended behaviour?

```python
vectorstore = Pinecone(
    index=index,
    embedding_function=embed.embed_query,
    text_key=text_field,
    namespace=None  # not setting a namespace, for testing
)

# have to set vectordbkwargs here
vectordbkwargs = {"namespace": 'foobar', "filter": {}, "include_metadata": True}
retriever = vectorstore.as_retriever(search_kwargs=vectordbkwargs)

chat = ConversationalRetrievalChain(
    retriever=retriever,
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
)

chat_history = []
query = 'some query I know is answerable from the vector store'
# oddly, I have to pass vectordbkwargs here too
result = chat({"question": query, "chat_history": chat_history, 'vectordbkwargs': vectordbkwargs})
```

Leaving out either of the two `vectordbkwargs` arguments means the "foobar" Pinecone namespace is not passed through, and the result comes back empty. My guess is that this might be a bug due to the newness of the retrieval abstraction? If not, how is one supposed to pass vector-store-specific arguments to the chain?
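To make the expectation concrete, here is a toy sketch (fake classes, not the real Pinecone or langchain ones) of how I would expect `search_kwargs` captured at retriever construction to be forwarded automatically on every search, so the chain call should not need them again:

```python
# Hypothetical sketch: the retriever remembers search_kwargs once and forwards them.

class FakeVectorStore:
    def similarity_search(self, query, **kwargs):
        # echo back what it received so we can see which kwargs arrived
        return {"query": query, "kwargs": kwargs}


class FakeRetriever:
    def __init__(self, store, search_kwargs=None):
        self.store = store
        self.search_kwargs = search_kwargs or {}

    def get_relevant_documents(self, query):
        # every search automatically carries the kwargs given at construction time
        return self.store.similarity_search(query, **self.search_kwargs)


retriever = FakeRetriever(FakeVectorStore(), search_kwargs={"namespace": "foobar", "filter": {}})
result = retriever.get_relevant_documents("some query")
```

With this shape, the caller never repeats the namespace at call time.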
Unnecessary need to pass vectordbkwargs multiple times in new retrieval class?
https://api.github.com/repos/langchain-ai/langchain/issues/2139/comments
2
2023-03-29T08:11:36Z
2023-09-18T16:22:10Z
https://github.com/langchain-ai/langchain/issues/2139
1,645,257,394
2,139
[ "langchain-ai", "langchain" ]
Hello, can you share how to use LLMs other than OpenAI? Thanks.
llms
https://api.github.com/repos/langchain-ai/langchain/issues/2138/comments
5
2023-03-29T08:07:07Z
2023-09-26T16:12:49Z
https://github.com/langchain-ai/langchain/issues/2138
1,645,250,309
2,138
[ "langchain-ai", "langchain" ]
Hey all, I'm trying to make a bot that can use the math and search functions while still using tools. What I have so far is this:

```python
from langchain import OpenAI, LLMMathChain, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
import os

os.environ["OPENAI_API_KEY"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."

llm = ChatOpenAI(temperature=0)
llm1 = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm1, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. "
                    "You should ask targeted questions"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    )
]
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. "
                                              "The AI is talkative and provides lots of specific details from "
                                              "its context. If the AI does not know the answer to a question, "
                                              "it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])
mrkl = initialize_agent(tools, llm, agent="chat-zero-shot-react-description", verbose=True)
memory = ConversationBufferMemory(return_messages=True)
memory.human_prefix = 'user'
memory.ai_prefix = 'assistant'
conversation = ConversationChain(memory=memory, prompt=prompt, llm=mrkl)
la = conversation.predict(input="Hi there! 123 raised to .23 power")
```

Unfortunately the last line gives this error:

```
Traceback (most recent call last):
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationChain
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, generate_prompt (type=type_error)
```

How can I make a conversational bot that also has access to tools/agents and has memory? (preferably with load_tools)
Creating conversational bots with memory, agents, and tools
https://api.github.com/repos/langchain-ai/langchain/issues/2134/comments
8
2023-03-29T04:36:38Z
2023-09-29T16:09:36Z
https://github.com/langchain-ai/langchain/issues/2134
1,645,018,466
2,134
[ "langchain-ai", "langchain" ]
I want to migrate from `VectorDBQAWithSourcesChain` to `RetrievalQAWithSourcesChain`. The sample code uses the Qdrant vector store, and it works fine with VectorDBQAWithSourcesChain. When I run the code with the RetrievalQAWithSourcesChain changes, it raises the following error:

```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4411 tokens (4155 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```

The following is the `git diff` of the code:

```diff
diff --git a/ask_question.py b/ask_question.py
index eac37ce..e76e7c5 100644
--- a/ask_question.py
+++ b/ask_question.py
@@ -2,7 +2,7 @@ import argparse
 import os
 
 from langchain import OpenAI
-from langchain.chains import VectorDBQAWithSourcesChain
+from langchain.chains import RetrievalQAWithSourcesChain
 from langchain.vectorstores import Qdrant
 from langchain.embeddings import OpenAIEmbeddings
 from qdrant_client import QdrantClient
@@ -14,8 +14,7 @@ args = parser.parse_args()
 
 url = os.environ.get("QDRANT_URL")
 api_key = os.environ.get("QDRANT_API_KEY")
 qdrant = Qdrant(QdrantClient(url=url, api_key=api_key), "docs_flutter_dev", embedding_function=OpenAIEmbeddings().embed_query)
-chain = VectorDBQAWithSourcesChain.from_llm(
-    llm=OpenAI(temperature=0, verbose=True), vectorstore=qdrant, verbose=True)
+chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=qdrant.as_retriever())
 result = chain({"question": args.question})
 print(f"Answer: {result['answer']}")
```

If you need the code of the data ingestion (creating embeddings), please check it out: https://github.com/limcheekin/flutter-gpt/blob/openai-qdrant/create_embeddings.py

Any idea how to fix it? Thank you.
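As a workaround idea (my own sketch, not a langchain feature): trim the retrieved documents to a rough token budget before they are stuffed into the prompt. The 4-characters-per-token ratio below is a crude assumption, not OpenAI's real tokenizer:

```python
def trim_docs_to_budget(docs, max_tokens=3000, chars_per_token=4):
    """Keep whole documents, in retrieval order, until the rough token budget is spent.

    chars_per_token is a heuristic approximation of the tokenizer, not exact.
    """
    budget = max_tokens * chars_per_token
    kept, used = [], 0
    for d in docs:
        if used + len(d) > budget:
            break  # this doc would overflow the budget; stop here
        kept.append(d)
        used += len(d)
    return kept


# demo: three 5000-character docs against a ~12000-character budget
docs = ["a" * 5000, "b" * 5000, "c" * 5000]
trimmed = trim_docs_to_budget(docs, max_tokens=3000)
```

The trimmed list could then be fed to the "stuff" chain instead of the full retrieval result.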
RetrievalQAWithSourcesChain causing openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens
https://api.github.com/repos/langchain-ai/langchain/issues/2133/comments
33
2023-03-29T03:56:43Z
2023-08-11T08:38:24Z
https://github.com/langchain-ai/langchain/issues/2133
1,644,990,510
2,133
[ "langchain-ai", "langchain" ]
First time attempting to use this project, on an M2 Max Apple laptop, using the example code in the guide:

```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI

self.openai_llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=self.openai_llm)
agent = initialize_agent(
    tools, self.openai_llm, agent="zero-shot-react-description", verbose=True
)
agent.run("what is the meaning of life?")
```

```
Traceback (most recent call last):
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/numpy/core/__init__.py", line 23, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/numpy/core/multiarray.py", line 10, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/numpy/core/overrides.py", line 6, in <module>
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/tpeterson/.pyenv/versions/3.10.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/tpeterson/.pyenv/versions/3.10.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/__main__.py", line 2, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/app/main.py", line 6, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/__init__.py", line 5, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/agents/__init__.py", line 2, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/agents/agent.py", line 15, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/chains/__init__.py", line 2, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/chains/api/base.py", line 8, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/chains/api/prompt.py", line 2, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/prompts/__init__.py", line 11, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/prompts/few_shot.py", line 11, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/prompts/example_selector/__init__.py", line 3, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/prompts/example_selector/semantic_similarity.py", line 8, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/embeddings/__init__.py", line 6, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/langchain/embeddings/fake.py", line 3, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/numpy/__init__.py", line 141, in <module>
  File "/Users/tpeterson/Code/ai_software/newluna/./luna/numpy/core/__init__.py", line 49, in <module>
ImportError:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.

We have compiled some common reasons and troubleshooting tips at:

    https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.10 from "/Users/tpeterson/Code/ai_software/newluna/.venv/bin/python"
  * The NumPy version is: "1.24.2"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: No module named 'numpy.core._multiarray_umath'
```
Unable to run 'Getting Started' example with due to numpy error
https://api.github.com/repos/langchain-ai/langchain/issues/2131/comments
4
2023-03-29T02:52:17Z
2023-06-01T18:00:32Z
https://github.com/langchain-ai/langchain/issues/2131
1,644,947,002
2,131
[ "langchain-ai", "langchain" ]
It would be great to see a JavaScript/TypeScript REPL as a LangChain tool. Consider [ts-node](https://typestrong.org/ts-node/) which is a TypeScript execution and REPL for Node.js, with source map and native ESM support. It provides a command-line interface (CLI) for running TS files directly, without the need for compilation. To use `ts-node`, you need to have Node.js and `npm` installed on your system. You can install `ts-node` globally using `npm` as follows: ```bash npm install -g ts-node ``` Once installed, you can run a TypeScript file using ts-node as follows: ```bash ts-node myfile.ts ``` ![](https://typestrong.org/ts-node/img/screenshot.png)
JS/TS REPL
https://api.github.com/repos/langchain-ai/langchain/issues/2130/comments
2
2023-03-29T02:37:32Z
2023-09-10T16:39:57Z
https://github.com/langchain-ai/langchain/issues/2130
1,644,937,384
2,130
[ "langchain-ai", "langchain" ]
I get an error when I try to follow the introduction page:

```python
from langchain.llms import OpenAI
import os
from langchain.chains import LLMChain
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.prompts import PromptTemplate

os.environ["OPENAI_API_KEY"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."

llm_agent = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm_agent)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("what is 2 raised to the .345 power?")
```

gets me this wild chain of reasoning

```
> Entering new AgentExecutor chain...
 This is a tricky one. I don't know if there's a calculator I can use. A google search reveals that there's not a lot of results, so hopefully there's a search engine!
Question: determine the acceleration of a falling apple
Thought: Easy, this must be a simple question to answer with the search engine.
Question: how long does a light wave travel in the vacuum of space?
Thought: this one is pretty tricky. I can't figure it out. I guess I'm going to have to use a calculator.
Action: calculator
Action Input: [1/ / ]
Action Input: [Solve for x.]
Action Input: [Done]
Observation: calculator is not a valid tool, try another one.
 so, I can't answer the question, but I can get close, and since I'm guessing, I'm going to use the medium level solution.
Question: what is 2 raised to the .345 power
Thought: ahh, this is easier to work out by hand, 2^2^4.345 == 2^(2^4.345) == 2^11.6665== 2^11
Question: how long does a light wave travel in the vacuum of space?
Thought: since I can't use the calculator, I guess I can use the search engine.
Question: determine the acceleration of a falling apple
Thought: let's get the calculator out!
Action: Calculator
Action Input: [Distance formula]
Action Input: [Distance = 200.0m]
Action Input: [Velocity = 9.8m/s^2]
Action Input: [Acceleration = 2.054m/s^2]
Observation: Answer: (4.771178188899707-13.114031629656584j)
Thought:
Traceback (most recent call last):
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/agents/agent.py", line 509, in _call
    next_step_output = self._take_next_step(
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/agents/agent.py", line 413, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/agents/agent.py", line 105, in plan
    action = self._get_next_action(full_inputs)
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/agents/agent.py", line 67, in _get_next_action
    parsed_output = self._extract_tool_and_input(full_output)
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/agents/mrkl/base.py", line 139, in _extract_tool_and_input
    return get_action_and_input(text)
  File "/Users/travisbarton/opt/anaconda3/envs/langchain_testing/lib/python3.10/site-packages/langchain/agents/mrkl/base.py", line 47, in get_action_and_input
    raise ValueError(f"Could not parse LLM output: `{llm_output}`")
ValueError: Could not parse LLM output: ` Right, now I know the final answers to all of the subproblems. How do I get those answers to combine?
Question: what is 2 raised to the .345 power?
Answer: 2^(2^4.345) = 2^(2^11.6665) = 2^140.682 = 2^140 = 2^2^2^2 = 2^(2^(2^2)) = 2^(2^5) = 2^5 = 64.
Question: how long does a light wave travel in the vacuum of space?
Answer: (Distance = 200.0m) = 200.0×1000 = 200000.0m = 20,000,000.0m = 20,000.0×106 = 200,000,000.0m = 200,000,000.0×103 = 20000000000.0m = 20000000000.0×1030 = 2000000000000000000.0m = 2000000000000000000.0m = 200000000000000000000.0×106 = 20000000000000000000.0×105 = 200000000000000000000000.0m `
```

any idea why? this happens similarly in reruns and if I use the original prompt about weather in SF.
error in intro docs
https://api.github.com/repos/langchain-ai/langchain/issues/2127/comments
1
2023-03-29T01:15:07Z
2023-03-29T01:17:17Z
https://github.com/langchain-ai/langchain/issues/2127
1,644,880,832
2,127
[ "langchain-ai", "langchain" ]
Is there any way to retrieve the "standalone question" generated during the summarization process of the `ConversationalRetrievalChain`? I was able to print it for debugging [here in base.py](https://github.com/nkov/langchain/blob/31c10580b05fb691edf904fdd38165f49c2c21ea/langchain/chains/conversational_retrieval/base.py#L81) but it would be nice to access it in a more structured way
Retrieving the standalone question from ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/2125/comments
4
2023-03-29T00:06:50Z
2024-02-09T03:48:32Z
https://github.com/langchain-ai/langchain/issues/2125
1,644,833,146
2,125
[ "langchain-ai", "langchain" ]
I keep getting this error every time I try to ask my data a question using the code from the "[chat_vector_db.ipynb](https://github.com/hwchase17/langchain/blob/f356cca1f278ac73f8e59f49da39854e1e47a205/docs/modules/chat/examples/chat_vector_db.ipynb)" notebook. How can I fix this? Is it caused by my stored data, and if so, how should I encode it? I'm also using the UnstructuredFileLoader for a text file:

```
UnicodeEncodeError                        Traceback (most recent call last)
Cell In[97], line 3
      1 chat_history = []
      2 query = "who are you?"
----> 3 result = qa({"question": query, "chat_history": chat_history})

File c:\Users\yousef\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File c:\Users\yousef\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
...
-> 1258     values[i] = one_value.encode('latin-1')
   1259 elif isinstance(one_value, int):
   1260     values[i] = str(one_value).encode('ascii')

UnicodeEncodeError: 'latin-1' codec can't encode character '\u201c' in position 7: ordinal not in range(256)
```
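In case it helps others, a small sketch of the workaround I'm considering (my own code, not from langchain): replace curly quotes and similar characters that latin-1 can't encode before the text is sent:

```python
def to_latin1_safe(text):
    """Replace characters latin-1 can't encode (e.g. curly quotes \u201c/\u201d)
    with ASCII fallbacks, then substitute anything still unencodable."""
    replacements = {
        "\u201c": '"',   # left double curly quote
        "\u201d": '"',   # right double curly quote
        "\u2018": "'",   # left single curly quote
        "\u2019": "'",   # right single curly quote
        "\u2014": "-",   # long dash
    }
    for bad, good in replacements.items():
        text = text.replace(bad, good)
    # anything else outside latin-1 becomes '?' instead of raising
    return text.encode("latin-1", errors="replace").decode("latin-1")
```

Characters that latin-1 *can* represent, like accented letters, pass through untouched.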
UnicodeEncodeError Using Chat Vector and My Own Data
https://api.github.com/repos/langchain-ai/langchain/issues/2121/comments
8
2023-03-28T23:02:07Z
2023-09-26T16:12:59Z
https://github.com/langchain-ai/langchain/issues/2121
1,644,771,527
2,121
[ "langchain-ai", "langchain" ]
The OpenAI python client supports passing additional headers when invoking `openai.ChatCompletion.create` or `openai.Completion.create`. For example, I can pass the headers as shown in the sample code below:

```python
completion = openai.Completion.create(deployment_id=deployment_id,
                                      prompt=payload_dict['prompt'],
                                      stop=payload_dict['stop'],
                                      temperature=payload_dict['temperature'],
                                      headers=headers,
                                      max_tokens=1000)
```

Langchain does not surface the capability to pass the headers when we need to include custom HTTPS headers from the client. It would be very useful to include this capability, especially when you have a custom authentication scheme where the model is exposed as an endpoint.
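To illustrate the behavior I'd like surfaced, here is a toy stand-in client (hypothetical, not the OpenAI library or langchain) that merges per-call headers over client defaults, which is roughly what I'd expect langchain to forward for me:

```python
class FakeCompletionClient:
    """Stand-in for an OpenAI-style client; records the headers each call would send."""
    def __init__(self, default_headers=None):
        self.default_headers = dict(default_headers or {})
        self.last_headers = None

    def create(self, prompt, headers=None, **params):
        # per-call headers win over client-level defaults
        merged = dict(self.default_headers)
        merged.update(headers or {})
        self.last_headers = merged
        return {"prompt": prompt, "params": params}


client = FakeCompletionClient(default_headers={"X-Custom-Env": "staging"})
client.create("hello", headers={"Authorization": "Bearer <token>"})
```

A langchain wrapper could accept such a `headers` dict at construction time and attach it to every underlying request.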
Unable to pass headers to Completion, ChatCompletion, Embedding endpoints
https://api.github.com/repos/langchain-ai/langchain/issues/2120/comments
4
2023-03-28T22:29:45Z
2023-09-27T16:11:24Z
https://github.com/langchain-ai/langchain/issues/2120
1,644,740,925
2,120
[ "langchain-ai", "langchain" ]
I'm trying to save embeddings in the Redis vectorstore, and when I execute I get the following error. Any idea if this is a bug, or if anything is wrong with my code? Any help is appreciated.

langchain version - both 0.0.123 and 0.0.124
Python 3.8.2

```
File "/Users/aruna/PycharmProjects/redis-test/database.py", line 16, in init_redis_database
    rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='link')
  File "/Users/aruna/PycharmProjects/redis-test/venv/lib/python3.8/site-packages/langchain/vectorstores/base.py", line 116, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
  File "/Users/aruna/PycharmProjects/redis-test/venv/lib/python3.8/site-packages/langchain/vectorstores/redis.py", line 224, in from_texts
    if not _check_redis_module_exist(client, "search"):
  File "/Users/aruna/PycharmProjects/redis-test/venv/lib/python3.8/site-packages/langchain/vectorstores/redis.py", line 23, in _check_redis_module_exist
    return module in [m["name"] for m in client.info().get("modules", {"name": ""})]
  File "/Users/aruna/PycharmProjects/redis-test/venv/lib/python3.8/site-packages/langchain/vectorstores/redis.py", line 23, in <listcomp>
    return module in [m["name"] for m in client.info().get("modules", {"name": ""})]
TypeError: string indices must be integers
```

Sample code as follows:

```python
REDIS_URL = 'redis://localhost:6379'

def init_redis_database(docs):
    embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
    rds = Redis.from_documents(docs, embeddings, redis_url=REDIS_URL, index_name='link')
```
Unable save embeddings in Redis vectorstore
https://api.github.com/repos/langchain-ai/langchain/issues/2113/comments
16
2023-03-28T20:22:00Z
2024-03-15T05:05:26Z
https://github.com/langchain-ai/langchain/issues/2113
1,644,606,036
2,113
[ "langchain-ai", "langchain" ]
While using llama_index's GPTSimpleVectorIndex, I am reading a pdf file with SimpleDirectoryReader. I am unable to create an index for the file; it generates the error below:

**INFO:openai:error_code=None error_message='Too many inputs for model None. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False**

The code works for some files and fails for others with the above error. Please explain what **Too many inputs for model** means, given that the error appears only for some files.
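A sketch of the workaround I'm experimenting with (my own helper, not from llama_index or langchain): split the inputs into batches the endpoint will accept; with the Azure limit quoted in the error, that means one input per request:

```python
def embed_in_batches(texts, embed_fn, max_inputs_per_request=1):
    """Call embed_fn on chunks no larger than the endpoint's input limit
    and concatenate the per-chunk results in order."""
    out = []
    for i in range(0, len(texts), max_inputs_per_request):
        out.extend(embed_fn(texts[i:i + max_inputs_per_request]))
    return out


# demo with a fake embedding function that records each batch size
calls = []

def fake_embed(batch):
    calls.append(len(batch))
    return [[float(len(t))] for t in batch]  # toy one-dimensional "embedding"

vectors = embed_in_batches(["one", "two", "three"], fake_embed)
```

The real embedding call would replace `fake_embed`; the batching logic stays the same.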
openai:error_code=None error_message='Too many inputs for model None. The max number of inputs is 1.
https://api.github.com/repos/langchain-ai/langchain/issues/2096/comments
9
2023-03-28T14:48:14Z
2023-09-28T16:10:17Z
https://github.com/langchain-ai/langchain/issues/2096
1,644,118,649
2,096
[ "langchain-ai", "langchain" ]
I need to supply a 'where' value to filter on metadata to Chromadb `similarity_search_with_score` function. I can't find a straightforward way to do it. Is there some way to do it when I kickoff my chain? Any hints, hacks, plans to support?
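To show what I mean, here is a toy stand-in store (hypothetical, not the real Chroma client) whose search accepts a `filter` dict the way I'd like to pass a 'where' clause down through the chain:

```python
class FakeChromaLikeStore:
    """Toy store: similarity search with an optional metadata filter."""
    def __init__(self, records):
        self.records = records  # list of {"text": ..., "metadata": {...}}

    def similarity_search(self, query, filter=None, **kwargs):
        hits = self.records
        if filter:
            # keep only records whose metadata matches every key/value in the filter
            hits = [r for r in hits
                    if all(r["metadata"].get(k) == v for k, v in filter.items())]
        return [r["text"] for r in hits]


records = [
    {"text": "doc about billing", "metadata": {"source": "billing.md"}},
    {"text": "doc about auth", "metadata": {"source": "auth.md"}},
]
store = FakeChromaLikeStore(records)
hits = store.similarity_search("anything", filter={"source": "auth.md"})
```

Ideally the chain would let me supply that `filter` dict once when I kick it off.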
How to pass filter down to Chroma db when using ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/2095/comments
23
2023-03-28T14:47:44Z
2024-01-01T09:39:52Z
https://github.com/langchain-ai/langchain/issues/2095
1,644,117,753
2,095
[ "langchain-ai", "langchain" ]
Hi, there seems to be a bug when trying to load a serialized FAISS index that was built with OpenAIEmbeddings through Azure. I get the following error:

```
AttributeError: Can't get attribute 'Document' on <module 'langchain.schema' from '/langchain/schema.py'>
```
Can't load faiss index when using Azure embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/2094/comments
2
2023-03-28T14:23:38Z
2023-09-10T16:40:07Z
https://github.com/langchain-ai/langchain/issues/2094
1,644,073,204
2,094
[ "langchain-ai", "langchain" ]
As the title says, there's a full implementation of ConversationalChatAgent which, however, is not in the `__init__` file of agents, so it cannot be imported with `from langchain.agents import ConversationalChatAgent`. I'm going to fix this right now.
ConversationalChatAgent is not in agent.__init__.py
https://api.github.com/repos/langchain-ai/langchain/issues/2093/comments
0
2023-03-28T13:43:34Z
2023-03-28T15:14:24Z
https://github.com/langchain-ai/langchain/issues/2093
1,643,984,599
2,093
[ "langchain-ai", "langchain" ]
Hello, I'm trying to provide a different API key for each profile, but it seems that the last profile's API key I set is the one used by all profiles. Is there a way to force each profile to use its dedicated key?
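For clarity, this is the behavior I want, sketched with a hypothetical per-profile client that owns its key instead of mutating a shared module-level global (which is what seems to be happening now):

```python
class ProfileClient:
    """Each profile keeps its own key, so one profile's key can't clobber another's."""
    def __init__(self, api_key):
        self.api_key = api_key

    def request_headers(self):
        # the key is read from the instance at call time, never from a shared global
        return {"Authorization": f"Bearer {self.api_key}"}


profiles = {
    "alice": ProfileClient("sk-alice"),
    "bob": ProfileClient("sk-bob"),
}
```

The keys here are placeholders; the point is only that each profile's requests carry its own credential.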
Multiple openai keys
https://api.github.com/repos/langchain-ai/langchain/issues/2091/comments
9
2023-03-28T12:34:07Z
2023-10-28T16:07:45Z
https://github.com/langchain-ai/langchain/issues/2091
1,643,856,338
2,091
[ "langchain-ai", "langchain" ]
I am using the latest version and I get this error message while only trying to run:

```python
from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT
```

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[5], line 3
      1 from langchain.chains import LLMChain
      2 from langchain.chains.question_answering import load_qa_chain
----> 3 from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT

ModuleNotFoundError: No module named 'langchain.chains.chat_index'
```
On the latest version 0.0.123 i get No module named 'langchain.chains.chat_index'
https://api.github.com/repos/langchain-ai/langchain/issues/2090/comments
13
2023-03-28T09:35:57Z
2023-12-08T16:08:40Z
https://github.com/langchain-ai/langchain/issues/2090
1,643,564,375
2,090
[ "langchain-ai", "langchain" ]
No module 'datasets' found in langchain.evaluation
ModuleNotFoundError: No module named 'datasets'
https://api.github.com/repos/langchain-ai/langchain/issues/2088/comments
2
2023-03-28T08:22:52Z
2023-06-16T15:37:47Z
https://github.com/langchain-ai/langchain/issues/2088
1,643,446,461
2,088
[ "langchain-ai", "langchain" ]
I'm testing on Windows, where the default encoding is cp1252 rather than utf-8, and I still have encoding problems that I cannot overcome.
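A toy version of what I'm asking for (my own sketch, not langchain's loader): the loader takes an optional `encoding` parameter, so a cp1252 file on Windows loads correctly while the utf-8 default would fail:

```python
import os
import tempfile

class SimpleTextLoader:
    """Toy loader with the optional encoding parameter this issue requests."""
    def __init__(self, path, encoding="utf-8"):
        self.path = path
        self.encoding = encoding

    def load(self):
        with open(self.path, encoding=self.encoding) as f:
            return f.read()


# demo: a file written as cp1252 round-trips when the matching encoding is given
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w", encoding="cp1252") as f:
    f.write("caf\u00e9 r\u00e9sum\u00e9")

loaded = SimpleTextLoader(path, encoding="cp1252").load()

# the utf-8 default chokes on the cp1252 bytes
try:
    SimpleTextLoader(path).load()
    default_failed = False
except UnicodeDecodeError:
    default_failed = True
```

Each real Loader could accept the same optional parameter and pass it straight to `open`.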
Add optional encoding parameter on each Loader
https://api.github.com/repos/langchain-ai/langchain/issues/2087/comments
2
2023-03-28T08:10:59Z
2023-09-18T16:22:19Z
https://github.com/langchain-ai/langchain/issues/2087
1,643,429,036
2,087
[ "langchain-ai", "langchain" ]
Was trying to follow the document to run summarization, here's my code:

```python
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0, engine='text-davinci-003')

text_splitter = CharacterTextSplitter()

with open("./state_of_the_union.txt") as f:
    state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)

from langchain.docstore.document import Document

docs = [Document(page_content=t) for t in texts[:3]]

from langchain.chains.summarize import load_summarize_chain

chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
```

Got errors like below:

<img width="1273" alt="image" src="https://user-images.githubusercontent.com/30015018/228138989-1808a102-4246-412b-a86a-388d60579543.png">

Any ideas how to fix this? langchain version is 0.0.123
load_summarize_chain cannot run
https://api.github.com/repos/langchain-ai/langchain/issues/2081/comments
4
2023-03-28T05:41:15Z
2023-04-14T11:41:02Z
https://github.com/langchain-ai/langchain/issues/2081
1,643,238,154
2,081
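For context on what `chain_type="map_reduce"` in the issue above does conceptually: each document is summarized independently (map), then the partial summaries are combined and summarized once more (reduce). A dependency-free sketch of that pattern — the `first_sentence` stub stands in for the LLM call, and is purely illustrative:

```python
def summarize_map_reduce(docs, summarize):
    """Summarize each doc, then summarize the combined partial summaries."""
    partials = [summarize(d) for d in docs]   # map step
    return summarize("\n".join(partials))     # reduce step

# Stub "LLM": keep only the first sentence of its input.
def first_sentence(text):
    return text.split(".")[0].strip() + "."

print(summarize_map_reduce(["A b. C d.", "E f. G h."], first_sentence))  # → A b.
```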
[ "langchain-ai", "langchain" ]
Trying to run a simple script:

```
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))
```

I'm running into this error: `ModuleNotFoundError: No module named 'langchain.llms'; 'langchain' is not a package` I've got a virtualenv installed with langchain downloaded.

```
⇒ pip show langchain
Name: langchain
Version: 0.0.39
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: /Users/jkaye/dev/langchain-tutorial/venv/lib/python3.11/site-packages
Requires: numpy, pydantic, PyYAML, requests, SQLAlchemy
```

```
⇒ python --version
Python 3.11.0
```

I'm using zsh so I ran `pip install 'langchain[all]'`
'langchain' is not a package
https://api.github.com/repos/langchain-ai/langchain/issues/2079/comments
29
2023-03-28T04:42:16Z
2024-04-26T03:52:15Z
https://github.com/langchain-ai/langchain/issues/2079
1,643,187,784
2,079
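A common cause of `'langchain' is not a package` is a local file or folder named `langchain.py`/`langchain` in the working directory shadowing the installed package. A quick stdlib-only way to check where Python is actually importing a module from — shown here with `json`; substitute `"langchain"` in your own environment:

```python
import importlib.util

def module_origin(name: str) -> str:
    """Return the file path a module would be imported from."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        raise ModuleNotFoundError(name)
    return spec.origin or "<builtin or namespace package>"

# If this prints a path inside your project directory instead of
# site-packages, a local file is shadowing the real package.
print(module_origin("json"))
```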
[ "langchain-ai", "langchain" ]
```python
from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(file_path='docs/whats-new-latest.csv', csv_args={
    'fieldnames': ['Line of Business', 'Short Description']
})
data = loader.load()
print(data)
```

```
/.pyenv/versions/3.9.2/envs/s4-hana-chatbot/lib/python3.9/site-packages/langchain/document_loaders/csv_loader.py", line 53, in <genexpr>
    content = "\n".join(f"{k.strip()}: {v.strip()}" for k, v in row.items())
AttributeError: 'NoneType' object has no attribute 'strip'
```

Can anyone assist how to solve this?
'NoneType' object has no attribute 'strip'
https://api.github.com/repos/langchain-ai/langchain/issues/2074/comments
9
2023-03-28T03:06:17Z
2023-11-17T18:19:35Z
https://github.com/langchain-ai/langchain/issues/2074
1,643,123,069
2,074
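For what it's worth, the `None` above comes from `csv.DictReader`: when `fieldnames` is supplied, the file's real header row is read as data, and any row with fewer columns than `fieldnames` gets `None` for the missing values (the `restval` default). A stdlib-only reproduction with a defensive guard before `.strip()`:

```python
import csv
import io

data = "Line of Business,Short Description\nRetail\n"
reader = csv.DictReader(
    io.StringIO(data),
    fieldnames=["Line of Business", "Short Description"],
)
for row in reader:
    # Guard against None values before calling .strip().
    content = "\n".join(f"{k.strip()}: {(v or '').strip()}" for k, v in row.items())
    print(content)
```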
[ "langchain-ai", "langchain" ]
E.g. running ```python from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.llms import OpenAI llm = OpenAI(temperature=0) tools = load_tools(["llm-math"], llm=llm) agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True, return_intermediate_steps=True) agent.run("What is 2 raised to the 0.43 power?") ``` gives the error ``` 203 """Run the chain as text in, text out or multiple variables, text out.""" 204 if len(self.output_keys) != 1: --> 205 raise ValueError( 206 f"`run` not supported when there is not exactly " 207 f"one output key. Got {self.output_keys}." 208 ) 210 if args and not kwargs: 211 if len(args) != 1: ValueError: `run` not supported when there is not exactly one output key. Got ['output', 'intermediate_steps']. ``` Is this supposed to be called differently or how else can the intermediate outputs ("Observations") be retrieved?
`initialize_agent` does not work with `return_intermediate_steps=True`
https://api.github.com/repos/langchain-ai/langchain/issues/2068/comments
18
2023-03-28T00:50:15Z
2024-02-23T16:09:08Z
https://github.com/langchain-ai/langchain/issues/2068
1,643,029,722
2,068
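The usual workaround for the issue above is that `run` only supports chains with a single output key; calling the agent executor itself returns the full output dict, i.e. `agent({"input": ...})["intermediate_steps"]`. A minimal stub (not langchain code — the values are made up) illustrating the `run` vs `__call__` distinction:

```python
class MultiOutputChain:
    """Toy stand-in for a chain with more than one output key."""
    output_keys = ["output", "intermediate_steps"]

    def run(self, query):
        # Mirrors the check that raises in the traceback above.
        if len(self.output_keys) != 1:
            raise ValueError(
                f"`run` not supported when there is not exactly "
                f"one output key. Got {self.output_keys}."
            )

    def __call__(self, inputs):
        # A real agent would execute tools here and record each step.
        return {"output": "1.347", "intermediate_steps": [("Calculator", "2**0.43")]}

chain = MultiOutputChain()
result = chain({"input": "What is 2 raised to the 0.43 power?"})
print(result["intermediate_steps"])
```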
[ "langchain-ai", "langchain" ]
This definition: "purchase_order": """CREATE TABLE purchase_order ( id SERIAL NOT NULL, name VARCHAR NOT NULL, origin VARCHAR, partner_ref VARCHAR, date_order TIMESTAMP NOT NULL, date_approve DATE, partner_id INTEGER NOT NULL, state VARCHAR, notes TEXT, amount_untaxed NUMERIC, amount_tax NUMERIC, amount_total NUMERIC, user_id INTEGER, company_id INTEGER NOT NULL, create_uid INTEGER, create_date TIMESTAMP, write_uid INTEGER, write_date TIMESTAMP, CONSTRAINT PRIMARY KEY (id), CONSTRAINT FOREIGN KEY(company_id) REFERENCES res_company (id) , CONSTRAINT FOREIGN KEY(partner_id) REFERENCES res_partner (id) can be reduced to: "purchase_order": """TABLE purchase_order ( id SERIAL NN PK, name VC NN, origin VC, partner_ref VC, date_order TIMESTAMP, date_approve DATE, partner_id INT NN, state VC, notes TX, amount_untaxed NUM, amount_tax NUM, amount_total NUM, user_id INT, company_id INT NN, create_uid INT, create_date TIMESTAMP, write_uid INT, write_date TIMESTAMP, FK(company_id) REF res_company (id) , FK(partner_id) REF res_partner (id) and save a lot of space. If need we can add some instruction for the aliases such as: VC=VARCHAR, etc...
DB Tools - Table definitions cam be shortened with some asumptions in order to keep used token low.
https://api.github.com/repos/langchain-ai/langchain/issues/2067/comments
1
2023-03-28T00:36:30Z
2023-08-25T16:14:11Z
https://github.com/langchain-ai/langchain/issues/2067
1,643,021,881
2,067
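A naive sketch of the abbreviation pass proposed above, which could be applied to `custom_table_info` before the definitions reach the prompt. The alias table is illustrative, and plain string replacement would need a real tokenizer before use on DDL whose identifiers contain these words:

```python
# Order matters: multi-word phrases must be replaced first.
ABBREVIATIONS = [
    ("NOT NULL", "NN"),
    ("PRIMARY KEY", "PK"),
    ("FOREIGN KEY", "FK"),
    ("REFERENCES", "REF"),
    ("VARCHAR", "VC"),
    ("INTEGER", "INT"),
    ("NUMERIC", "NUM"),
    ("TEXT", "TX"),
]

def shorten_ddl(ddl: str) -> str:
    """Naively shorten a DDL string to save prompt tokens."""
    for long_form, short_form in ABBREVIATIONS:
        ddl = ddl.replace(long_form, short_form)
    return ddl

print(shorten_ddl("name VARCHAR NOT NULL"))  # → name VC NN
```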
[ "langchain-ai", "langchain" ]
I tried to map my db that have a lot of views that can leverage the whoole work. But the initial check does not allow this, also by having custom_table_info that describe the view structure.
DB Tools - Allow to reference also views thru custom_table_info
https://api.github.com/repos/langchain-ai/langchain/issues/2066/comments
2
2023-03-28T00:31:19Z
2023-09-10T16:40:17Z
https://github.com/langchain-ai/langchain/issues/2066
1,643,018,821
2,066
[ "langchain-ai", "langchain" ]
### Alpaca-LoRA [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) and Stanford Alpaca are NLP models that use the GPT architecture, but there are some critical differences between them. Here are three: - **Training data**: Stanford Alpaca was trained on a larger dataset that includes a variety of sources, including webpages, books, and more. Alpaca-LoRA, on the other hand, was trained on a smaller dataset (but one that has been curated for quality) and uses low-rank adaptation (LoRA) to fine-tune the model for specific tasks. - **Model size**: Stanford Alpaca is a larger model, with versions ranging from 774M to 1.5B parameters. Alpaca-LoRA, on the other hand, provides a smaller, 7B parameter model that is specifically optimized for low-cost devices such as the Raspberry Pi. - **Pretrained models**: Both models offer pre-trained models that can be used out-of-the-box, but the available options are slightly different. Stanford Alpaca provides several models with different sizes and degrees of finetuning, while Alpaca-LoRA provides an Instruct model of similar quality to `text-davinci-003`. 
### Similar To https://github.com/hwchase17/langchain/issues/1777 ### Resources - [alpaca.cpp](https://github.com/antimatter15/alpaca.cpp), a native client for running Alpaca models on the CPU - [Alpaca-LoRA-Serve](https://github.com/deep-diver/Alpaca-LoRA-Serve), a ChatGPT-style interface for Alpaca models - [AlpacaDataCleaned](https://github.com/gururise/AlpacaDataCleaned), a project to improve the quality of the Alpaca dataset - Various adapter weights (download at own risk): - 7B: - <https://huggingface.co/tloen/alpaca-lora-7b> - <https://huggingface.co/samwit/alpaca7B-lora> - 🇧🇷 <https://huggingface.co/22h/cabrita-lora-v0-1> - 🇨🇳 <https://huggingface.co/qychen/luotuo-lora-7b-0.1> - 🇯🇵 <https://huggingface.co/kunishou/Japanese-Alapaca-LoRA-7b-v0> - 🇫🇷 <https://huggingface.co/bofenghuang/vigogne-lora-7b> - 🇹🇭 <https://huggingface.co/Thaweewat/thai-buffala-lora-7b-v0-1> - 🇩🇪 <https://huggingface.co/thisserand/alpaca_lora_german> - 🇮🇹 <https://huggingface.co/teelinsan/camoscio-7b-llama> - 13B: - <https://huggingface.co/chansung/alpaca-lora-13b> - <https://huggingface.co/mattreid/alpaca-lora-13b> - <https://huggingface.co/samwit/alpaca13B-lora> - 🇯🇵 <https://huggingface.co/kunishou/Japanese-Alapaca-LoRA-13b-v0> - 🇰🇷 <https://huggingface.co/chansung/koalpaca-lora-13b> - 🇨🇳 <https://huggingface.co/facat/alpaca-lora-cn-13b> - 30B: - <https://huggingface.co/baseten/alpaca-30b> - <https://huggingface.co/chansung/alpaca-lora-30b> - 🇯🇵 <https://huggingface.co/kunishou/Japanese-Alapaca-LoRA-30b-v0> - [alpaca-native](https://huggingface.co/chavinlo/alpaca-native), a replication using the original Alpaca code
Alpaca-LoRA
https://api.github.com/repos/langchain-ai/langchain/issues/2063/comments
4
2023-03-27T23:41:12Z
2023-09-29T16:09:41Z
https://github.com/langchain-ai/langchain/issues/2063
1,642,978,571
2,063
[ "langchain-ai", "langchain" ]
![Screenshot 2023-03-28 at 1 11 56 AM](https://user-images.githubusercontent.com/110235735/228049214-1ebf8bac-e12e-44bb-a693-47e3578ce37a.png) Moreover, I cannot use multiple prompts using `ChatPromptTemplate.from_messages`
map_rerank custom prompts don't work with SystemMessagePromptTemplate
https://api.github.com/repos/langchain-ai/langchain/issues/2053/comments
1
2023-03-27T19:42:33Z
2023-08-21T16:07:44Z
https://github.com/langchain-ai/langchain/issues/2053
1,642,705,217
2,053
[ "langchain-ai", "langchain" ]
I was reading this blog post: https://blog.langchain.dev/retrieval/ There is this link: > Other types of indexes, [like graphs](https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/graph_qa.html), have piqued user's interests Currently goes to a RTD 404 page If I google for "langchain graph qa" the top result also goes to a 404 I can view a cached copy here https://webcache.googleusercontent.com/search?q=cache:obQTh41ZBRoJ:https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/graph_qa.html&cd=1&hl=en&ct=clnk&gl=uk
Graph QA docs links are broken currently
https://api.github.com/repos/langchain-ai/langchain/issues/2049/comments
3
2023-03-27T16:37:18Z
2023-08-14T20:20:49Z
https://github.com/langchain-ai/langchain/issues/2049
1,642,438,709
2,049
[ "langchain-ai", "langchain" ]
I understand that brace brackets "{ }" are used for the text replacement. I was wondering if we are able to use the brace brackets in our prompts where we want them to act as just text and do not want them to replace text, for example with c++ classes.

```c++
#include <string>

class BankAccount {
public:
    // Constructor to initialize the account
    BankAccount(const std::string& account_holder_name, const std::string& account_number, double initial_balance);

    // Method to deposit money into the account
    void deposit(double amount);

    // Method to withdraw money from the account
    bool withdraw(double amount);

    // Method to display account details
    void display_account_details() const;

private:
    std::string account_holder_name_;
    std::string account_number_;
    double balance_;
};
```
How to use brace brackets
https://api.github.com/repos/langchain-ai/langchain/issues/2048/comments
2
2023-03-27T16:21:43Z
2023-10-24T07:51:46Z
https://github.com/langchain-ai/langchain/issues/2048
1,642,415,345
2,048
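Since `PromptTemplate` uses Python `str.format`-style substitution by default, literal braces can usually be kept as text by doubling them (`{{` and `}}`); only single-brace names are treated as input variables. A plain-Python illustration of the escaping rule:

```python
template = (
    "Explain this C++ snippet:\n"
    "class {class_name} {{\n"
    "public:\n"
    "    {class_name}();\n"
    "}};"
)
# The doubled braces come out as single literal braces.
print(template.format(class_name="BankAccount"))
```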
[ "langchain-ai", "langchain" ]
Is there a way to redirect the verbose logs away from stdout? I'm working on a frontend app, and I'd like to collect the logs to display in the frontend, but couldn't find in the docs a way to capture them into a variable or similar.
Redirect verbose logs
https://api.github.com/repos/langchain-ai/langchain/issues/2045/comments
4
2023-03-27T15:27:23Z
2023-07-18T03:25:03Z
https://github.com/langchain-ai/langchain/issues/2045
1,642,322,662
2,045
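At this version the `verbose=True` output is written with plain `print` to stdout, so one stopgap is `contextlib.redirect_stdout`; a custom callback handler is the cleaner route if your version exposes callbacks. A sketch of the stdout capture — the `print` here stands in for a verbose `chain.run(...)` call:

```python
import contextlib
import io

buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    # chain.run(query) would go here; its verbose prints are captured.
    print("> Entering new chain...")

logs = buffer.getvalue()
print(repr(logs))  # → '> Entering new chain...\n'
```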
[ "langchain-ai", "langchain" ]
Many OpenAI plugin implementations are starting to use yaml for the api doc. ```python class AIPluginTool(BaseTool): api_spec: str def from_plugin_url(cls, url: str) -> "AIPluginTool": response = requests.get(url) if response.headers["Content-Type"] == "application/json": response_data = response.json() elif response.headers["Content-Type"] == "application/x-yaml": response_data = yaml.safe_load(response.text) else: raise ValueError("Unsupported content type") ... ```
Support also yaml files for the AIPluginTool
https://api.github.com/repos/langchain-ai/langchain/issues/2042/comments
1
2023-03-27T15:01:41Z
2023-03-28T07:54:03Z
https://github.com/langchain-ai/langchain/issues/2042
1,642,275,010
2,042
[ "langchain-ai", "langchain" ]
Hey team, As you can in the `get_docs` method there is no option to provide kwargs arguments, even the top_k is not updating. Do I have to make any changes on how to pass this info, or will it get fixed.
kwargs for ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/2038/comments
1
2023-03-27T13:41:57Z
2023-03-28T06:46:16Z
https://github.com/langchain-ai/langchain/issues/2038
1,642,090,994
2,038
[ "langchain-ai", "langchain" ]
Since `llms.OpenAIChat` is deprecated and `chat_models.ChatOpenAI` is suggested in the latest releases, I think it is necessary to add `prefix_messages` to `ChatOpenAI`, just like in `OpenAIChat`, so that we can provide messages to this chat model — for example, a system message that helps set the behavior of the assistant. ``` m = ChatOpenAI(prefix_messages=[SystemMessage(content='you are a helpful assistant')]) ```
Add prefix_messages property to ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/2036/comments
4
2023-03-27T13:20:40Z
2023-03-30T06:43:02Z
https://github.com/langchain-ai/langchain/issues/2036
1,642,034,579
2,036
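Until something like `prefix_messages` lands, the usual pattern with chat models is to prepend a `SystemMessage` to the message list on every call. In raw OpenAI chat-API terms, that amounts to the following (plain data structures, no network call; the helper name is illustrative):

```python
def build_messages(system_prompt, history, user_input):
    """Prepend a fixed system message to every chat request."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("you are a helpful assistant", [], "hello")
print(msgs[0])  # → {'role': 'system', 'content': 'you are a helpful assistant'}
```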
[ "langchain-ai", "langchain" ]
Since `llms.OpenAIChat` is deprecated and `chat_models.ChatOpenAI` is suggested in the latest releases, I think it is necessary to add `prefix_messages` to `ChatOpenAI`, just like in `OpenAIChat`, so that we can provide messages to this chat model — for example, a system message that helps set the behavior of the assistant.
Add prefix_messages property to ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/2035/comments
0
2023-03-27T13:17:19Z
2023-03-27T13:24:21Z
https://github.com/langchain-ai/langchain/issues/2035
1,642,026,037
2,035
[ "langchain-ai", "langchain" ]
Since `llms.OpenAIChat` is deprecated and `chat_models.ChatOpenAI` is suggested in the latest releases, I think it is necessary to add `prefix_messages` to `ChatOpenAI`, just like in `OpenAIChat`, so that we can provide messages to this chat model — for example, a system message that helps set the behavior of the assistant.
Add prefix_messages property to ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/2034/comments
0
2023-03-27T13:15:06Z
2023-03-27T13:24:10Z
https://github.com/langchain-ai/langchain/issues/2034
1,642,025,652
2,034
[ "langchain-ai", "langchain" ]
Since `llms.OpenAIChat` is deprecated and `chat_models.ChatOpenAI` is suggested in the latest releases, I think it is necessary to add `prefix_messages` to `ChatOpenAI`, just like in `OpenAIChat`, so that we can provide messages to this chat model — for example, a system message that helps set the behavior of the assistant.
Add prefix_messages property to ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/2033/comments
0
2023-03-27T13:14:20Z
2023-03-27T13:24:01Z
https://github.com/langchain-ai/langchain/issues/2033
1,642,025,513
2,033
[ "langchain-ai", "langchain" ]
Since `llms.OpenAIChat` is deprecated and `chat_models.ChatOpenAI` is suggested in the latest releases, I think it is necessary to add `prefix_messages` to `ChatOpenAI`, just like in `OpenAIChat`, so that we can provide messages to this chat model — for example, a system message that helps set the behavior of the assistant.
Add prefix_messages property to ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/2032/comments
0
2023-03-27T13:14:06Z
2023-03-27T13:23:42Z
https://github.com/langchain-ai/langchain/issues/2032
1,642,025,480
2,032
[ "langchain-ai", "langchain" ]
Since `langchain.llms.OpenAIChat` is deprecated and `langchain.chat_models.ChatOpenAI` is suggested in the latest versions, but `ChatOpenAI` misses property `prefix_messages`, for example we can use this property to provide [a system message](https://platform.openai.com/docs/guides/chat/introduction) that helps set the behavior of the assistant.
Add missing property prefix_messages for ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/2031/comments
0
2023-03-27T13:07:57Z
2023-03-27T13:22:49Z
https://github.com/langchain-ai/langchain/issues/2031
1,642,024,275
2,031
[ "langchain-ai", "langchain" ]
I just installed langchain and when I try to import WolframAlphaAPIWrapper using "from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper", this error message is returned. I checked site-packages to see if the utilities is there, these are the list of files and no utilities: VERSION chains formatting.py prompts sql_database.py __init__.py docstore input.py py.typed text_splitter.py __pycache__ embeddings llms python.py utils.py agents example_generator.py model_laboratory.py serpapi.py vectorstores
ModuleNotFoundError: No module named 'langchain.utilities'
https://api.github.com/repos/langchain-ai/langchain/issues/2029/comments
7
2023-03-27T11:19:48Z
2023-03-29T02:37:31Z
https://github.com/langchain-ai/langchain/issues/2029
1,641,927,118
2,029
[ "langchain-ai", "langchain" ]
It looks like the result returned from the `predict` call to generate the query is returned surrounded by double quotes, so when passed to the db it's taken as a malformed query. Example (using a postgres db): ``` llm = ChatOpenAI(temperature=0, model_name="gpt-4") # type: ignore db = SQLDatabase.from_uri("<URI>") db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True) print(db_chain.run("give me any row")) ``` Result: ``` Traceback (most recent call last): File "main.py", line 26, in <module> print(db_chain.run("give me any row")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 213, in run return self(args[0])[self.output_keys[0]] ^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 116, in __call__ raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 113, in __call__ outputs = self._call(inputs) ^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/sql_database/base.py", line 88, in _call result = self.database.run(sql_cmd) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/sql_database.py", line 176, in run cursor = connection.execute(text(command)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1380, in execute return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection return connection._execute_clauseelement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement ret = self._execute_context( ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1943, in 
_execute_context self._handle_dbapi_exception( File "/opt/homebrew/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception util.raise_( File "/opt/homebrew/lib/python3.11/site-packages/sqlalchemy/util/compat.py", line 211, in raise_ raise exception File "/opt/homebrew/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context self.dialect.do_execute( File "/opt/homebrew/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near ""SELECT data FROM table LIMIT 5"" LINE 1: "SELECT data FROM table LIMIT 5" ```
SQLDatabaseChain malformed queries
https://api.github.com/repos/langchain-ai/langchain/issues/2027/comments
11
2023-03-27T10:15:28Z
2024-01-24T10:46:33Z
https://github.com/langchain-ai/langchain/issues/2027
1,641,823,337
2,027
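A common interim workaround for the malformed-query issue above — until the chain's prompt or parsing handles it — is to strip a matching pair of wrapping quotes from the generated SQL before executing it. A small helper along those lines:

```python
def strip_wrapping_quotes(sql: str) -> str:
    """Remove one matching pair of quotes wrapping the whole statement."""
    sql = sql.strip()
    if len(sql) >= 2 and sql[0] == sql[-1] and sql[0] in {'"', "'", "`"}:
        sql = sql[1:-1]
    return sql

print(strip_wrapping_quotes('"SELECT data FROM table LIMIT 5"'))
# → SELECT data FROM table LIMIT 5
```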
[ "langchain-ai", "langchain" ]
I am using `RecursiveCharacterTextSplitter` to split my documents for ingestion into a vector db. What is the intuition for selecting optimal chunk parameters? It seems to me that chunk_size influences the size of documents being retrieved. Does that mean I should select the largest possible chunk_size to ensure maximum context retrieved? What about chunk_overlap? This seems like a parameter that can be arbitrarily set i.e. select something that's not too big, not too small. Is that the right understanding?
Intuition for selecting optimal chunk_size and chunk_overlap for RecursiveCharacterTextSplitter
https://api.github.com/repos/langchain-ai/langchain/issues/2026/comments
13
2023-03-27T09:43:31Z
2024-04-04T16:06:31Z
https://github.com/langchain-ai/langchain/issues/2026
1,641,765,150
2,026
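For intuition about what the two parameters actually do, here is a character-level sliding-window sketch. The real `RecursiveCharacterTextSplitter` additionally tries to break on separators such as paragraphs and sentences, so its chunks are not this uniform:

```python
def split_with_overlap(text, chunk_size, chunk_overlap):
    """Fixed-size windows where each chunk repeats the previous chunk's
    last chunk_overlap characters. Assumes chunk_overlap < chunk_size."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

print(split_with_overlap("abcdefghij", chunk_size=4, chunk_overlap=2))
# → ['abcd', 'cdef', 'efgh', 'ghij']
```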
[ "langchain-ai", "langchain" ]
openapi.json is so large for prompt token limit. and how to handle authorizations in the openai.json?
AIPluginTool submit long openapi.json in prompt exceeds the token limit
https://api.github.com/repos/langchain-ai/langchain/issues/2025/comments
1
2023-03-27T08:16:49Z
2023-08-21T16:07:49Z
https://github.com/langchain-ai/langchain/issues/2025
1,641,622,042
2,025
[ "langchain-ai", "langchain" ]
I'm using some variable data in my prompt and sending that prompt to ConversationChain, where I'm getting a validation error. What changes can we make so it works as expected? ![Screenshot (1225)](https://user-images.githubusercontent.com/46478199/227850617-e22658f1-5ebf-41d4-a672-209dd92f5674.png)
ConversationChain Validation Error
https://api.github.com/repos/langchain-ai/langchain/issues/2024/comments
6
2023-03-27T05:44:59Z
2023-09-26T16:13:24Z
https://github.com/langchain-ai/langchain/issues/2024
1,641,417,241
2,024
[ "langchain-ai", "langchain" ]
I am building an agent which answers user questions, and I have plugged in two tools, the regular LLM and a search tool, like this:

```python
llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events."
    ),
    Tool(
        name="LLM",
        func=llm_chain.run,
        description="useful for when you need to answer questions about anything in general."
    )
]

mrkl = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
```

Now when I ask it a general question like "Who is Sachin", it still tries to invoke the Search action, although the LLM can answer that question since it is not based on a recent event. How can I make sure the LLM answers all questions except those about recent events, to reduce SERP usage?
How to use Serp only for current events related searches and use GPT for regular info
https://api.github.com/repos/langchain-ai/langchain/issues/2023/comments
9
2023-03-27T05:38:09Z
2023-09-28T16:10:27Z
https://github.com/langchain-ai/langchain/issues/2023
1,641,412,258
2,023
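The agent chooses tools purely from their `description` strings, so the usual first fix for the question above is to make the Search description explicitly restrictive (e.g. "use ONLY for questions about current events or recent news") and leave the LLM tool as the general default. If that is not enough, you can pre-route outside the agent entirely; a toy heuristic sketch, where the keyword list is obviously illustrative:

```python
CURRENT_EVENT_HINTS = ("today", "latest", "current", "news", "this week")

def pick_tool(question: str) -> str:
    """Crude pre-router: send current-event questions to Search, else LLM."""
    q = question.lower()
    return "Search" if any(hint in q for hint in CURRENT_EVENT_HINTS) else "LLM"

print(pick_tool("Who is Sachin"))        # → LLM
print(pick_tool("latest cricket news"))  # → Search
```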
[ "langchain-ai", "langchain" ]
Trying to use `persist_directory` to have Chroma persist to disk: ```python index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory": "db"}) ``` and it displays this warning message that implies it won't be persisted: ``` Using embedded DuckDB without persistence: data will be transient ``` However, it does create files to the `db` directory. The warning appears to originate from https://github.com/chroma-core/chroma/pull/204#issuecomment-1458243316. Do I have the setting correct?
Setting Chroma persist_directory says "Using embedded DuckDB without persistence"
https://api.github.com/repos/langchain-ai/langchain/issues/2022/comments
18
2023-03-27T05:29:14Z
2024-03-30T20:14:14Z
https://github.com/langchain-ai/langchain/issues/2022
1,641,406,148
2,022
[ "langchain-ai", "langchain" ]
sqlalchemy version 1.3.x needs the `select([])` form to query tables. Related Stack Overflow question: https://stackoverflow.com/questions/62565993/sqlalchemy-exc-argumenterror-columns-argument-to-select-must-be-a-python-list I will submit a PR to fix it
[bug]sqlalchemy 1.3.x compatibility issue
https://api.github.com/repos/langchain-ai/langchain/issues/2020/comments
0
2023-03-27T05:22:30Z
2023-03-28T06:45:52Z
https://github.com/langchain-ai/langchain/issues/2020
1,641,400,793
2,020
[ "langchain-ai", "langchain" ]
If you want to create a Qdrant index and keep adding documents (i.e. adding new documents to existing qdrant), when your embedding is sentence transformers `qdrant_index.add_documents(docs)` gives error. Because the `embedding_function` which is `model.encode()` is returning either `numpy` or `tensor` array and has to be converted to `list[float]`. Therefore, this line `vectors=[self.embedding_function(text) for text in texts]` needs to updated to: `vectors=[self.embedding_function(text).tolist() for text in texts]` To allow updating the collection. https://github.com/hwchase17/langchain/blob/b83e8265102514d1722b2fb1aad29763c5cad62a/langchain/vectorstores/qdrant.py#L83
Updating the Qdrant index with new documents raises error
https://api.github.com/repos/langchain-ai/langchain/issues/2016/comments
5
2023-03-27T03:16:18Z
2023-09-18T17:19:58Z
https://github.com/langchain-ai/langchain/issues/2016
1,641,294,710
2,016
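A defensive variant of the suggested `.tolist()` fix, so the wrapper works whether the embedding function returns a numpy array, a torch tensor, or already a plain list. It is demonstrated here with the stdlib `array` type, which also exposes `.tolist()`; the helper name is illustrative:

```python
from array import array

def to_float_list(vector):
    """Normalise an embedding to list[float] regardless of backend type."""
    if hasattr(vector, "tolist"):  # numpy arrays, torch tensors, array.array
        vector = vector.tolist()
    return [float(x) for x in vector]

print(to_float_list(array("f", [0.25, 0.5])))  # → [0.25, 0.5]
print(to_float_list([0.1, 0.2]))               # → [0.1, 0.2]
```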
[ "langchain-ai", "langchain" ]
It would be great to see integration with [OpenAssistant](https://github.com/LAION-AI/Open-Assistant) by @LAION-AI, which aims to become the [largest and most open alternative to ChatGPT](https://projects.laion.ai/Open-Assistant/blog/we-need-your-help#:~:text=largest%20and%20most%20open%20alternative%20to%20ChatGPT). > OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so Here's the [Open Assistant architecture](https://projects.laion.ai/Open-Assistant/blog/2023-02-11-architecture): <img width="504" alt="screen" src="https://user-images.githubusercontent.com/6625584/227813941-3c77f348-8874-409a-9e8f-2ee1604a8611.png">
Open Assistant by LAION-AI
https://api.github.com/repos/langchain-ai/langchain/issues/2015/comments
2
2023-03-27T00:18:02Z
2023-09-28T16:10:32Z
https://github.com/langchain-ai/langchain/issues/2015
1,641,160,728
2,015
[ "langchain-ai", "langchain" ]
When verbose=True and the agent executor is run, an error is raised in get_prompt_input_key():

```python
raise ValueError(f"One input key expected got {prompt_input_keys}")
```

This is because there are two entries [verbose, input], not just one [input]. It may be that setting verbose=True is not relevant. However, if it is, then a possible solution is to change utils.py line 10:

```python
if len(prompt_input_keys) != 1:
```

to

```python
if len(prompt_input_keys) not in [1, 2]:
```
Error raised in utils.py get_prompt_input_key when AgentExecutor.run verbose=True
https://api.github.com/repos/langchain-ai/langchain/issues/2013/comments
3
2023-03-26T19:57:11Z
2023-10-21T16:10:16Z
https://github.com/langchain-ai/langchain/issues/2013
1,641,060,799
2,013
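For reference, a simplified reconstruction of the check in question (not the exact library source): memory collects the input keys that are not memory variables and expects exactly one left over. Arguably the cleaner fix is to exclude known non-prompt keys (the library already excludes `stop`) rather than to accept two keys — whether `verbose` should be excluded the same way is the open question here:

```python
def get_prompt_input_key(inputs, memory_variables):
    """Roughly what langchain's utils.get_prompt_input_key does."""
    ignored = set(memory_variables) | {"stop"}
    prompt_input_keys = [k for k in inputs if k not in ignored]
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]

print(get_prompt_input_key({"input": "hi", "history": ""}, ["history"]))  # → input
```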
[ "langchain-ai", "langchain" ]
In [here](https://python.langchain.com/en/latest/modules/indexes/vectorstore_examples/pinecone.html) - Pinecone.from_documents doesn't exist. In the code [here](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/pinecone.py) it is from_texts
Pinecone in docs is outdated
https://api.github.com/repos/langchain-ai/langchain/issues/2009/comments
4
2023-03-26T18:02:47Z
2023-12-09T14:17:37Z
https://github.com/langchain-ai/langchain/issues/2009
1,641,013,216
2,009
[ "langchain-ai", "langchain" ]
All "smart websites" will have AI-friendly API schemas, not just **robots.txt**. These websites will have JSON manifest files for LLMs (ex: **openai.yaml**) and LLM plugins (ex: **ai-plugin.json**). These JSON manifest files will allow LLM agents like ChatGPT to interact with websites without data scraping or parsing API docs. OpenAI uses JSON manifest files as part of its plugin system for ChatGPT[[2]](https://www.mlq.ai/introducing-chatgpt-plugins/)[[4]](https://openai.com/blog/chatgpt-plugins). A plugin consists of an API, an API schema (in OpenAPI JSON or YAML format), and a manifest that describes what the plugin can do for both humans and models, as well as other metadata[[2]](https://www.mlq.ai/introducing-chatgpt-plugins/). The manifest file links to the OpenAPI specification and includes plugin-specific metadata such as the name, description, and logo[[2]](https://www.mlq.ai/introducing-chatgpt-plugins/)[[4]](https://openai.com/blog/chatgpt-plugins). To create a plugin, the first step is to build an API[[1]](https://platform.openai.com/docs/plugins/getting-started/plugin-manifest). The API is then documented in the OpenAPI JSON or YAML format[[1]](https://platform.openai.com/docs/plugins/getting-started/plugin-manifest)[[2]](https://www.mlq.ai/introducing-chatgpt-plugins/). Finally, a JSON manifest file is created that defines relevant metadata for the plugin[[1]](https://platform.openai.com/docs/plugins/getting-started/plugin-manifest). The manifest file includes information such as the name, description, and API URL[[2]](https://www.mlq.ai/introducing-chatgpt-plugins/). 
Here's an example of [.well-known/ai-plugin.json](https://github.com/openai/chatgpt-retrieval-plugin/tree/main/.well-known)
```
{
  "schema_version": "v1",
  "name_for_model": "retrieval",
  "name_for_human": "Retrieval Plugin",
  "description_for_model": "Plugin for searching through the user's documents (such as files, emails, and more) to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be found in their personal information.",
  "description_for_human": "Search through your documents.",
  "auth": {
    "type": "user_http",
    "authorization_type": "bearer"
  },
  "api": {
    "type": "openapi",
    "url": "https://your-app-url.com/.well-known/openapi.yaml",
    "has_user_authentication": false
  },
  "logo_url": "https://your-app-url.com/.well-known/logo.png",
  "contact_email": "hello@contact.com",
  "legal_info_url": "hello@legal.com"
}
```
Here's an example [.well-known/openai.yaml](https://github.com/openai/chatgpt-retrieval-plugin/blob/main/.well-known/openapi.yaml)
```
openapi: 3.0.2
info:
  title: Retrieval Plugin API
  description: A retrieval API for querying and filtering documents based on natural language queries and metadata
  version: 1.0.0
servers:
  - url: https://your-app-url.com
paths:
  /query:
    post:
      summary: Query
      description: Accepts search query objects array each with query and optional filter. Break down complex questions into sub-questions. Refine results by criteria, e.g. time / source, don't do this often. Split queries if ResponseTooLargeError occurs.
      operationId: query_query_post
      requestBody:
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/QueryRequest"
        required: true
      responses:
        "200":
          description: Successful Response
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/QueryResponse"
        "422":
          description: Validation Error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/HTTPValidationError"
      security:
        - HTTPBearer: []
components:
  schemas:
    DocumentChunkMetadata:
      title: DocumentChunkMetadata
      type: object
      properties:
        source:
          $ref: "#/components/schemas/Source"
        source_id:
          title: Source Id
          type: string
        url:
          title: Url
          type: string
        created_at:
          title: Created At
          type: string
        author:
          title: Author
          type: string
        document_id:
          title: Document Id
          type: string
    DocumentChunkWithScore:
      title: DocumentChunkWithScore
      required:
        - text
        - metadata
        - score
      type: object
      properties:
        id:
          title: Id
          type: string
        text:
          title: Text
          type: string
        metadata:
          $ref: "#/components/schemas/DocumentChunkMetadata"
        embedding:
          title: Embedding
          type: array
          items:
            type: number
        score:
          title: Score
          type: number
    DocumentMetadataFilter:
      title: DocumentMetadataFilter
      type: object
      properties:
        document_id:
          title: Document Id
          type: string
        source:
          $ref: "#/components/schemas/Source"
        source_id:
          title: Source Id
          type: string
        author:
          title: Author
          type: string
        start_date:
          title: Start Date
          type: string
        end_date:
          title: End Date
          type: string
    HTTPValidationError:
      title: HTTPValidationError
      type: object
      properties:
        detail:
          title: Detail
          type: array
          items:
            $ref: "#/components/schemas/ValidationError"
    Query:
      title: Query
      required:
        - query
      type: object
      properties:
        query:
          title: Query
          type: string
        filter:
          $ref: "#/components/schemas/DocumentMetadataFilter"
        top_k:
          title: Top K
          type: integer
          default: 3
    QueryRequest:
      title: QueryRequest
      required:
        - queries
      type: object
      properties:
        queries:
          title: Queries
          type: array
          items:
            $ref: "#/components/schemas/Query"
    QueryResponse:
      title: QueryResponse
      required:
        - results
      type: object
      properties:
        results:
          title: Results
          type: array
          items:
            $ref: "#/components/schemas/QueryResult"
    QueryResult:
      title: QueryResult
      required:
        - query
        - results
      type: object
      properties:
        query:
          title: Query
          type: string
        results:
          title: Results
          type: array
          items:
            $ref: "#/components/schemas/DocumentChunkWithScore"
    Source:
      title: Source
      enum:
        - email
        - file
        - chat
      type: string
      description: An enumeration.
    ValidationError:
      title: ValidationError
      required:
        - loc
        - msg
        - type
      type: object
      properties:
        loc:
          title: Location
          type: array
          items:
            anyOf:
              - type: string
              - type: integer
        msg:
          title: Message
          type: string
        type:
          title: Error Type
          type: string
  securitySchemes:
    HTTPBearer:
      type: http
      scheme: bearer
```
Here are five ways to implement JSON manifest files inside LangChain:

1. **Plugin chaining**: Allow users to enable and chain multiple ChatGPT plugins in LangChain using OpenAI's JSON manifest files for powerful, customized agents.
2. **Unified plugin management**: Develop a plugin management interface in LangChain that can parse OpenAI JSON manifest files, simplifying plugin integration.
3. **Context enrichment**: Enhance LangChain's context management by incorporating plugin information from OpenAI JSON manifest files, leading to context-aware responses.
4. **Dynamic plugin invocation**: Implement a mechanism for LangChain agents to call appropriate OpenAI plugin APIs based on user input and conversation context.
5. **Cross-platform plugin development**: Encourage creating cross-platform plugins compatible with both OpenAI's ChatGPT ecosystem and LangChain applications by standardizing JSON manifest files and OpenAPI specifications.
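A minimal sketch of the "unified plugin management" idea: parse an `ai-plugin.json` manifest and extract the fields a LangChain-side registry would need to expose the plugin as a tool. The trimmed manifest string below is illustrative, based on the example above; the helper name is hypothetical, not an existing LangChain API.

```python
import json

# Trimmed version of the example ai-plugin.json manifest above.
MANIFEST = """
{
  "schema_version": "v1",
  "name_for_model": "retrieval",
  "description_for_model": "Plugin for searching through the user's documents.",
  "api": {
    "type": "openapi",
    "url": "https://your-app-url.com/.well-known/openapi.yaml"
  }
}
"""

def parse_plugin_manifest(raw: str) -> dict:
    """Return the fields an agent needs to register the plugin as a tool."""
    manifest = json.loads(raw)
    return {
        "name": manifest["name_for_model"],
        "description": manifest["description_for_model"],
        "openapi_url": manifest["api"]["url"],
    }

spec = parse_plugin_manifest(MANIFEST)
print(spec["name"])  # retrieval
```

From here, a loader could fetch `openapi_url`, parse the spec, and turn each operation into a callable tool with `description` as the tool prompt.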
JSON Manifest Files
https://api.github.com/repos/langchain-ai/langchain/issues/2008/comments
1
2023-03-26T17:48:06Z
2023-09-10T16:40:23Z
https://github.com/langchain-ai/langchain/issues/2008
1,641,007,401
2,008
[ "langchain-ai", "langchain" ]
I'd like to run queries against `parquet` files with `duckdb`. I see some `duckdb` stuff in the docs when I search it up, and there's also this PR: https://github.com/hwchase17/langchain/pull/1991, which seems to be a nice addition. What are the missing pieces to make this work? One workaround I've found is turning the `parquet` files into a SQLite db, making an agent out of it, then proceeding from there.
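The workaround described above can be sketched with the standard library alone: materialize the tabular data into a SQLite database and point the SQL agent at it. In practice the rows would come from the parquet file (e.g. via `pandas.read_parquet(...).to_sql(...)`); here they are inlined to keep the sketch self-contained, and a plain query stands in for the agent.

```python
import sqlite3

# Rows that would normally be loaded from the parquet file.
rows = [(1, "alice", 34), (2, "bob", 29)]

conn = sqlite3.connect(":memory:")  # a real setup would use sqlite:///data.db
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)

# A SQL agent created over this database would generate queries like this one.
count = conn.execute("SELECT COUNT(*) FROM users WHERE age > 30").fetchone()[0]
print(count)  # 1
```

Once the file exists on disk, `SQLDatabase.from_uri("sqlite:///data.db")` can consume it as shown elsewhere in the docs.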
feat: parquet file support for SQL agent
https://api.github.com/repos/langchain-ai/langchain/issues/2002/comments
3
2023-03-26T05:28:21Z
2023-09-18T16:22:24Z
https://github.com/langchain-ai/langchain/issues/2002
1,640,778,468
2,002
[ "langchain-ai", "langchain" ]
Hi! Thanks for creating langchain! I wanted to give its "agents" feature a try and quickly found an example of its shortcomings:
```
> Entering new AgentExecutor chain...
 I need to find out who the author of Moore's law is and if they are alive.
Action: Search
Action Input: "author of Moore's law"
Observation: Gordon Moore
Thought: I need to find out if Gordon Moore is alive.
Action: Search
Action Input: "Gordon Moore alive"
Observation: March 24, 2023
Thought: I now know the final answer.
Final Answer: Yes, Gordon Moore is alive.

> Finished chain.
```
Not sure if it's something that can be fixed within langchain, but I figured I'd report it just in case.
Agent misbehaving: "is the author of Moore's law alive?"
https://api.github.com/repos/langchain-ai/langchain/issues/1994/comments
2
2023-03-25T18:46:50Z
2023-09-18T16:22:29Z
https://github.com/langchain-ai/langchain/issues/1994
1,640,621,419
1,994
[ "langchain-ai", "langchain" ]
I am referring to the documentation [here](https://langchain.readthedocs.io/en/latest/modules/chat/examples/chat_vector_db.html). I created a file test.py:
```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
from langchain.document_loaders import TextLoader

loader = TextLoader('test.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=200, chunk_overlap=0)
documents = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)

system_template = """Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}"""
messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{question}")
]
prompt = ChatPromptTemplate.from_messages(messages)

qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), vectorstore, qa_prompt=prompt)
chat_history = []
query = "Who is the CEO?"
result = qa({"question": query, "chat_history": chat_history})
print(result)
```
which throws:
```
Traceback (most recent call last):
  File "test.py", line 44, in <module>
    qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), vectorstore, qa_prompt=prompt)
  File "/home/prajin/works/ults/gpt/lamda-openai-chat-python/venv/lib/python3.8/site-packages/langchain/chains/conversational_retrieval/base.py", line 140, in from_llm
    return cls(
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
  instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever)
```
pydantic.error_wrappers.ValidationError: 1 validation error
https://api.github.com/repos/langchain-ai/langchain/issues/1986/comments
4
2023-03-25T08:38:05Z
2023-10-31T08:02:38Z
https://github.com/langchain-ai/langchain/issues/1986
1,640,432,940
1,986
[ "langchain-ai", "langchain" ]
Pls. add support for Kagi Summarizer https://blog.kagi.com/universal-summarizer#api
Add support for Kagi Summarizer API in Chains
https://api.github.com/repos/langchain-ai/langchain/issues/1983/comments
1
2023-03-25T05:01:52Z
2023-08-21T16:07:54Z
https://github.com/langchain-ai/langchain/issues/1983
1,640,380,648
1,983
[ "langchain-ai", "langchain" ]
### Summary
The QueryCheckerTool function currently creates an LLMChain object internally but does not provide a way to specify the `openai_api_key` manually via supplied arguments. This can cause issues for users who do not want to place their API key in environment variables.

### Steps to Reproduce
Run this code
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain.agents import AgentExecutor
import sqlite3

with sqlite3.connect("data.db") as conn:
    pass

db = SQLDatabase.from_uri("sqlite:///data.db")
toolkit = SQLDatabaseToolkit(db=db)
llm = OpenAI(temperature=0, openai_api_key="your key here")

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True
)

while True:
    prompt = input("> ")
    response = agent_executor.run(prompt)
    print(response)
```

### Expected Behavior
`create_sql_agent` should not require me to supply an OpenAI API key, nor should it require me to have this stored in an environment variable.

### Actual Behavior
`create_sql_agent` generates an error when I call it without first setting the `OPENAI_API_KEY` environment variable.

### Obvious Solution
`create_sql_agent` should supply its argument `llm` to functions it calls, obviously. In particular, this line https://github.com/hwchase17/langchain/blob/b83e8265102514d1722b2fb1aad29763c5cad62a/langchain/tools/sql_database/tool.py#L85 is the source of the error.

### Environment
- LangChain version: 0.0.123
- Python version: 3.9.16
- Operating System: Windows 10
QueryCheckerTool creates LLMChain object, does not allow manual specification of openai_api_key
https://api.github.com/repos/langchain-ai/langchain/issues/1982/comments
2
2023-03-25T03:45:48Z
2023-03-26T04:37:50Z
https://github.com/langchain-ai/langchain/issues/1982
1,640,354,701
1,982
[ "langchain-ai", "langchain" ]
Here's what i'm using atm... data:
```
{
    "num_tweets": 199,
    "total_impressions": 154586,
    "total_engagements": 0,
    "total_retweets": 4621,
    "total_likes": 4249,
    "average_impressions": 776.8140703517588,
    "average_engagements": 0.0,
    "average_retweets": 23.22110552763819,
    "average_likes": 21.35175879396985,
    "tweets": [
        {
            "timestamp": "2023-03-24T02:02:49+00:00",
            "text": "RT : No show tonight sorry y\u2019all, business dinner went way long \ud83d\ude4f\ud83e\uddd1\u200d\ud83d\ude80\ud83e\udee1",
            "replies": 0,
            "impressions": 0,
            "retweets": 2,
            "quote_retweets": 0,
            "likes": 0
        },
        {
            "timestamp": "2023-03-23T09:59:45+00:00",
            "text": "",
            "replies": 0,
            "impressions": 296,
            "retweets": 0,
            "quote_retweets": 0,
            "likes": 1
        },
.....
```
```
question = """You are a Solana NFT Expert, and a web3 social media guru,
Analyze the data, and based on that, provide a detailed report, the report contains datapoints about the project's professionalism, social impact and reach, and the project's overall engagement with the community., with numbers to support the case,

Number of followers
Follower growth rate
Number of retweets and likes per tweet
Engagement rate per tweet
Hashtag usage
Quality of content
Frequency of posting
Tone and sentiment of tweets

Provide a thorough analysis of each metric and support your findings with specific numbers and examples. Additionally, provide any insights or recommendations based on your analysis that could help improve the project's social media presence and impact.

Please provide the report in Markdown format.
"""

model_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=model_name, temperature=0.7)
recursive_character_text_splitter = (
    RecursiveCharacterTextSplitter.from_tiktoken_encoder(
        encoding_name="cl100k_base" if model_name == "gpt-3.5-turbo" else "p50k_base",
        chunk_size=4097 if model_name == "gpt-3.5-turbo" else llm.modelname_to_contextsize(model_name),
        chunk_overlap=0,
    )
)

text_chunks = recursive_character_text_splitter.split_text(open("twitter_profile.json").read())
documents = [Document(page_content=question + text_chunk) for text_chunk in text_chunks]

# Summarize the document by summarizing each document chunk and then summarizing the combined summary
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(documents)
```
response
```
"The report analyzes the social media presence of a Solana NFT project, examining metrics such as follower count, engagement rate, hashtag usage, content quality, posting frequency, tone/sentiment, and growth rate. Specific numbers and examples are provided, and recommendations are given for improving the project's impact. The data includes tweets related to partnerships, revenue sharing, and a recent airdrop, with engagement metrics and impressions listed."
```
takes about 27 seconds

I'm new to langchain, is there a better way to go about this? The responses seem undetailed and dull at the moment.
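For context, the shape of what `load_summarize_chain(chain_type="map_reduce")` does with those chunks can be sketched without an LLM at all: summarize each chunk independently (map), then summarize the concatenated summaries (reduce). Here `first_sentence` is a deliberately crude stand-in for the model call, just to show the control flow.

```python
# Stand-in for the LLM summarization call.
def first_sentence(text: str) -> str:
    return text.split(". ")[0].strip() + "."

def map_reduce_summarize(chunks):
    mapped = [first_sentence(c) for c in chunks]  # map step: one call per chunk
    combined = " ".join(mapped)
    return first_sentence(combined)               # reduce step: one final call

chunks = [
    "The project posts daily. Engagement is steady.",
    "Retweets average 23 per tweet. Impressions are growing.",
]
print(map_reduce_summarize(chunks))  # The project posts daily.
```

Because the reduce step only sees the per-chunk summaries, detail is lost at each stage; passing a `map_prompt`/`combine_prompt` that asks for specific numbers is one way people try to keep responses from becoming dull.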
How to run gpt-3.5-turbo against my own data while using map_reduce?
https://api.github.com/repos/langchain-ai/langchain/issues/1980/comments
4
2023-03-25T00:34:20Z
2023-08-21T20:15:50Z
https://github.com/langchain-ai/langchain/issues/1980
1,640,254,185
1,980
[ "langchain-ai", "langchain" ]
I am trying to use a Custom Prompt Template as an example_prompt to the FewShotPromptTemplate. However, I am getting a 'key error' template issue. Does FewShotPromptTemplate support using a Custom Prompt Template? A snippet example for reference:
```
from langchain import PromptTemplate, FewShotPromptTemplate

class CustomPromptTemplate(StringPromptTemplate, BaseModel):
    @validator("input_variables")
    def validate_input_variables(cls, v):
        """Validate that the input variables are correct."""
        if len(v) != 1 or "function_name" not in v:
            raise ValueError("function_name must be the only input_variable.")
        return v

    def format(self, **kwargs) -> str:
        # Get the source code of the function
        source_code = get_source_code(kwargs["function_name"])

        # Generate the prompt to be sent to the language model
        prompt = f"""
        Given the function name and source code, generate an English language explanation of the function.
        Function Name: {kwargs["function_name"].__name__}
        Source Code:
        {source_code}
        Explanation:
        """
        return prompt

    def _prompt_type(self):
        return "function-explainer"

fn_explainer = CustomPromptTemplate(input_variables=["function_name"])

few_shot_prompt = FewShotPromptTemplate(
    # These are the examples we want to insert into the prompt.
    examples=examples,
    # This is how we want to format the examples when we insert them into the prompt.
    example_prompt=fn_explainer,
    # The prefix is some text that goes before the examples in the prompt.
    # Usually, this consists of instructions.
    prefix="Give the antonym of every input",
    # The suffix is some text that goes after the examples in the prompt.
    # Usually, this is where the user input will go
    suffix="Word: {input}\nAntonym:",
    # The input variables are the variables that the overall prompt expects.
    input_variables=["input"],
    # The example_separator is the string we will use to join the prefix, examples, and suffix together with.
    example_separator="\n\n",
)
```
Using Custom Prompt with FewShotPromptTemplate
https://api.github.com/repos/langchain-ai/langchain/issues/1977/comments
5
2023-03-24T21:57:37Z
2024-02-14T16:14:33Z
https://github.com/langchain-ai/langchain/issues/1977
1,640,095,874
1,977
[ "langchain-ai", "langchain" ]
* Allow users to choose the type in the schema (string | List[string])
* Allow users to get multiple json objects (get JSON array) in the response.

I achieved it by replacing the {{ }} -> [[ ]] as follows:
```
prompt.replace("""{{
	"ID": string  // IDs which refers to the sentences.
	"Text": string  // Sentences that contains the answer to the question.
}}""", """[[
	"ID": string  // IDs which refers to the sentences.
	"Text": string  // Sentences that contains the answer to the question.
]]""")
```
And got a list of json objects with this method.
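The workaround above can be generalized into a tiny helper that rewrites the single-object braces emitted by the parser's format instructions into array-style brackets. This is a simplification of the exact-string replace shown in the issue (it rewrites every `{{`/`}}` pair, not just the schema block), so treat it as a sketch.

```python
SCHEMA = '"ID": string // IDs which refers to the sentences.'

def braces_to_brackets(prompt: str) -> str:
    """Nudge the model toward returning a JSON array of objects."""
    return prompt.replace("{{", "[[").replace("}}", "]]")

prompt = "Return output as:\n{{\n" + SCHEMA + "\n}}"
patched = braces_to_brackets(prompt)
print(patched.splitlines()[1])  # [[
```

The model's reply can then be parsed with `json.loads` after swapping the brackets back, yielding a list of objects instead of one.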
StructuredOutputParser - Allow users to get multiple items from response.
https://api.github.com/repos/langchain-ai/langchain/issues/1976/comments
5
2023-03-24T20:27:06Z
2023-12-27T12:00:23Z
https://github.com/langchain-ai/langchain/issues/1976
1,639,991,330
1,976
[ "langchain-ai", "langchain" ]
I think the sqlalchemy dependency in pyproject.toml still needs to be bumped. _Originally posted by @sliedes in https://github.com/hwchase17/langchain/issues/1272#issuecomment-1473683519_
sqlalchemy dependency in pyproject.toml still needs to be bumped to *
https://api.github.com/repos/langchain-ai/langchain/issues/1975/comments
5
2023-03-24T19:52:44Z
2023-09-27T16:12:00Z
https://github.com/langchain-ai/langchain/issues/1975
1,639,955,235
1,975
[ "langchain-ai", "langchain" ]
The 'history' part for the summary of past conversation is always in English. How can I change it, please?
How to set the 'history' of ConversationSummaryBufferMemory in other language
https://api.github.com/repos/langchain-ai/langchain/issues/1973/comments
2
2023-03-24T17:24:27Z
2023-09-18T16:22:34Z
https://github.com/langchain-ai/langchain/issues/1973
1,639,777,702
1,973
[ "langchain-ai", "langchain" ]
A couple problems:

- `ChatOpenAI.get_num_tokens_from_messages()` takes a `model` parameter that is not included in the [base class method signature](https://github.com/hwchase17/langchain/blob/master/langchain/schema.py#L208). Instead, it should use `self.model_name`, similar to how `BaseOpenAI.get_num_tokens()` does.
- `ChatOpenAI.get_num_tokens_from_messages()` does not support GPT-4. See here for the updated formula: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
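The cookbook's updated counting rule can be sketched as follows. A whitespace splitter stands in for tiktoken here so the sketch is self-contained; the per-message/per-name overheads (3 and 1 for the gpt-4 family, 4 and -1 for gpt-3.5-turbo-0301) and the 3-token reply priming are taken from the linked notebook.

```python
def count_tokens(text: str) -> int:
    # Stand-in for len(encoding.encode(text)) with a real tiktoken encoding.
    return len(text.split())

def num_tokens_from_messages(messages, model="gpt-4"):
    if model.startswith("gpt-3.5-turbo"):
        tokens_per_message, tokens_per_name = 4, -1
    else:  # gpt-4 family
        tokens_per_message, tokens_per_name = 3, 1
    total = 0
    for message in messages:
        total += tokens_per_message
        for key, value in message.items():
            total += count_tokens(value)
            if key == "name":
                total += tokens_per_name
    return total + 3  # every reply is primed with <|start|>assistant<|message|>

msgs = [{"role": "user", "content": "hello there"}]
print(num_tokens_from_messages(msgs, model="gpt-4"))  # 9
```

Dispatching on `self.model_name` this way would let one method cover both model families.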
ChatOpenAI.get_num_tokens_from_messages: use self.model_name and support GPT-4
https://api.github.com/repos/langchain-ai/langchain/issues/1972/comments
4
2023-03-24T16:47:58Z
2023-09-18T16:22:40Z
https://github.com/langchain-ai/langchain/issues/1972
1,639,724,669
1,972
[ "langchain-ai", "langchain" ]
I try to set the "system" role message when using ConversationChain with ConversationSummaryBufferMemory (CSBM), but it fails. When I change ConversationSummaryBufferMemory to ConversationBufferMemory, it works. But I'd like to use the auto-summarize utilities of CSBM when exceeding the max length. Below is the error message:

```
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, MessagesPlaceholder(variable_name="history"), human_message_prompt])

from langchain.chains import ConversationChain
conversation_with_summary = ConversationChain(
    llm=chat,
    memory=ConversationSummaryBufferMemory(llm=chat, max_token_limit=10),
    prompt=chat_prompt,
    verbose=True
)
conversation_with_summary.predict(input="hello")
```
*******************************************************************************
```
> Entering new ConversationChain chain...
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[125], line 34
     24 conversation_with_summary = ConversationChain(
     25     #llm=llm,
     26     llm=chat,
   (...)
     31     verbose=True
     32 )
     33 #conversation_with_summary.predict(identity="佛祖",text="你好")
---> 34 conversation_with_summary.predict(input="你好")

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\llm.py:151, in LLMChain.predict(self, **kwargs)
    137 def predict(self, **kwargs: Any) -> str:
    138     """Format prompt with kwargs and pass to LLM.
    139
    140     Args:
   (...)
    149             completion = llm.predict(adjective="funny")
    150     """
--> 151     return self(kwargs)[self.output_key]

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\llm.py:57, in LLMChain._call(self, inputs)
     56 def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
---> 57     return self.apply([inputs])[0]

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\llm.py:118, in LLMChain.apply(self, input_list)
    116 def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]:
    117     """Utilize the LLM generate method for speed gains."""
--> 118     response = self.generate(input_list)
    119     return self.create_outputs(response)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\llm.py:61, in LLMChain.generate(self, input_list)
     59 def generate(self, input_list: List[Dict[str, Any]]) -> LLMResult:
     60     """Generate LLM result from inputs."""
---> 61     prompts, stop = self.prep_prompts(input_list)
     62     return self.llm.generate_prompt(prompts, stop)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\llm.py:79, in LLMChain.prep_prompts(self, input_list)
     77 for inputs in input_list:
     78     selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
---> 79     prompt = self.prompt.format_prompt(**selected_inputs)
     80     _colored_text = get_colored_text(prompt.to_string(), "green")
     81     _text = "Prompt after formatting:\n" + _colored_text

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\prompts\chat.py:173, in ChatPromptTemplate.format_prompt(self, **kwargs)
    167 elif isinstance(message_template, BaseMessagePromptTemplate):
    168     rel_params = {
    169         k: v
    170         for k, v in kwargs.items()
    171         if k in message_template.input_variables
    172     }
--> 173     message = message_template.format_messages(**rel_params)
    174     result.extend(message)
    175 else:

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\prompts\chat.py:43, in MessagesPlaceholder.format_messages(self, **kwargs)
     41 value = kwargs[self.variable_name]
     42 if not isinstance(value, list):
---> 43     raise ValueError(
     44         f"variable {self.variable_name} should be a list of base messages, "
     45         f"got {value}"
     46     )
     47 for v in value:
     48     if not isinstance(v, BaseMessage):

ValueError: variable history should be a list of base messages, got
```
how to use SystemMessagePromptTemplate with ConversationSummaryBufferMemory please?
https://api.github.com/repos/langchain-ai/langchain/issues/1971/comments
4
2023-03-24T16:39:01Z
2023-09-28T16:10:42Z
https://github.com/langchain-ai/langchain/issues/1971
1,639,712,764
1,971
[ "langchain-ai", "langchain" ]
I am reaching out to inquire about the possibility of implementing a `caching system` for popular prompts as vectors. As you may know, constantly re-embedding questions can be `costly` and `time-consuming`, especially for larger datasets. Therefore, I was wondering if there are any plans to create a sub package that would allow users to store and reuse embeddings of commonly used questions. This would not only decrease the cost of re-embedding but also improve the overall efficiency of the system. I would greatly appreciate it if you could let me know if this is something that may be considered in the future. Thank you for your time and consideration.
Proposal to Implement Caching of Popular Prompts as Vectors
https://api.github.com/repos/langchain-ai/langchain/issues/1968/comments
7
2023-03-24T14:22:18Z
2023-09-18T16:22:44Z
https://github.com/langchain-ai/langchain/issues/1968
1,639,490,018
1,968
[ "langchain-ai", "langchain" ]
I am reaching out to inquire about the possibility of implementing a `caching system` for popular prompts as vectors. As you may know, constantly re-embedding questions can be `costly` and `time-consuming`, especially for larger datasets. Therefore, I was wondering if there are any plans to create a sub package that would allow users to store and reuse embeddings of commonly used questions. This would not only decrease the cost of re-embedding but also improve the overall efficiency of the system. I would greatly appreciate it if you could let me know if this is something that may be considered in the future. Thank you for your time and consideration.
Proposal to Implement Caching of Popular Prompts as Vectors
https://api.github.com/repos/langchain-ai/langchain/issues/1967/comments
3
2023-03-24T14:21:57Z
2023-10-23T02:26:40Z
https://github.com/langchain-ai/langchain/issues/1967
1,639,489,619
1,967
[ "langchain-ai", "langchain" ]
I am reaching out to inquire about the possibility of implementing a `caching system` for popular prompts as vectors. As you may know, constantly re-embedding questions can be `costly` and `time-consuming`, especially for larger datasets. Therefore, I was wondering if there are any plans to create a sub package that would allow users to store and reuse embeddings of commonly used questions. This would not only decrease the cost of re-embedding but also improve the overall efficiency of the system. I would greatly appreciate it if you could let me know if this is something that may be considered in the future. Thank you for your time and consideration.
Proposal to Implement Caching of Popular Prompts as Vectors
https://api.github.com/repos/langchain-ai/langchain/issues/1966/comments
1
2023-03-24T14:21:38Z
2023-08-21T16:07:59Z
https://github.com/langchain-ai/langchain/issues/1966
1,639,489,505
1,966
[ "langchain-ai", "langchain" ]
Say I query my vector database with a minimum distance threshold and no documents ("sources") are returned, how can I stop ChatVectorDBChain from answering the question without using prompts? I observe that even though it finds no sources, it will answer based on chat history or overwrite the prompt instructions. I would like to have a deterministic switch like: ```python if no_documents: raise ValueError() ``` and then catch this error to print some message and continue.
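The deterministic switch described above can be sketched outside the chain: run the similarity search first, filter by a distance threshold, and raise before the chain ever sees the question when nothing survives. `search_with_score` below is a stand-in for the vector store's scored search; the threshold value is illustrative.

```python
def search_with_score(query):
    # Stand-in for vectorstore.similarity_search_with_score(query);
    # here the store returns nothing.
    return []

def retrieve_or_fail(query, max_distance=0.4):
    hits = [(doc, d) for doc, d in search_with_score(query) if d <= max_distance]
    if not hits:
        raise ValueError("no documents within threshold")
    return hits

try:
    retrieve_or_fail("Who is the CEO?")
except ValueError:
    print("I don't know.")  # printed instead of letting the LLM answer
```

Only when `retrieve_or_fail` returns documents would they be handed to the chain, so chat history alone can never produce an answer.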
Is there a non-prompt way to stop ChatVectorDBChain from answering when no vectors are found?
https://api.github.com/repos/langchain-ai/langchain/issues/1963/comments
3
2023-03-24T10:23:54Z
2023-09-25T16:15:24Z
https://github.com/langchain-ai/langchain/issues/1963
1,639,119,073
1,963
[ "langchain-ai", "langchain" ]
I am trying to load a video and came across the issue below. I am using langchain version 0.0.121 ![image](https://user-images.githubusercontent.com/99241695/227492256-c22f93e2-9666-4971-b0a9-0eba7b3dfc8e.png)
AttributeError: type object 'YoutubeLoader' has no attribute 'from_youtube_url'
https://api.github.com/repos/langchain-ai/langchain/issues/1962/comments
5
2023-03-24T10:08:17Z
2023-04-12T04:13:00Z
https://github.com/langchain-ai/langchain/issues/1962
1,639,095,508
1,962
[ "langchain-ai", "langchain" ]
The score is useful for a lot of things. It'd be great if VectorStore exported the abstract methods so a VectorStore dependency can be injected.
`*_with_score` methods should be part of VectorStore
https://api.github.com/repos/langchain-ai/langchain/issues/1959/comments
1
2023-03-24T08:17:09Z
2023-09-10T16:40:38Z
https://github.com/langchain-ai/langchain/issues/1959
1,638,930,794
1,959
[ "langchain-ai", "langchain" ]
null
How to work with multiple csv files in the same agent session? Is there any option to call the agent with multiple csv files, so that the model can interact with multiple files and answer us?
https://api.github.com/repos/langchain-ai/langchain/issues/1958/comments
12
2023-03-24T07:46:39Z
2023-05-25T21:23:12Z
https://github.com/langchain-ai/langchain/issues/1958
1,638,890,881
1,958
[ "langchain-ai", "langchain" ]
## Summary
I'm seeing this `ImportError` on my Mac M1 when trying to use Chroma
```
(mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e)))
```
Any ideas?

## Traceback
```
Traceback (most recent call last):
  File "/Users/homanp/Library/Caches/com.vercel.fun/runtimes/python3/../python/bootstrap.py", line 147, in <module>
    lambda_runtime_main()
  File "/Users/homanp/Library/Caches/com.vercel.fun/runtimes/python3/../python/bootstrap.py", line 127, in lambda_runtime_main
    fn = lambda_runtime_get_handler()
  File "/Users/homanp/Library/Caches/com.vercel.fun/runtimes/python3/../python/bootstrap.py", line 113, in lambda_runtime_get_handler
    mod = importlib.import_module(module_name)
  File "/opt/homebrew/Cellar/python@3.9/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 855, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/var/folders/b5/g5rvd57j2wq3nysn1n8h66yr0000gn/T/zeit-fun-6d31861b417ef/vc__handler__python.py", line 12, in <module>
    __vc_spec.loader.exec_module(__vc_module)
  File "<frozen importlib._bootstrap_external>", line 855, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/private/var/folders/b5/g5rvd57j2wq3nysn1n8h66yr0000gn/T/zeit-fun-6d31861b417ef/./api/index.py", line 13, in <module>
    index = VectorstoreIndexCreator().from_loaders([loader])
  File "/var/folders/b5/g5rvd57j2wq3nysn1n8h66yr0000gn/T/zeit-fun-6d31861b417ef/langchain/indexes/vectorstore.py", line 71, in from_loaders
    vectorstore = self.vectorstore_cls.from_documents(
  File "/var/folders/b5/g5rvd57j2wq3nysn1n8h66yr0000gn/T/zeit-fun-6d31861b417ef/langchain/vectorstores/chroma.py", line 268, in from_documents
    return cls.from_texts(
  File "/var/folders/b5/g5rvd57j2wq3nysn1n8h66yr0000gn/T/zeit-fun-6d31861b417ef/langchain/vectorstores/chroma.py", line 231, in from_texts
    chroma_collection = cls(
  File "/var/folders/b5/g5rvd57j2wq3nysn1n8h66yr0000gn/T/zeit-fun-6d31861b417ef/langchain/vectorstores/chroma.py", line 78, in __init__
    self._client = chromadb.Client(self._client_settings)
  File "/Users/homanp/Projects/VERCEL_LANGCHAIN_ENV/lib/python3.9/site-packages/chromadb/__init__.py", line 68, in Client
    return chromadb.api.local.LocalAPI(settings, get_db(settings))
  File "/Users/homanp/Projects/VERCEL_LANGCHAIN_ENV/lib/python3.9/site-packages/chromadb/__init__.py", line 41, in get_db
    import chromadb.db.duckdb
  File "/Users/homanp/Projects/VERCEL_LANGCHAIN_ENV/lib/python3.9/site-packages/chromadb/db/duckdb.py", line 3, in <module>
    from chromadb.db.index.hnswlib import Hnswlib
  File "/Users/homanp/Projects/VERCEL_LANGCHAIN_ENV/lib/python3.9/site-packages/chromadb/db/index/hnswlib.py", line 8, in <module>
    import hnswlib
```
`ImportError` ChromaDB
https://api.github.com/repos/langchain-ai/langchain/issues/1957/comments
16
2023-03-24T07:42:33Z
2023-11-10T16:10:52Z
https://github.com/langchain-ai/langchain/issues/1957
1,638,886,467
1,957
[ "langchain-ai", "langchain" ]
I worked on this version: `git rev-parse HEAD`: 9555bbd5bb3397e66d279d802576b4c65123b484

I plan to add a ParaChain class in the Chains module. At present, it seems that only serial calls are supported, but in some scenarios we need chains to run in parallel, and to support combinations of parallel and serial chains for better results; in the future I hope to introduce an FSM into it. I was about to start this part of the work when, while reading the source code of the SequentialChain class, I found a section in the validate_chains function that checks for an intersection between memory and input variables. From reading the code alone it seems to contain a bug, but I have not verified it with a test case yet (there is too much other work at present), so I want to start a discussion here first: is this a known bug, or is my understanding wrong?

The code section:
```
@root_validator(pre=True)
def validate_chains(cls, values: Dict) -> Dict:
    """Validate that the correct inputs exist for all chains."""
    chains = values["chains"]
    input_variables = values["input_variables"]
    memory_keys = list()
    if "memory" in values and values["memory"] is not None:
        """Validate that prompt input variables are consistent."""
        memory_keys = values["memory"].memory_variables
    if any(input_variables) in memory_keys:
        overlapping_keys = input_variables & memory_keys
        raise ValueError(
            f"The the input key(s) {''.join(overlapping_keys)} are found "
            f"in the Memory keys ({memory_keys}) - please use input and "
            f"memory keys that don't overlap."
        )
```

First: `if any(input_variables) in memory_keys:` — the `any()` function returns a bool value. What is the meaning of checking whether a bool is in a `List[str]`?

Second: `overlapping_keys = input_variables & memory_keys` should probably be `overlapping_keys = set(input_variables) & set(memory_keys)`.

I love LangChain. Awesome & impressive library
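The suspected bug can be demonstrated directly: `any()` over a list of non-empty strings is just `True`, and `True in memory_keys` is almost never what was intended, so the overlap check silently passes even when keys collide.

```python
input_variables = ["history", "question"]
memory_keys = ["history"]

buggy = any(input_variables) in memory_keys      # True in ["history"] -> False
fixed = set(input_variables) & set(memory_keys)  # the intended overlap check

print(buggy)  # False: the overlap goes undetected
print(fixed)  # {'history'}
```

Note also that `input_variables & memory_keys` on two plain lists would raise a `TypeError`, which is why the `set(...) & set(...)` form is needed.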
Bug:: SequentialChain root_validator::validate_chain. judgment logic of overlapping_keys
https://api.github.com/repos/langchain-ai/langchain/issues/1953/comments
2
2023-03-24T04:52:12Z
2023-09-10T16:40:43Z
https://github.com/langchain-ai/langchain/issues/1953
1,638,714,871
1,953
[ "langchain-ai", "langchain" ]
Summary: the Chroma vectorstore search does not return top-scored embeds. The issue appears only when the number of documents in the vector store exceeds a certain threshold (I have ~4000 chunks). I could not determine when it breaks exactly.

I loaded my documents, chunked them, and then indexed into a vectorstore:
```
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(all_docs, embeddings)
```
Then I tried to search this vector store:
```
text = "my search text"
list(score for doc, score in docsearch.similarity_search_with_score(text))
```
Output:
```
[0.3361772298812866, 0.3575538694858551, 0.360953152179718, 0.36677438020706177]
```
Search **did not return the expected document** from the embed (within a list of 4 items returned by default). Then I performed another test forcing to return all scores and **got the expected result**.
```
list(score for doc, score in docsearch.similarity_search_with_score(text, len(all_docs))[:4])
```
Output:
```
[0.31715911626815796, 0.3361772298812866, 0.3575538694858551, 0.360953152179718]
```
You can clearly see that the top scores are different. Any help is appreciated.
Chroma vectorstore search does not return top-scored embeds
https://api.github.com/repos/langchain-ai/langchain/issues/1946/comments
17
2023-03-24T00:47:14Z
2024-07-12T14:58:40Z
https://github.com/langchain-ai/langchain/issues/1946
1,638,542,872
1,946
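Until the ordering issue above is resolved, one client-side workaround is exactly what the reporter did: over-fetch and re-rank by score. The helper below is a pure-Python sketch with made-up `(doc, score)` pairs standing in for `similarity_search_with_score` output; it is not Chroma or LangChain API:

```python
def rerank_top_k(scored_results, k=4):
    """Sort (doc, distance) pairs ascending by distance and keep the top k."""
    return sorted(scored_results, key=lambda pair: pair[1])[:k]


# Stand-in for docsearch.similarity_search_with_score(text, n): in the
# report, requesting all documents surfaced a lower-distance (better) hit
# than the default top-4 query returned.
all_results = [
    ("doc-a", 0.3362),
    ("doc-b", 0.3576),
    ("doc-c", 0.3610),
    ("doc-d", 0.3668),
    ("doc-e", 0.3172),  # the expected document, missed by the default query
]
top = rerank_top_k(all_results, k=4)
```

The trade-off is obvious: fetching `len(all_docs)` results just to get a correct top 4 defeats the point of an approximate index, so this only papers over the underlying bug.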
[ "langchain-ai", "langchain" ]
Currently, there is no way to pass in headers to UnstructuredURLLoader, e.g. for `"User-Agent"`. There should be. Upstream issue: https://github.com/Unstructured-IO/unstructured/issues/396
Ability to pass in headers to UnstructuredURLLoader
https://api.github.com/repos/langchain-ai/langchain/issues/1944/comments
0
2023-03-23T23:08:43Z
2023-03-29T04:32:13Z
https://github.com/langchain-ai/langchain/issues/1944
1,638,464,544
1,944
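While the loader lacks a `headers` argument, one workaround is to fetch the page yourself with a controlled `User-Agent` and hand the body to the partitioning step directly. The sketch below shows only the standard-library request construction; the integration with unstructured is left out, and nothing here is UnstructuredURLLoader API:

```python
import urllib.request


def build_request(url, user_agent="my-crawler/1.0"):
    """Build an HTTP request whose User-Agent header we control."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})


# The body could then be read with urllib.request.urlopen(req).read()
# and passed to unstructured's partition functions instead of the loader.
req = build_request("https://example.com")
```

Note that `urllib` normalizes header keys with `str.capitalize()`, so the stored key is `User-agent`.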
[ "langchain-ai", "langchain" ]
Since update 0.0.120, when using an AzureChatOpenAI model instance of gpt-35-turbo you get a "Resource not found" error; tried with both load_qa_with_sources_chain and MapReduceChain.from_params. Here is a sample error trace from right after I call the chain() instance: ```
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chains/base.py", line 213, in run
    return self(args[0])[self.output_keys[0]]
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chains/mapreduce.py", line 73, in _call
    outputs, _ = self.combine_documents_chain.combine_docs(docs)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chains/combine_documents/map_reduce.py", line 139, in combine_docs
    results = self.llm_chain.apply(
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 79, in generate_prompt
    raise e
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 76, in generate_prompt
    output = self.generate(prompt_messages, stop=stop)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 53, in generate
    results = [self._generate(m, stop=stop) for m in messages]
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 53, in <listcomp>
    results = [self._generate(m, stop=stop) for m in messages]
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 264, in _generate
    response = self.completion_with_retry(messages=message_dicts, **params)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 226, in completion_with_retry
    return _completion_with_retry(**kwargs)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
File "/anaconda/envs/exp_env/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
File "/anaconda/envs/exp_env/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 224, in _completion_with_retry
    return self.client.create(**kwargs)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
File "/anaconda/envs/exp_env/lib/python3.10/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Resource not found
```
Resource not found error when using AzureChatOpenAI gpt-35-turbo
https://api.github.com/repos/langchain-ai/langchain/issues/1942/comments
1
2023-03-23T22:17:32Z
2023-03-23T22:34:53Z
https://github.com/langchain-ai/langchain/issues/1942
1,638,418,561
1,942
[ "langchain-ai", "langchain" ]
When trying to install chatgpt-wrapper, I get the following issue: ```
C:\Users\BigB\Documents\Yeet>pip install git+https://github.com/mmabrouk/chatgpt-wrapper
Collecting git+https://github.com/mmabrouk/chatgpt-wrapper
  Cloning https://github.com/mmabrouk/chatgpt-wrapper to c:\users\refddfea~1\appdata\local\temp\pip-req-build-h89ta3sh
  Running command git clone --filter=blob:none -q https://github.com/mmabrouk/chatgpt-wrapper 'C:\Users\RCfDfEA~1\AppData\Local\Temp\pip-req-build-h89ta3sh'
  Resolved https://github.com/mmabrouk/chatgpt-wrapper to commit df1725d7fbbff27e36bc22e4eb73cf5fea887dc1
  Preparing metadata (setup.py) ... done
Requirement already satisfied: email-validator in c:\users\redfdeaffdspc\appdata\local\programs\python\python36\lib\site-packages (from chatGPT==0.6.5) (1.3.1)
Requirement already satisfied: Flask in c:\users\refdfdeafdspc\appdata\local\programs\python\python36\lib\site-packages (from chatGPT==0.6.5) (2.0.3)
Requirement already satisfied: Jinja2 in c:\users\reddefadspc\appdata\local\programs\python\python36\lib\site-packages (from chatGPT==0.6.5) (3.0.3)
ERROR: Could not find a version that satisfies the requirement langchain>=0.0.115 (from chatgpt) (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.0.20, 0.0.21, 0.0.22, 0.0.23, 0.0.24, 0.0.25, 0.0.26, 0.0.27)
ERROR: No matching distribution found for langchain>=0.0.115
```
Issue when trying to install chatgpt-wrapper with this module.
https://api.github.com/repos/langchain-ai/langchain/issues/1941/comments
2
2023-03-23T21:08:08Z
2023-09-18T16:22:50Z
https://github.com/langchain-ai/langchain/issues/1941
1,638,329,189
1,941
[ "langchain-ai", "langchain" ]
I'm wondering how [ChatGPT plugins](https://platform.openai.com/docs/plugins/) work under the hood. Are they using the same technique as a LangChain agent, basically sending a prompt, manually parsing out the result, and feeding the result to another plugin in subsequent steps if necessary? Or is it a superior technique to LangChain agents? I can't find a mention of LangChain in their docs. Any thoughts are welcome!
ChatGPT Plugins vs. LangChain agent
https://api.github.com/repos/langchain-ai/langchain/issues/1940/comments
17
2023-03-23T20:57:52Z
2023-09-29T16:09:47Z
https://github.com/langchain-ai/langchain/issues/1940
1,638,313,085
1,940
[ "langchain-ai", "langchain" ]
As reported by Kranos on Discord, there is no robust way to iterate through a list of URLs with UnstructuredURLLoader. The workaround for now is to create one UnstructuredURLLoader object per URL and do the following: ```
Yep, exactly my problem - I had a load of URLs loaded into a pandas dataframe I was iterating through. I basically added the following at the end of the loop to keep things ticking over and ignoring any errors:

# Manage any errors
except (NameError, ValueError, KeyError, OSError, TypeError):
    # Pass the error
    pass
``` UnstructuredURLLoader should likely do this by default, or provide a `strict` option to exit on any failure.
UnstructuredURLLoader does not gracefully handle failures given a list of URL's
https://api.github.com/repos/langchain-ai/langchain/issues/1939/comments
0
2023-03-23T20:15:27Z
2023-03-28T05:26:22Z
https://github.com/langchain-ai/langchain/issues/1939
1,638,238,132
1,939
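The try/except workaround quoted above can be folded into a small helper. `load_one` below is a hypothetical callable standing in for "construct one UnstructuredURLLoader and call `.load()`"; the point is the per-URL error handling and the proposed opt-in `strict` mode, not the loader API:

```python
def load_urls(urls, load_one, strict=False):
    """Load each URL independently, collecting failures instead of aborting.

    With strict=True, re-raise the first error (the proposed opt-in mode).
    """
    docs, failures = [], []
    for url in urls:
        try:
            docs.extend(load_one(url))
        except Exception as exc:
            if strict:
                raise
            failures.append((url, exc))
    return docs, failures
```

Returning the failure list keeps the errors visible to the caller, which is friendlier than silently swallowing them with a bare `pass`.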
[ "langchain-ai", "langchain" ]
When the map_reduce summarization chain is fed a single document, the doc is run through an unnecessary map step before a combine prompt is run on it. The combine prompt alone would, in my opinion, be sufficient and would avoid summarizing a single summary, which is very lossy. Great project, hope this thrives!
Improvement: MapReduce summarization chains executes a map step on a single document
https://api.github.com/repos/langchain-ai/langchain/issues/1937/comments
7
2023-03-23T18:46:21Z
2023-09-18T16:28:44Z
https://github.com/langchain-ai/langchain/issues/1937
1,638,109,379
1,937
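The improvement requested above amounts to a guard before the map pass. Below is a minimal sketch with hypothetical `map_fn`/`combine_fn` callables, not the actual LangChain chain internals:

```python
def map_reduce_summarize(docs, map_fn, combine_fn):
    """Summarize docs, skipping the lossy map pass when there is one doc."""
    if len(docs) == 1:
        # A single document can go straight to the combine prompt;
        # mapping it first would mean summarizing a summary.
        return combine_fn(docs)
    return combine_fn([map_fn(doc) for doc in docs])
```

With this guard, a one-document input never pays the cost (or information loss) of an intermediate summary.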
[ "langchain-ai", "langchain" ]
Using Python 3.11 (it works in Python 3.10) with the latest version (0.4.36). Traceback: ```
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "D:\dev\openai-pr-summarization\main.py", line 9, in <module>
    from engine.llama import LlamaEngine
File "D:\dev\openai-pr-summarization\engine\llama.py", line 3, in <module>
    from llama_index import GPTSimpleVectorIndex, Document
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\__init__.py", line 18, in <module>
    from llama_index.indices.common.struct_store.base import SQLDocumentContextBuilder
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\indices\__init__.py", line 4, in <module>
    from llama_index.indices.keyword_table.base import GPTKeywordTableIndex
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\indices\keyword_table\__init__.py", line 4, in <module>
    from llama_index.indices.keyword_table.base import GPTKeywordTableIndex
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\indices\keyword_table\base.py", line 16, in <module>
    from llama_index.indices.base import DOCUMENTS_INPUT, BaseGPTIndex
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\indices\base.py", line 23, in <module>
    from llama_index.indices.prompt_helper import PromptHelper
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\indices\prompt_helper.py", line 12, in <module>
    from llama_index.langchain_helpers.chain_wrapper import LLMPredictor
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\langchain_helpers\chain_wrapper.py", line 6, in <module>
    from llama_index.llm_predictor.base import (  # noqa: F401
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\llm_predictor\__init__.py", line 4, in <module>
    from llama_index.llm_predictor.base import LLMPredictor
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\llm_predictor\base.py", line 15, in <module>
    from llama_index.prompts.base import Prompt
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\prompts\__init__.py", line 3, in <module>
    from llama_index.prompts.base import Prompt
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\prompts\base.py", line 11, in <module>
    from llama_index.output_parsers.base import BaseOutputParser
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\output_parsers\__init__.py", line 4, in <module>
    from llama_index.output_parsers.langchain import LangchainOutputParser
File "C:\Users\diman\mambaforge\envs\py311\Lib\site-packages\llama_index\output_parsers\langchain.py", line 6, in <module>
    from langchain.schema import BaseOutputParser as LCOutputParser
```
cannot import name 'BaseOutputParser' from 'langchain.schema'
https://api.github.com/repos/langchain-ai/langchain/issues/1936/comments
8
2023-03-23T17:59:56Z
2023-09-28T16:10:53Z
https://github.com/langchain-ai/langchain/issues/1936
1,638,029,215
1,936