diff --git "a/train.jsonl" "b/train.jsonl"
new file mode 100644
--- /dev/null
+++ "b/train.jsonl"
@@ -0,0 +1,3052 @@
+{"id": "ffe6a87f4f10-0", "text": "Models\uf0c1\nLangChain provides interfaces and integrations for a number of different types of models.\nLLMs\nChat Models", "source": "https://api.python.langchain.com/en/latest/models.html"}
+{"id": "570abd027f9e-0", "text": "Model I/O\uf0c1\nLangChain provides interfaces and integrations for working with language models.\nPrompts\nModels\nOutput Parsers", "source": "https://api.python.langchain.com/en/latest/model_io.html"}
+{"id": "9d241ef1116b-0", "text": "Prompts\uf0c1\nThe reference guides here all relate to objects for working with Prompts.\nPrompt Templates\nExample Selector", "source": "https://api.python.langchain.com/en/latest/prompts.html"}
+{"id": "98f977de8f2f-0", "text": "Data connection\uf0c1\nLangChain has a number of modules that help you load, structure, store, and retrieve documents.\nDocument Loaders\nDocument Transformers\nEmbeddings\nVector Stores\nRetrievers", "source": "https://api.python.langchain.com/en/latest/data_connection.html"}
+{"id": "f0bec661129a-0", "text": "Embeddings\uf0c1\nWrappers around embedding modules.\nclass langchain.embeddings.OpenAIEmbeddings(*, client=None, model='text-embedding-ada-002', deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special={}, disallowed_special='all', chunk_size=1000, max_retries=6, request_timeout=None, headers=None, tiktoken_model_name=None)[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nWrapper around OpenAI embedding models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key or pass it\nas a named parameter to the constructor.\nExample\nfrom langchain.embeddings import 
OpenAIEmbeddings\nopenai = OpenAIEmbeddings(openai_api_key=\"my-api-key\")\nIn order to use the library with Microsoft Azure endpoints, you need to set\nthe OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.\nThe OPENAI_API_TYPE must be set to \u2018azure\u2019 and the others correspond to\nthe properties of your endpoint.\nIn addition, the deployment name must be passed as the model parameter.\nExample\nimport os\nos.environ[\"OPENAI_API_TYPE\"] = \"azure\"\nos.environ[\"OPENAI_API_BASE\"] = \"https://\nattribute endpoint_name: str = ''\uf0c1\nThe name of the endpoint from the deployed Sagemaker model.\nMust be unique within an AWS Region.\nattribute model_kwargs: Optional[Dict] = None\uf0c1\nKey word arguments to pass to the model.\nattribute region_name: str = ''\uf0c1\nThe aws region where the Sagemaker model is deployed, eg. us-west-2.\nembed_documents(texts, chunk_size=64)[source]\uf0c1\nCompute doc embeddings using a SageMaker Inference Endpoint.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nchunk_size (int) \u2013 The chunk size defines how many input texts will\nbe grouped together as request. 
If None, will use the\nchunk size specified by the class.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nCompute query embeddings using a SageMaker inference endpoint.\nParameters\ntext (str) \u2013 The text to embed.\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-14", "text": "Parameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]\nclass langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, model_kwargs=None, encode_kwargs=None, embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nWrapper around sentence_transformers embedding models.\nTo use, you should have the sentence_transformers\nand InstructorEmbedding python packages installed.\nExample\nfrom langchain.embeddings import HuggingFaceInstructEmbeddings\nmodel_name = \"hkunlp/instructor-large\"\nmodel_kwargs = {'device': 'cpu'}\nencode_kwargs = {'normalize_embeddings': True}\nhf = HuggingFaceInstructEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n)\nParameters\nclient (Any) \u2013 \nmodel_name (str) \u2013 \ncache_folder (Optional[str]) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nencode_kwargs (Dict[str, Any]) \u2013 \nembed_instruction (str) \u2013 \nquery_instruction (str) \u2013 \nReturn type\nNone\nattribute cache_folder: Optional[str] = None\uf0c1\nPath to store models.\nCan be also set by SENTENCE_TRANSFORMERS_HOME environment variable.\nattribute embed_instruction: str = 'Represent the document for retrieval: '\uf0c1\nInstruction to use for embedding documents.\nattribute encode_kwargs: Dict[str, Any] [Optional]\uf0c1\nKey 
word arguments to pass when calling the encode method of the model.\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
+{"id": "f0bec661129a-15", "text": "attribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nKey word arguments to pass to the model.\nattribute model_name: str = 'hkunlp/instructor-large'\uf0c1\nModel name to use.\nattribute query_instruction: str = 'Represent the question for retrieving supporting documents: '\uf0c1\nInstruction to use for embedding query.\nembed_documents(texts)[source]\uf0c1\nCompute doc embeddings using a HuggingFace instruct model.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nCompute query embeddings using a HuggingFace instruct model.\nParameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]\nclass langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=1.0, mosaicml_api_token=None)[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nWrapper around MosaicML\u2019s embedding inference service.\nTo use, you should have the\nenvironment variable MOSAICML_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.embeddings import MosaicMLInstructorEmbeddings\nendpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict\"\n)\nmosaic_llm = MosaicMLInstructorEmbeddings(\n endpoint_url=endpoint_url,", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
+{"id": "f0bec661129a-16", "text": 
"endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n)\nParameters\nendpoint_url (str) \u2013 \nembed_instruction (str) \u2013 \nquery_instruction (str) \u2013 \nretry_sleep (float) \u2013 \nmosaicml_api_token (Optional[str]) \u2013 \nReturn type\nNone\nattribute embed_instruction: str = 'Represent the document for retrieval: '\uf0c1\nInstruction used to embed documents.\nattribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict'\uf0c1\nEndpoint URL to use.\nattribute query_instruction: str = 'Represent the question for retrieving supporting documents: '\uf0c1\nInstruction used to embed the query.\nattribute retry_sleep: float = 1.0\uf0c1\nHow long to try sleeping for if a rate limit is encountered\nembed_documents(texts)[source]\uf0c1\nEmbed documents using a MosaicML deployed instructor embedding model.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nEmbed a query using a MosaicML deployed instructor embedding model.\nParameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]\nclass langchain.embeddings.SelfHostedEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'], inference_kwargs=None)[source]\uf0c1\nBases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-17", "text": "Runs custom embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, 
Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample using a model load function:from langchain.embeddings import SelfHostedEmbeddings\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\ndef get_pipeline():\n model_id = \"facebook/bart-large\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n return pipeline(\"feature-extraction\", model=model, tokenizer=tokenizer)\nembeddings = SelfHostedEmbeddings(\n model_load_fn=get_pipeline,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nExample passing in a pipeline path:from langchain.embeddings import SelfHostedHFEmbeddings\nimport pickle\nimport runhouse as rh\nfrom transformers import pipeline\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\npipeline = pipeline(model=\"bert-base-uncased\", task=\"feature-extraction\")\nrh.blob(pickle.dumps(pipeline),\n path=\"models/pipeline.pkl\").save().to(gpu, path=\"models\")\nembeddings = SelfHostedHFEmbeddings.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
+{"id": "f0bec661129a-18", "text": ")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \npipeline_ref (Any) \u2013 \nclient (Any) \u2013 \ninference_fn (Callable) \u2013 \nhardware (Any) \u2013 \nmodel_load_fn (Callable) \u2013 \nload_fn_kwargs (Optional[dict]) \u2013 \nmodel_reqs (List[str]) \u2013 \ninference_kwargs 
(Any) \u2013 \nReturn type\nNone\nattribute inference_fn: Callable = \uf0c1\nInference function to extract the embeddings on the remote hardware.\nattribute inference_kwargs: Any = None\uf0c1\nAny kwargs to pass to the model\u2019s inference function.\nembed_documents(texts)[source]\uf0c1\nCompute doc embeddings using a HuggingFace transformer model.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nCompute query embeddings using a HuggingFace transformer model.\nParameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
+{"id": "f0bec661129a-19", "text": "Returns\nEmbeddings for the text.\nReturn type\nList[float]\nclass langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=, hardware=None, model_load_fn=, load_fn_kwargs=None, model_reqs=['./', 'sentence_transformers', 'torch'], inference_kwargs=None, model_id='sentence-transformers/all-mpnet-base-v2')[source]\uf0c1\nBases: langchain.embeddings.self_hosted.SelfHostedEmbeddings\nRuns sentence_transformers embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another cloud\nlike Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample\nfrom langchain.embeddings import SelfHostedHuggingFaceEmbeddings\nimport runhouse as rh\nmodel_name = \"sentence-transformers/all-mpnet-base-v2\"\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nhf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, 
hardware=gpu)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \npipeline_ref (Any) \u2013 \nclient (Any) \u2013 \ninference_fn (Callable) \u2013 \nhardware (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-20", "text": "inference_fn (Callable) \u2013 \nhardware (Any) \u2013 \nmodel_load_fn (Callable) \u2013 \nload_fn_kwargs (Optional[dict]) \u2013 \nmodel_reqs (List[str]) \u2013 \ninference_kwargs (Any) \u2013 \nmodel_id (str) \u2013 \nReturn type\nNone\nattribute hardware: Any = None\uf0c1\nRemote hardware to send the inference function to.\nattribute inference_fn: Callable = \uf0c1\nInference function to extract the embeddings.\nattribute load_fn_kwargs: Optional[dict] = None\uf0c1\nKey word arguments to pass to the model load function.\nattribute model_id: str = 'sentence-transformers/all-mpnet-base-v2'\uf0c1\nModel name to use.\nattribute model_load_fn: Callable = \uf0c1\nFunction to load the model remotely on the server.\nattribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']\uf0c1\nRequirements to install on hardware to inference the model.\nclass langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=, hardware=None, model_load_fn=, load_fn_kwargs=None, model_reqs=['./', 'InstructorEmbedding', 'torch'], inference_kwargs=None, model_id='hkunlp/instructor-large', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]\uf0c1\nBases: 
langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings\nRuns InstructorEmbedding embedding models on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-21", "text": "Supported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample\nfrom langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings\nimport runhouse as rh\nmodel_name = \"hkunlp/instructor-large\"\ngpu = rh.cluster(name='rh-a10x', instance_type='A100:1')\nhf = SelfHostedHuggingFaceInstructEmbeddings(\n model_name=model_name, hardware=gpu)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \npipeline_ref (Any) \u2013 \nclient (Any) \u2013 \ninference_fn (Callable) \u2013 \nhardware (Any) \u2013 \nmodel_load_fn (Callable) \u2013 \nload_fn_kwargs (Optional[dict]) \u2013 \nmodel_reqs (List[str]) \u2013 \ninference_kwargs (Any) \u2013 \nmodel_id (str) \u2013 \nembed_instruction (str) \u2013 \nquery_instruction (str) \u2013 \nReturn type\nNone\nattribute embed_instruction: str = 'Represent the document for retrieval: '\uf0c1\nInstruction to use for embedding documents.\nattribute model_id: str = 'hkunlp/instructor-large'\uf0c1\nModel name to use.\nattribute model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']\uf0c1\nRequirements to install on hardware to inference the model.", "source": 
"https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-22", "text": "Requirements to install on hardware to inference the model.\nattribute query_instruction: str = 'Represent the question for retrieving supporting documents: '\uf0c1\nInstruction to use for embedding query.\nembed_documents(texts)[source]\uf0c1\nCompute doc embeddings using a HuggingFace instruct model.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nCompute query embeddings using a HuggingFace instruct model.\nParameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]\nclass langchain.embeddings.FakeEmbeddings(*, size)[source]\uf0c1\nBases: langchain.embeddings.base.Embeddings, pydantic.main.BaseModel\nParameters\nsize (int) \u2013 \nReturn type\nNone\nembed_documents(texts)[source]\uf0c1\nEmbed search docs.\nParameters\ntexts (List[str]) \u2013 \nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nEmbed query text.\nParameters\ntext (str) \u2013 \nReturn type\nList[float]\nclass langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nWrapper for Aleph Alpha\u2019s Asymmetric Embeddings\nAA provides you with an endpoint to embed a document and a query.\nThe models were optimized to make the embeddings of documents and\nthe query for a document as similar as possible.", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-23", "text": "the query for a document as similar as possible.\nTo learn more, check out: 
https://docs.aleph-alpha.com/docs/tasks/semantic_embed/\nExample\nfrom langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding\nembeddings = AlephAlphaAsymmetricSemanticEmbedding()\ndocument = \"This is the content of the document\"\nquery = \"What is the content of the document?\"\ndoc_result = embeddings.embed_documents([document])\nquery_result = embeddings.embed_query(query)\nParameters\nclient (Any) \u2013 \nmodel (Optional[str]) \u2013 \nhosting (Optional[str]) \u2013 \nnormalize (Optional[bool]) \u2013 \ncompress_to_size (Optional[int]) \u2013 \ncontextual_control_threshold (Optional[int]) \u2013 \ncontrol_log_additive (Optional[bool]) \u2013 \naleph_alpha_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute aleph_alpha_api_key: Optional[str] = None\uf0c1\nAPI key for Aleph Alpha API.\nattribute compress_to_size: Optional[int] = 128\uf0c1\nShould the returned embeddings come back as an original 5120-dim vector,\nor should it be compressed to 128-dim.\nattribute contextual_control_threshold: Optional[int] = None\uf0c1\nAttention control parameters only apply to those tokens that have\nexplicitly been set in the request.\nattribute control_log_additive: Optional[bool] = True\uf0c1\nApply controls on prompt items by adding the log(control_factor)\nto attention scores.\nattribute hosting: Optional[str] = 'https://api.aleph-alpha.com'\uf0c1\nOptional parameter that specifies which datacenters may process the request.\nattribute model: Optional[str] = 'luminous-base'\uf0c1\nModel name to use.\nattribute normalize: Optional[bool] = True\uf0c1\nShould returned embeddings be normalized", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
+{"id": "f0bec661129a-24", "text": "attribute normalize: Optional[bool] = True\uf0c1\nShould returned embeddings be normalized\nembed_documents(texts)[source]\uf0c1\nCall out to Aleph Alpha\u2019s asymmetric Document endpoint.\nParameters\ntexts (List[str]) \u2013 The list of texts to 
embed.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nCall out to Aleph Alpha\u2019s asymmetric query embedding endpoint\n:param text: The text to embed.\nReturns\nEmbeddings for the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[float]\nclass langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]\uf0c1\nBases: langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding\nThe symmetric version of Aleph Alpha\u2019s semantic embeddings.\nThe main difference is that here, both the documents and\nqueries are embedded with a SemanticRepresentation.Symmetric\n.. rubric:: Example\nfrom langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding\nembeddings = AlephAlphaSymmetricSemanticEmbedding()\ntext = \"This is a test text\"\ndoc_result = embeddings.embed_documents([text])\nquery_result = embeddings.embed_query(text)\nParameters\nclient (Any) \u2013 \nmodel (Optional[str]) \u2013 \nhosting (Optional[str]) \u2013 \nnormalize (Optional[bool]) \u2013 \ncompress_to_size (Optional[int]) \u2013 \ncontextual_control_threshold (Optional[int]) \u2013 \ncontrol_log_additive (Optional[bool]) \u2013 \naleph_alpha_api_key (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
+{"id": "f0bec661129a-25", "text": "aleph_alpha_api_key (Optional[str]) \u2013 \nReturn type\nNone\nembed_documents(texts)[source]\uf0c1\nCall out to Aleph Alpha\u2019s Document endpoint.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nCall out to Aleph Alpha\u2019s symmetric query embedding endpoint\n:param text: The text to 
embed.\nReturns\nEmbeddings for the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[float]\nlangchain.embeddings.SentenceTransformerEmbeddings\uf0c1\nalias of langchain.embeddings.huggingface.HuggingFaceEmbeddings\nclass langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_id=None, minimax_api_key=None)[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nWrapper around MiniMax\u2019s embedding inference service.\nTo use, you should have the environment variable MINIMAX_GROUP_ID and\nMINIMAX_API_KEY set with your API token, or pass it as a named parameter to\nthe constructor.\nExample\nfrom langchain.embeddings import MiniMaxEmbeddings\nembeddings = MiniMaxEmbeddings()\nquery_text = \"This is a test query.\"\nquery_result = embeddings.embed_query(query_text)\ndocument_text = \"This is a test document.\"\ndocument_result = embeddings.embed_documents([document_text])\nParameters\nendpoint_url (str) \u2013 \nmodel (str) \u2013 \nembed_type_db (str) \u2013 \nembed_type_query (str) \u2013 \nminimax_group_id (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-26", "text": "embed_type_query (str) \u2013 \nminimax_group_id (Optional[str]) \u2013 \nminimax_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute embed_type_db: str = 'db'\uf0c1\nFor embed_documents\nattribute embed_type_query: str = 'query'\uf0c1\nFor embed_query\nattribute endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'\uf0c1\nEndpoint URL to use.\nattribute minimax_api_key: Optional[str] = None\uf0c1\nAPI Key for MiniMax API.\nattribute minimax_group_id: Optional[str] = None\uf0c1\nGroup ID for MiniMax API.\nattribute model: str = 'embo-01'\uf0c1\nEmbeddings model name to use.\nembed_documents(texts)[source]\uf0c1\nEmbed documents using a MiniMax embedding 
endpoint.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nEmbed a query using a MiniMax embedding endpoint.\nParameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]\nclass langchain.embeddings.BedrockEmbeddings(*, client=None, region_name=None, credentials_profile_name=None, model_id='amazon.titan-e1t-medium', model_kwargs=None)[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nEmbeddings provider to invoke Bedrock embedding models.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-27", "text": "If a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Bedrock service.\nParameters\nclient (Any) \u2013 \nregion_name (Optional[str]) \u2013 \ncredentials_profile_name (Optional[str]) \u2013 \nmodel_id (str) \u2013 \nmodel_kwargs (Optional[Dict]) \u2013 \nReturn type\nNone\nattribute credentials_profile_name: Optional[str] = None\uf0c1\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nattribute model_id: str = 'amazon.titan-e1t-medium'\uf0c1\nId of the model to call, e.g., amazon.titan-e1t-medium, this is\nequivalent to 
the modelId property in the list-foundation-models api\nattribute model_kwargs: Optional[Dict] = None\uf0c1\nKey word arguments to pass to the model.\nattribute region_name: Optional[str] = None\uf0c1\nThe aws region e.g., us-west-2. Falls back to AWS_DEFAULT_REGION env variable\nor region specified in ~/.aws/config in case it is not provided here.\nembed_documents(texts, chunk_size=1)[source]\uf0c1\nCompute doc embeddings using a Bedrock model.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nchunk_size (int) \u2013 Bedrock currently only allows single string\ninputs, so chunk size is always 1. This input is here\nonly for compatibility with the embeddings interface.\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
+{"id": "f0bec661129a-28", "text": "only for compatibility with the embeddings interface.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nCompute query embeddings using a Bedrock model.\nParameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]\nclass langchain.embeddings.DeepInfraEmbeddings(*, model_id='sentence-transformers/clip-ViT-B-32', normalize=False, embed_instruction='passage: ', query_instruction='query: ', model_kwargs=None, deepinfra_api_token=None)[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nWrapper around Deep Infra\u2019s embedding inference service.\nTo use, you should have the\nenvironment variable DEEPINFRA_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nThere are multiple embeddings models available,\nsee https://deepinfra.com/models?type=embeddings.\nExample\nfrom langchain.embeddings import DeepInfraEmbeddings\ndeepinfra_emb = DeepInfraEmbeddings(\n model_id=\"sentence-transformers/clip-ViT-B-32\",\n deepinfra_api_token=\"my-api-key\"\n)\nr1 = deepinfra_emb.embed_documents(\n 
[\n \"Alpha is the first letter of Greek alphabet\",\n \"Beta is the second letter of Greek alphabet\",\n ]\n)\nr2 = deepinfra_emb.embed_query(\n \"What is the second letter of Greek alphabet\"\n)\nParameters\nmodel_id (str) \u2013 \nnormalize (bool) \u2013 \nembed_instruction (str) \u2013 \nquery_instruction (str) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \ndeepinfra_api_token (Optional[str]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-29", "text": "deepinfra_api_token (Optional[str]) \u2013 \nReturn type\nNone\nattribute embed_instruction: str = 'passage: '\uf0c1\nInstruction used to embed documents.\nattribute model_id: str = 'sentence-transformers/clip-ViT-B-32'\uf0c1\nEmbeddings model to use.\nattribute model_kwargs: Optional[dict] = None\uf0c1\nOther model keyword args\nattribute normalize: bool = False\uf0c1\nwhether to normalize the computed embeddings\nattribute query_instruction: str = 'query: '\uf0c1\nInstruction used to embed the query.\nembed_documents(texts)[source]\uf0c1\nEmbed documents using a Deep Infra deployed embedding model.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nEmbed a query using a Deep Infra deployed embedding model.\nParameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]\nclass langchain.embeddings.DashScopeEmbeddings(*, client=None, model='text-embedding-v1', dashscope_api_key=None, max_retries=5)[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nWrapper around DashScope embedding models.\nTo use, you should have the dashscope python package installed, and the\nenvironment variable DASHSCOPE_API_KEY set with your API key or pass it\nas a named parameter to the constructor.\nExample\nfrom langchain.embeddings import 
DashScopeEmbeddings\nembeddings = DashScopeEmbeddings(dashscope_api_key=\"my-api-key\")\nExample\nimport os\nos.environ[\"DASHSCOPE_API_KEY\"] = \"your DashScope API KEY\"", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
+{"id": "f0bec661129a-30", "text": "os.environ[\"DASHSCOPE_API_KEY\"] = \"your DashScope API KEY\"\nfrom langchain.embeddings.dashscope import DashScopeEmbeddings\nembeddings = DashScopeEmbeddings(\n model=\"text-embedding-v1\",\n)\ntext = \"This is a test query.\"\nquery_result = embeddings.embed_query(text)\nParameters\nclient (Any) \u2013 \nmodel (str) \u2013 \ndashscope_api_key (Optional[str]) \u2013 \nmax_retries (int) \u2013 \nReturn type\nNone\nattribute dashscope_api_key: Optional[str] = None\uf0c1\nAPI key for the DashScope API.\nembed_documents(texts)[source]\uf0c1\nCall out to DashScope\u2019s embedding endpoint for embedding search docs.\nParameters\ntexts (List[str]) \u2013 The list of texts to embed.\nchunk_size \u2013 The chunk size of embeddings. 
If None, will use the chunk size\nspecified by the class.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nCall out to DashScope\u2019s embedding endpoint for embedding query text.\nParameters\ntext (str) \u2013 The text to embed.\nReturns\nEmbedding for the text.\nReturn type\nList[float]\nclass langchain.embeddings.EmbaasEmbeddings(*, model='e5-large-v2', instruction=None, api_url='https://api.embaas.io/v1/embeddings/', embaas_api_key=None)[source]\uf0c1\nBases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings\nWrapper around embaas\u2019s embedding service.\nTo use, you should have the\nenvironment variable EMBAAS_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\n# Initialise with default model and instruction", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"} +{"id": "f0bec661129a-31", "text": "it as a named parameter to the constructor.\nExample\n# Initialise with default model and instruction\nfrom langchain.embeddings import EmbaasEmbeddings\nemb = EmbaasEmbeddings()\n# Initialise with custom model and instruction\nfrom langchain.embeddings import EmbaasEmbeddings\nemb_model = \"instructor-large\"\nemb_inst = \"Represent the Wikipedia document for retrieval\"\nemb = EmbaasEmbeddings(\n model=emb_model,\n instruction=emb_inst\n)\nParameters\nmodel (str) \u2013 \ninstruction (Optional[str]) \u2013 \napi_url (str) \u2013 \nembaas_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute api_url: str = 'https://api.embaas.io/v1/embeddings/'\uf0c1\nThe URL for the embaas embeddings API.\nattribute instruction: Optional[str] = None\uf0c1\nInstruction used for domain-specific embeddings.\nattribute model: str = 'e5-large-v2'\uf0c1\nThe model used for embeddings.\nembed_documents(texts)[source]\uf0c1\nGet embeddings for a list of texts.\nParameters\ntexts (List[str]) \u2013 The list of texts to get embeddings 
for.\nReturns\nList of embeddings, one for each text.\nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nGet embeddings for a single text.\nParameters\ntext (str) \u2013 The text to get embeddings for.\nReturns\nEmbeddings for the text.\nReturn type\nList[float]", "source": "https://api.python.langchain.com/en/latest/modules/embeddings.html"}
type\nNone\nclear()[source]\uf0c1\nRemove all messages from the store\nReturn type\nNone\nclass langchain.memory.CombinedMemory(*, memories)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-1", "text": "None\nclass langchain.memory.CombinedMemory(*, memories)[source]\uf0c1\nBases: langchain.schema.BaseMemory\nClass for combining multiple memories\u2019 data together.\nParameters\nmemories (List[langchain.schema.BaseMemory]) \u2013 \nReturn type\nNone\nattribute memories: List[langchain.schema.BaseMemory] [Required]\uf0c1\nFor tracking all the memories that should be accessed.\nclear()[source]\uf0c1\nClear context from this session for every memory.\nReturn type\nNone\nload_memory_variables(inputs)[source]\uf0c1\nLoad all vars from sub-memories.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, str]\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this session for every memory.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty memory_variables: List[str]\uf0c1\nAll the memory variables that this instance provides.\nclass langchain.memory.ConversationBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history')[source]\uf0c1\nBases: langchain.memory.chat_memory.BaseChatMemory\nBuffer for storing conversation memory.\nParameters\nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013 \nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nmemory_key (str) \u2013 \nReturn type\nNone\nattribute ai_prefix: str = 'AI'\uf0c1\nattribute human_prefix: str = 'Human'\uf0c1\nload_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} 
+{"id": "c0aff4bf2256-2", "text": "load_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nproperty buffer: Any\uf0c1\nString buffer of memory.\nclass langchain.memory.ConversationBufferWindowMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history', k=5)[source]\uf0c1\nBases: langchain.memory.chat_memory.BaseChatMemory\nBuffer for storing conversation memory.\nParameters\nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013 \nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nmemory_key (str) \u2013 \nk (int) \u2013 \nReturn type\nNone\nattribute ai_prefix: str = 'AI'\uf0c1\nattribute human_prefix: str = 'Human'\uf0c1\nattribute k: int = 5\uf0c1\nload_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, str]\nproperty buffer: List[langchain.schema.BaseMessage]\uf0c1\nString buffer of memory.", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-3", "text": "class langchain.memory.ConversationEntityMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. 
\"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-4", "text": "\"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True), entity_summarization_prompt=PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. 
Update the summary of the provided entity in the \"Entity\" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\\n\\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\\n\\nFull conversation history (for context):\\n{history}\\n\\nEntity to summarize:\\n{entity}\\n\\nExisting summary of {entity}:\\n{summary}\\n\\nLast line of conversation:\\nHuman: {input}\\nUpdated summary:', template_format='f-string', validate_template=True), entity_cache=[], k=3, chat_history_key='history', entity_store=None)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-5", "text": "Bases: langchain.memory.chat_memory.BaseChatMemory\nEntity extractor & summarizer memory.\nExtracts named entities from the recent chat history and generates summaries.\nWith a swapable entity store, persisting entities across conversations.\nDefaults to an in-memory entity store, and can be swapped out for a Redis,\nSQLite, or other entity store.\nParameters\nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013 \nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nentity_extraction_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nentity_summarization_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nentity_cache (List[str]) \u2013 \nk (int) \u2013 \nchat_history_key (str) \u2013 \nentity_store (langchain.memory.entity.BaseEntityStore) \u2013 \nReturn 
type\nNone\nattribute ai_prefix: str = 'AI'\uf0c1\nattribute chat_history_key: str = 'history'\uf0c1\nattribute entity_cache: List[str] = []\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-6", "text": "attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. \"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! 
What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-7", "text": "line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-8", "text": "attribute entity_store: langchain.memory.entity.BaseEntityStore [Optional]\uf0c1\nattribute entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the \"Entity\" section based on the last line of your conversation with the human. 
If you are writing the summary for the first time, return a single sentence.\\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\\n\\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\\n\\nFull conversation history (for context):\\n{history}\\n\\nEntity to summarize:\\n{entity}\\n\\nExisting summary of {entity}:\\n{summary}\\n\\nLast line of conversation:\\nHuman: {input}\\nUpdated summary:', template_format='f-string', validate_template=True)\uf0c1\nattribute human_prefix: str = 'Human'\uf0c1\nattribute k: int = 3\uf0c1\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nclear()[source]\uf0c1\nClear memory contents.\nReturn type\nNone\nload_memory_variables(inputs)[source]\uf0c1\nReturns chat history and all generated entities with summaries if available,\nand updates or clears the recent entity cache.\nNew entity name can be found when calling this method, before the entity\nsummaries are generated, so the entity cache values may be empty if no entity\ndescriptions are generated yet.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-9", "text": "Parameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this conversation history to the entity store.\nGenerates a summary for each entity in the entity cache by prompting\nthe model, and saves these summaries to the entity store.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty buffer: List[langchain.schema.BaseMessage]\uf0c1\nAccess chat memory messages.", "source": 
"https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-10", "text": "class langchain.memory.ConversationKGMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, k=2, human_prefix='Human', ai_prefix='AI', kg=None, knowledge_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template=\"You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Did you hear aliens landed in Area 51?\\nAI: No, I didn't hear that. What do you know about Area 51?\\nPerson #1: It's a secret military base in Nevada.\\nAI: What do you know about Nevada?\\nLast line of conversation:\\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\\n\\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Hello.\\nAI: Hi! How are you?\\nPerson #1: I'm good. 
How are you?\\nAI: I'm good too.\\nLast line of conversation:\\nPerson #1: I'm going to the store.\\n\\nOutput: NONE\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: What", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-11", "text": "NONE\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: What do you know about Descartes?\\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\\nLast line of conversation:\\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:\", template_format='f-string', validate_template=True), entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. \"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. 
the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI:", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-12", "text": "history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. 
I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True), llm, summary_message_cls=, memory_key='history')[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-13", "text": "Bases: langchain.memory.chat_memory.BaseChatMemory\nKnowledge graph memory for storing conversation memory.\nIntegrates with external knowledge graph to store and retrieve\ninformation about knowledge triples in the conversation.\nParameters\nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013 \nk (int) \u2013 \nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nkg (langchain.graphs.networkx_graph.NetworkxEntityGraph) \u2013 \nknowledge_extraction_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nentity_extraction_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nsummary_message_cls (Type[langchain.schema.BaseMessage]) \u2013 \nmemory_key (str) \u2013 \nReturn type\nNone\nattribute ai_prefix: str = 'AI'\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-14", "text": "attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. 
You should definitely extract all names and places.\\n\\nThe conversation history is provided just in case of a coreference (e.g. \"What do you know about him\" where \"him\" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: how\\'s it going today?\\nAI: \"It\\'s going great! How about you?\"\\nPerson #1: good! busy working on Langchain. lots to do.\\nAI: \"That sounds like a lot of work! What kind of things are you doing to make Langchain better?\"\\nLast line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-15", "text": "line:\\nPerson #1: i\\'m trying to improve Langchain\\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. 
I\\'m working with Person #2.\\nOutput: Langchain, Person #2\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:', template_format='f-string', validate_template=True)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-16", "text": "attribute human_prefix: str = 'Human'\uf0c1\nattribute k: int = 2\uf0c1\nattribute kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-17", "text": "attribute knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template=\"You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Did you hear aliens landed in Area 51?\\nAI: No, I didn't hear that. What do you know about Area 51?\\nPerson #1: It's a secret military base in Nevada.\\nAI: What do you know about Nevada?\\nLast line of conversation:\\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\\n\\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: Hello.\\nAI: Hi! How are you?\\nPerson #1: I'm good. 
How are you?\\nAI: I'm good too.\\nLast line of conversation:\\nPerson #1: I'm going to the store.\\n\\nOutput: NONE\\nEND OF EXAMPLE\\n\\nEXAMPLE\\nConversation history:\\nPerson #1: What do you know about Descartes?\\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-18", "text": "Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\\nLast line of conversation:\\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\\nEND OF EXAMPLE\\n\\nConversation history (for reference only):\\n{history}\\nLast line of conversation (for extraction):\\nHuman: {input}\\n\\nOutput:\", template_format='f-string', validate_template=True)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-19", "text": "attribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nattribute summary_message_cls: Type[langchain.schema.BaseMessage] = \uf0c1\nNumber of previous utterances to include in the context.\nclear()[source]\uf0c1\nClear memory contents.\nReturn type\nNone\nget_current_entities(input_string)[source]\uf0c1\nParameters\ninput_string (str) \u2013 \nReturn type\nList[str]\nget_knowledge_triplets(input_string)[source]\uf0c1\nParameters\ninput_string (str) \u2013 \nReturn type\nList[langchain.graphs.networkx_graph.KnowledgeTriple]\nload_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, 
Any]\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this conversation to buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nclass langchain.memory.ConversationStringBufferMemory(*, human_prefix='Human', ai_prefix='AI', buffer='', output_key=None, input_key=None, memory_key='history')[source]\uf0c1\nBases: langchain.schema.BaseMemory\nBuffer for storing conversation memory.\nParameters\nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nbuffer (str) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nmemory_key (str) \u2013 \nReturn type\nNone\nattribute ai_prefix: str = 'AI'\uf0c1\nPrefix to use for AI generated responses.\nattribute buffer: str = ''\uf0c1\nattribute human_prefix: str = 'Human'\uf0c1\nattribute input_key: Optional[str] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-20", "text": "attribute input_key: Optional[str] = None\uf0c1\nattribute output_key: Optional[str] = None\uf0c1\nclear()[source]\uf0c1\nClear memory contents.\nReturn type\nNone\nload_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, str]\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this conversation to buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty memory_variables: List[str]\uf0c1\nWill always return list of memory variables.\n:meta private:", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-21", "text": "Will always return list of memory variables.\n:meta private:\nclass langchain.memory.ConversationSummaryBufferMemory(*, human_prefix='Human', ai_prefix='AI', llm, prompt=PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize 
the lines of conversation provided, adding onto the previous summary returning a new summary.\\n\\nEXAMPLE\\nCurrent summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\\n\\nNew lines of conversation:\\nHuman: Why do you think artificial intelligence is a force for good?\\nAI: Because artificial intelligence will help humans reach their full potential.\\n\\nNew summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\\nEND OF EXAMPLE\\n\\nCurrent summary:\\n{summary}\\n\\nNew lines of conversation:\\n{new_lines}\\n\\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls=, chat_memory=None, output_key=None, input_key=None, return_messages=False, max_token_limit=2000, moving_summary_buffer='', memory_key='history')[source]\uf0c1\nBases: langchain.memory.chat_memory.BaseChatMemory, langchain.memory.summary.SummarizerMixin\nBuffer with summarizer for storing conversation memory.\nParameters\nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nsummary_message_cls (Type[langchain.schema.BaseMessage]) \u2013 \nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-22", "text": "output_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013 \nmax_token_limit (int) \u2013 \nmoving_summary_buffer (str) \u2013 \nmemory_key (str) \u2013 \nReturn type\nNone\nattribute max_token_limit: int = 2000\uf0c1\nattribute memory_key: str = 'history'\uf0c1\nattribute moving_summary_buffer: str = 
''\uf0c1\nclear()[source]\uf0c1\nClear memory contents.\nReturn type\nNone\nload_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nprune()[source]\uf0c1\nPrune buffer if it exceeds max token limit\nReturn type\nNone\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this conversation to buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty buffer: List[langchain.schema.BaseMessage]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-23", "text": "Return type\nNone\nproperty buffer: List[langchain.schema.BaseMessage]\uf0c1\nclass langchain.memory.ConversationSummaryMemory(*, human_prefix='Human', ai_prefix='AI', llm, prompt=PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\\n\\nEXAMPLE\\nCurrent summary:\\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\\n\\nNew lines of conversation:\\nHuman: Why do you think artificial intelligence is a force for good?\\nAI: Because artificial intelligence will help humans reach their full potential.\\n\\nNew summary:\\nThe human asks what the AI thinks of artificial intelligence. 
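The prune() step described above amounts to: while the buffer exceeds max_token_limit, pop the oldest messages and fold them into moving_summary_buffer. In this sketch a whitespace token counter and plain concatenation stand in for the real tokenizer and LLM-backed summarizer:

```python
# Sketch of ConversationSummaryBufferMemory.prune(): drop the oldest messages
# while the buffer is over max_token_limit and fold them into a running
# summary. Whitespace token counting and string joining are illustrative
# stand-ins for the real tokenizer and LLM summarizer.
def count_tokens(messages):
    return sum(len(m.split()) for m in messages)

def prune(buffer, moving_summary, max_token_limit):
    pruned = []
    while buffer and count_tokens(buffer) > max_token_limit:
        pruned.append(buffer.pop(0))  # oldest message first
    if pruned:
        moving_summary = (moving_summary + " " + " ".join(pruned)).strip()
    return buffer, moving_summary
```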
The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\\nEND OF EXAMPLE\\n\\nCurrent summary:\\n{summary}\\n\\nNew lines of conversation:\\n{new_lines}\\n\\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls=, chat_memory=None, output_key=None, input_key=None, return_messages=False, buffer='', memory_key='history')[source]\uf0c1\nBases: langchain.memory.chat_memory.BaseChatMemory, langchain.memory.summary.SummarizerMixin\nConversation summarizer to memory.\nParameters\nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nsummary_message_cls (Type[langchain.schema.BaseMessage]) \u2013 \nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-24", "text": "input_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013 \nbuffer (str) \u2013 \nmemory_key (str) \u2013 \nReturn type\nNone\nattribute buffer: str = ''\uf0c1\nclear()[source]\uf0c1\nClear memory contents.\nReturn type\nNone\nclassmethod from_messages(llm, chat_memory, *, summarize_step=2, **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \nsummarize_step (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.memory.summary.ConversationSummaryMemory\nload_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this conversation to buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nclass 
langchain.memory.ConversationTokenBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, memory_key='history', max_token_limit=2000)[source]\uf0c1\nBases: langchain.memory.chat_memory.BaseChatMemory\nBuffer for storing conversation memory.\nParameters\nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013 \nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nmemory_key (str) \u2013 \nmax_token_limit (int) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-25", "text": "max_token_limit (int) \u2013 \nReturn type\nNone\nattribute ai_prefix: str = 'AI'\uf0c1\nattribute human_prefix: str = 'Human'\uf0c1\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nattribute max_token_limit: int = 2000\uf0c1\nattribute memory_key: str = 'history'\uf0c1\nload_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this conversation to buffer. 
Pruned.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty buffer: List[langchain.schema.BaseMessage]\uf0c1\nString buffer of memory.\nclass langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint, cosmos_database, cosmos_container, session_id, user_id, credential=None, connection_string=None, ttl=None, cosmos_client_kwargs=None)[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nChat history backed by Azure CosmosDB.\nParameters\ncosmos_endpoint (str) \u2013 \ncosmos_database (str) \u2013 \ncosmos_container (str) \u2013 \nsession_id (str) \u2013 \nuser_id (str) \u2013 \ncredential (Any) \u2013 \nconnection_string (Optional[str]) \u2013 \nttl (Optional[int]) \u2013 \ncosmos_client_kwargs (Optional[dict]) \u2013 \nprepare_cosmos()[source]\uf0c1\nPrepare the CosmosDB client.\nUse this function or the context manager to make sure your database is ready.\nReturn type\nNone\nload_messages()[source]\uf0c1\nRetrieve the messages from Cosmos\nReturn type\nNone\nadd_message(message)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-26", "text": "Retrieve the messages from Cosmos\nReturn type\nNone\nadd_message(message)[source]\uf0c1\nAdd a self-created message to the store\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nupsert_messages()[source]\uf0c1\nUpdate the cosmosdb item.\nReturn type\nNone\nclear()[source]\uf0c1\nClear session memory from this memory and cosmos.\nReturn type\nNone\nclass langchain.memory.DynamoDBChatMessageHistory(table_name, session_id, endpoint_url=None)[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nChat message history that stores history in AWS DynamoDB.\nThis class expects that a DynamoDB table with name table_name\nand a partition Key of SessionId is present.\nParameters\ntable_name (str) \u2013 name of the DynamoDB table\nsession_id (str) \u2013 arbitrary key 
that is used to store the messages\nof a single chat session.\nendpoint_url (Optional[str]) \u2013 URL of the AWS endpoint to connect to. This argument\nis optional and useful for test purposes, like using Localstack.\nIf you plan to use AWS cloud service, you normally don\u2019t have to\nworry about setting the endpoint_url.\nproperty messages: List[langchain.schema.BaseMessage]\uf0c1\nRetrieve the messages from DynamoDB\nadd_message(message)[source]\uf0c1\nAppend the message to the record in DynamoDB\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nclear()[source]\uf0c1\nClear session memory from DynamoDB\nReturn type\nNone\nclass langchain.memory.FileChatMessageHistory(file_path)[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nChat message history that stores history in a local file.\nParameters\nfile_path (str) \u2013 path of the local file to store the messages.", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-27", "text": "Parameters\nfile_path (str) \u2013 path of the local file to store the messages.\nproperty messages: List[langchain.schema.BaseMessage]\uf0c1\nRetrieve the messages from the local file\nadd_message(message)[source]\uf0c1\nAppend the message to the record in the local file\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nclear()[source]\uf0c1\nClear session memory from the local file\nReturn type\nNone\nclass langchain.memory.InMemoryEntityStore(*, store={})[source]\uf0c1\nBases: langchain.memory.entity.BaseEntityStore\nBasic in-memory entity store.\nParameters\nstore (Dict[str, Optional[str]]) \u2013 \nReturn type\nNone\nattribute store: Dict[str, Optional[str]] = {}\uf0c1\nclear()[source]\uf0c1\nDelete all entities from store.\nReturn type\nNone\ndelete(key)[source]\uf0c1\nDelete entity value from store.\nParameters\nkey (str) \u2013 \nReturn type\nNone\nexists(key)[source]\uf0c1\nCheck if entity exists in 
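A file-backed history like FileChatMessageHistory above can be sketched with the standard library. The JSON list of {type, content} dicts is an assumed storage layout for illustration; langchain serializes BaseMessage objects:

```python
import json
import os

# Sketch of a file-backed chat message history as described above: the
# messages property reads a JSON list from a local file, add_message appends
# to it, clear resets it. The dict layout is an illustrative assumption.
class FileHistorySketch:
    def __init__(self, file_path):
        self.file_path = file_path

    @property
    def messages(self):
        if not os.path.exists(self.file_path):
            return []
        with open(self.file_path) as f:
            return json.load(f)

    def add_message(self, message):
        msgs = self.messages + [message]
        with open(self.file_path, "w") as f:
            json.dump(msgs, f)

    def clear(self):
        with open(self.file_path, "w") as f:
            json.dump([], f)
```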
store.\nParameters\nkey (str) \u2013 \nReturn type\nbool\nget(key, default=None)[source]\uf0c1\nGet entity value from store.\nParameters\nkey (str) \u2013 \ndefault (Optional[str]) \u2013 \nReturn type\nOptional[str]\nset(key, value)[source]\uf0c1\nSet entity value in store.\nParameters\nkey (str) \u2013 \nvalue (Optional[str]) \u2013 \nReturn type\nNone\nclass langchain.memory.MomentoChatMessageHistory(session_id, cache_client, cache_name, *, key_prefix='message_store:', ttl=None, ensure_cache_exists=True)[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nChat message history cache that uses Momento as a backend.\nSee https://gomomento.com/", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-28", "text": "See https://gomomento.com/\nParameters\nsession_id (str) \u2013 \ncache_client (momento.CacheClient) \u2013 \ncache_name (str) \u2013 \nkey_prefix (str) \u2013 \nttl (Optional[timedelta]) \u2013 \nensure_cache_exists (bool) \u2013 \nclassmethod from_client_params(session_id, cache_name, ttl, *, configuration=None, auth_token=None, **kwargs)[source]\uf0c1\nConstruct cache from CacheClient parameters.\nParameters\nsession_id (str) \u2013 \ncache_name (str) \u2013 \nttl (timedelta) \u2013 \nconfiguration (Optional[momento.config.Configuration]) \u2013 \nauth_token (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nMomentoChatMessageHistory\nproperty messages: list[langchain.schema.BaseMessage]\uf0c1\nRetrieve the messages from Momento.\nRaises\nSdkException \u2013 Momento service or network error\nException \u2013 Unexpected response\nReturns\nList of cached messages\nReturn type\nlist[BaseMessage]\nadd_message(message)[source]\uf0c1\nStore a message in the cache.\nParameters\nmessage (BaseMessage) \u2013 The message object to store.\nRaises\nSdkException \u2013 Momento service or network error.\nException \u2013 Unexpected response.\nReturn type\nNone\nclear()[source]\uf0c1\nRemove the 
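The entity-store interface above (get/set/delete/exists/clear) amounts to a thin wrapper over a dict in the in-memory case; a minimal sketch:

```python
# Dict-backed sketch of the BaseEntityStore interface shown above,
# mirroring what InMemoryEntityStore's documented methods do.
class EntityStoreSketch:
    def __init__(self):
        self.store = {}

    def get(self, key, default=None):
        return self.store.get(key, default)

    def set(self, key, value):
        self.store[key] = value

    def delete(self, key):
        del self.store[key]

    def exists(self, key):
        return key in self.store

    def clear(self):
        self.store = {}
```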
session\u2019s messages from the cache.\nRaises\nSdkException \u2013 Momento service or network error.\nException \u2013 Unexpected response.\nReturn type\nNone\nclass langchain.memory.MongoDBChatMessageHistory(connection_string, session_id, database_name='chat_history', collection_name='message_store')[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nChat message history that stores history in MongoDB.\nParameters\nconnection_string (str) \u2013 connection string to connect to MongoDB\nsession_id (str) \u2013 arbitrary key that is used to store the messages", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-29", "text": "session_id (str) \u2013 arbitrary key that is used to store the messages\nof a single chat session.\ndatabase_name (str) \u2013 name of the database to use\ncollection_name (str) \u2013 name of the collection to use\nproperty messages: List[langchain.schema.BaseMessage]\uf0c1\nRetrieve the messages from MongoDB\nadd_message(message)[source]\uf0c1\nAppend the message to the record in MongoDB\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nclear()[source]\uf0c1\nClear session memory from MongoDB\nReturn type\nNone\nclass langchain.memory.MotorheadMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, url='https://api.getmetal.io/v1/motorhead', session_id, context=None, api_key=None, client_id=None, timeout=3000, memory_key='history')[source]\uf0c1\nBases: langchain.memory.chat_memory.BaseChatMemory\nParameters\nchat_memory (langchain.schema.BaseChatMessageHistory) \u2013 \noutput_key (Optional[str]) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_messages (bool) \u2013 \nurl (str) \u2013 \nsession_id (str) \u2013 \ncontext (Optional[str]) \u2013 \napi_key (Optional[str]) \u2013 \nclient_id (Optional[str]) \u2013 \ntimeout (int) \u2013 \nmemory_key (str) \u2013 \nReturn type\nNone\nattribute api_key: Optional[str] = 
None\uf0c1\nattribute client_id: Optional[str] = None\uf0c1\nattribute context: Optional[str] = None\uf0c1\nattribute session_id: str [Required]\uf0c1\nattribute url: str = 'https://api.getmetal.io/v1/motorhead'\uf0c1\ndelete_session()[source]\uf0c1\nDelete a session\nReturn type\nNone\nasync init()[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-30", "text": "Delete a session\nReturn type\nNone\nasync init()[source]\uf0c1\nReturn type\nNone\nload_memory_variables(values)[source]\uf0c1\nReturn key-value pairs given the text input to the chain.\nIf None, return all memories\nParameters\nvalues (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this conversation to buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty memory_variables: List[str]\uf0c1\nInput keys this memory class will load dynamically.\nclass langchain.memory.PostgresChatMessageHistory(session_id, connection_string='postgresql://postgres:mypassword@localhost/chat_history', table_name='message_store')[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nChat message history stored in a Postgres database.\nParameters\nsession_id (str) \u2013 \nconnection_string (str) \u2013 \ntable_name (str) \u2013 \nproperty messages: List[langchain.schema.BaseMessage]\uf0c1\nRetrieve the messages from PostgreSQL\nadd_message(message)[source]\uf0c1\nAppend the message to the record in PostgreSQL\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nclear()[source]\uf0c1\nClear session memory from PostgreSQL\nReturn type\nNone\nclass langchain.memory.ReadOnlySharedMemory(*, memory)[source]\uf0c1\nBases: langchain.schema.BaseMemory\nA memory wrapper that is read-only and cannot be changed.\nParameters\nmemory (langchain.schema.BaseMemory) \u2013 \nReturn type\nNone\nattribute memory: 
langchain.schema.BaseMemory [Required]\uf0c1\nclear()[source]\uf0c1\nNothing to clear, got a memory like a vault.\nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-31", "text": "Nothing to clear, got a memory like a vault.\nReturn type\nNone\nload_memory_variables(inputs)[source]\uf0c1\nLoad memory variables from memory.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, str]\nsave_context(inputs, outputs)[source]\uf0c1\nNothing should be saved or changed\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty memory_variables: List[str]\uf0c1\nReturn memory variables.\nclass langchain.memory.RedisChatMessageHistory(session_id, url='redis://localhost:6379/0', key_prefix='message_store:', ttl=None)[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nChat message history stored in a Redis database.\nParameters\nsession_id (str) \u2013 \nurl (str) \u2013 \nkey_prefix (str) \u2013 \nttl (Optional[int]) \u2013 \nproperty key: str\uf0c1\nConstruct the record key to use\nproperty messages: List[langchain.schema.BaseMessage]\uf0c1\nRetrieve the messages from Redis\nadd_message(message)[source]\uf0c1\nAppend the message to the record in Redis\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nclear()[source]\uf0c1\nClear session memory from Redis\nReturn type\nNone\nclass langchain.memory.RedisEntityStore(session_id='default', url='redis://localhost:6379/0', key_prefix='memory_store', ttl=86400, recall_ttl=259200, *args, redis_client=None)[source]\uf0c1\nBases: langchain.memory.entity.BaseEntityStore\nRedis-backed Entity store. 
Entities get a TTL of 1 day by default, and\nthat TTL is extended by 3 days every time the entity is read back.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-32", "text": "that TTL is extended by 3 days every time the entity is read back.\nParameters\nsession_id (str) \u2013 \nurl (str) \u2013 \nkey_prefix (str) \u2013 \nttl (Optional[int]) \u2013 \nrecall_ttl (Optional[int]) \u2013 \nargs (Any) \u2013 \nredis_client (Any) \u2013 \nReturn type\nNone\nattribute key_prefix: str = 'memory_store'\uf0c1\nattribute recall_ttl: Optional[int] = 259200\uf0c1\nattribute redis_client: Any = None\uf0c1\nattribute session_id: str = 'default'\uf0c1\nattribute ttl: Optional[int] = 86400\uf0c1\nclear()[source]\uf0c1\nDelete all entities from store.\nReturn type\nNone\ndelete(key)[source]\uf0c1\nDelete entity value from store.\nParameters\nkey (str) \u2013 \nReturn type\nNone\nexists(key)[source]\uf0c1\nCheck if entity exists in store.\nParameters\nkey (str) \u2013 \nReturn type\nbool\nget(key, default=None)[source]\uf0c1\nGet entity value from store.\nParameters\nkey (str) \u2013 \ndefault (Optional[str]) \u2013 \nReturn type\nOptional[str]\nset(key, value)[source]\uf0c1\nSet entity value in store.\nParameters\nkey (str) \u2013 \nvalue (Optional[str]) \u2013 \nReturn type\nNone\nproperty full_key_prefix: str\uf0c1\nclass langchain.memory.SQLChatMessageHistory(session_id, connection_string, table_name='message_store')[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nChat message history stored in an SQL database.\nParameters\nsession_id (str) \u2013 \nconnection_string (str) \u2013 \ntable_name (str) \u2013 \nproperty messages: List[langchain.schema.BaseMessage]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-33", "text": "property messages: List[langchain.schema.BaseMessage]\uf0c1\nRetrieve all messages from 
db\nadd_message(message)[source]\uf0c1\nAppend the message to the record in db\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nclear()[source]\uf0c1\nClear session memory from db\nReturn type\nNone\nclass langchain.memory.SQLiteEntityStore(session_id='default', db_file='entities.db', table_name='memory_store', *args)[source]\uf0c1\nBases: langchain.memory.entity.BaseEntityStore\nSQLite-backed Entity store\nParameters\nsession_id (str) \u2013 \ndb_file (str) \u2013 \ntable_name (str) \u2013 \nargs (Any) \u2013 \nReturn type\nNone\nattribute session_id: str = 'default'\uf0c1\nattribute table_name: str = 'memory_store'\uf0c1\nclear()[source]\uf0c1\nDelete all entities from store.\nReturn type\nNone\ndelete(key)[source]\uf0c1\nDelete entity value from store.\nParameters\nkey (str) \u2013 \nReturn type\nNone\nexists(key)[source]\uf0c1\nCheck if entity exists in store.\nParameters\nkey (str) \u2013 \nReturn type\nbool\nget(key, default=None)[source]\uf0c1\nGet entity value from store.\nParameters\nkey (str) \u2013 \ndefault (Optional[str]) \u2013 \nReturn type\nOptional[str]\nset(key, value)[source]\uf0c1\nSet entity value in store.\nParameters\nkey (str) \u2013 \nvalue (Optional[str]) \u2013 \nReturn type\nNone\nproperty full_table_name: str\uf0c1\nclass langchain.memory.SimpleMemory(*, memories={})[source]\uf0c1\nBases: langchain.schema.BaseMemory\nSimple memory for storing context or other bits of information that shouldn\u2019t\never change between prompts.", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-34", "text": "Simple memory for storing context or other bits of information that shouldn\u2019t\never change between prompts.\nParameters\nmemories (Dict[str, Any]) \u2013 \nReturn type\nNone\nattribute memories: Dict[str, Any] = {}\uf0c1\nclear()[source]\uf0c1\nNothing to clear, got a memory like a vault.\nReturn type\nNone\nload_memory_variables(inputs)[source]\uf0c1\nReturn 
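The RedisEntityStore TTL rule described above (entities expire ttl seconds after a write, and every successful read extends the key by recall_ttl seconds) can be sketched with an injectable clock; the (value, expiry) tuple store is an illustrative stand-in for Redis key expiry:

```python
import time

# Sketch of the RedisEntityStore TTL semantics described above: a key
# expires `ttl` seconds after it is written, and each read refreshes it for
# `recall_ttl` seconds. The tuple store and injectable clock are stand-ins
# for Redis expiry commands.
class TTLStoreSketch:
    def __init__(self, ttl=86400, recall_ttl=259200, clock=time.time):
        self.ttl = ttl
        self.recall_ttl = recall_ttl
        self.clock = clock
        self.store = {}

    def set(self, key, value):
        self.store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self.store.get(key)
        if entry is None or entry[1] < self.clock():
            return default  # missing or expired
        value, _ = entry
        # Refresh on read: extend the entity's lifetime by recall_ttl.
        self.store[key] = (value, self.clock() + self.recall_ttl)
        return value
```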
key-value pairs given the text input to the chain.\nIf None, return all memories\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, str]\nsave_context(inputs, outputs)[source]\uf0c1\nNothing should be saved or changed, my memory is set in stone.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty memory_variables: List[str]\uf0c1\nInput keys this memory class will load dynamically.\nclass langchain.memory.VectorStoreRetrieverMemory(*, retriever, memory_key='history', input_key=None, return_docs=False)[source]\uf0c1\nBases: langchain.schema.BaseMemory\nClass for a VectorStore-backed memory object.\nParameters\nretriever (langchain.vectorstores.base.VectorStoreRetriever) \u2013 \nmemory_key (str) \u2013 \ninput_key (Optional[str]) \u2013 \nreturn_docs (bool) \u2013 \nReturn type\nNone\nattribute input_key: Optional[str] = None\uf0c1\nKey name to index the inputs to load_memory_variables.\nattribute memory_key: str = 'history'\uf0c1\nKey name to locate the memories in the result of load_memory_variables.\nattribute retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]\uf0c1\nVectorStoreRetriever object to connect to.\nattribute return_docs: bool = False\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-35", "text": "attribute return_docs: bool = False\uf0c1\nWhether or not to return the result of querying the database directly.\nclear()[source]\uf0c1\nNothing to clear.\nReturn type\nNone\nload_memory_variables(inputs)[source]\uf0c1\nReturn history buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Union[List[langchain.schema.Document], str]]\nsave_context(inputs, outputs)[source]\uf0c1\nSave context from this conversation to buffer.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nproperty memory_variables: List[str]\uf0c1\nThe list of keys emitted 
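SimpleMemory as documented above is a fixed dict of memories that load_memory_variables always returns unchanged, with save_context and clear as deliberate no-ops; a minimal sketch:

```python
# Sketch of the SimpleMemory contract documented above: fixed memories,
# always returned as-is; saving and clearing intentionally do nothing.
class SimpleMemorySketch:
    def __init__(self, memories=None):
        self.memories = dict(memories or {})

    @property
    def memory_variables(self):
        return list(self.memories)

    def load_memory_variables(self, inputs):
        return self.memories

    def save_context(self, inputs, outputs):
        pass  # nothing should be saved or changed

    def clear(self):
        pass  # nothing to clear
```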
from the load_memory_variables method.\nclass langchain.memory.ZepChatMessageHistory(session_id, url='http://localhost:8000')[source]\uf0c1\nBases: langchain.schema.BaseChatMessageHistory\nA ChatMessageHistory implementation that uses Zep as a backend.\nRecommended usage:\n# Set up Zep Chat History\nzep_chat_history = ZepChatMessageHistory(\n session_id=session_id,\n url=ZEP_API_URL,\n)\n# Use a standard ConversationBufferMemory to encapsulate the Zep chat history\nmemory = ConversationBufferMemory(\n memory_key=\"chat_history\", chat_memory=zep_chat_history\n)\nZep provides long-term conversation storage for LLM apps. The server stores,\nsummarizes, embeds, indexes, and enriches conversational AI chat\nhistories, and exposes them via simple, low-latency APIs.\nFor server installation instructions and more, see: https://getzep.github.io/\nThis class is a thin wrapper around the zep-python package. Additional\nZep functionality is exposed via the zep_summary and zep_messages\nproperties.\nFor more information on the zep-python package, see:", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "c0aff4bf2256-36", "text": "properties.\nFor more information on the zep-python package, see:\nhttps://github.com/getzep/zep-python\nParameters\nsession_id (str) \u2013 \nurl (str) \u2013 \nReturn type\nNone\nproperty messages: List[langchain.schema.BaseMessage]\uf0c1\nRetrieve messages from Zep memory\nproperty zep_messages: List[Message]\uf0c1\nRetrieve the messages from Zep memory\nproperty zep_summary: Optional[str]\uf0c1\nRetrieve summary from Zep memory\nadd_message(message)[source]\uf0c1\nAppend the message to the Zep memory history\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nsearch(query, metadata=None, limit=None)[source]\uf0c1\nSearch Zep memory for messages matching the query\nParameters\nquery (str) \u2013 \nmetadata (Optional[Dict]) \u2013 \nlimit (Optional[int]) \u2013 \nReturn 
type\nList[MemorySearchResult]\nclear()[source]\uf0c1\nClear session memory from Zep. Note that Zep is long-term storage for memory\nand this is not advised unless you have specific data retention requirements.\nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/memory.html"} +{"id": "025b560d46c1-0", "text": "Output Parsers\uf0c1\nclass langchain.output_parsers.BooleanOutputParser(*, true_val='YES', false_val='NO')[source]\uf0c1\nBases: langchain.schema.BaseOutputParser[bool]\nParameters\ntrue_val (str) \u2013 \nfalse_val (str) \u2013 \nReturn type\nNone\nattribute false_val: str = 'NO'\uf0c1\nattribute true_val: str = 'YES'\uf0c1\nparse(text)[source]\uf0c1\nParse the output of an LLM call to a boolean.\nParameters\ntext (str) \u2013 output of language model\nReturns\nboolean\nReturn type\nbool\nclass langchain.output_parsers.CombiningOutputParser(*, parsers)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser\nClass to combine multiple output parsers into one.\nParameters\nparsers (List[langchain.schema.BaseOutputParser]) \u2013 \nReturn type\nNone\nattribute parsers: List[langchain.schema.BaseOutputParser] [Required]\uf0c1\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(text)[source]\uf0c1\nParse the output of an LLM call.\nParameters\ntext (str) \u2013 \nReturn type\nDict[str, Any]\nclass langchain.output_parsers.CommaSeparatedListOutputParser[source]\uf0c1\nBases: langchain.output_parsers.list.ListOutputParser\nParse out comma separated lists.\nReturn type\nNone\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(text)[source]\uf0c1\nParse the output of an LLM call.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-1", "text": "Parameters\ntext (str) \u2013 \nReturn 
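The parse() contracts of BooleanOutputParser and CommaSeparatedListOutputParser above can be sketched as plain functions: map the true_val/false_val tokens to a bool, and split a completion on commas. The case-insensitive exact match used for the boolean case is an illustrative assumption:

```python
# Sketches of the two parse() contracts documented above. The exact-match
# cleanup for the boolean case is an illustrative assumption.
def parse_boolean(text, true_val="YES", false_val="NO"):
    cleaned = text.strip().upper()
    if cleaned == true_val.upper():
        return True
    if cleaned == false_val.upper():
        return False
    raise ValueError(f"expected {true_val} or {false_val}, got {text!r}")

def parse_comma_separated_list(text):
    # Split the completion on commas and strip surrounding whitespace.
    return [part.strip() for part in text.strip().split(",")]
```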
type\nList[str]\nclass langchain.output_parsers.DatetimeOutputParser(*, format='%Y-%m-%dT%H:%M:%S.%fZ')[source]\uf0c1\nBases: langchain.schema.BaseOutputParser[datetime.datetime]\nParameters\nformat (str) \u2013 \nReturn type\nNone\nattribute format: str = '%Y-%m-%dT%H:%M:%S.%fZ'\uf0c1\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(response)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nresponse (str) \u2013 \nReturns\nstructured output\nReturn type\ndatetime.datetime\nclass langchain.output_parsers.EnumOutputParser(*, enum)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser\nParameters\nenum (Type[enum.Enum]) \u2013 \nReturn type\nNone\nattribute enum: Type[enum.Enum] [Required]\uf0c1\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(response)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\nresponse (str) \u2013 \nReturns\nstructured output\nReturn type\nAny\nclass langchain.output_parsers.GuardrailsOutputParser(*, guard=None, api=None, args=None, kwargs=None)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser\nParameters\nguard (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-2", "text": "Bases: langchain.schema.BaseOutputParser\nParameters\nguard (Any) \u2013 \napi (Optional[Callable]) \u2013 \nargs (Any) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nattribute api: Optional[Callable] = None\uf0c1\nattribute args: Any = None\uf0c1\nattribute guard: Any = None\uf0c1\nattribute kwargs: Any = None\uf0c1\nclassmethod 
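The DatetimeOutputParser contract above, parsing a completion with the documented default format '%Y-%m-%dT%H:%M:%S.%fZ', reduces to a strptime call; a minimal sketch:

```python
from datetime import datetime

# Sketch of DatetimeOutputParser.parse(): parse the completion with the
# documented default format string.
def parse_datetime(response, fmt="%Y-%m-%dT%H:%M:%S.%fZ"):
    return datetime.strptime(response.strip(), fmt)
```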
from_rail(rail_file, num_reasks=1, api=None, *args, **kwargs)[source]\uf0c1\nParameters\nrail_file (str) \u2013 \nnum_reasks (int) \u2013 \napi (Optional[Callable]) \u2013 \nargs (Any) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.output_parsers.rail_parser.GuardrailsOutputParser\nclassmethod from_rail_string(rail_str, num_reasks=1, api=None, *args, **kwargs)[source]\uf0c1\nParameters\nrail_str (str) \u2013 \nnum_reasks (int) \u2013 \napi (Optional[Callable]) \u2013 \nargs (Any) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.output_parsers.rail_parser.GuardrailsOutputParser\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(text)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext (str) \u2013 output of language model\nReturns\nstructured output\nReturn type\nDict\nclass langchain.output_parsers.ListOutputParser[source]\uf0c1\nBases: langchain.schema.BaseOutputParser\nClass to parse the output of an LLM call to a list.\nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-3", "text": "Class to parse the output of an LLM call to a list.\nReturn type\nNone\nabstract parse(text)[source]\uf0c1\nParse the output of an LLM call.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]\nclass langchain.output_parsers.OutputFixingParser(*, parser, retry_chain)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]\nWraps a parser and tries to fix parsing errors.\nParameters\nparser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) \u2013 \nretry_chain (langchain.chains.llm.LLMChain) \u2013 \nReturn type\nNone\nattribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]\uf0c1\nattribute retry_chain: 
langchain.chains.llm.LLMChain [Required]\uf0c1\nclassmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\\n--------------\\n{instructions}\\n--------------\\nCompletion:\\n--------------\\n{completion}\\n--------------\\n\\nAbove, the Completion did not satisfy the constraints given in the Instructions.\\nError:\\n--------------\\n{error}\\n--------------\\n\\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True))[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nparser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nReturn type\nlangchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T]\nget_format_instructions()[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-4", "text": "get_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(completion)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\ncompletion (str) \u2013 \nReturns\nstructured output\nReturn type\nlangchain.output_parsers.fix.T\nclass langchain.output_parsers.PydanticOutputParser(*, pydantic_object)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser[langchain.output_parsers.pydantic.T]\nParameters\npydantic_object (Type[langchain.output_parsers.pydantic.T]) \u2013 \nReturn type\nNone\nattribute pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]\uf0c1\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM 
output should be formatted.\nReturn type\nstr\nparse(text)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext (str) \u2013 output of language model\nReturns\nstructured output\nReturn type\nlangchain.output_parsers.pydantic.T\nclass langchain.output_parsers.RegexDictParser(*, regex_pattern=\"{}:\\\\s?([^.'\\\\n']*)\\\\.?\", output_key_to_format, no_update_value=None)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser\nClass to parse the output into a dictionary.\nParameters\nregex_pattern (str) \u2013 \noutput_key_to_format (Dict[str, str]) \u2013 \nno_update_value (Optional[str]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-5", "text": "no_update_value (Optional[str]) \u2013 \nReturn type\nNone\nattribute no_update_value: Optional[str] = None\uf0c1\nattribute output_key_to_format: Dict[str, str] [Required]\uf0c1\nattribute regex_pattern: str = \"{}:\\\\s?([^.'\\\\n']*)\\\\.?\"\uf0c1\nparse(text)[source]\uf0c1\nParse the output of an LLM call.\nParameters\ntext (str) \u2013 \nReturn type\nDict[str, str]\nclass langchain.output_parsers.RegexParser(*, regex, output_keys, default_output_key=None)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser\nClass to parse the output into a dictionary.\nParameters\nregex (str) \u2013 \noutput_keys (List[str]) \u2013 \ndefault_output_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute default_output_key: Optional[str] = None\uf0c1\nattribute output_keys: List[str] [Required]\uf0c1\nattribute regex: str [Required]\uf0c1\nparse(text)[source]\uf0c1\nParse the output of an LLM call.\nParameters\ntext (str) \u2013 \nReturn type\nDict[str, str]\nclass langchain.output_parsers.ResponseSchema(*, name, description, type='string')[source]\uf0c1\nBases: pydantic.main.BaseModel\nParameters\nname (str) \u2013 
\ndescription (str) \u2013 \ntype (str) \u2013 \nReturn type\nNone\nattribute description: str [Required]\uf0c1\nattribute name: str [Required]\uf0c1\nattribute type: str = 'string'\uf0c1\nclass langchain.output_parsers.RetryOutputParser(*, parser, retry_chain)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]\nWraps a parser and tries to fix parsing errors.", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-6", "text": "Wraps a parser and tries to fix parsing errors.\nDoes this by passing the original prompt and the completion to another\nLLM, and telling it the completion did not satisfy criteria in the prompt.\nParameters\nparser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) \u2013 \nretry_chain (langchain.chains.llm.LLMChain) \u2013 \nReturn type\nNone\nattribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]\uf0c1\nattribute retry_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nclassmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\\n{prompt}\\nCompletion:\\n{completion}\\n\\nAbove, the Completion did not satisfy the constraints given in the Prompt.\\nPlease try again:', template_format='f-string', validate_template=True))[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nparser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nReturn type\nlangchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T]\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(completion)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model 
)\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\ncompletion (str) \u2013 \nReturns\nstructured output\nReturn type\nlangchain.output_parsers.retry.T\nparse_with_prompt(completion, prompt_value)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-7", "text": "parse_with_prompt(completion, prompt_value)[source]\uf0c1\nOptional method to parse the output of an LLM call with a prompt.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion (str) \u2013 output of language model\nprompt \u2013 prompt value\nprompt_value (langchain.schema.PromptValue) \u2013 \nReturns\nstructured output\nReturn type\nlangchain.output_parsers.retry.T\nclass langchain.output_parsers.RetryWithErrorOutputParser(*, parser, retry_chain)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]\nWraps a parser and tries to fix parsing errors.\nDoes this by passing the original prompt, the completion, AND the error\nthat was raised to another language model and telling it that the completion\ndid not work, and raised the given error. 
Differs from RetryOutputParser\nin that this implementation provides the error that was raised back to the\nLLM, which in theory should give it more information on how to fix it.\nParameters\nparser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) \u2013 \nretry_chain (langchain.chains.llm.LLMChain) \u2013 \nReturn type\nNone\nattribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]\uf0c1\nattribute retry_chain: langchain.chains.llm.LLMChain [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-8", "text": "attribute retry_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nclassmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\\n{prompt}\\nCompletion:\\n{completion}\\n\\nAbove, the Completion did not satisfy the constraints given in the Prompt.\\nDetails: {error}\\nPlease try again:', template_format='f-string', validate_template=True))[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nparser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nReturn type\nlangchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T]\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(completion)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext \u2013 output of language model\ncompletion (str) \u2013 \nReturns\nstructured output\nReturn type\nlangchain.output_parsers.retry.T\nparse_with_prompt(completion, prompt_value)[source]\uf0c1\nOptional method to parse the output of an LLM call with a 
prompt.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion (str) \u2013 output of language model\nprompt \u2013 prompt value\nprompt_value (langchain.schema.PromptValue) \u2013 \nReturns\nstructured output\nReturn type\nlangchain.output_parsers.retry.T", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "025b560d46c1-9", "text": "Returns\nstructured output\nReturn type\nlangchain.output_parsers.retry.T\nclass langchain.output_parsers.StructuredOutputParser(*, response_schemas)[source]\uf0c1\nBases: langchain.schema.BaseOutputParser\nParameters\nresponse_schemas (List[langchain.output_parsers.structured.ResponseSchema]) \u2013 \nReturn type\nNone\nattribute response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]\uf0c1\nclassmethod from_response_schemas(response_schemas)[source]\uf0c1\nParameters\nresponse_schemas (List[langchain.output_parsers.structured.ResponseSchema]) \u2013 \nReturn type\nlangchain.output_parsers.structured.StructuredOutputParser\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\nparse(text)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext (str) \u2013 output of language model\nReturns\nstructured output\nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/modules/output_parsers.html"} +{"id": "1ace37f9e39e-0", "text": "Tools\uf0c1\nCore toolkit implementations.\nclass langchain.tools.AIPluginTool(*, name, description, args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, plugin, api_spec)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 
\ndescription (str) \u2013 \nargs_schema (Type[langchain.tools.plugin.AIPluginToolSchema]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nplugin (langchain.tools.plugin.AIPlugin) \u2013 \napi_spec (str) \u2013 \nReturn type\nNone\nattribute api_spec: str [Required]\uf0c1\nattribute args_schema: Type[AIPluginToolSchema] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute plugin: AIPlugin [Required]\uf0c1\nclassmethod from_plugin_url(url)[source]\uf0c1\nParameters\nurl (str) \u2013 \nReturn type\nlangchain.tools.plugin.AIPluginTool\nclass langchain.tools.APIOperation(*, operation_id, description=None, base_url, path, method, properties, request_body=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nA model for a single API operation.\nParameters\noperation_id (str) \u2013 \ndescription (Optional[str]) \u2013 \nbase_url (str) \u2013 \npath (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-1", "text": "base_url (str) \u2013 \npath (str) \u2013 \nmethod (langchain.utilities.openapi.HTTPVerb) \u2013 \nproperties (Sequence[langchain.tools.openapi.utils.api_models.APIProperty]) \u2013 \nrequest_body (Optional[langchain.tools.openapi.utils.api_models.APIRequestBody]) \u2013 \nReturn type\nNone\nattribute base_url: str [Required]\uf0c1\nThe base URL of the operation.\nattribute description: Optional[str] = None\uf0c1\nThe description of the operation.\nattribute method: langchain.utilities.openapi.HTTPVerb [Required]\uf0c1\nThe HTTP method of the operation.\nattribute operation_id: str [Required]\uf0c1\nThe unique identifier of 
the operation.\nattribute path: str [Required]\uf0c1\nThe path of the operation.\nattribute properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]\uf0c1\nattribute request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = None\uf0c1\nThe request body of the operation.\nclassmethod from_openapi_spec(spec, path, method)[source]\uf0c1\nCreate an APIOperation from an OpenAPI spec.\nParameters\nspec (langchain.utilities.openapi.OpenAPISpec) \u2013 \npath (str) \u2013 \nmethod (str) \u2013 \nReturn type\nlangchain.tools.openapi.utils.api_models.APIOperation\nclassmethod from_openapi_url(spec_url, path, method)[source]\uf0c1\nCreate an APIOperation from an OpenAPI URL.\nParameters\nspec_url (str) \u2013 \npath (str) \u2013 \nmethod (str) \u2013 \nReturn type\nlangchain.tools.openapi.utils.api_models.APIOperation\nto_typescript()[source]\uf0c1\nGet typescript string representation of the operation.\nReturn type\nstr\nstatic ts_type_from_python(type_)[source]\uf0c1\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-2", "text": "Return type\nstr\nstatic ts_type_from_python(type_)[source]\uf0c1\nParameters\ntype_ (Union[str, Type, tuple, None, enum.Enum]) \u2013 \nReturn type\nstr\nproperty body_params: List[str]\uf0c1\nproperty path_params: List[str]\uf0c1\nproperty query_params: List[str]\uf0c1\nclass langchain.tools.ArxivQueryRun(*, name='arxiv', description='A wrapper around Arxiv.org Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on arxiv.org. 
Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to search using the Arxiv API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.arxiv.ArxivAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.arxiv.ArxivAPIWrapper [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-3", "text": "attribute api_wrapper: langchain.utilities.arxiv.ArxivAPIWrapper [Optional]\uf0c1\nclass langchain.tools.AzureCogsFormRecognizerTool(*, name='azure_cognitive_services_form_recognizer', description='A wrapper around Azure Cognitive Services Form Recognizer. Useful for when you need to extract text, tables, and key-value pairs from documents. 
Input should be a url to a document.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_endpoint='', doc_analysis_client=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that queries the Azure Cognitive Services Form Recognizer API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nazure_cogs_key (str) \u2013 \nazure_cogs_endpoint (str) \u2013 \ndoc_analysis_client (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-4", "text": "doc_analysis_client (Any) \u2013 \nReturn type\nNone\nclass langchain.tools.AzureCogsImageAnalysisTool(*, name='azure_cognitive_services_image_analysis', description='A wrapper around Azure Cognitive Services Image Analysis. Useful for when you need to analyze images. 
Input should be a url to an image.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_endpoint='', vision_service=None, analysis_options=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that queries the Azure Cognitive Services Image Analysis API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nazure_cogs_key (str) \u2013 \nazure_cogs_endpoint (str) \u2013 \nvision_service (Any) \u2013 \nanalysis_options (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-5", "text": "analysis_options (Any) \u2013 \nReturn type\nNone\nclass langchain.tools.AzureCogsSpeech2TextTool(*, name='azure_cognitive_services_speech2text', description='A wrapper around Azure Cognitive Services Speech2Text. Useful for when you need to transcribe audio to text. 
Input should be a url to an audio file.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_region='', speech_language='en-US', speech_config=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that queries the Azure Cognitive Services Speech2Text API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nazure_cogs_key (str) \u2013 \nazure_cogs_region (str) \u2013 \nspeech_language (str) \u2013 \nspeech_config (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-6", "text": "speech_config (Any) \u2013 \nReturn type\nNone\nclass langchain.tools.AzureCogsText2SpeechTool(*, name='azure_cognitive_services_text2speech', description='A wrapper around Azure Cognitive Services Text2Speech. Useful for when you need to convert text to speech. 
', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, azure_cogs_key='', azure_cogs_region='', speech_language='en-US', speech_config=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that queries the Azure Cognitive Services Text2Speech API.\nIn order to set this up, follow instructions at:\nhttps://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nazure_cogs_key (str) \u2013 \nazure_cogs_region (str) \u2013 \nspeech_language (str) \u2013 \nspeech_config (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-7", "text": "speech_config (Any) \u2013 \nReturn type\nNone\nclass langchain.tools.BaseGraphQLTool(*, name='query_graphql', description=\"\u00a0\u00a0\u00a0 Input to this tool is a detailed and correct GraphQL query, output is a result from the API.\\n\u00a0\u00a0\u00a0 If the query is not correct, an error message will be returned.\\n\u00a0\u00a0\u00a0 If an error is returned with 'Bad request' in it, rewrite the query and try again.\\n\u00a0\u00a0\u00a0 If an error is returned with 'Unauthorized' in it, do not try again, but tell the user to change their authentication.\\n\\n\u00a0\u00a0\u00a0 Example Input: query {{ allUsers {{ id, name, email }} }}\u00a0\u00a0\u00a0 \", args_schema=None, return_direct=False, verbose=False, 
callbacks=None, callback_manager=None, handle_tool_error=False, graphql_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nBase tool for querying a GraphQL API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ngraphql_wrapper (langchain.utilities.graphql.GraphQLAPIWrapper) \u2013 \nReturn type\nNone\nattribute graphql_wrapper: langchain.utilities.graphql.GraphQLAPIWrapper [Required]\uf0c1\nclass langchain.tools.BaseRequestsTool(*, requests_wrapper)[source]\uf0c1\nBases: pydantic.main.BaseModel\nBase class for requests tools.\nParameters\nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-8", "text": "Parameters\nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nReturn type\nNone\nattribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\uf0c1\nclass langchain.tools.BaseSQLDatabaseTool(*, db)[source]\uf0c1\nBases: pydantic.main.BaseModel\nBase tool for interacting with a SQL database.\nParameters\ndb (langchain.sql_database.SQLDatabase) \u2013 \nReturn type\nNone\nattribute db: langchain.sql_database.SQLDatabase [Required]\uf0c1\nclass langchain.tools.BaseSparkSQLTool(*, db)[source]\uf0c1\nBases: pydantic.main.BaseModel\nBase tool for interacting with Spark SQL.\nParameters\ndb (langchain.utilities.spark_sql.SparkSQL) \u2013 \nReturn type\nNone\nattribute db: langchain.utilities.spark_sql.SparkSQL [Required]\uf0c1\nclass 
langchain.tools.BaseTool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]\uf0c1\nBases: abc.ABC, pydantic.main.BaseModel\nInterface LangChain tools must implement.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nReturn type\nNone\nattribute args_schema: Optional[Type[pydantic.main.BaseModel]] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-9", "text": "attribute args_schema: Optional[Type[pydantic.main.BaseModel]] = None\uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None\uf0c1\nDeprecated. 
Please use callbacks instead.\nattribute callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None\uf0c1\nCallbacks to be called during tool execution.\nattribute description: str [Required]\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False\uf0c1\nHandle the content of the ToolException thrown.\nattribute name: str [Required]\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nattribute return_direct: bool = False\uf0c1\nWhether to return the tool\u2019s output directly. Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nattribute verbose: bool = False\uf0c1\nWhether to log the tool\u2019s progress.\nasync arun(tool_input, verbose=None, start_color='green', color='green', callbacks=None, **kwargs)[source]\uf0c1\nRun the tool asynchronously.\nParameters\ntool_input (Union[str, Dict]) \u2013 \nverbose (Optional[bool]) \u2013 \nstart_color (Optional[str]) \u2013 \ncolor (Optional[str]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-10", "text": "kwargs (Any) \u2013 \nReturn type\nAny\nrun(tool_input, verbose=None, start_color='green', color='green', callbacks=None, **kwargs)[source]\uf0c1\nRun the tool.\nParameters\ntool_input (Union[str, Dict]) \u2013 \nverbose (Optional[bool]) \u2013 \nstart_color (Optional[str]) \u2013 \ncolor (Optional[str]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 
\nReturn type\nAny\nproperty args: dict\uf0c1\nproperty is_single_input: bool\uf0c1\nWhether the tool only accepts a single input.\nclass langchain.tools.BingSearchResults(*, name='Bing Search Results JSON', description='A wrapper around Bing Search. Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that has capability to query the Bing Search API and get back json.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nnum_results (int) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-11", "text": "num_results (int) \u2013 \napi_wrapper (langchain.utilities.bing_search.BingSearchAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]\uf0c1\nattribute num_results: int = 4\uf0c1\nclass langchain.tools.BingSearchRun(*, name='bing_search', description='A wrapper around Bing Search. Useful for when you need to answer questions about current events. 
Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to query the Bing search API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.bing_search.BingSearchAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]\uf0c1\nclass langchain.tools.BraveSearch(*, name='brave_search', description='a search engine. useful for when you need to answer questions about current events. 
input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, search_wrapper)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-12", "text": "Bases: langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nsearch_wrapper (langchain.utilities.brave_search.BraveSearchWrapper) \u2013 \nReturn type\nNone\nattribute search_wrapper: BraveSearchWrapper [Required]\uf0c1\nclassmethod from_api_key(api_key, search_kwargs=None, **kwargs)[source]\uf0c1\nParameters\napi_key (str) \u2013 \nsearch_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.tools.brave_search.tool.BraveSearch\nclass langchain.tools.ClickTool(*, name='click_element', description='Click on an element with the given CSS selector', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None, visible_only=True, playwright_strict=False, playwright_timeout=1000)[source]\uf0c1\nBases: langchain.tools.playwright.base.BaseBrowserTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-13", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nsync_browser (Optional['SyncBrowser']) \u2013 \nasync_browser (Optional['AsyncBrowser']) \u2013 \nvisible_only (bool) \u2013 \nplaywright_strict (bool) \u2013 \nplaywright_timeout (float) \u2013 \nReturn type\nNone\nattribute args_schema: Type[BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Click on an element with the given CSS selector'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'click_element'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nattribute playwright_strict: bool = False\uf0c1\nWhether to employ Playwright\u2019s strict mode when clicking on elements.\nattribute playwright_timeout: float = 1000\uf0c1\nTimeout (in ms) for Playwright to wait for element to be ready.\nattribute visible_only: bool = True\uf0c1\nWhether to consider only visible elements.\nclass langchain.tools.CopyFileTool(*, name='copy_file', description='Create a copy of a file in a specified location', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]\uf0c1\nBases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-14", "text": "Parameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nroot_dir (Optional[str]) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Create a copy of a file in a specified location'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'copy_file'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.CurrentWebPageTool(*, name='current_webpage', description='Returns the URL of the current page', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]\uf0c1\nBases: langchain.tools.playwright.base.BaseBrowserTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-15", "text": "return_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nsync_browser (Optional['SyncBrowser']) \u2013 \nasync_browser (Optional['AsyncBrowser']) \u2013 \nReturn type\nNone\nattribute args_schema: Type[BaseModel] = \uf0c1\nPydantic 
model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Returns the URL of the current page'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'current_webpage'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.DeleteFileTool(*, name='file_delete', description='Delete a file', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]\uf0c1\nBases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-16", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nroot_dir (Optional[str]) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Delete a file'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'file_delete'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.DuckDuckGoSearchResults(*, name='DuckDuckGo Results JSON', description='A wrapper around Duck Duck Go Search. 
Useful for when you need to answer questions about current events. Input should be a search query. Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that queries the Duck Duck Go Search API and gets back JSON.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-17", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nnum_results (int) \u2013 \napi_wrapper (langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]\uf0c1\nattribute num_results: int = 4\uf0c1\nclass langchain.tools.DuckDuckGoSearchRun(*, name='duckduckgo_search', description='A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events.
Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to query the DuckDuckGo search API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-18", "text": "class langchain.tools.ExtractHyperlinksTool(*, name='extract_hyperlinks', description='Extract all hyperlinks on the current webpage', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]\uf0c1\nBases: langchain.tools.playwright.base.BaseBrowserTool\nExtract all hyperlinks on the page.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) 
\u2013 \nsync_browser (Optional['SyncBrowser']) \u2013 \nasync_browser (Optional['AsyncBrowser']) \u2013 \nReturn type\nNone\nattribute args_schema: Type[BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Extract all hyperlinks on the current webpage'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'extract_hyperlinks'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nstatic scrape_page(page, html_content, absolute_urls)[source]\uf0c1\nParameters\npage (Any) \u2013 \nhtml_content (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-19", "text": "Parameters\npage (Any) \u2013 \nhtml_content (str) \u2013 \nabsolute_urls (bool) \u2013 \nReturn type\nstr\nclass langchain.tools.ExtractTextTool(*, name='extract_text', description='Extract all the text on the current webpage', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]\uf0c1\nBases: langchain.tools.playwright.base.BaseBrowserTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nsync_browser (Optional['SyncBrowser']) \u2013 \nasync_browser (Optional['AsyncBrowser']) \u2013 \nReturn type\nNone\nattribute args_schema: Type[BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input 
arguments.\nattribute description: str = 'Extract all the text on the current webpage'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'extract_text'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-20", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.FileSearchTool(*, name='file_search', description='Recursively search for files in a subdirectory that match the regex pattern', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]\uf0c1\nBases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nroot_dir (Optional[str]) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Recursively search for files in a subdirectory that match the regex pattern'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'file_search'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": 
"https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-21", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.GetElementsTool(*, name='get_elements', description='Retrieve elements in the current web page matching the given CSS selector', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]\uf0c1\nBases: langchain.tools.playwright.base.BaseBrowserTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nsync_browser (Optional['SyncBrowser']) \u2013 \nasync_browser (Optional['AsyncBrowser']) \u2013 \nReturn type\nNone\nattribute args_schema: Type[BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Retrieve elements in the current web page matching the given CSS selector'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'get_elements'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-22", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.GmailCreateDraft(*, name='create_gmail_draft', description='Use this tool to create a draft email with the provided message fields.', args_schema=, return_direct=False, 
verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]\uf0c1\nBases: langchain.tools.gmail.base.GmailBaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[langchain.tools.gmail.create_draft.CreateDraftSchema]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_resource (Resource) \u2013 \nReturn type\nNone\nattribute args_schema: Type[langchain.tools.gmail.create_draft.CreateDraftSchema] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Use this tool to create a draft email with the provided message fields.'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'create_gmail_draft'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-23", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.GmailGetMessage(*, name='get_gmail_message', description='Use this tool to fetch an email by message ID. 
Returns the thread ID, snippet, body, subject, and sender.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]\uf0c1\nBases: langchain.tools.gmail.base.GmailBaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[langchain.tools.gmail.get_message.SearchArgsSchema]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_resource (Resource) \u2013 \nReturn type\nNone\nattribute args_schema: Type[langchain.tools.gmail.get_message.SearchArgsSchema] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Use this tool to fetch an email by message ID. Returns the thread ID, snippet, body, subject, and sender.'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'get_gmail_message'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-24", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.GmailGetThread(*, name='get_gmail_thread', description='Use this tool to search for email messages. The input must be a valid Gmail query.
The output is a JSON list of messages.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]\uf0c1\nBases: langchain.tools.gmail.base.GmailBaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[langchain.tools.gmail.get_thread.GetThreadSchema]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_resource (Resource) \u2013 \nReturn type\nNone\nattribute args_schema: Type[langchain.tools.gmail.get_thread.GetThreadSchema] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'get_gmail_thread'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-25", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.GmailSearch(*, name='search_gmail', description='Use this tool to search for email messages or threads. The input must be a valid Gmail query. 
The output is a JSON list of the requested resource.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]\uf0c1\nBases: langchain.tools.gmail.base.GmailBaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[langchain.tools.gmail.search.SearchArgsSchema]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_resource (Resource) \u2013 \nReturn type\nNone\nattribute args_schema: Type[langchain.tools.gmail.search.SearchArgsSchema] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'search_gmail'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-26", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.GmailSendMessage(*, name='send_gmail_message', description='Use this tool to send email messages. 
The input is the message, recipients', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_resource=None)[source]\uf0c1\nBases: langchain.tools.gmail.base.GmailBaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_resource (Resource) \u2013 \nReturn type\nNone\nattribute description: str = 'Use this tool to send email messages. The input is the message, recipients'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'send_gmail_message'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.GooglePlacesTool(*, name='google_places', description='A wrapper around Google Places. Useful for when you need to validate or discover addresses from ambiguous text.
Input should be a search query.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-27", "text": "Bases: langchain.tools.base.BaseTool\nTool that adds the capability to query the Google places API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.google_places_api.GooglePlacesAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]\uf0c1\nattribute args_schema: Type[pydantic.main.BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nclass langchain.tools.GoogleSearchResults(*, name='Google Search Results JSON', description='A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. 
Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, num_results=4, api_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that has the capability to query the Google Search API and get back JSON.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-28", "text": "return_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nnum_results (int) \u2013 \napi_wrapper (langchain.utilities.google_search.GoogleSearchAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]\uf0c1\nattribute num_results: int = 4\uf0c1\nclass langchain.tools.GoogleSearchRun(*, name='google_search', description='A wrapper around Google Search. Useful for when you need to answer questions about current events.
Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to query the Google search API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.google_search.GoogleSearchAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-29", "text": "attribute api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]\uf0c1\nclass langchain.tools.GoogleSerperResults(*, name='Google Serper Results JSON', description='A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.
Output is a JSON object of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that has the capability to query the Serper.dev Google Search API\nand get back JSON.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.google_serper.GoogleSerperAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Optional]\uf0c1\nclass langchain.tools.GoogleSerperRun(*, name='google_serper', description='A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to query the Serper.dev Google search API.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-30", "text": "Tool that adds the capability to query the Serper.dev Google search API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013
\ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.google_serper.GoogleSerperAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Required]\uf0c1\nclass langchain.tools.HumanInputRun(*, name='human', description='You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, prompt_func=None, input_func=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to ask user for input.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-31", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nprompt_func (Callable[[str], None]) \u2013 \ninput_func (Callable) \u2013 \nReturn type\nNone\nattribute input_func: Callable [Optional]\uf0c1\nattribute prompt_func: Callable[[str], None] [Optional]\uf0c1\nclass langchain.tools.IFTTTWebhook(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, url)[source]\uf0c1\nBases: 
langchain.tools.base.BaseTool\nIFTTT Webhook.\nParameters\nname (str) \u2013 name of the tool\ndescription (str) \u2013 description of the tool\nurl (str) \u2013 url to hit with the json event.\nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nReturn type\nNone\nattribute url: str [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-32", "text": "Return type\nNone\nattribute url: str [Required]\uf0c1\nclass langchain.tools.InfoPowerBITool(*, name='schema_powerbi', description='\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\\n\u00a0\u00a0\u00a0 Be sure that the tables actually exist by calling list_tables_powerbi first!\\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, powerbi)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool for getting metadata about a PowerBI Dataset.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 
\npowerbi (langchain.utilities.powerbi.PowerBIDataset) \u2013 \nReturn type\nNone\nattribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]\uf0c1\nclass langchain.tools.InfoSQLDatabaseTool(*, name='sql_db_schema', description='\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\u00a0\u00a0\u00a0 \\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-33", "text": "Bases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool\nTool for getting metadata about a SQL database.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ndb (langchain.sql_database.SQLDatabase) \u2013 \nReturn type\nNone\nclass langchain.tools.InfoSparkSQLTool(*, name='schema_sql_db', description='\\n\u00a0\u00a0\u00a0 Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\\n\u00a0\u00a0\u00a0 Be sure that the tables actually exist by calling list_tables_sql_db first!\\n\\n\u00a0\u00a0\u00a0 Example Input: \"table1, table2, table3\"\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]\uf0c1\nBases: 
langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool\nTool for getting metadata about a Spark SQL database.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-34", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ndb (langchain.utilities.spark_sql.SparkSQL) \u2013 \nReturn type\nNone\nclass langchain.tools.JiraAction(*, name='', description='', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None, mode)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.jira.JiraAPIWrapper) \u2013 \nmode (str) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.jira.JiraAPIWrapper [Optional]\uf0c1\nattribute mode: str [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-35", "text": "attribute 
mode: str [Required]\uf0c1\nclass langchain.tools.JsonGetValueTool(*, name='json_spec_get_value', description='\\n\u00a0\u00a0\u00a0 Can be used to see value in string format at a given path.\\n\u00a0\u00a0\u00a0 Before calling this you should be SURE that the path to this exists.\\n\u00a0\u00a0\u00a0 The input is a text representation of the path to the dict in Python syntax (e.g. data[\"key1\"][0][\"key2\"]).\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, spec)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool for getting a value in a JSON spec.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nspec (langchain.tools.json.tool.JsonSpec) \u2013 \nReturn type\nNone\nattribute spec: JsonSpec [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-36", "text": "Return type\nNone\nattribute spec: JsonSpec [Required]\uf0c1\nclass langchain.tools.JsonListKeysTool(*, name='json_spec_list_keys', description='\\n\u00a0\u00a0\u00a0 Can be used to list all keys at a given path. \\n\u00a0\u00a0\u00a0 Before calling this you should be SURE that the path to this exists.\\n\u00a0\u00a0\u00a0 The input is a text representation of the path to the dict in Python syntax (e.g. 
data[\"key1\"][0][\"key2\"]).\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, spec)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool for listing keys in a JSON spec.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nspec (langchain.tools.json.tool.JsonSpec) \u2013 \nReturn type\nNone\nattribute spec: JsonSpec [Required]\uf0c1\nclass langchain.tools.ListDirectoryTool(*, name='list_directory', description='List files and directories in a specified folder', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]\uf0c1\nBases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool\nParameters\nname (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-37", "text": "Parameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nroot_dir (Optional[str]) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] = 
\uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'List files and directories in a specified folder'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'list_directory'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.ListPowerBITool(*, name='list_tables_powerbi', description='Input is an empty string, output is a comma separated list of tables in the database.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, powerbi)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool for getting table names.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-38", "text": "return_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \npowerbi (langchain.utilities.powerbi.PowerBIDataset) \u2013 \nReturn type\nNone\nattribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]\uf0c1\nclass langchain.tools.ListSQLDatabaseTool(*, name='sql_db_list_tables', description='Input is an empty string, output is a comma separated list of tables in the database.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]\uf0c1\nBases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, 
langchain.tools.base.BaseTool\nTool for getting table names.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ndb (langchain.sql_database.SQLDatabase) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-39", "text": "db (langchain.sql_database.SQLDatabase) \u2013 \nReturn type\nNone\nclass langchain.tools.ListSparkSQLTool(*, name='list_tables_sql_db', description='Input is an empty string, output is a comma separated list of tables in the Spark SQL.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]\uf0c1\nBases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool\nTool for getting table names.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ndb (langchain.utilities.spark_sql.SparkSQL) \u2013 \nReturn type\nNone\nclass langchain.tools.MetaphorSearchResults(*, name='metaphor_search_results_json', description='A wrapper around Metaphor Search. Input should be a Metaphor-optimized query. 
Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that can query the Metaphor Search API and get back JSON.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-40", "text": "args_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper [Required]\uf0c1\nclass langchain.tools.MoveFileTool(*, name='move_file', description='Move or rename a file from one location to another', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]\uf0c1\nBases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, 
Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nroot_dir (Optional[str]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-41", "text": "root_dir (Optional[str]) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Move or rename a file from one location to another'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'move_file'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.NavigateBackTool(*, name='previous_webpage', description='Navigate back to the previous page in the browser history', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]\uf0c1\nBases: langchain.tools.playwright.base.BaseBrowserTool\nNavigate back to the previous page in the browser history.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nsync_browser (Optional['SyncBrowser']) \u2013 \nasync_browser (Optional['AsyncBrowser']) \u2013 \nReturn type\nNone\nattribute args_schema: Type[BaseModel] = \uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-42", "text": "Pydantic model class to validate and parse the tool\u2019s input 
arguments.\nattribute description: str = 'Navigate back to the previous page in the browser history'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'previous_webpage'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.NavigateTool(*, name='navigate_browser', description='Navigate a browser to the specified URL', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, sync_browser=None, async_browser=None)[source]\uf0c1\nBases: langchain.tools.playwright.base.BaseBrowserTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nsync_browser (Optional['SyncBrowser']) \u2013 \nasync_browser (Optional['AsyncBrowser']) \u2013 \nReturn type\nNone\nattribute args_schema: Type[BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Navigate a browser to the specified URL'\uf0c1\nUsed to tell the model how/when/why to use the tool.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-43", "text": "Used to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'navigate_browser'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.OpenAPISpec(*, openapi='3.1.0', info, jsonSchemaDialect=None, 
servers=[Server(url='/', description=None, variables=None)], paths=None, webhooks=None, components=None, security=None, tags=None, externalDocs=None)[source]\uf0c1\nBases: openapi_schema_pydantic.v3.v3_1_0.open_api.OpenAPI\nOpenAPI Model that removes misformatted parts of the spec.\nParameters\nopenapi (str) \u2013 \ninfo (openapi_schema_pydantic.v3.v3_1_0.info.Info) \u2013 \njsonSchemaDialect (Optional[str]) \u2013 \nservers (List[openapi_schema_pydantic.v3.v3_1_0.server.Server]) \u2013 \npaths (Optional[Dict[str, openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem]]) \u2013 \nwebhooks (Optional[Dict[str, Union[openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem, openapi_schema_pydantic.v3.v3_1_0.reference.Reference]]]) \u2013 \ncomponents (Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components]) \u2013 \nsecurity (Optional[List[Dict[str, List[str]]]]) \u2013 \ntags (Optional[List[openapi_schema_pydantic.v3.v3_1_0.tag.Tag]]) \u2013 \nexternalDocs (Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-44", "text": "Return type\nNone\nclassmethod from_file(path)[source]\uf0c1\nGet an OpenAPI spec from a file path.\nParameters\npath (Union[str, pathlib.Path]) \u2013 \nReturn type\nlangchain.utilities.openapi.OpenAPISpec\nclassmethod from_spec_dict(spec_dict)[source]\uf0c1\nGet an OpenAPI spec from a dict.\nParameters\nspec_dict (dict) \u2013 \nReturn type\nlangchain.utilities.openapi.OpenAPISpec\nclassmethod from_text(text)[source]\uf0c1\nGet an OpenAPI spec from a text.\nParameters\ntext (str) \u2013 \nReturn type\nlangchain.utilities.openapi.OpenAPISpec\nclassmethod from_url(url)[source]\uf0c1\nGet an OpenAPI spec from a URL.\nParameters\nurl (str) \u2013 \nReturn type\nlangchain.utilities.openapi.OpenAPISpec\nstatic get_cleaned_operation_id(operation, path, method)[source]\uf0c1\nGet a cleaned 
operation id from an operation id.\nParameters\noperation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) \u2013 \npath (str) \u2013 \nmethod (str) \u2013 \nReturn type\nstr\nget_methods_for_path(path)[source]\uf0c1\nReturn a list of valid methods for the specified path.\nParameters\npath (str) \u2013 \nReturn type\nList[str]\nget_operation(path, method)[source]\uf0c1\nGet the operation object for a given path and HTTP method.\nParameters\npath (str) \u2013 \nmethod (str) \u2013 \nReturn type\nopenapi_schema_pydantic.v3.v3_1_0.operation.Operation\nget_parameters_for_operation(operation)[source]\uf0c1\nGet the components for a given operation.\nParameters\noperation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-45", "text": "Return type\nList[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter]\nget_parameters_for_path(path)[source]\uf0c1\nParameters\npath (str) \u2013 \nReturn type\nList[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter]\nget_referenced_schema(ref)[source]\uf0c1\nGet a schema (or nested reference) or err.\nParameters\nref (openapi_schema_pydantic.v3.v3_1_0.reference.Reference) \u2013 \nReturn type\nopenapi_schema_pydantic.v3.v3_1_0.schema.Schema\nget_request_body_for_operation(operation)[source]\uf0c1\nGet the request body for a given operation.\nParameters\noperation (openapi_schema_pydantic.v3.v3_1_0.operation.Operation) \u2013 \nReturn type\nOptional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody]\nget_schema(schema)[source]\uf0c1\nParameters\nschema (Union[openapi_schema_pydantic.v3.v3_1_0.reference.Reference, openapi_schema_pydantic.v3.v3_1_0.schema.Schema]) \u2013 \nReturn type\nopenapi_schema_pydantic.v3.v3_1_0.schema.Schema\nclassmethod parse_obj(obj)[source]\uf0c1\nParameters\nobj (dict) \u2013 \nReturn type\nlangchain.utilities.openapi.OpenAPISpec\nproperty base_url: str\uf0c1\nGet the 
base url.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-46", "text": "property base_url: str\uf0c1\nGet the base url.\nclass langchain.tools.OpenWeatherMapQueryRun(*, name='OpenWeatherMap', description='A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to query using the OpenWeatherMap API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper [Optional]\uf0c1\nclass langchain.tools.PubmedQueryRun(*, name='PubMed', description='A wrapper around PubMed.org. Useful for when you need to answer questions about Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance, Statistics, Electrical Engineering, and Economics from scientific articles on PubMed.org. 
Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-47", "text": "Bases: langchain.tools.base.BaseTool\nTool that adds the capability to search using the PubMed API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.pupmed.PubMedAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.pupmed.PubMedAPIWrapper [Optional]\uf0c1\nclass langchain.tools.PythonAstREPLTool(*, name='python_repl_ast', description='A Python shell. Use this to execute python commands. Input should be a valid python command. 
When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, globals=None, locals=None, sanitize_input=True)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nA tool for running python code in a REPL.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-48", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nglobals (Optional[Dict]) \u2013 \nlocals (Optional[Dict]) \u2013 \nsanitize_input (bool) \u2013 \nReturn type\nNone\nattribute globals: Optional[Dict] [Optional]\uf0c1\nattribute locals: Optional[Dict] [Optional]\uf0c1\nattribute sanitize_input: bool = True\uf0c1\nclass langchain.tools.PythonREPLTool(*, name='Python_REPL', description='A Python shell. Use this to execute python commands. Input should be a valid python command. 
If you want to see the output of a value, you should print it out with `print(...)`.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, python_repl=None, sanitize_input=True)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nA tool for running python code in a REPL.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \npython_repl (langchain.utilities.python.PythonREPL) \u2013 \nsanitize_input (bool) \u2013 \nReturn type\nNone\nattribute python_repl: langchain.utilities.python.PythonREPL [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-49", "text": "attribute python_repl: langchain.utilities.python.PythonREPL [Optional]\uf0c1\nattribute sanitize_input: bool = True\uf0c1\nclass langchain.tools.QueryCheckerTool(*, name='query_checker_sql_db', description='\\n\u00a0\u00a0\u00a0 Use this tool to double check if your query is correct before executing it.\\n\u00a0\u00a0\u00a0 Always use this tool before executing a query with query_sql_db!\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db, template='\\n{query}\\nDouble check the Spark SQL query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct 
number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.', llm, llm_chain)[source]\uf0c1\nBases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool\nUse an LLM to check if a query is correct.\nAdapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-50", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ndb (langchain.utilities.spark_sql.SparkSQL) \u2013 \ntemplate (str) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nReturn type\nNone\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nattribute template: str = '\\n{query}\\nDouble check the Spark SQL query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above 
mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.'\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-51", "text": "class langchain.tools.QueryPowerBITool(*, name='query_powerbi', description='\\n\u00a0\u00a0\u00a0 Input to this tool is a detailed question about the dataset, output is a result from the dataset. It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\\n\\n\u00a0\u00a0\u00a0 Example Input: \"How many rows are in table1?\"\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, llm_chain, powerbi, template='\\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with \"I cannot answer this\" and the question will be escalated to a human.\\n\\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. 
\\n\\nSome commonly used functions are:\\nEVALUATE - At the most basic level, a DAX query is an EVALUATE statement", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-52", "text": "
- At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\\nEVALUATE
ORDER BY ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\\nEVALUATE
ORDER BY ASC or DESC START AT or - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\\nDEFINE MEASURE | VAR; EVALUATE
- The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\\nMEASURE
[] = - Introduces a measure definition in a DEFINE statement of a DAX query.\\nVAR = - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\\n\\nFILTER(
,) - Returns a table that represents a subset of another table or expression, where is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] =", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-53", "text": "each row of the table. For example, [Amount] > 0 or [Region] = \"France\"\\nROW(, ) - Returns a table with a single row containing values that result from the expressions given to each column.\\nDISTINCT() - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\\nDISTINCT(
) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with an A in them, handle booleans and empty strings in appropriate ways, while the same function without an A only uses the numeric values in a column. Function names with an X in them can take an expression as an argument; this will be evaluated for each row in the table and the result will be used in the regular function calculation. These are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variations of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variations of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variations of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variations", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-54", "text": "MINA(<column>), MINX(<table>,<expression>) - these are all variations of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variations of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variations of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<datetime>), MINUTE(<datetime>), SECOND(<datetime>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as JSON and so will have to be compliant with JSON syntax. Sometimes you will get a question, a DAX query, and an error; in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schemas for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n', examples='\nQuestion: How many rows are in the table <table>
?\nDAX:", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-55", "text": "examples='\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(FILTER(<table>, <table>[<column>] <> \"\")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW(\"Average\", AVERAGE(<table>[<column>]))\n----\n', session_cache=None, max_iterations=5)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-56", "text": "Bases: langchain.tools.base.BaseTool\nTool for querying a Power BI Dataset.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \npowerbi (langchain.utilities.powerbi.PowerBIDataset) \u2013 \ntemplate (Optional[str]) \u2013 \nexamples (Optional[str]) \u2013 \nsession_cache (Dict[str, Any]) \u2013 \nmax_iterations (int) \u2013 \nReturn type\nNone\nattribute examples: Optional[str] = '\nQuestion: How many rows are in the table <table>
?\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW(\"Number of rows\", COUNTROWS(FILTER(<table>, <table>[<column>] <> \"\")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW(\"Average\", AVERAGE(<table>[<column>]))\n----\n'\uf0c1\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nattribute max_iterations: int = 5\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-57", "text": "attribute max_iterations: int = 5\uf0c1\nattribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]\uf0c1\nattribute session_cache: Dict[str, Any] [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-58", "text": "attribute template: Optional[str] = '\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with \"I cannot answer this\" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table>
- At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required; however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-59", "text": "ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables, and columns. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] = \"France\"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-60", "text": "you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with an A in them, handle booleans and empty strings in appropriate ways, while the same function without an A only uses the numeric values in a column. Function names with an X in them can take an expression as an argument; this will be evaluated for each row in the table and the result will be used in the regular function calculation. These are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variations of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variations of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variations of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variations of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variations of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variations of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-61", "text": "date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<datetime>), MINUTE(<datetime>), SECOND(<datetime>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as JSON and so will have to be compliant with JSON syntax. 
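The template closes by interpolating {tables}, {schemas}, {examples}, and {tool_input} slots, as quoted earlier. A minimal sketch of how such a prompt might be filled with `str.format`; the shortened template tail and all fill-in values are invented for illustration, not real Power BI metadata:

```python
# Tail of the prompt template described above; the slot names match the docs,
# but this shortened template and the fill-in values are illustrative only.
template = (
    "The following tables exist: {tables}\n\n"
    "and the schemas for some are given here:\n{schemas}\n\n"
    "Examples:\n{examples}\n\n"
    "Question: {tool_input}\nDAX: \n"
)

prompt = template.format(
    tables="'Sales', 'Customers'",
    schemas="'Sales': Amount (int), Region (str)",
    examples="Question: ...\nDAX: EVALUATE ...\n----",
    tool_input="How many rows are in the table 'Sales'?",
)
print(prompt)
```

Every slot must be supplied at once; a missing keyword raises `KeyError`, which is why the tool carries the tables, schemas, and few-shot examples alongside the user's question.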
Sometimes you will get a question, a DAX query, and an error; in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schemas for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n'\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-62", "text": "class langchain.tools.QuerySQLCheckerTool(*, name='sql_db_query_checker', description='\n\u00a0\u00a0\u00a0 Use this tool to double check if your query is correct before executing it.\n\u00a0\u00a0\u00a0 Always use this tool before executing a query with query_sql_db!\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db, template='\n{query}\nDouble check the {dialect} query above for common mistakes, including:\n- Using NOT IN with NULL values\n- Using UNION when UNION ALL should have been used\n- Using BETWEEN for exclusive ranges\n- Data type mismatch in predicates\n- Properly quoting identifiers\n- Using the correct number of arguments for functions\n- Casting to the correct data type\n- Using the proper columns for joins\n\nIf there are any of the above mistakes, rewrite the query. 
If there are no mistakes, just reproduce the original query.', llm, llm_chain)[source]\uf0c1\nBases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool\nUse an LLM to check if a query is correct.\nAdapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ndb (langchain.sql_database.SQLDatabase) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-63", "text": "db (langchain.sql_database.SQLDatabase) \u2013 \ntemplate (str) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nReturn type\nNone\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nattribute template: str = '\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. 
If there are no mistakes, just reproduce the original query.'\uf0c1\nclass langchain.tools.QuerySQLDataBaseTool(*, name='sql_db_query', description='\\n\u00a0\u00a0\u00a0 Input to this tool is a detailed and correct SQL query, output is a result from the database.\\n\u00a0\u00a0\u00a0 If the query is not correct, an error message will be returned.\\n\u00a0\u00a0\u00a0 If an error is returned, rewrite the query, check the query, and try again.\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, db)[source]\uf0c1\nBases: langchain.tools.sql_database.tool.BaseSQLDatabaseTool, langchain.tools.base.BaseTool\nTool for querying a SQL database.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-64", "text": "return_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ndb (langchain.sql_database.SQLDatabase) \u2013 \nReturn type\nNone\nclass langchain.tools.QuerySparkSQLTool(*, name='query_sql_db', description='\\n\u00a0\u00a0\u00a0 Input to this tool is a detailed and correct SQL query, output is a result from the Spark SQL.\\n\u00a0\u00a0\u00a0 If the query is not correct, an error message will be returned.\\n\u00a0\u00a0\u00a0 If an error is returned, rewrite the query, check the query, and try again.\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, 
db)[source]\uf0c1\nBases: langchain.tools.spark_sql.tool.BaseSparkSQLTool, langchain.tools.base.BaseTool\nTool for querying a Spark SQL.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ndb (langchain.utilities.spark_sql.SparkSQL) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-65", "text": "db (langchain.utilities.spark_sql.SparkSQL) \u2013 \nReturn type\nNone\nclass langchain.tools.ReadFileTool(*, name='read_file', description='Read file from disk', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]\uf0c1\nBases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nroot_dir (Optional[str]) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Read file from disk'\uf0c1\nUsed to tell the model 
how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'read_file'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-66", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.RequestsDeleteTool(*, name='requests_delete', description='A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]\uf0c1\nBases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool\nTool for making a DELETE request to an API endpoint.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nReturn type\nNone\nclass langchain.tools.RequestsGetTool(*, name='requests_get', description='A portal to the internet. Use this when you need to get specific content from a website. Input should be a\u00a0 url (i.e. https://www.google.com). 
The output will be the text response of the GET request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]\uf0c1\nBases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool\nTool for making a GET request to an API endpoint.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-67", "text": "Tool for making a GET request to an API endpoint.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nReturn type\nNone\nclass langchain.tools.RequestsPatchTool(*, name='requests_patch', description='Use this when you want to PATCH to a website.\\n\u00a0\u00a0\u00a0 Input should be a json string with two keys: \"url\" and \"data\".\\n\u00a0\u00a0\u00a0 The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n\u00a0\u00a0\u00a0 key-value pairs you want to PATCH to the url.\\n\u00a0\u00a0\u00a0 Be careful to always use double quotes for strings in the json string\\n\u00a0\u00a0\u00a0 The output will be the text response of the PATCH request.\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]\uf0c1\nBases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool\nTool for making a PATCH request to an API 
endpoint.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-68", "text": "return_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nReturn type\nNone\nclass langchain.tools.RequestsPostTool(*, name='requests_post', description='Use this when you want to POST to a website.\\n\u00a0\u00a0\u00a0 Input should be a json string with two keys: \"url\" and \"data\".\\n\u00a0\u00a0\u00a0 The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n\u00a0\u00a0\u00a0 key-value pairs you want to POST to the url.\\n\u00a0\u00a0\u00a0 Be careful to always use double quotes for strings in the json string\\n\u00a0\u00a0\u00a0 The output will be the text response of the POST request.\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]\uf0c1\nBases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool\nTool for making a POST request to an API endpoint.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager 
(Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-69", "text": "requests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nReturn type\nNone\nclass langchain.tools.RequestsPutTool(*, name='requests_put', description='Use this when you want to PUT to a website.\\n\u00a0\u00a0\u00a0 Input should be a json string with two keys: \"url\" and \"data\".\\n\u00a0\u00a0\u00a0 The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \\n\u00a0\u00a0\u00a0 key-value pairs you want to PUT to the url.\\n\u00a0\u00a0\u00a0 Be careful to always use double quotes for strings in the json string.\\n\u00a0\u00a0\u00a0 The output will be the text response of the PUT request.\\n\u00a0\u00a0\u00a0 ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, requests_wrapper)[source]\uf0c1\nBases: langchain.tools.requests.tool.BaseRequestsTool, langchain.tools.base.BaseTool\nTool for making a PUT request to an API endpoint.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-70", "text": "requests_wrapper (langchain.requests.TextRequestsWrapper) 
\u2013 \nReturn type\nNone\nclass langchain.tools.SceneXplainTool(*, name='image_explainer', description='An Image Captioning Tool: Use this tool to generate a detailed caption for an image. The input can be an image file of any format, and the output will be a text description that covers every detail of the image.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to explain images.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.scenexplain.SceneXplainAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]\uf0c1\nclass langchain.tools.SearxSearchResults(*, name='Searx Search Results', description='A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query. 
Output is a JSON array of the query results', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, wrapper, num_results=4, kwargs=None, **extra_data)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-71", "text": "Bases: langchain.tools.base.BaseTool\nTool that has the capability to query a Searx instance and get back json.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nwrapper (langchain.utilities.searx_search.SearxSearchWrapper) \u2013 \nnum_results (int) \u2013 \nkwargs (dict) \u2013 \nextra_data (Any) \u2013 \nReturn type\nNone\nattribute kwargs: dict [Optional]\uf0c1\nattribute num_results: int = 4\uf0c1\nattribute wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required]\uf0c1\nclass langchain.tools.SearxSearchRun(*, name='searx_search', description='A meta search engine.Useful for when you need to answer questions about current events.Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, wrapper, kwargs=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to query a Searx instance.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": 
"1ace37f9e39e-72", "text": "return_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nwrapper (langchain.utilities.searx_search.SearxSearchWrapper) \u2013 \nkwargs (dict) \u2013 \nReturn type\nNone\nattribute kwargs: dict [Optional]\uf0c1\nattribute wrapper: langchain.utilities.searx_search.SearxSearchWrapper [Required]\uf0c1\nclass langchain.tools.ShellTool(*, name='terminal', description='Run shell commands on this Linux machine.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, process=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool to run shell commands.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nprocess (langchain.utilities.bash.BashProcess) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] = \uf0c1\nSchema for input arguments.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-73", "text": "Schema for input arguments.\nattribute description: str = 'Run shell commands on this Linux machine.'\uf0c1\nDescription of tool.\nattribute name: str = 'terminal'\uf0c1\nName of tool.\nattribute process: langchain.utilities.bash.BashProcess 
[Optional]\uf0c1\nBash process to run commands.\nclass langchain.tools.SleepTool(*, name='sleep', description='Make agent sleep for a specified number of seconds.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to sleep.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nlangchain.tools.StdInInquireTool(*args, **kwargs)[source]\uf0c1\nTool for asking the user for input.\nParameters\nargs (Any) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.tools.human.tool.HumanInputRun", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-74", "text": "Return type\nlangchain.tools.human.tool.HumanInputRun\nclass langchain.tools.SteamshipImageGenerationTool(*, name='GenerateImage', description='Useful for when you need to generate an image.Input: A detailed text-2-image prompt describing an imageOutput: the UUID of a generated image', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, model_name, size='512x512', steamship, return_urls=False)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct 
(bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nmodel_name (langchain.tools.steamship_image_generation.tool.ModelName) \u2013 \nsize (Optional[str]) \u2013 \nsteamship (Steamship) \u2013 \nreturn_urls (Optional[bool]) \u2013 \nReturn type\nNone\nattribute model_name: ModelName [Required]\uf0c1\nattribute return_urls: Optional[bool] = False\uf0c1\nattribute size: Optional[str] = '512x512'\uf0c1\nattribute steamship: Steamship [Required]\uf0c1\nclass langchain.tools.StructuredTool(*, name, description='', args_schema, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, func, coroutine=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-75", "text": "Bases: langchain.tools.base.BaseTool\nTool that can operate on any number of inputs.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nfunc (Callable[[...], Any]) \u2013 \ncoroutine (Optional[Callable[[...], Awaitable[Any]]]) \u2013 \nReturn type\nNone\nattribute args_schema: Type[pydantic.main.BaseModel] [Required]\uf0c1\nThe input arguments\u2019 schema.\nThe tool schema.\nattribute coroutine: Optional[Callable[[...], 
Awaitable[Any]]] = None\uf0c1\nThe asynchronous version of the function.\nattribute description: str = ''\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute func: Callable[[...], Any] [Required]\uf0c1\nThe function to run when the tool is called.\nclassmethod from_function(func, name=None, description=None, return_direct=False, args_schema=None, infer_schema=True, **kwargs)[source]\uf0c1\nCreate tool from a given function.\nA classmethod that helps to create a tool from a function.\nParameters\nfunc (Callable) \u2013 The function from which to create a tool\nname (Optional[str]) \u2013 The name of the tool. Defaults to the function name\ndescription (Optional[str]) \u2013 The description of the tool. Defaults to the function docstring", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-76", "text": "description (Optional[str]) \u2013 The description of the tool. 
Defaults to the function docstring\nreturn_direct (bool) \u2013 Whether to return the result directly or as a callback\nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 The schema of the tool\u2019s input arguments\ninfer_schema (bool) \u2013 Whether to infer the schema from the function\u2019s signature\n**kwargs \u2013 Additional arguments to pass to the tool\nkwargs (Any) \u2013 \nReturns\nThe tool\nReturn type\nlangchain.tools.base.StructuredTool\nExamples\n.. code-block:: python\ndef add(a: int, b: int) -> int:\n    \"\"\"Add two numbers\"\"\"\n    return a + b\ntool = StructuredTool.from_function(add)\ntool.run(1, 2) # 3\nproperty args: dict\uf0c1\nThe tool\u2019s input arguments.\nclass langchain.tools.Tool(name, func, description, *, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, coroutine=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that takes in function or coroutine directly.\nParameters\nname (str) \u2013 \nfunc (Callable[[...], str]) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ncoroutine (Optional[Callable[[...], Awaitable[str]]]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-77", "text": "Return type\nNone\nattribute args_schema: Optional[Type[pydantic.main.BaseModel]] = None\uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute callback_manager: 
Optional[langchain.callbacks.base.BaseCallbackManager] = None\uf0c1\nDeprecated. Please use callbacks instead.\nattribute callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None\uf0c1\nCallbacks to be called during tool execution.\nattribute coroutine: Optional[Callable[[...], Awaitable[str]]] = None\uf0c1\nThe asynchronous version of the function.\nattribute description: str = ''\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute func: Callable[[...], str] [Required]\uf0c1\nThe function to run when the tool is called.\nattribute handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False\uf0c1\nHandle the content of the ToolException thrown.\nattribute name: str [Required]\uf0c1\nThe unique name of the tool that clearly communicates its purpose.\nattribute return_direct: bool = False\uf0c1\nWhether to return the tool\u2019s output directly. 
Setting this to True means\nthat after the tool is called, the AgentExecutor will stop looping.\nattribute verbose: bool = False\uf0c1\nWhether to log the tool\u2019s progress.\nclassmethod from_function(func, name, description, return_direct=False, args_schema=None, **kwargs)[source]\uf0c1\nInitialize tool from a function.\nParameters\nfunc (Callable) \u2013 \nname (str) \u2013 \ndescription (str) \u2013 \nreturn_direct (bool) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-78", "text": "args_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.tools.base.Tool\nproperty args: dict\uf0c1\nThe tool\u2019s input arguments.\nclass langchain.tools.VectorStoreQATool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, vectorstore, llm=None)[source]\uf0c1\nBases: langchain.tools.vectorstore.tool.BaseVectorStoreTool, langchain.tools.base.BaseTool\nTool for the VectorDBQA chain. 
To be initialized with name and chain.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nReturn type\nNone\nstatic get_description(name, description)[source]\uf0c1\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nReturn type\nstr\nclass langchain.tools.VectorStoreQAWithSourcesTool(*, name, description, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, vectorstore, llm=None)[source]\uf0c1\nBases: langchain.tools.vectorstore.tool.BaseVectorStoreTool, langchain.tools.base.BaseTool", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-79", "text": "Tool for the VectorDBQAWithSources chain.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nReturn type\nNone\nstatic get_description(name, 
description)[source]\uf0c1\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nReturn type\nstr\nclass langchain.tools.WikipediaQueryRun(*, name='Wikipedia', description='A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to search using the Wikipedia API.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-80", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.wikipedia.WikipediaAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required]\uf0c1\nclass langchain.tools.WolframAlphaQueryRun(*, name='wolfram_alpha', description='A wrapper around Wolfram Alpha. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. 
Input should be a search query.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that adds the capability to query using the Wolfram Alpha SDK.\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-81", "text": "class langchain.tools.WriteFileTool(*, name='write_file', description='Write file to disk', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, root_dir=None)[source]\uf0c1\nBases: langchain.tools.file_management.utils.BaseFileToolMixin, langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Type[pydantic.main.BaseModel]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nroot_dir (Optional[str]) \u2013 \nReturn type\nNone\nattribute args_schema: 
Type[pydantic.main.BaseModel] = \uf0c1\nPydantic model class to validate and parse the tool\u2019s input arguments.\nattribute description: str = 'Write file to disk'\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute name: str = 'write_file'\uf0c1\nThe unique name of the tool that clearly communicates its purpose.", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-82", "text": "The unique name of the tool that clearly communicates its purpose.\nclass langchain.tools.YouTubeSearchTool(*, name='youtube_search', description='search for youtube videos associated with a person. the input to this tool should be a comma separated list, the first part contains a person name and the second a number that is the maximum number of video results to return aka num_results. the second part is optional', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nParameters\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-83", "text": "Return type\nNone\nclass langchain.tools.ZapierNLAListActions(*, name='ZapierNLA_list_actions', description='A wrapper around Zapier NLA actions. 
The input to this tool is a natural language instruction, for example \"get the latest email from my bank\" or \"send a slack message to the #general channel\". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\\'Message_Text\\', \\'Channel\\'], your instruction should be something like \\'send a slack message to the #general channel with the text hello world\\'. Another example: if the params are [\\'Calendar\\', \\'Search_Term\\'], your instruction should be something like \\'find the meeting in my personal calendar at 3pm\\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \\'not enough information provided in the instruction, missing \\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}This tool returns a list of the user\\'s exposed actions.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nReturns a list of all exposed (enabled) actions associated with current user (associated with the set api_key). Change your exposed\nactions here: https://nla.zapier.com/demo/start/\nThe return list can be empty if no actions exposed. Else will contain\na list of action objects:\n[{\"id\": str,\n\"description\": str,\n\"params\": Dict[str, str]", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-84", "text": "\"description\": str,\n\"params\": Dict[str, str]\n}]\nparams will always contain an instructions key, the only required\nparam. 
All others optional and if provided will override any AI guesses\n(see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/api/v1/docs)\nParameters\nNone \u2013 \nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) \u2013 \nReturn type\nNone\nattribute api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-85", "text": "class langchain.tools.ZapierNLARunAction(*, name='', description='', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, api_wrapper=None, action_id, params=None, base_prompt='A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example \"get the latest email from my bank\" or \"send a slack message to the #general channel\". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\\'Message_Text\\', \\'Channel\\'], your instruction should be something like \\'send a slack message to the #general channel with the text hello world\\'. Another example: if the params are [\\'Calendar\\', \\'Search_Term\\'], your instruction should be something like \\'find the meeting in my personal calendar at 3pm\\'. Do not make up params, they will be explicitly specified in the tool description. 
If you do not have enough information to fill in the params, just say \\'not enough information provided in the instruction, missing \\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}', zapier_description, params_schema=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nExecutes an action that is identified by action_id, must be exposed (enabled) by the current user (associated with the set api_key). Change\nyour exposed actions here: https://nla.zapier.com/demo/start/\nThe return JSON is guaranteed to be less than ~500 words (350\ntokens) making it safe to inject into the prompt of another LLM\ncall.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-86", "text": "tokens) making it safe to inject into the prompt of another LLM\ncall.\nParameters\naction_id (str) \u2013 a specific action ID (from list actions) of the action to execute\n(the set api_key must be associated with the action owner)\ninstructions \u2013 a natural language instruction string for using the action\n(eg. \u201cget the latest email from Mike Knoop\u201d for \u201cGmail: find email\u201d action)\nparams (Optional[dict]) \u2013 a dict, optional. 
Any params provided will override AI guesses\nfrom instructions (see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/api/v1/docs)\nname (str) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \napi_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) \u2013 \nbase_prompt (str) \u2013 \nzapier_description (str) \u2013 \nparams_schema (Dict[str, str]) \u2013 \nReturn type\nNone\nattribute action_id: str [Required]\uf0c1\nattribute api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-87", "text": "attribute base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example \"get the latest email from my bank\" or \"send a slack message to the #general channel\". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\\'Message_Text\\', \\'Channel\\'], your instruction should be something like \\'send a slack message to the #general channel with the text hello world\\'. Another example: if the params are [\\'Calendar\\', \\'Search_Term\\'], your instruction should be something like \\'find the meeting in my personal calendar at 3pm\\'. Do not make up params, they will be explicitly specified in the tool description. 
If you do not have enough information to fill in the params, just say \\'not enough information provided in the instruction, missing \\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}'\uf0c1\nattribute params: Optional[dict] = None\uf0c1\nattribute params_schema: Dict[str, str] [Optional]\uf0c1\nattribute zapier_description: str [Required]\uf0c1\nlangchain.tools.format_tool_to_openai_function(tool)[source]\uf0c1\nFormat tool into the OpenAI function API.\nParameters\ntool (langchain.tools.base.BaseTool) \u2013 \nReturn type\nlangchain.tools.convert_to_openai.FunctionDescription\nlangchain.tools.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source]\uf0c1\nMake tools out of functions, can be used with or without arguments.\nParameters\n*args \u2013 The arguments to the tool.\nreturn_direct (bool) \u2013 Whether to return directly from the tool rather", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1ace37f9e39e-88", "text": "return_direct (bool) \u2013 Whether to return directly from the tool rather\nthan continuing the agent loop.\nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 optional argument schema for user to specify\ninfer_schema (bool) \u2013 Whether to infer the schema of the arguments from\nthe function\u2019s signature. 
This also makes the resultant tool\naccept a dictionary input to its run() function.\nargs (Union[str, Callable]) \u2013 \nReturn type\nCallable\nRequires:\nFunction must be of type (str) -> str\nFunction must have a docstring\nExamples\n@tool\ndef search_api(query: str) -> str:\n # Searches the API for the query.\n return\n@tool(\"search\", return_direct=True)\ndef search_api(query: str) -> str:\n # Searches the API for the query.\n return", "source": "https://api.python.langchain.com/en/latest/modules/tools.html"} +{"id": "1a644f703b6b-0", "text": "Callbacks\uf0c1\nCallback handlers that allow listening to events in LangChain.\nclass langchain.callbacks.AimCallbackHandler(repo=None, experiment_name=None, system_tracking_interval=10, log_system_params=True)[source]\uf0c1\nBases: langchain.callbacks.aim_callback.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that logs to Aim.\nParameters\nrepo (str, optional) \u2013 Aim repository path or Repo object to which\nRun object is bound. If skipped, default Repo is used.\nexperiment_name (str, optional) \u2013 Sets Run\u2019s experiment property.\n\u2018default\u2019 if not specified. Can be used later to query runs/sequences.\nsystem_tracking_interval (int, optional) \u2013 Sets the tracking interval\nin seconds for system usage metrics (CPU, Memory, etc.). 
Set to None\nto disable system metrics tracking.\nlog_system_params (bool, optional) \u2013 Enable/Disable logging of system\nparams such as installed packages, git info, environment variables, etc.\nReturn type\nNone\nThis handler will utilize the associated callback method called and formats\nthe input of each callback function with metadata regarding the state of LLM run\nand then logs the response to Aim.\nsetup(**kwargs)[source]\uf0c1\nParameters\nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nRun when LLM ends running.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-1", "text": "Return type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nRun when LLM generates a new token.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nRun when LLM errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nRun when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nRun when chain ends running.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nRun when chain errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, 
**kwargs)[source]\uf0c1\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_end(output, **kwargs)[source]\uf0c1\nRun when tool ends running.\nParameters\noutput (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nRun when tool errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-2", "text": "Return type\nNone\non_text(text, **kwargs)[source]\uf0c1\nRun when agent is ending.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nRun when agent ends running.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nRun on agent action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\nflush_tracker(repo=None, experiment_name=None, system_tracking_interval=10, log_system_params=True, langchain_asset=None, reset=True, finish=False)[source]\uf0c1\nFlush the tracker and reset the session.\nParameters\nrepo (str, optional) \u2013 Aim repository path or Repo object to which\nRun object is bound. If skipped, default Repo is used.\nexperiment_name (str, optional) \u2013 Sets Run\u2019s experiment property.\n\u2018default\u2019 if not specified. Can be used later to query runs/sequences.\nsystem_tracking_interval (int, optional) \u2013 Sets the tracking interval\nin seconds for system usage metrics (CPU, Memory, etc.). 
Set to None\nto disable system metrics tracking.\nlog_system_params (bool, optional) \u2013 Enable/Disable logging of system\nparams such as installed packages, git info, environment variables, etc.\nlangchain_asset (Any) \u2013 The langchain asset to save.\nreset (bool) \u2013 Whether to reset the session.\nfinish (bool) \u2013 Whether to finish the run.\nReturns \u2013 None\nReturn type\nNone\nclass langchain.callbacks.ArgillaCallbackHandler(dataset_name, workspace_name=None, api_url=None, api_key=None)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-3", "text": "Bases: langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that logs into Argilla.\nParameters\ndataset_name (str) \u2013 name of the FeedbackDataset in Argilla. Note that it must\nexist in advance. If you need help on how to create a FeedbackDataset in\nArgilla, please visit\nhttps://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\nworkspace_name (Optional[str]) \u2013 name of the workspace in Argilla where the specified\nFeedbackDataset lives in. Defaults to None, which means that the\ndefault workspace will be used.\napi_url (Optional[str]) \u2013 URL of the Argilla Server that we want to use, and where the\nFeedbackDataset lives in. Defaults to None, which means that either\nARGILLA_API_URL environment variable or the default http://localhost:6900\nwill be used.\napi_key (Optional[str]) \u2013 API Key to connect to the Argilla Server. 
Defaults to None, which\nmeans that either ARGILLA_API_KEY environment variable or the default\nargilla.apikey will be used.\nRaises\nImportError \u2013 if the argilla package is not installed.\nConnectionError \u2013 if the connection to Argilla fails.\nFileNotFoundError \u2013 if the FeedbackDataset retrieval from Argilla fails.\nReturn type\nNone\nExamples\n>>> from langchain.llms import OpenAI\n>>> from langchain.callbacks import ArgillaCallbackHandler\n>>> argilla_callback = ArgillaCallbackHandler(\n... dataset_name=\"my-dataset\",\n... workspace_name=\"my-workspace\",\n... api_url=\"http://localhost:6900\",\n... api_key=\"argilla.apikey\",\n... )\n>>> llm = OpenAI(\n... temperature=0,\n... callbacks=[argilla_callback],\n... verbose=True,", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-4", "text": "... callbacks=[argilla_callback],\n... verbose=True,\n... openai_api_key=\"API_KEY_HERE\",\n... )\n>>> llm.generate([\n... \"What is the best NLP-annotation tool out there? (no bias at all)\",\n... 
])\n\"Argilla, no doubt about it.\"\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nSave the prompts in memory when an LLM starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nDo nothing when a new token is generated.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nLog records to Argilla when an LLM ends.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nDo nothing when LLM outputs an error.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nIf the key input is in inputs, then save it in self.prompts using\neither the parent_run_id or the run_id as the key. This is done so that\nwe don\u2019t log the same input prompt twice, once when the LLM starts and once\nwhen the chain starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-5", "text": "kwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nIf either the parent_run_id or the run_id is in self.prompts, then\nlog the outputs to Argilla, and pop the run from self.prompts. 
The behavior\ndiffers if the output is a list or not.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nDo nothing when LLM chain outputs an error.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nDo nothing when tool starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nDo nothing when agent takes a specific action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\non_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]\uf0c1\nDo nothing when tool ends.\nParameters\noutput (str) \u2013 \nobservation_prefix (Optional[str]) \u2013 \nllm_prefix (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nDo nothing when tool outputs an error.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1\nDo nothing\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-6", "text": "Do nothing\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nDo nothing\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nclass langchain.callbacks.ArizeCallbackHandler(model_id=None, model_version=None, SPACE_KEY=None, API_KEY=None)[source]\uf0c1\nBases: langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that logs to Arize.\nParameters\nmodel_id (Optional[str]) \u2013 \nmodel_version (Optional[str]) \u2013 \nSPACE_KEY (Optional[str]) 
\u2013 \nAPI_KEY (Optional[str]) \u2013 \nReturn type\nNone\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nRun when LLM ends running.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nRun when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-7", "text": "inputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\non_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]\uf0c1\nRun when tool ends running.\nParameters\noutput (str) \u2013 \nobservation_prefix (Optional[str]) 
\u2013 \nllm_prefix (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nRun when tool errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1\nRun on arbitrary text.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nRun on agent end.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-8", "text": "kwargs (Any) \u2013 \nReturn type\nNone\nclass langchain.callbacks.AsyncIteratorCallbackHandler[source]\uf0c1\nBases: langchain.callbacks.base.AsyncCallbackHandler\nCallback handler that returns an async iterator.\nReturn type\nNone\nproperty always_verbose: bool\uf0c1\nqueue: asyncio.queues.Queue[str]\uf0c1\ndone: asyncio.locks.Event\uf0c1\nasync on_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nasync on_llm_new_token(token, **kwargs)[source]\uf0c1\nRun on new LLM token. 
Only available when streaming is enabled.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nasync on_llm_end(response, **kwargs)[source]\uf0c1\nRun when LLM ends running.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nasync on_llm_error(error, **kwargs)[source]\uf0c1\nRun when LLM errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nasync aiter()[source]\uf0c1\nReturn type\nAsyncIterator[str]\nclass langchain.callbacks.ClearMLCallbackHandler(task_type='inference', project_name='langchain_callback_demo', tags=None, task_name=None, visualize=False, complexity_metrics=False, stream_logs=False)[source]\uf0c1\nBases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that logs to ClearML.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-9", "text": "Callback Handler that logs to ClearML.\nParameters\njob_type (str) \u2013 The type of clearml task such as \u201cinference\u201d, \u201ctesting\u201d or \u201cqc\u201d\nproject_name (str) \u2013 The clearml project name\ntags (list) \u2013 Tags to add to the task\ntask_name (str) \u2013 Name of the clearml task\nvisualize (bool) \u2013 Whether to visualize the run.\ncomplexity_metrics (bool) \u2013 Whether to log complexity metrics\nstream_logs (bool) \u2013 Whether to stream callback actions to ClearML\ntask_type (Optional[str]) \u2013 \nReturn type\nNone\nThis handler will utilize the associated callback method and format\nthe input of each callback function with metadata regarding the state of LLM run,\nand add the response to the list of records for both the {method}_records and\naction. 
It then logs the response to the ClearML console.\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nRun when LLM generates a new token.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nRun when LLM ends running.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nRun when LLM errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-10", "text": "kwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nRun when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nRun when chain ends running.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nRun when chain errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_end(output, **kwargs)[source]\uf0c1\nRun when tool ends running.\nParameters\noutput (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nRun when tool errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nNone\non_text(text, **kwargs)[source]\uf0c1\nRun when agent is ending.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nRun when agent ends running.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-11", "text": "Return type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nRun on agent action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\nanalyze_text(text)[source]\uf0c1\nAnalyze text using textstat and spacy.\nParameters\ntext (str) \u2013 The text to analyze.\nReturns\nA dictionary containing the complexity metrics.\nReturn type\n(dict)\nflush_tracker(name=None, langchain_asset=None, finish=False)[source]\uf0c1\nFlush the tracker and set up the session.\nEverything after this will be a new table.\nParameters\nname (Optional[str]) \u2013 Name of the performed session so far, so it is identifiable\nlangchain_asset (Any) \u2013 The langchain asset to save.\nfinish (bool) \u2013 Whether to finish the run.\nReturns \u2013 None\nReturn type\nNone\nclass langchain.callbacks.CometCallbackHandler(task_type='inference', workspace=None, project_name=None, tags=None, name=None, visualizations=None, complexity_metrics=False, custom_metrics=None, stream_logs=True)[source]\uf0c1\nBases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that logs to Comet.\nParameters\njob_type (str) \u2013 The type of comet_ml task such as \u201cinference\u201d,\n\u201ctesting\u201d or \u201cqc\u201d\nproject_name (str) \u2013 The comet_ml project name\ntags (list) \u2013 Tags to add to the task\ntask_name (str) \u2013 Name of the comet_ml task\nvisualize (bool) \u2013 Whether to visualize the 
run.\ncomplexity_metrics (bool) \u2013 Whether to log complexity metrics\nstream_logs (bool) \u2013 Whether to stream callback actions to Comet\ntask_type (Optional[str]) \u2013 \nworkspace (Optional[str]) \u2013 \nname (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-12", "text": "workspace (Optional[str]) \u2013 \nname (Optional[str]) \u2013 \nvisualizations (Optional[List[str]]) \u2013 \ncustom_metrics (Optional[Callable]) \u2013 \nReturn type\nNone\nThis handler will utilize the associated callback method and format\nthe input of each callback function with metadata regarding the state of LLM run,\nand add the response to the list of records for both the {method}_records and\naction. It then logs the response to Comet.\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nRun when LLM generates a new token.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nRun when LLM ends running.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nRun when LLM errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nRun when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nRun when chain ends running.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": 
"1a644f703b6b-13", "text": "kwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nRun when chain errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_end(output, **kwargs)[source]\uf0c1\nRun when tool ends running.\nParameters\noutput (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nRun when tool errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1\nRun when agent is ending.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nRun when agent ends running.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nRun on agent action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\nflush_tracker(langchain_asset=None, task_type='inference', workspace=None, project_name='comet-langchain-demo', tags=None, name=None, visualizations=None, complexity_metrics=False, custom_metrics=None, finish=False, reset=False)[source]\uf0c1\nFlush the tracker and set up the session.\nEverything after this will be a new table.", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-14", "text": "Flush the tracker and set up the session.\nEverything after this will be a new table.\nParameters\nname (Optional[str]) \u2013 Name of the performed session so far, so it is identifiable\nlangchain_asset (Any) \u2013 The langchain asset to save.\nfinish (bool) \u2013 Whether to finish the 
run.\nReturns \u2013 None\ntask_type (Optional[str]) \u2013 \nworkspace (Optional[str]) \u2013 \nproject_name (Optional[str]) \u2013 \ntags (Optional[Sequence]) \u2013 \nvisualizations (Optional[List[str]]) \u2013 \ncomplexity_metrics (bool) \u2013 \ncustom_metrics (Optional[Callable]) \u2013 \nreset (bool) \u2013 \nReturn type\nNone\nclass langchain.callbacks.FileCallbackHandler(filename, mode='a', color=None)[source]\uf0c1\nBases: langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that writes to a file.\nParameters\nfilename (str) \u2013 \nmode (str) \u2013 \ncolor (Optional[str]) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nPrint out that we are entering a chain.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nPrint out that we finished a chain.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, color=None, **kwargs)[source]\uf0c1\nRun on agent action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \ncolor (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-15", "text": "color (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\non_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]\uf0c1\nIf not the final action, print out observation.\nParameters\noutput (str) \u2013 \ncolor (Optional[str]) \u2013 \nobservation_prefix (Optional[str]) \u2013 \nllm_prefix (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, color=None, end='', **kwargs)[source]\uf0c1\nRun when agent ends.\nParameters\ntext (str) \u2013 \ncolor (Optional[str]) \u2013 \nend (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, color=None, 
**kwargs)[source]\uf0c1\nRun on agent end.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \ncolor (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nclass langchain.callbacks.FinalStreamingStdOutCallbackHandler(*, answer_prefix_tokens=None, strip_tokens=True, stream_prefix=False)[source]\uf0c1\nBases: langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler\nCallback handler for streaming in agents.\nOnly works with agents using LLMs that support streaming.\nOnly the final output of the agent will be streamed.\nParameters\nanswer_prefix_tokens (Optional[List[str]]) \u2013 \nstrip_tokens (bool) \u2013 \nstream_prefix (bool) \u2013 \nReturn type\nNone\nappend_to_last_tokens(token)[source]\uf0c1\nParameters\ntoken (str) \u2013 \nReturn type\nNone\ncheck_if_answer_reached()[source]\uf0c1\nReturn type\nbool\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts running.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-16", "text": "Run when LLM starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nRun on new LLM token. 
Only available when streaming is enabled.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nclass langchain.callbacks.HumanApprovalCallbackHandler(approve=, should_check=)[source]\uf0c1\nBases: langchain.callbacks.base.BaseCallbackHandler\nCallback for manually validating values.\nParameters\napprove (Callable[[Any], bool]) \u2013 \nshould_check (Callable[[Dict[str, Any]], bool]) \u2013 \nraise_error: bool = True\uf0c1\non_tool_start(serialized, input_str, *, run_id, parent_run_id=None, **kwargs)[source]\uf0c1\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nrun_id (uuid.UUID) \u2013 \nparent_run_id (Optional[uuid.UUID]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\nclass langchain.callbacks.InfinoCallbackHandler(model_id=None, model_version=None, verbose=False)[source]\uf0c1\nBases: langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that logs to Infino.\nParameters\nmodel_id (Optional[str]) \u2013 \nmodel_version (Optional[str]) \u2013 \nverbose (bool) \u2013 \nReturn type\nNone\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nLog the prompts to Infino, and set start time and error flag.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-17", "text": "serialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nDo nothing when a new token is generated.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nLog the latency, error, token usage, and response to Infino.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nSet the error flag.\nParameters\nerror (Union[Exception, 
KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nDo nothing when LLM chain starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nDo nothing when LLM chain ends.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nNeed to log the error.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nDo nothing when tool starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-18", "text": "Return type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nDo nothing when agent takes a specific action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\non_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]\uf0c1\nDo nothing when tool ends.\nParameters\noutput (str) \u2013 \nobservation_prefix (Optional[str]) \u2013 \nllm_prefix (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nDo nothing when tool outputs an error.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nclass 
langchain.callbacks.MlflowCallbackHandler(name='langchainrun-%', experiment='langchain', tags={}, tracking_uri=None)[source]\uf0c1\nBases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that logs metrics and artifacts to mlflow server.\nParameters\nname (str) \u2013 Name of the run.\nexperiment (str) \u2013 Name of the experiment.\ntags (dict) \u2013 Tags to be attached for the run.\ntracking_uri (str) \u2013 MLflow tracking server uri.\nReturn type\nNone\nThis handler will utilize the associated callback method and format\nthe input of each callback function with metadata regarding the state of LLM run,", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-19", "text": "the input of each callback function with metadata regarding the state of LLM run,\nand add the response to the list of records for both the {method}_records and\naction. It then logs the response to mlflow server.\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nRun when LLM generates a new token.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nRun when LLM ends running.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nRun when LLM errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nRun when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nRun 
when chain ends running.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nRun when chain errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-20", "text": "kwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_end(output, **kwargs)[source]\uf0c1\nRun when tool ends running.\nParameters\noutput (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nRun when tool errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1\nRun when agent is ending.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nRun when agent ends running.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nRun on agent action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\nflush_tracker(langchain_asset=None, finish=False)[source]\uf0c1\nParameters\nlangchain_asset (Any) \u2013 \nfinish (bool) \u2013 \nReturn type\nNone\nclass langchain.callbacks.OpenAICallbackHandler[source]\uf0c1\nBases: langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that tracks OpenAI info.\ntotal_tokens: int = 0\uf0c1\nprompt_tokens: int = 0\uf0c1\ncompletion_tokens: int = 0\uf0c1\nsuccessful_requests: int = 0\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": 
"1a644f703b6b-21", "text": "completion_tokens: int = 0\uf0c1\nsuccessful_requests: int = 0\uf0c1\ntotal_cost: float = 0.0\uf0c1\nproperty always_verbose: bool\uf0c1\nWhether to call verbose callbacks even if verbose is False.\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nPrint out the prompts.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nPrint out the token.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nCollect token usage.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nclass langchain.callbacks.StdOutCallbackHandler(color=None)[source]\uf0c1\nBases: langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that prints to std out.\nParameters\ncolor (Optional[str]) \u2013 \nReturn type\nNone\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nPrint out the prompts.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-22", "text": "Return type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nPrint out that we are entering a chain.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 
\nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nPrint out that we finished a chain.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, color=None, **kwargs)[source]\uf0c1\nRun on agent action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \ncolor (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\non_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]\uf0c1\nIf not the final action, print out observation.\nParameters\noutput (str) \u2013 \ncolor (Optional[str]) \u2013 \nobservation_prefix (Optional[str]) \u2013 \nllm_prefix (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-23", "text": "kwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, color=None, end='', **kwargs)[source]\uf0c1\nRun when agent ends.\nParameters\ntext (str) \u2013 \ncolor (Optional[str]) \u2013 \nend (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, color=None, **kwargs)[source]\uf0c1\nRun on agent end.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \ncolor (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nclass langchain.callbacks.StreamingStdOutCallbackHandler[source]\uf0c1\nBases: langchain.callbacks.base.BaseCallbackHandler\nCallback handler for 
streaming. Only works with LLMs that support streaming.\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nRun on new LLM token. Only available when streaming is enabled.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nRun when LLM ends running.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nRun when LLM errors.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-24", "text": "Run when LLM errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nRun when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nRun when chain ends running.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nRun when chain errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nRun on agent action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\non_tool_end(output, **kwargs)[source]\uf0c1\nRun when tool ends running.\nParameters\noutput (str) \u2013 \nkwargs 
(Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nRun when tool errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1\nRun on arbitrary text.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-25", "text": "Parameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nRun on agent end.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nlangchain.callbacks.StreamlitCallbackHandler(parent_container, *, max_thought_containers=4, expand_new_thoughts=True, collapse_completed_thoughts=True, thought_labeler=None)[source]\uf0c1\nConstruct a new StreamlitCallbackHandler. This CallbackHandler is geared towards\nuse with a LangChain Agent; it displays the Agent\u2019s LLM and tool-usage \u201cthoughts\u201d\ninside a series of Streamlit expanders.\nParameters\nparent_container (DeltaGenerator) \u2013 The st.container that will contain all the Streamlit elements that the\nHandler creates.\nmax_thought_containers (int) \u2013 The max number of completed LLM thought containers to show at once. When this\nthreshold is reached, a new thought will cause the oldest thoughts to be\ncollapsed into a \u201cHistory\u201d expander. Defaults to 4.\nexpand_new_thoughts (bool) \u2013 Each LLM \u201cthought\u201d gets its own st.expander. This param controls whether that\nexpander is expanded by default. Defaults to True.\ncollapse_completed_thoughts (bool) \u2013 If True, LLM thought expanders will be collapsed when completed.\nDefaults to True.\nthought_labeler (Optional[LLMThoughtLabeler]) \u2013 An optional custom LLMThoughtLabeler instance. If unspecified, the handler\nwill use the default thought labeling logic. 
Defaults to None.\nReturns\nA new StreamlitCallbackHandler instance.\nNote that this is an \u201cauto-updating\u201d API: if the installed version of Streamlit\nhas a more recent StreamlitCallbackHandler implementation, an instance of that class", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-26", "text": "has a more recent StreamlitCallbackHandler implementation, an instance of that class\nwill be used.\nReturn type\nBaseCallbackHandler\nclass langchain.callbacks.LLMThoughtLabeler[source]\uf0c1\nBases: object\nGenerates markdown labels for LLMThought containers. Pass a custom\nsubclass of this to StreamlitCallbackHandler to override its default\nlabeling logic.\nget_initial_label()[source]\uf0c1\nReturn the markdown label for a new LLMThought that doesn\u2019t have\nan associated tool yet.\nReturn type\nstr\nget_tool_label(tool, is_complete)[source]\uf0c1\nReturn the label for an LLMThought that has an associated\ntool.\nParameters\ntool (langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord) \u2013 The tool\u2019s ToolRecord\nis_complete (bool) \u2013 True if the thought is complete; False if the thought\nis still receiving input.\nReturn type\nThe markdown label for the thought\u2019s container.\nget_history_label()[source]\uf0c1\nReturn a markdown label for the special \u2018history\u2019 container\nthat contains overflow thoughts.\nReturn type\nstr\nget_final_agent_thought_label()[source]\uf0c1\nReturn the markdown label for the agent\u2019s final thought -\nthe \u201cNow I have the answer\u201d thought, that doesn\u2019t involve\na tool.\nReturn type\nstr\nclass langchain.callbacks.WandbCallbackHandler(job_type=None, project='langchain_callback_demo', entity=None, tags=None, group=None, name=None, notes=None, visualize=False, complexity_metrics=False, stream_logs=False)[source]\uf0c1\nBases: langchain.callbacks.utils.BaseMetadataCallbackHandler, 
langchain.callbacks.base.BaseCallbackHandler\nCallback Handler that logs to Weights and Biases.\nParameters\njob_type (str) \u2013 The type of job.\nproject (str) \u2013 The project to log to.", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-27", "text": "project (str) \u2013 The project to log to.\nentity (str) \u2013 The entity to log to.\ntags (list) \u2013 The tags to log.\ngroup (str) \u2013 The group to log to.\nname (str) \u2013 The name of the run.\nnotes (str) \u2013 The notes to log.\nvisualize (bool) \u2013 Whether to visualize the run.\ncomplexity_metrics (bool) \u2013 Whether to log complexity metrics.\nstream_logs (bool) \u2013 Whether to stream callback actions to W&B.\nReturn type\nNone\nThis handler uses the callback method that was called, formats\nthe input of each callback function with metadata regarding the state of the LLM run,\nand adds the response to the list of records for both the {method}_records and\naction. 
It then logs the response using the run.log() method to Weights and Biases.\non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nRun when LLM starts.\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nRun when LLM generates a new token.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nRun when LLM ends running.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nRun when LLM errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-28", "text": "None\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nRun when chain starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nRun when chain ends running.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nRun when chain errors.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nRun when tool starts running.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_end(output, **kwargs)[source]\uf0c1\nRun when tool ends running.\nParameters\noutput (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nRun when tool errors.\nParameters\nerror (Union[Exception, 
KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1\nRun when agent is ending.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, **kwargs)[source]\uf0c1\nRun when agent ends running.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, **kwargs)[source]\uf0c1\nRun on agent action.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-29", "text": "Run on agent action.\nParameters\naction (langchain.schema.AgentAction) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\nflush_tracker(langchain_asset=None, reset=True, finish=False, job_type=None, project=None, entity=None, tags=None, group=None, name=None, notes=None, visualize=None, complexity_metrics=None)[source]\uf0c1\nFlush the tracker and reset the session.\nParameters\nlangchain_asset (Any) \u2013 The langchain asset to save.\nreset (bool) \u2013 Whether to reset the session.\nfinish (bool) \u2013 Whether to finish the run.\njob_type (Optional[str]) \u2013 The job type.\nproject (Optional[str]) \u2013 The project.\nentity (Optional[str]) \u2013 The entity.\ntags (Optional[Sequence]) \u2013 The tags.\ngroup (Optional[str]) \u2013 The group.\nname (Optional[str]) \u2013 The name.\nnotes (Optional[str]) \u2013 The notes.\nvisualize (Optional[bool]) \u2013 Whether to visualize.\ncomplexity_metrics (Optional[bool]) \u2013 Whether to compute complexity metrics.\nReturns \u2013 None\nReturn type\nNone\nclass langchain.callbacks.WhyLabsCallbackHandler(logger)[source]\uf0c1\nBases: langchain.callbacks.base.BaseCallbackHandler\nWhyLabs CallbackHandler.\nParameters\nlogger (Logger) \u2013 \non_llm_start(serialized, prompts, **kwargs)[source]\uf0c1\nPass the input prompts to the logger\nParameters\nserialized (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nNone\non_llm_end(response, **kwargs)[source]\uf0c1\nPass the generated response to the logger.\nParameters\nresponse (langchain.schema.LLMResult) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-30", "text": "kwargs (Any) \u2013 \nReturn type\nNone\non_llm_new_token(token, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\ntoken (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_llm_error(error, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_start(serialized, inputs, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_end(outputs, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\noutputs (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_chain_error(error, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_start(serialized, input_str, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nserialized (Dict[str, Any]) \u2013 \ninput_str (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_action(action, color=None, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\naction (langchain.schema.AgentAction) \u2013 \ncolor (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nAny\non_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\noutput (str) \u2013 \ncolor (Optional[str]) \u2013 \nobservation_prefix (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-31", "text": "color (Optional[str]) \u2013 \nobservation_prefix (Optional[str]) \u2013 \nllm_prefix (Optional[str]) 
\u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_tool_error(error, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\nerror (Union[Exception, KeyboardInterrupt]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_text(text, **kwargs)[source]\uf0c1\nDo nothing.\nParameters\ntext (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\non_agent_finish(finish, color=None, **kwargs)[source]\uf0c1\nRun on agent end.\nParameters\nfinish (langchain.schema.AgentFinish) \u2013 \ncolor (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nflush()[source]\uf0c1\nReturn type\nNone\nclose()[source]\uf0c1\nReturn type\nNone\nclassmethod from_params(*, api_key=None, org_id=None, dataset_id=None, sentiment=False, toxicity=False, themes=False)[source]\uf0c1\nInstantiate whylogs Logger from params.\nParameters\napi_key (Optional[str]) \u2013 WhyLabs API key. Optional because the preferred\nway to specify the API key is with environment variable\nWHYLABS_API_KEY.\norg_id (Optional[str]) \u2013 WhyLabs organization id to write profiles to.\nIf not set must be specified in environment variable\nWHYLABS_DEFAULT_ORG_ID.\ndataset_id (Optional[str]) \u2013 The model or dataset this callback is gathering\ntelemetry for. If not set must be specified in environment variable\nWHYLABS_DEFAULT_DATASET_ID.\nsentiment (bool) \u2013 If True will initialize a model to perform\nsentiment analysis compound score. Defaults to False and will not gather\nthis metric.", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "1a644f703b6b-32", "text": "sentiment analysis compound score. Defaults to False and will not gather\nthis metric.\ntoxicity (bool) \u2013 If True will initialize a model to score\ntoxicity. Defaults to False and will not gather this metric.\nthemes (bool) \u2013 If True will initialize a model to calculate\ndistance to configured themes. 
Defaults to False and will not gather this\nmetric.\nReturn type\nLogger\nlangchain.callbacks.get_openai_callback()[source]\uf0c1\nGet the OpenAI callback handler in a context manager,\nwhich conveniently exposes token and cost information.\nReturns\nThe OpenAI callback handler.\nReturn type\nOpenAICallbackHandler\nExample\n>>> with get_openai_callback() as cb:\n... # Use the OpenAI callback handler\nlangchain.callbacks.tracing_enabled(session_name='default')[source]\uf0c1\nGet the deprecated LangChainTracer in a context manager.\nParameters\nsession_name (str, optional) \u2013 The name of the session.\nDefaults to \u201cdefault\u201d.\nReturns\nThe LangChainTracer session.\nReturn type\nTracerSessionV1\nExample\n>>> with tracing_enabled() as session:\n... # Use the LangChainTracer session\nlangchain.callbacks.wandb_tracing_enabled(session_name='default')[source]\uf0c1\nGet the WandbTracer in a context manager.\nParameters\nsession_name (str, optional) \u2013 The name of the session.\nDefaults to \u201cdefault\u201d.\nReturns\nNone\nReturn type\nGenerator[None, None, None]\nExample\n>>> with wandb_tracing_enabled() as session:\n... 
# Use the WandbTracer session", "source": "https://api.python.langchain.com/en/latest/modules/callbacks.html"} +{"id": "98c621fca5f3-0", "text": "Document Loaders\uf0c1\nAll different types of document loaders.\nclass langchain.document_loaders.AcreomLoader(path, encoding='UTF-8', collect_metadata=True)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nParameters\npath (str) \u2013 \nencoding (str) \u2013 \ncollect_metadata (bool) \u2013 \nFRONT_MATTER_REGEX = re.compile('^---\\\\n(.*?)\\\\n---\\\\n', re.MULTILINE|re.DOTALL)\uf0c1\nlazy_load()[source]\uf0c1\nA lazy loader for document content.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.AZLyricsLoader(web_path, header_template=None, verify=True)[source]\uf0c1\nBases: langchain.document_loaders.web_base.WebBaseLoader\nLoader that loads AZLyrics webpages.\nParameters\nweb_path (Union[str, List[str]]) \u2013 \nheader_template (Optional[dict]) \u2013 \nverify (Optional[bool]) \u2013 \nload()[source]\uf0c1\nLoad webpage.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.AirbyteJSONLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads local airbyte json files.\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad file.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.AirtableLoader(api_token, table_id, base_id)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader for Airtable tables.\nParameters\napi_token (str) \u2013 \ntable_id (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-1", "text": "Parameters\napi_token (str) \u2013 \ntable_id (str) \u2013 \nbase_id (str) \u2013 \nlazy_load()[source]\uf0c1\nLazy load records from table.\nReturn 
type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad Table.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.ApifyDatasetLoader(dataset_id, dataset_mapping_function)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel\nLogic for loading documents from Apify datasets.\nParameters\ndataset_id (str) \u2013 \ndataset_mapping_function (Callable[[Dict], langchain.schema.Document]) \u2013 \nReturn type\nNone\nattribute apify_client: Any = None\uf0c1\nattribute dataset_id: str [Required]\uf0c1\nThe ID of the dataset on the Apify platform.\nattribute dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]\uf0c1\nA custom function that takes a single dictionary (an Apify dataset item)\nand converts it to an instance of the Document class.\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.ArxivLoader(query, load_max_docs=100, load_all_available_meta=False)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a query result from arxiv.org into a list of Documents.\nEach document represents one arXiv article.\nThe loader converts the original PDF into text.\nParameters\nquery (str) \u2013 \nload_max_docs (Optional[int]) \u2013 \nload_all_available_meta (Optional[bool]) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-2", "text": "Load data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str, container, prefix='')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoading logic for loading documents from Azure Blob Storage.\nParameters\nconn_str (str) \u2013 \ncontainer (str) \u2013 \nprefix (str) \u2013 
\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.AzureBlobStorageFileLoader(conn_str, container, blob_name)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoading logic for loading documents from Azure Blob Storage.\nParameters\nconn_str (str) \u2013 \ncontainer (str) \u2013 \nblob_name (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.BSHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that uses beautiful soup to parse HTML files.\nParameters\nfile_path (str) \u2013 \nopen_encoding (Optional[str]) \u2013 \nbs_kwargs (Optional[dict]) \u2013 \nget_text_separator (str) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.BibtexLoader(file_path, *, parser=None, max_docs=None, max_content_chars=4000, load_extra_metadata=False, file_pattern='[^:]+\\\\.pdf')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a bibtex file into a list of Documents.", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-3", "text": "Loads a bibtex file into a list of Documents.\nEach document represents one entry from the bibtex file.\nIf a PDF file is present in the file bibtex field, the original PDF\nis loaded into the document text. 
If no such file entry is present,\nthe abstract field is used instead.\nParameters\nfile_path (str) \u2013 \nparser (Optional[langchain.utilities.bibtex.BibtexparserWrapper]) \u2013 \nmax_docs (Optional[int]) \u2013 \nmax_content_chars (Optional[int]) \u2013 \nload_extra_metadata (bool) \u2013 \nfile_pattern (str) \u2013 \nlazy_load()[source]\uf0c1\nLoad bibtex file using bibtexparser and get the article texts plus the\narticle metadata.\nSee https://bibtexparser.readthedocs.io/en/master/\nReturns\na list of documents with the document.page_content in text format\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad bibtex file documents from the given bibtex file path.\nSee https://bibtexparser.readthedocs.io/en/master/\nParameters\nfile_path \u2013 the path to the bibtex file\nReturns\na list of documents with the document.page_content in text format\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.BigQueryLoader(query, project=None, page_content_columns=None, metadata_columns=None, credentials=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a query result from BigQuery into a list of documents.\nEach document represents one row of the result. The page_content_columns\nare written into the page_content of the document. The metadata_columns\nare written into the metadata of the document. 
By default, all columns\nare written into the page_content and none into the metadata.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-4", "text": "are written into the page_content and none into the metadata.\nParameters\nquery (str) \u2013 \nproject (Optional[str]) \u2013 \npage_content_columns (Optional[List[str]]) \u2013 \nmetadata_columns (Optional[List[str]]) \u2013 \ncredentials (Optional[Credentials]) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.BiliBiliLoader(video_urls)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads bilibili transcripts.\nParameters\nvideo_urls (List[str]) \u2013 \nload()[source]\uf0c1\nLoad from bilibili url.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.BlackboardLoader(blackboard_course_url, bbrouter, load_all_recursively=True, basic_auth=None, cookies=None)[source]\uf0c1\nBases: langchain.document_loaders.web_base.WebBaseLoader\nLoader that loads all documents from a Blackboard course.\nThis loader is not compatible with all Blackboard courses. It is only\ncompatible with courses that use the new Blackboard interface.\nTo use this loader, you must have the BbRouter cookie. 
You can get this\ncookie by logging into the course and then copying the value of the\nBbRouter cookie from the browser\u2019s developer tools.\nExample\nfrom langchain.document_loaders import BlackboardLoader\nloader = BlackboardLoader(\n blackboard_course_url=\"https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1\",\n bbrouter=\"expires:12345...\",\n)\ndocuments = loader.load()\nParameters\nblackboard_course_url (str) \u2013 \nbbrouter (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-5", "text": "blackboard_course_url (str) \u2013 \nbbrouter (str) \u2013 \nload_all_recursively (bool) \u2013 \nbasic_auth (Optional[Tuple[str, str]]) \u2013 \ncookies (Optional[dict]) \u2013 \nfolder_path: str\uf0c1\nbase_url: str\uf0c1\nload_all_recursively: bool\uf0c1\ncheck_bs4()[source]\uf0c1\nCheck if BeautifulSoup4 is installed.\nRaises\nImportError \u2013 If BeautifulSoup4 is not installed.\nReturn type\nNone\nload()[source]\uf0c1\nLoad data into document objects.\nReturns\nList of documents.\nReturn type\nList[langchain.schema.Document]\ndownload(path)[source]\uf0c1\nDownload a file from a url.\nParameters\npath (str) \u2013 Path to the file.\nReturn type\nNone\nparse_filename(url)[source]\uf0c1\nParse the filename from a url.\nParameters\nurl (str) \u2013 Url to parse the filename from.\nReturns\nThe filename.\nReturn type\nstr\nclass langchain.document_loaders.Blob(*, data=None, mimetype=None, encoding='utf-8', path=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nA blob is used to represent raw data by either reference or value.\nProvides an interface to materialize the blob in different representations, and\nhelp to decouple the development of data loaders from the downstream parsing of\nthe raw data.\nInspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob\nParameters\ndata (Optional[Union[bytes, str]]) \u2013 
\nmimetype (Optional[str]) \u2013 \nencoding (str) \u2013 \npath (Optional[Union[str, pathlib.PurePath]]) \u2013 \nReturn type\nNone\nattribute data: Optional[Union[bytes, str]] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-6", "text": "None\nattribute data: Optional[Union[bytes, str]] = None\uf0c1\nattribute encoding: str = 'utf-8'\uf0c1\nattribute mimetype: Optional[str] = None\uf0c1\nattribute path: Optional[Union[str, pathlib.PurePath]] = None\uf0c1\nas_bytes()[source]\uf0c1\nRead data as bytes.\nReturn type\nbytes\nas_bytes_io()[source]\uf0c1\nRead data as a byte stream.\nReturn type\nGenerator[Union[_io.BytesIO, _io.BufferedReader], None, None]\nas_string()[source]\uf0c1\nRead data as a string.\nReturn type\nstr\nclassmethod from_data(data, *, encoding='utf-8', mime_type=None, path=None)[source]\uf0c1\nInitialize the blob from in-memory data.\nParameters\ndata (Union[str, bytes]) \u2013 the in-memory data associated with the blob\nencoding (str) \u2013 Encoding to use if decoding the bytes into a string\nmime_type (Optional[str]) \u2013 if provided, will be set as the mime-type of the data\npath (Optional[str]) \u2013 if provided, will be set as the source from which the data came\nReturns\nBlob instance\nReturn type\nlangchain.document_loaders.blob_loaders.schema.Blob\nclassmethod from_path(path, *, encoding='utf-8', mime_type=None, guess_type=True)[source]\uf0c1\nLoad the blob from a path like object.\nParameters\npath (Union[str, pathlib.PurePath]) \u2013 path like object to file to be read\nencoding (str) \u2013 Encoding to use if decoding the bytes into a string\nmime_type (Optional[str]) \u2013 if provided, will be set as the mime-type of the data\nguess_type (bool) \u2013 If True, the mimetype will be guessed from the file extension,\nif a mime-type was not provided\nReturns\nBlob instance\nReturn type", "source": 
"https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-7", "text": "if a mime-type was not provided\nReturns\nBlob instance\nReturn type\nlangchain.document_loaders.blob_loaders.schema.Blob\nproperty source: Optional[str]\uf0c1\nThe source location of the blob as string if known otherwise none.\nclass langchain.document_loaders.BlobLoader[source]\uf0c1\nBases: abc.ABC\nAbstract interface for blob loaders implementation.\nImplementer should be able to load raw content from a storage system according\nto some criteria and return the raw content lazily as a stream of blobs.\nabstract yield_blobs()[source]\uf0c1\nA lazy loader for raw data represented by LangChain\u2019s Blob object.\nReturns\nA generator over blobs\nReturn type\nIterable[langchain.document_loaders.blob_loaders.schema.Blob]\nclass langchain.document_loaders.BlockchainDocumentLoader(contract_address, blockchainType=BlockchainType.ETH_MAINNET, api_key='docs-demo', startToken='', get_all_tokens=False, max_execution_time=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads elements from a blockchain smart contract into Langchain documents.\nThe supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,\nPolygon mainnet, and Polygon Mumbai testnet.\nIf no BlockchainType is specified, the default is Ethereum mainnet.\nThe Loader uses the Alchemy API to interact with the blockchain.\nALCHEMY_API_KEY environment variable must be set to use this loader.\nThe API returns 100 NFTs per request and can be paginated using the\nstartToken parameter.\nIf get_all_tokens is set to True, the loader will get all tokens\non the contract. Note that for contracts with a large number of tokens,\nthis may take a long time (e.g. 
10k tokens is 100 requests).\nDefault value is false for this reason.", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-8", "text": "Default value is false for this reason.\nThe max_execution_time (sec) can be set to limit the execution time\nof the loader.\nFuture versions of this loader can:\nSupport additional Alchemy APIs (e.g. getTransactions, etc.)\nSupport additional blockchain APIs (e.g. Infura, Opensea, etc.)\nParameters\ncontract_address (str) \u2013 \nblockchainType (langchain.document_loaders.blockchain.BlockchainType) \u2013 \napi_key (str) \u2013 \nstartToken (str) \u2013 \nget_all_tokens (bool) \u2013 \nmax_execution_time (Optional[int]) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.CSVLoader(file_path, source_column=None, csv_args=None, encoding=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a CSV file into a list of documents.\nEach document represents one row of the CSV file. 
Every row is converted into a\nkey/value pair and output to a new line in the document\u2019s page_content.\nThe source for each document loaded from csv is set to the value of the\nfile_path argument for all documents by default.\nYou can override this by setting the source_column argument to the\nname of a column in the CSV file.\nThe source of each document will then be set to the value of the column\nwith the name specified in source_column.\nOutput Example:\ncolumn1: value1\ncolumn2: value2\ncolumn3: value3\nParameters\nfile_path (str) \u2013 \nsource_column (Optional[str]) \u2013 \ncsv_args (Optional[Dict]) \u2013 \nencoding (Optional[str]) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-9", "text": "load()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.ChatGPTLoader(log_file, num_logs=-1)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads conversations from exported ChatGPT data.\nParameters\nlog_file (str) \u2013 \nnum_logs (int) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.CoNLLULoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoad CoNLL-U files.\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad from file path.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.CollegeConfidentialLoader(web_path, header_template=None, verify=True)[source]\uf0c1\nBases: langchain.document_loaders.web_base.WebBaseLoader\nLoader that loads College Confidential webpages.\nParameters\nweb_path (Union[str, List[str]]) \u2013 \nheader_template (Optional[dict]) \u2013 \nverify (Optional[bool]) \u2013 \nload()[source]\uf0c1\nLoad webpage.\nReturn 
type\nList[langchain.schema.Document]\nclass langchain.document_loaders.ConfluenceLoader(url, api_key=None, username=None, oauth2=None, token=None, cloud=True, number_of_retries=3, min_retry_seconds=2, max_retry_seconds=10, confluence_kwargs=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoad Confluence pages. Port of https://llamahub.ai/l/confluence\nThis currently supports username/api_key, Oauth2 login or personal access token\nauthentication.", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-10", "text": "This currently supports username/api_key, Oauth2 login or personal access token\nauthentication.\nSpecify a list of page_ids and/or a space_key to load the corresponding pages into\nDocument objects; if both are specified, the union of both sets will be returned.\nYou can also specify a boolean include_attachments to include attachments; this\nis set to False by default. If set to True, all attachments will be downloaded and\nConfluenceReader will extract the text from the attachments and add it to the\nDocument object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,\nSVG, Word and Excel.\nThe Confluence API supports different formats of page content. The storage format is the\nraw XML representation for storage. The view format is the HTML representation for\nviewing, with macros rendered as they would appear to users. 
You can pass\nan enum content_format argument to load() to specify the content format; this is\nset to ContentFormat.STORAGE by default.\nHint: space_key and page_id can both be found in the URL of a page in Confluence\n- https://yoursite.atlassian.com/wiki/spaces//pages/\nExample\nfrom langchain.document_loaders import ConfluenceLoader\nloader = ConfluenceLoader(\n url=\"https://yoursite.atlassian.com/wiki\",\n username=\"me\",\n api_key=\"12345\"\n)\ndocuments = loader.load(space_key=\"SPACE\", limit=50)\nParameters\nurl (str) \u2013 _description_\napi_key (str, optional) \u2013 _description_, defaults to None\nusername (str, optional) \u2013 _description_, defaults to None\noauth2 (dict, optional) \u2013 _description_, defaults to {}\ntoken (str, optional) \u2013 _description_, defaults to None\ncloud (bool, optional) \u2013 _description_, defaults to True", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-11", "text": "cloud (bool, optional) \u2013 _description_, defaults to True\nnumber_of_retries (Optional[int], optional) \u2013 How many times to retry, defaults to 3\nmin_retry_seconds (Optional[int], optional) \u2013 defaults to 2\nmax_retry_seconds (Optional[int], optional) \u2013 defaults to 10\nconfluence_kwargs (dict, optional) \u2013 additional kwargs to initialize confluence with\nRaises\nValueError \u2013 Errors while validating input\nImportError \u2013 Required dependencies not installed.\nstatic validate_init_args(url=None, api_key=None, username=None, oauth2=None, token=None)[source]\uf0c1\nValidates proper combinations of init arguments.\nParameters\nurl (Optional[str]) \u2013 \napi_key (Optional[str]) \u2013 \nusername (Optional[str]) \u2013 \noauth2 (Optional[dict]) \u2013 \ntoken (Optional[str]) \u2013 \nReturn type\nOptional[List]\nload(space_key=None, page_ids=None, label=None, cql=None, include_restricted_content=False, include_archived_content=False, include_attachments=False, 
include_comments=False, content_format=ContentFormat.STORAGE, limit=50, max_pages=1000, ocr_languages=None)[source]\uf0c1\nParameters\nspace_key (Optional[str], optional) \u2013 Space key retrieved from a Confluence URL, defaults to None\npage_ids (Optional[List[str]], optional) \u2013 List of specific page IDs to load, defaults to None\nlabel (Optional[str], optional) \u2013 Get all pages with this label, defaults to None\ncql (Optional[str], optional) \u2013 CQL Expression, defaults to None\ninclude_restricted_content (bool, optional) \u2013 defaults to False\ninclude_archived_content (bool, optional) \u2013 Whether to include archived content,\ndefaults to False\ninclude_attachments (bool, optional) \u2013 defaults to False\ninclude_comments (bool, optional) \u2013 defaults to False", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-12", "text": "include_comments (bool, optional) \u2013 defaults to False\ncontent_format (ContentFormat) \u2013 Specify content format, defaults to ContentFormat.STORAGE\nlimit (int, optional) \u2013 Maximum number of pages to retrieve per request, defaults to 50\nmax_pages (int, optional) \u2013 Maximum number of pages to retrieve in total, defaults to 1000\nocr_languages (str, optional) \u2013 The languages to use for the Tesseract agent. To use a\nlanguage, you\u2019ll first need to install the appropriate\nTesseract language pack.\nRaises\nValueError \u2013 _description_\nImportError \u2013 _description_\nReturns\n_description_\nReturn type\nList[Document]\npaginate_request(retrieval_method, **kwargs)[source]\uf0c1\nPaginate the various methods to retrieve groups of pages.\nUnfortunately, due to page size, sometimes the Confluence API\ndoesn\u2019t match the limit value. If limit is >100, Confluence\nseems to cap the response to 100. 
Also, due to the Atlassian Python\npackage, we don\u2019t get the \u201cnext\u201d values from the \u201c_links\u201d key because\nthey only return the value from the results key. So here, the pagination\nstarts from 0 and goes until the max_pages, getting the limit number\nof pages with each request. We have to manually check if there\nare more docs based on the length of the returned list of pages, rather than\njust checking for the presence of a next key in the response like this page\nwould have you do:\nhttps://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/\nParameters\nretrieval_method (callable) \u2013 Function used to retrieve docs\nkwargs (Any) \u2013 \nReturns\nList of documents\nReturn type\nList\nis_public_page(page)[source]\uf0c1\nCheck if a page is publicly accessible.\nParameters\npage (dict) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-13", "text": "Check if a page is publicly accessible.\nParameters\npage (dict) \u2013 \nReturn type\nbool\nprocess_pages(pages, include_restricted_content, include_attachments, include_comments, content_format, ocr_languages=None)[source]\uf0c1\nProcess a list of pages into a list of documents.\nParameters\npages (List[dict]) \u2013 \ninclude_restricted_content (bool) \u2013 \ninclude_attachments (bool) \u2013 \ninclude_comments (bool) \u2013 \ncontent_format (langchain.document_loaders.confluence.ContentFormat) \u2013 \nocr_languages (Optional[str]) \u2013 \nReturn type\nList[langchain.schema.Document]\nprocess_page(page, include_attachments, include_comments, content_format, ocr_languages=None)[source]\uf0c1\nParameters\npage (dict) \u2013 \ninclude_attachments (bool) \u2013 \ninclude_comments (bool) \u2013 \ncontent_format (langchain.document_loaders.confluence.ContentFormat) \u2013 \nocr_languages (Optional[str]) \u2013 \nReturn type\nlangchain.schema.Document\nprocess_attachment(page_id, 
ocr_languages=None)[source]\uf0c1\nParameters\npage_id (str) \u2013 \nocr_languages (Optional[str]) \u2013 \nReturn type\nList[str]\nprocess_pdf(link, ocr_languages=None)[source]\uf0c1\nParameters\nlink (str) \u2013 \nocr_languages (Optional[str]) \u2013 \nReturn type\nstr\nprocess_image(link, ocr_languages=None)[source]\uf0c1\nParameters\nlink (str) \u2013 \nocr_languages (Optional[str]) \u2013 \nReturn type\nstr\nprocess_doc(link)[source]\uf0c1\nParameters\nlink (str) \u2013 \nReturn type\nstr\nprocess_xls(link)[source]\uf0c1\nParameters\nlink (str) \u2013 \nReturn type\nstr\nprocess_svg(link, ocr_languages=None)[source]\uf0c1\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-14", "text": "str\nprocess_svg(link, ocr_languages=None)[source]\uf0c1\nParameters\nlink (str) \u2013 \nocr_languages (Optional[str]) \u2013 \nReturn type\nstr\nclass langchain.document_loaders.DataFrameLoader(data_frame, page_content_column='text')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoad Pandas DataFrames.\nParameters\ndata_frame (Any) \u2013 \npage_content_column (str) \u2013 \nlazy_load()[source]\uf0c1\nLazy load records from dataframe.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad full dataframe.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.DiffbotLoader(api_token, urls, continue_on_failure=True)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Diffbot file json.\nParameters\napi_token (str) \u2013 \nurls (List[str]) \u2013 \ncontinue_on_failure (bool) \u2013 \nload()[source]\uf0c1\nExtract text from Diffbot on all the URLs and return Document instances\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.DirectoryLoader(path, glob='**/[!.]*', silent_errors=False, load_hidden=False, loader_cls=, loader_kwargs=None, recursive=False, show_progress=False, 
use_multithreading=False, max_concurrency=4)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoading logic for loading documents from a directory.\nParameters\npath (str) \u2013 \nglob (str) \u2013 \nsilent_errors (bool) \u2013 \nload_hidden (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-15", "text": "silent_errors (bool) \u2013 \nload_hidden (bool) \u2013 \nloader_cls (Union[Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], Type[langchain.document_loaders.text.TextLoader], Type[langchain.document_loaders.html_bs.BSHTMLLoader]]) \u2013 \nloader_kwargs (Optional[dict]) \u2013 \nrecursive (bool) \u2013 \nshow_progress (bool) \u2013 \nuse_multithreading (bool) \u2013 \nmax_concurrency (int) \u2013 \nload_file(item, path, docs, pbar)[source]\uf0c1\nParameters\nitem (pathlib.Path) \u2013 \npath (pathlib.Path) \u2013 \ndocs (List[langchain.schema.Document]) \u2013 \npbar (Optional[Any]) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.DiscordChatLoader(chat_log, user_id_col='ID')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoad Discord chat logs.\nParameters\nchat_log (pd.DataFrame) \u2013 \nuser_id_col (str) \u2013 \nload()[source]\uf0c1\nLoad all chat messages.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.DocugamiLoader(*, api='https://api.docugami.com/v1preview1', access_token=None, docset_id=None, document_ids=None, file_paths=None, min_chunk_size=32)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel\nLoader that loads processed docs from Docugami.\nTo use, you should have the lxml python package installed.\nParameters\napi (str) \u2013 \naccess_token (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} 
+{"id": "98c621fca5f3-16", "text": "Parameters\napi (str) \u2013 \naccess_token (Optional[str]) \u2013 \ndocset_id (Optional[str]) \u2013 \ndocument_ids (Optional[Sequence[str]]) \u2013 \nfile_paths (Optional[Sequence[Union[pathlib.Path, str]]]) \u2013 \nmin_chunk_size (int) \u2013 \nReturn type\nNone\nattribute access_token: Optional[str] = None\uf0c1\nattribute api: str = 'https://api.docugami.com/v1preview1'\uf0c1\nattribute docset_id: Optional[str] = None\uf0c1\nattribute document_ids: Optional[Sequence[str]] = None\uf0c1\nattribute file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None\uf0c1\nattribute min_chunk_size: int = 32\uf0c1\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.Docx2txtLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader, abc.ABC\nLoads a DOCX with docx2txt and chunks at character level.\nBy default it checks for a local file, but if the file is a web path, it will download it\nto a temporary file, use that, and then clean up the temporary file after completion.\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad given path as single page.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.DuckDBLoader(query, database=':memory:', read_only=False, config=None, page_content_columns=None, metadata_columns=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a query result from DuckDB into a list of documents.\nEach document represents one row of the result. The page_content_columns", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
By default, all columns\nare written into the page_content and none into the metadata.\nParameters\nquery (str) \u2013 \ndatabase (str) \u2013 \nread_only (bool) \u2013 \nconfig (Optional[Dict[str, str]]) \u2013 \npage_content_columns (Optional[List[str]]) \u2013 \nmetadata_columns (Optional[List[str]]) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.EmbaasBlobLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={})[source]\uf0c1\nBases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseBlobParser\nWrapper around embaas\u2019s document byte loader service.\nTo use, you should have the\nenvironment variable EMBAAS_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\n# Default parsing\nfrom langchain.document_loaders.embaas import EmbaasBlobLoader\nloader = EmbaasBlobLoader()\nblob = Blob.from_path(path=\"example.mp3\")\ndocuments = loader.parse(blob=blob)\n# Custom api parameters (create embeddings automatically)\nfrom langchain.document_loaders.embaas import EmbaasBlobLoader\nloader = EmbaasBlobLoader(\n params={\n \"should_embed\": True,\n \"model\": \"e5-large-v2\",\n \"chunk_size\": 256,\n \"chunk_splitter\": \"CharacterTextSplitter\"\n }\n)", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-18", "text": "\"chunk_splitter\": \"CharacterTextSplitter\"\n }\n)\nblob = Blob.from_path(path=\"example.pdf\")\ndocuments = loader.parse(blob=blob)\nParameters\nembaas_api_key (Optional[str]) \u2013 \napi_url (str) \u2013 \nparams (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) \u2013 \nReturn type\nNone\nlazy_parse(blob)[source]\uf0c1\nLazy parsing interface.\nSubclasses are required to implement this method.\nParameters\nblob 
(langchain.document_loaders.blob_loaders.schema.Blob) \u2013 Blob instance\nReturns\nGenerator of documents\nReturn type\nIterator[langchain.schema.Document]\nclass langchain.document_loaders.EmbaasLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={}, file_path, blob_loader=None)[source]\uf0c1\nBases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseLoader\nWrapper around embaas\u2019s document loader service.\nTo use, you should have the\nenvironment variable EMBAAS_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\n# Default parsing\nfrom langchain.document_loaders.embaas import EmbaasLoader\nloader = EmbaasLoader(file_path=\"example.mp3\")\ndocuments = loader.load()\n# Custom api parameters (create embeddings automatically)\nfrom langchain.document_loaders.embaas import EmbaasBlobLoader\nloader = EmbaasBlobLoader(\n file_path=\"example.pdf\",\n params={\n \"should_embed\": True,\n \"model\": \"e5-large-v2\",\n \"chunk_size\": 256,", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-19", "text": "\"chunk_size\": 256,\n \"chunk_splitter\": \"CharacterTextSplitter\"\n }\n)\ndocuments = loader.load()\nParameters\nembaas_api_key (Optional[str]) \u2013 \napi_url (str) \u2013 \nparams (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) \u2013 \nfile_path (str) \u2013 \nblob_loader (Optional[langchain.document_loaders.embaas.EmbaasBlobLoader]) \u2013 \nReturn type\nNone\nattribute blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None\uf0c1\nThe blob loader to use. 
If not provided, a default one will be created.\nattribute file_path: str [Required]\uf0c1\nThe path to the file to load.\nlazy_load()[source]\uf0c1\nLoad the documents from the file path lazily.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nload_and_split(text_splitter=None)[source]\uf0c1\nLoad documents and split into chunks.\nParameters\ntext_splitter (Optional[langchain.text_splitter.TextSplitter]) \u2013 \nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.EverNoteLoader(file_path, load_single_document=True)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nEverNote Loader.\nLoads an EverNote notebook export file e.g. my_notebook.enex into Documents.\nInstructions on producing this file can be found at\nhttps://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML\nCurrently only the plain text in the note is extracted and stored as the contents", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-20", "text": "Currently only the plain text in the note is extracted and stored as the contents\nof the Document, any non content metadata (e.g. 
\u2018author\u2019, \u2018created\u2019, \u2018updated\u2019 etc.\nbut not \u2018content-raw\u2019 or \u2018resource\u2019) tags on the note will be extracted and stored\nas metadata on the Document.\nParameters\nfile_path (str) \u2013 The path to the notebook export with a .enex extension\nload_single_document (bool) \u2013 Whether or not to concatenate the content of all\nnotes into a single long Document.\nIf this is set to True, the only metadata on the document will be the\n\u2018source\u2019, which contains the file name of the export.\nload()[source]\uf0c1\nLoad documents from EverNote export file.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.FacebookChatLoader(path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads a Facebook messages JSON directory dump.\nParameters\npath (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.FaunaLoader(query, page_content_field, secret, metadata_fields=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nFaunaDB Loader.\nParameters\nquery (str) \u2013 \npage_content_field (str) \u2013 \nsecret (str) \u2013 \nmetadata_fields (Optional[Sequence[str]]) \u2013 \nquery\uf0c1\nThe FQL query string to execute.\nType\nstr\npage_content_field\uf0c1\nThe field that contains the content of each page.\nType\nstr\nsecret\uf0c1\nThe secret key for authenticating to FaunaDB.\nType\nstr\nmetadata_fields\uf0c1\nOptional list of field names to include in metadata.\nType\nOptional[Sequence[str]]", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-21", "text": "Optional list of field names to include in metadata.\nType\nOptional[Sequence[str]]\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nlazy_load()[source]\uf0c1\nA lazy loader for document content.\nReturn 
type\nIterator[langchain.schema.Document]\nclass langchain.document_loaders.FigmaFileLoader(access_token, ids, key)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Figma file json.\nParameters\naccess_token (str) \u2013 \nids (str) \u2013 \nkey (str) \u2013 \nload()[source]\uf0c1\nLoad file\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.FileSystemBlobLoader(path, *, glob='**/[!.]*', suffixes=None, show_progress=False)[source]\uf0c1\nBases: langchain.document_loaders.blob_loaders.schema.BlobLoader\nBlob loader for the local file system.\nExample:\nfrom langchain.document_loaders.blob_loaders import FileSystemBlobLoader\nloader = FileSystemBlobLoader(\"/path/to/directory\")\nfor blob in loader.yield_blobs():\n print(blob)\nParameters\npath (Union[str, pathlib.Path]) \u2013 \nglob (str) \u2013 \nsuffixes (Optional[Sequence[str]]) \u2013 \nshow_progress (bool) \u2013 \nReturn type\nNone\nyield_blobs()[source]\uf0c1\nYield blobs that match the requested pattern.\nReturn type\nIterable[langchain.document_loaders.blob_loaders.schema.Blob]\ncount_matching_files()[source]\uf0c1\nCount files that match the pattern without loading them.\nReturn type\nint\nclass langchain.document_loaders.GCSDirectoryLoader(project_name, bucket, prefix='')[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-22", "text": "Bases: langchain.document_loaders.base.BaseLoader\nLoading logic for loading documents from GCS.\nParameters\nproject_name (str) \u2013 \nbucket (str) \u2013 \nprefix (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.GCSFileLoader(project_name, bucket, blob)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoading logic for loading documents from GCS.\nParameters\nproject_name (str) \u2013 \nbucket (str) \u2013 \nblob (str) \u2013 
\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.GitHubIssuesLoader(*, repo, access_token, include_prs=True, milestone=None, state=None, assignee=None, creator=None, mentioned=None, labels=None, sort=None, direction=None, since=None)[source]\uf0c1\nBases: langchain.document_loaders.github.BaseGitHubLoader\nParameters\nrepo (str) \u2013 \naccess_token (str) \u2013 \ninclude_prs (bool) \u2013 \nmilestone (Optional[Union[int, Literal['*', 'none']]]) \u2013 \nstate (Optional[Literal['open', 'closed', 'all']]) \u2013 \nassignee (Optional[str]) \u2013 \ncreator (Optional[str]) \u2013 \nmentioned (Optional[str]) \u2013 \nlabels (Optional[List[str]]) \u2013 \nsort (Optional[Literal['created', 'updated', 'comments']]) \u2013 \ndirection (Optional[Literal['asc', 'desc']]) \u2013 \nsince (Optional[str]) \u2013 \nReturn type\nNone\nattribute assignee: Optional[str] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-23", "text": "Return type\nNone\nattribute assignee: Optional[str] = None\uf0c1\nFilter on assigned user. Pass \u2018none\u2019 for no user and \u2018*\u2019 for any user.\nattribute creator: Optional[str] = None\uf0c1\nFilter on the user that created the issue.\nattribute direction: Optional[Literal['asc', 'desc']] = None\uf0c1\nThe direction to sort the results by. Can be one of: \u2018asc\u2019, \u2018desc\u2019.\nattribute include_prs: bool = True\uf0c1\nIf True, include Pull Requests in results; otherwise ignore them.\nattribute labels: Optional[List[str]] = None\uf0c1\nLabel names to filter on. 
Example: bug,ui,@high.\nattribute mentioned: Optional[str] = None\uf0c1\nFilter on a user that\u2019s mentioned in the issue.\nattribute milestone: Optional[Union[int, Literal['*', 'none']]] = None\uf0c1\nIf integer is passed, it should be a milestone\u2019s number field.\nIf the string \u2018*\u2019 is passed, issues with any milestone are accepted.\nIf the string \u2018none\u2019 is passed, issues without milestones are returned.\nattribute since: Optional[str] = None\uf0c1\nOnly show notifications updated after the given time.\nThis is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nattribute sort: Optional[Literal['created', 'updated', 'comments']] = None\uf0c1\nWhat to sort results by. Can be one of: \u2018created\u2019, \u2018updated\u2019, \u2018comments\u2019.\nDefault is \u2018created\u2019.\nattribute state: Optional[Literal['open', 'closed', 'all']] = None\uf0c1\nFilter on issue state. Can be one of: \u2018open\u2019, \u2018closed\u2019, \u2018all\u2019.\nlazy_load()[source]\uf0c1\nGet issues of a GitHub repository.\nReturns\npage_content\nmetadata\nurl\ntitle\ncreator", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-24", "text": "Returns\npage_content\nmetadata\nurl\ntitle\ncreator\ncreated_at\nlast_update_time\nclosed_time\nnumber of comments\nstate\nlabels\nassignee\nassignees\nmilestone\nlocked\nnumber\nis_pull_request\nReturn type\nA list of Documents with attributes\nload()[source]\uf0c1\nGet issues of a GitHub repository.\nReturns\npage_content\nmetadata\nurl\ntitle\ncreator\ncreated_at\nlast_update_time\nclosed_time\nnumber of comments\nstate\nlabels\nassignee\nassignees\nmilestone\nlocked\nnumber\nis_pull_request\nReturn type\nA list of Documents with attributes\nparse_issue(issue)[source]\uf0c1\nCreate Document objects from a list of GitHub issues.\nParameters\nissue (dict) \u2013 \nReturn type\nlangchain.schema.Document\nproperty query_params: str\uf0c1\nproperty url: 
str\uf0c1\nclass langchain.document_loaders.GitLoader(repo_path, clone_url=None, branch='main', file_filter=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads files from a Git repository into a list of documents.\nRepository can be local on disk available at repo_path,\nor remote at clone_url that will be cloned to repo_path.\nCurrently supports only text files.\nEach document represents one file in the repository. The path points to\nthe local Git repository, and the branch specifies the branch to load\nfiles from. By default, it loads from the main branch.\nParameters\nrepo_path (str) \u2013 \nclone_url (Optional[str]) \u2013 \nbranch (Optional[str]) \u2013 \nfile_filter (Optional[Callable[[str], bool]]) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-25", "text": "Load data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.GitbookLoader(web_page, load_all_paths=False, base_url=None, content_selector='main')[source]\uf0c1\nBases: langchain.document_loaders.web_base.WebBaseLoader\nLoad GitBook data.\nload from either a single page, or\nload all (relative) paths in the navbar.\nParameters\nweb_page (str) \u2013 \nload_all_paths (bool) \u2013 \nbase_url (Optional[str]) \u2013 \ncontent_selector (str) \u2013 \nload()[source]\uf0c1\nFetch text from one single GitBook page.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.GoogleApiClient(credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), service_account_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'))[source]\uf0c1\nBases: object\nA Generic Google Api Client.\nTo use, you should have the google_auth_oauthlib,youtube_transcript_api,google\npython 
package installed.\nAs the Google API expects credentials, you need to set up a Google account and\nregister your service. \u201chttps://developers.google.com/docs/api/quickstart/python\u201d\nExample\nfrom langchain.document_loaders import GoogleApiClient\ngoogle_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n)\nParameters\ncredentials_path (pathlib.Path) \u2013 \nservice_account_path (pathlib.Path) \u2013 \ntoken_path (pathlib.Path) \u2013 \nReturn type\nNone\ncredentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-26", "text": "service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')\uf0c1\ntoken_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')\uf0c1\nclassmethod validate_channel_or_videoIds_is_set(values)[source]\uf0c1\nValidate that either channel_name or video_ids is set, but not both.\nParameters\nvalues (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nclass langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client, channel_name=None, video_ids=None, add_video_info=True, captions_language='en', continue_on_failure=False)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads all videos from a channel.\nTo use, you should have the googleapiclient and youtube_transcript_api\npython packages installed.\nAs the service needs a google_api_client, you first have to initialize\nthe GoogleApiClient.\nAdditionally, you have to either provide a channel name or a list of video IDs.\n\u201chttps://developers.google.com/docs/api/quickstart/python\u201d\nExample\nfrom langchain.document_loaders import GoogleApiClient\nfrom langchain.document_loaders import GoogleApiYoutubeLoader\ngoogle_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n)\nloader = 
GoogleApiYoutubeLoader(\n google_api_client=google_api_client,\n channel_name=\"CodeAesthetic\"\n)\nloader.load()\nParameters\ngoogle_api_client (langchain.document_loaders.youtube.GoogleApiClient) \u2013 \nchannel_name (Optional[str]) \u2013 \nvideo_ids (Optional[List[str]]) \u2013 \nadd_video_info (bool) \u2013 \ncaptions_language (str) \u2013 \ncontinue_on_failure (bool) \u2013 \nReturn type\nNone\ngoogle_api_client: langchain.document_loaders.youtube.GoogleApiClient\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-27", "text": "Return type\nNone\ngoogle_api_client: langchain.document_loaders.youtube.GoogleApiClient\uf0c1\nchannel_name: Optional[str] = None\uf0c1\nvideo_ids: Optional[List[str]] = None\uf0c1\nadd_video_info: bool = True\uf0c1\ncaptions_language: str = 'en'\uf0c1\ncontinue_on_failure: bool = False\uf0c1\nclassmethod validate_channel_or_videoIds_is_set(values)[source]\uf0c1\nValidate that either channel_name or video_ids is set, but not both.\nParameters\nvalues (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.GoogleDriveLoader(*, service_account_key=PosixPath('/home/docs/.credentials/keys.json'), credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'), folder_id=None, document_ids=None, file_ids=None, recursive=False, file_types=None, load_trashed_files=False, file_loader_cls=None, file_loader_kwargs={})[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel\nLoader that loads Google Docs from Google Drive.\nParameters\nservice_account_key (pathlib.Path) \u2013 \ncredentials_path (pathlib.Path) \u2013 \ntoken_path (pathlib.Path) \u2013 \nfolder_id (Optional[str]) \u2013 \ndocument_ids (Optional[List[str]]) \u2013 \nfile_ids (Optional[List[str]]) \u2013 
\nrecursive (bool) \u2013 \nfile_types (Optional[Sequence[str]]) \u2013 \nload_trashed_files (bool) \u2013 \nfile_loader_cls (Any) \u2013 \nfile_loader_kwargs (Dict[str, Any]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-28", "text": "file_loader_kwargs (Dict[str, Any]) \u2013 \nReturn type\nNone\nattribute credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')\uf0c1\nattribute document_ids: Optional[List[str]] = None\uf0c1\nattribute file_ids: Optional[List[str]] = None\uf0c1\nattribute file_loader_cls: Any = None\uf0c1\nattribute file_loader_kwargs: Dict[str, Any] = {}\uf0c1\nattribute file_types: Optional[Sequence[str]] = None\uf0c1\nattribute folder_id: Optional[str] = None\uf0c1\nattribute load_trashed_files: bool = False\uf0c1\nattribute recursive: bool = False\uf0c1\nattribute service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')\uf0c1\nattribute token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')\uf0c1\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.GutenbergLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that uses urllib to load .txt web files.\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad file.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.HNLoader(web_path, header_template=None, verify=True)[source]\uf0c1\nBases: langchain.document_loaders.web_base.WebBaseLoader\nLoad Hacker News data from either main page results or the comments page.\nParameters\nweb_path (Union[str, List[str]]) \u2013 \nheader_template (Optional[dict]) \u2013 \nverify (Optional[bool]) \u2013 \nload()[source]\uf0c1\nGet important HN webpage information.\nComponents are:\ntitle\ncontent\nsource url,\ntime of post\nauthor of the post", "source": 
"https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-29", "text": "title\ncontent\nsource url,\ntime of post\nauthor of the post\nnumber of comments\nrank of the post\nReturn type\nList[langchain.schema.Document]\nload_comments(soup_info)[source]\uf0c1\nLoad comments from a HN post.\nParameters\nsoup_info (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nload_results(soup)[source]\uf0c1\nLoad items from an HN page.\nParameters\nsoup (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.HuggingFaceDatasetLoader(path, page_content_column='text', name=None, data_dir=None, data_files=None, cache_dir=None, keep_in_memory=None, save_infos=False, use_auth_token=None, num_proc=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoading logic for loading documents from the Hugging Face Hub.\nParameters\npath (str) \u2013 \npage_content_column (str) \u2013 \nname (Optional[str]) \u2013 \ndata_dir (Optional[str]) \u2013 \ndata_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) \u2013 \ncache_dir (Optional[str]) \u2013 \nkeep_in_memory (Optional[bool]) \u2013 \nsave_infos (bool) \u2013 \nuse_auth_token (Optional[Union[bool, str]]) \u2013 \nnum_proc (Optional[int]) \u2013 \nlazy_load()[source]\uf0c1\nLoad documents lazily.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.IFixitLoader(web_path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-30", "text": "Bases: langchain.document_loaders.base.BaseLoader\nLoad iFixit repair guides, device wikis and answers.\niFixit is the largest, open repair community on the web. 
The site contains nearly\n100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is\nlicensed under CC-BY.\nThis loader will allow you to download the text of a repair guide, text of Q&A\u2019s\nand wikis from devices on iFixit using their open APIs and web scraping.\nParameters\nweb_path (str) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nstatic load_suggestions(query='', doc_type='all')[source]\uf0c1\nParameters\nquery (str) \u2013 \ndoc_type (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nload_questions_and_answers(url_override=None)[source]\uf0c1\nParameters\nurl_override (Optional[str]) \u2013 \nReturn type\nList[langchain.schema.Document]\nload_device(url_override=None, include_guides=True)[source]\uf0c1\nParameters\nurl_override (Optional[str]) \u2013 \ninclude_guides (bool) \u2013 \nReturn type\nList[langchain.schema.Document]\nload_guide(url_override=None)[source]\uf0c1\nParameters\nurl_override (Optional[str]) \u2013 \nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.IMSDbLoader(web_path, header_template=None, verify=True)[source]\uf0c1\nBases: langchain.document_loaders.web_base.WebBaseLoader\nLoader that loads IMSDb webpages.\nParameters\nweb_path (Union[str, List[str]]) \u2013 \nheader_template (Optional[dict]) \u2013 \nverify (Optional[bool]) \u2013 \nload()[source]\uf0c1\nLoad webpage.", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-31", "text": "verify (Optional[bool]) \u2013 \nload()[source]\uf0c1\nLoad webpage.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.ImageCaptionLoader(path_images, blip_processor='Salesforce/blip-image-captioning-base', blip_model='Salesforce/blip-image-captioning-base')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads the captions of an 
image\nParameters\npath_images (Union[str, List[str]]) \u2013 \nblip_processor (str) \u2013 \nblip_model (str) \u2013 \nload()[source]\uf0c1\nLoad from a list of image files\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.IuguLoader(resource, api_token=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that fetches data from IUGU.\nParameters\nresource (str) \u2013 \napi_token (Optional[str]) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.JSONLoader(file_path, jq_schema, content_key=None, metadata_func=None, text_content=True)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a JSON file and references a jq schema provided to load the text into\ndocuments.\nExample\n[{\u201ctext\u201d: \u2026}, {\u201ctext\u201d: \u2026}, {\u201ctext\u201d: \u2026}] -> schema = .[].text\n{\u201ckey\u201d: [{\u201ctext\u201d: \u2026}, {\u201ctext\u201d: \u2026}, {\u201ctext\u201d: \u2026}]} -> schema = .key[].text\n[\u201c\u201d, \u201c\u201d, \u201c\u201d] -> schema = .[]\nParameters\nfile_path (Union[str, pathlib.Path]) \u2013 \njq_schema (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-32", "text": "file_path (Union[str, pathlib.Path]) \u2013 \njq_schema (str) \u2013 \ncontent_key (Optional[str]) \u2013 \nmetadata_func (Optional[Callable[[Dict, Dict], Dict]]) \u2013 \ntext_content (bool) \u2013 \nload()[source]\uf0c1\nLoad and return documents from the JSON file.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.JoplinLoader(access_token=None, port=41184, host='localhost')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that fetches notes from Joplin.\nIn order to use this loader, you need to have Joplin running with the\nWeb Clipper enabled (look for 
\u201cWeb Clipper\u201d in the app settings).\nTo get the access token, you need to go to the Web Clipper options and\nunder \u201cAdvanced Options\u201d you will find the access token.\nYou can find more information about the Web Clipper service here:\nhttps://joplinapp.org/clipper/\nParameters\naccess_token (Optional[str]) \u2013 \nport (int) \u2013 \nhost (str) \u2013 \nReturn type\nNone\nlazy_load()[source]\uf0c1\nA lazy loader for document content.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.MWDumpLoader(file_path, encoding='utf8')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoad MediaWiki dump from XML file\nExample\nfrom langchain.document_loaders import MWDumpLoader\nloader = MWDumpLoader(\n file_path=\"myWiki.xml\",\n encoding=\"utf8\"\n)\ndocs = loader.load()", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-33", "text": "encoding=\"utf8\"\n)\ndocs = loader.load()\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\ntext_splitter = RecursiveCharacterTextSplitter(\n chunk_size=1000, chunk_overlap=0\n)\ntexts = text_splitter.split_documents(docs)\nParameters\nfile_path (str) \u2013 XML local file path\nencoding (str, optional) \u2013 Charset encoding, defaults to \u201cutf8\u201d\nload()[source]\uf0c1\nLoad from file path.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.MastodonTootsLoader(mastodon_accounts, number_toots=100, exclude_replies=False, access_token=None, api_base_url='https://mastodon.social')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nMastodon toots loader.\nParameters\nmastodon_accounts (Sequence[str]) \u2013 \nnumber_toots (Optional[int]) \u2013 \nexclude_replies (bool) \u2013 \naccess_token (Optional[str]) \u2013 \napi_base_url (str) 
\u2013 \nload()[source]\uf0c1\nLoad toots into documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.MathpixPDFLoader(file_path, processed_file_format='mmd', max_wait_time_seconds=500, should_clean_pdf=False, **kwargs)[source]\uf0c1\nBases: langchain.document_loaders.pdf.BasePDFLoader\nParameters\nfile_path (str) \u2013 \nprocessed_file_format (str) \u2013 \nmax_wait_time_seconds (int) \u2013 \nshould_clean_pdf (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nproperty headers: dict\uf0c1\nproperty url: str\uf0c1\nproperty data: dict\uf0c1\nsend_pdf()[source]\uf0c1\nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-34", "text": "property data: dict\uf0c1\nsend_pdf()[source]\uf0c1\nReturn type\nstr\nwait_for_processing(pdf_id)[source]\uf0c1\nParameters\npdf_id (str) \u2013 \nReturn type\nNone\nget_processed_pdf(pdf_id)[source]\uf0c1\nParameters\npdf_id (str) \u2013 \nReturn type\nstr\nclean_pdf(contents)[source]\uf0c1\nParameters\ncontents (str) \u2013 \nReturn type\nstr\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.MaxComputeLoader(query, api_wrapper, *, page_content_columns=None, metadata_columns=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a query result from an Alibaba Cloud MaxCompute table into documents.\nParameters\nquery (str) \u2013 \napi_wrapper (MaxComputeAPIWrapper) \u2013 \npage_content_columns (Optional[Sequence[str]]) \u2013 \nmetadata_columns (Optional[Sequence[str]]) \u2013 \nclassmethod from_params(query, endpoint, project, *, access_id=None, secret_access_key=None, **kwargs)[source]\uf0c1\nConvenience constructor that builds the MaxCompute API wrapper from given parameters.\nParameters\nquery (str) \u2013 SQL query to execute.\nendpoint (str) \u2013 MaxCompute endpoint.\nproject (str) \u2013 A project is a 
basic organizational unit of MaxCompute, which is\nsimilar to a database.\naccess_id (Optional[str]) \u2013 MaxCompute access ID. Should be passed in directly or set as the\nenvironment variable MAX_COMPUTE_ACCESS_ID.\nsecret_access_key (Optional[str]) \u2013 MaxCompute secret access key. Should be passed in\ndirectly or set as the environment variable\nMAX_COMPUTE_SECRET_ACCESS_KEY.\nkwargs (Any) \u2013 \nReturn type\nlangchain.document_loaders.max_compute.MaxComputeLoader", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-35", "text": "Return type\nlangchain.document_loaders.max_compute.MaxComputeLoader\nlazy_load()[source]\uf0c1\nA lazy loader for document content.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.MergedDataLoader(loaders)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nMerge documents from a list of loaders\nParameters\nloaders (List) \u2013 \nlazy_load()[source]\uf0c1\nLazy load docs from each individual loader.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad docs.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.MHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that uses beautiful soup to parse HTML files.\nParameters\nfile_path (str) \u2013 \nopen_encoding (Optional[str]) \u2013 \nbs_kwargs (Optional[dict]) \u2013 \nget_text_separator (str) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.ModernTreasuryLoader(resource, organization_id=None, api_key=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that fetches data from Modern 
Treasury.\nParameters\nresource (str) \u2013 \norganization_id (Optional[str]) \u2013 \napi_key (Optional[str]) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-36", "text": "Load data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.NotebookLoader(path, include_outputs=False, max_output_length=10, remove_newline=False, traceback=False)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads .ipynb notebook files.\nParameters\npath (str) \u2013 \ninclude_outputs (bool) \u2013 \nmax_output_length (int) \u2013 \nremove_newline (bool) \u2013 \ntraceback (bool) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.NotionDBLoader(integration_token, database_id, request_timeout_sec=10)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nNotion DB Loader.\nReads content from pages within a Notion Database.\n:param integration_token: Notion integration token.\n:type integration_token: str\n:param database_id: Notion database id.\n:type database_id: str\n:param request_timeout_sec: Timeout for Notion requests in seconds.\n:type request_timeout_sec: int\nParameters\nintegration_token (str) \u2013 \ndatabase_id (str) \u2013 \nrequest_timeout_sec (Optional[int]) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad documents from the Notion database.\n:returns: List of documents.\n:rtype: List[Document]\nReturn type\nList[langchain.schema.Document]\nload_page(page_summary)[source]\uf0c1\nRead a page.\nParameters\npage_summary (Dict[str, Any]) \u2013 \nReturn type\nlangchain.schema.Document\nclass langchain.document_loaders.NotionDirectoryLoader(path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader", 
"source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-37", "text": "Bases: langchain.document_loaders.base.BaseLoader\nLoader that loads Notion directory dump.\nParameters\npath (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.ObsidianLoader(path, encoding='UTF-8', collect_metadata=True)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Obsidian files from disk.\nParameters\npath (str) \u2013 \nencoding (str) \u2013 \ncollect_metadata (bool) \u2013 \nFRONT_MATTER_REGEX = re.compile('^---\\\\n(.*?)\\\\n---\\\\n', re.MULTILINE|re.DOTALL)\uf0c1\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.OneDriveFileLoader(*, file)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel\nParameters\nfile (File) \u2013 \nReturn type\nNone\nattribute file: File [Required]\uf0c1\nload()[source]\uf0c1\nLoad Documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.OneDriveLoader(*, settings=None, drive_id, folder_path=None, object_ids=None, auth_with_token=False)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel\nParameters\nsettings (langchain.document_loaders.onedrive._OneDriveSettings) \u2013 \ndrive_id (str) \u2013 \nfolder_path (Optional[str]) \u2013 \nobject_ids (Optional[List[str]]) \u2013 \nauth_with_token (bool) \u2013 \nReturn type\nNone\nattribute auth_with_token: bool = False\uf0c1\nattribute drive_id: str [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-38", "text": "attribute drive_id: str [Required]\uf0c1\nattribute folder_path: Optional[str] = None\uf0c1\nattribute object_ids: Optional[List[str]] = None\uf0c1\nattribute settings: 
langchain.document_loaders.onedrive._OneDriveSettings [Optional]\uf0c1\nload()[source]\uf0c1\nLoads all supported document files from the specified OneDrive drive and\nreturns a list of Document objects.\nReturns\nA list of Document objects\nrepresenting the loaded documents.\nReturn type\nList[Document]\nRaises\nValueError \u2013 If the specified drive ID\ndoes not correspond to a drive in the OneDrive storage.\nclass langchain.document_loaders.OnlinePDFLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.pdf.BasePDFLoader\nLoader that loads online PDFs.\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.OutlookMessageLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Outlook Message files using extract_msg.\nhttps://github.com/TeamMsgExtractor/msg-extractor\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.OpenCityDataLoader(city_id, dataset_id, limit)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Open city data.\nParameters\ncity_id (str) \u2013 \ndataset_id (str) \u2013 \nlimit (int) \u2013 \nlazy_load()[source]\uf0c1\nLazy load records.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad records.\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-39", "text": "load()[source]\uf0c1\nLoad records.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.PDFMinerLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.pdf.BasePDFLoader\nLoader that uses PDFMiner to load PDF files.\nParameters\nfile_path (str) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nEagerly load the content.\nReturn 
type\nList[langchain.schema.Document]\nlazy_load()[source]\uf0c1\nLazily load documents.\nReturn type\nIterator[langchain.schema.Document]\nclass langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.pdf.BasePDFLoader\nLoader that uses PDFMiner to load PDF files as HTML content.\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad file.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.PDFPlumberLoader(file_path, text_kwargs=None)[source]\uf0c1\nBases: langchain.document_loaders.pdf.BasePDFLoader\nLoader that uses pdfplumber to load PDF files.\nParameters\nfile_path (str) \u2013 \ntext_kwargs (Optional[Mapping[str, Any]]) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad file.\nReturn type\nList[langchain.schema.Document]\nlangchain.document_loaders.PagedPDFSplitter\uf0c1\nalias of langchain.document_loaders.pdf.PyPDFLoader\nclass langchain.document_loaders.PlaywrightURLLoader(urls, continue_on_failure=True, headless=True, remove_selectors=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-40", "text": "Bases: langchain.document_loaders.base.BaseLoader\nLoader that uses Playwright to load a page and unstructured to load the html.\nThis is useful for loading pages that require javascript to render.\nParameters\nurls (List[str]) \u2013 \ncontinue_on_failure (bool) \u2013 \nheadless (bool) \u2013 \nremove_selectors (Optional[List[str]]) \u2013 \nurls\uf0c1\nList of URLs to load.\nType\nList[str]\ncontinue_on_failure\uf0c1\nIf True, continue loading other URLs on failure.\nType\nbool\nheadless\uf0c1\nIf True, the browser will run in headless mode.\nType\nbool\nload()[source]\uf0c1\nLoad the specified URLs using Playwright and create Document instances.\nReturns\nA list of Document instances with loaded content.\nReturn 
type\nList[Document]\nclass langchain.document_loaders.PsychicLoader(api_key, connector_id, connection_id)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads documents from Psychic.dev.\nParameters\napi_key (str) \u2013 \nconnector_id (str) \u2013 \nconnection_id (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.PyMuPDFLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.pdf.BasePDFLoader\nLoader that uses PyMuPDF to load PDF files.\nParameters\nfile_path (str) \u2013 \nReturn type\nNone\nload(**kwargs)[source]\uf0c1\nLoad file.\nParameters\nkwargs (Optional[Any]) \u2013 \nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-41", "text": "kwargs (Optional[Any]) \u2013 \nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.PyPDFDirectoryLoader(path, glob='**/[!.]*.pdf', silent_errors=False, load_hidden=False, recursive=False)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a directory with PDF files with pypdf and chunks at character level.\nLoader also stores page numbers in metadatas.\nParameters\npath (str) \u2013 \nglob (str) \u2013 \nsilent_errors (bool) \u2013 \nload_hidden (bool) \u2013 \nrecursive (bool) \u2013 \nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.PyPDFLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.pdf.BasePDFLoader\nLoads a PDF with pypdf and chunks at character level.\nLoader also stores page numbers in metadatas.\nParameters\nfile_path (str) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad given path as pages.\nReturn type\nList[langchain.schema.Document]\nlazy_load()[source]\uf0c1\nLazy load given path as pages.\nReturn 
type\nIterator[langchain.schema.Document]\nclass langchain.document_loaders.PyPDFium2Loader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.pdf.BasePDFLoader\nLoads a PDF with pypdfium2 and chunks at character level.\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad given path as pages.\nReturn type\nList[langchain.schema.Document]\nlazy_load()[source]\uf0c1\nLazy load given path as pages.\nReturn type\nIterator[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-42", "text": "Lazy load given path as pages.\nReturn type\nIterator[langchain.schema.Document]\nclass langchain.document_loaders.PySparkDataFrameLoader(spark_session=None, df=None, page_content_column='text', fraction_of_memory=0.1)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoad PySpark DataFrames\nParameters\nspark_session (Optional[SparkSession]) \u2013 \ndf (Optional[Any]) \u2013 \npage_content_column (str) \u2013 \nfraction_of_memory (float) \u2013 \nget_num_rows()[source]\uf0c1\nGets the amount of \u201cfeasible\u201d rows for the DataFrame\nReturn type\nTuple[int, int]\nlazy_load()[source]\uf0c1\nA lazy loader for document content.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad from the dataframe.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.PythonLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.text.TextLoader\nLoad Python files, respecting any non-default encoding if specified.\nParameters\nfile_path (str) \u2013 \nclass langchain.document_loaders.ReadTheDocsLoader(path, encoding=None, errors=None, custom_html_tag=None, **kwargs)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads ReadTheDocs documentation directory dump.\nParameters\npath (Union[str, pathlib.Path]) \u2013 \nencoding (Optional[str]) \u2013 \nerrors (Optional[str]) \u2013 
\ncustom_html_tag (Optional[Tuple[str, dict]]) \u2013 \nkwargs (Optional[Any]) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.RecursiveUrlLoader(url, exclude_dirs=None)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-43", "text": "Bases: langchain.document_loaders.base.BaseLoader\nLoader that loads all child links from a given url.\nParameters\nurl (str) \u2013 \nexclude_dirs (Optional[str]) \u2013 \nReturn type\nNone\nget_child_links_recursive(url, visited=None)[source]\uf0c1\nRecursively get all child links starting with the path of the input URL.\nParameters\nurl (str) \u2013 \nvisited (Optional[Set[str]]) \u2013 \nReturn type\nSet[str]\nlazy_load()[source]\uf0c1\nA lazy loader for document content.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad web pages.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.RedditPostsLoader(client_id, client_secret, user_agent, search_queries, mode, categories=['new'], number_posts=10)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nReddit posts loader.\nRead posts on a subreddit.\nFirst you need to go to\nhttps://www.reddit.com/prefs/apps/\nand create your application\nParameters\nclient_id (str) \u2013 \nclient_secret (str) \u2013 \nuser_agent (str) \u2013 \nsearch_queries (Sequence[str]) \u2013 \nmode (str) \u2013 \ncategories (Sequence[str]) \u2013 \nnumber_posts (Optional[int]) \u2013 \nload()[source]\uf0c1\nLoad reddits.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.RoamLoader(path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Roam files from disk.\nParameters\npath (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]", "source": 
"https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-44", "text": "Load documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.S3DirectoryLoader(bucket, prefix='')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoading logic for loading documents from s3.\nParameters\nbucket (str) \u2013 \nprefix (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.S3FileLoader(bucket, key)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoading logic for loading documents from s3.\nParameters\nbucket (str) \u2013 \nkey (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.SRTLoader(file_path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader for .srt (subtitle) files.\nParameters\nfile_path (str) \u2013 \nload()[source]\uf0c1\nLoad using pysrt file.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.SeleniumURLLoader(urls, continue_on_failure=True, browser='chrome', binary_location=None, executable_path=None, headless=True, arguments=[])[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that uses Selenium to load a page and unstructured to load the html.\nThis is useful for loading pages that require javascript to render.\nParameters\nurls (List[str]) \u2013 \ncontinue_on_failure (bool) \u2013 \nbrowser (Literal['chrome', 'firefox']) \u2013 \nbinary_location (Optional[str]) \u2013 \nexecutable_path (Optional[str]) \u2013 \nheadless (bool) \u2013 \narguments (List[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-45", "text": "headless (bool) \u2013 \narguments (List[str]) \u2013 \nurls\uf0c1\nList of URLs to 
load.\nType\nList[str]\ncontinue_on_failure\uf0c1\nIf True, continue loading other URLs on failure.\nType\nbool\nbrowser\uf0c1\nThe browser to use, either \u2018chrome\u2019 or \u2018firefox\u2019.\nType\nstr\nbinary_location\uf0c1\nThe location of the browser binary.\nType\nOptional[str]\nexecutable_path\uf0c1\nThe path to the browser executable.\nType\nOptional[str]\nheadless\uf0c1\nIf True, the browser will run in headless mode.\nType\nbool\narguments [List[str]]\nList of arguments to pass to the browser.\nload()[source]\uf0c1\nLoad the specified URLs using Selenium and create Document instances.\nReturns\nA list of Document instances with loaded content.\nReturn type\nList[Document]\nclass langchain.document_loaders.SitemapLoader(web_path, filter_urls=None, parsing_function=None, blocksize=None, blocknum=0, meta_function=None, is_local=False)[source]\uf0c1\nBases: langchain.document_loaders.web_base.WebBaseLoader\nLoader that fetches a sitemap and loads those URLs.\nParameters\nweb_path (str) \u2013 \nfilter_urls (Optional[List[str]]) \u2013 \nparsing_function (Optional[Callable]) \u2013 \nblocksize (Optional[int]) \u2013 \nblocknum (int) \u2013 \nmeta_function (Optional[Callable]) \u2013 \nis_local (bool) \u2013 \nparse_sitemap(soup)[source]\uf0c1\nParse sitemap xml and load into a list of dicts.\nParameters\nsoup (Any) \u2013 \nReturn type\nList[dict]\nload()[source]\uf0c1\nLoad sitemap.\nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-46", "text": "Load sitemap.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.SlackDirectoryLoader(zip_path, workspace_url=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader for loading documents from a Slack directory dump.\nParameters\nzip_path (str) \u2013 \nworkspace_url (Optional[str]) \u2013 \nload()[source]\uf0c1\nLoad and return documents from the Slack 
directory dump.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.SnowflakeLoader(query, user, password, account, warehouse, role, database, schema, parameters=None, page_content_columns=None, metadata_columns=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a query result from Snowflake into a list of documents.\nEach document represents one row of the result. The page_content_columns\nare written into the page_content of the document. The metadata_columns\nare written into the metadata of the document. By default, all columns\nare written into the page_content and none into the metadata.\nParameters\nquery (str) \u2013 \nuser (str) \u2013 \npassword (str) \u2013 \naccount (str) \u2013 \nwarehouse (str) \u2013 \nrole (str) \u2013 \ndatabase (str) \u2013 \nschema (str) \u2013 \nparameters (Optional[Dict[str, Any]]) \u2013 \npage_content_columns (Optional[List[str]]) \u2013 \nmetadata_columns (Optional[List[str]]) \u2013 \nlazy_load()[source]\uf0c1\nA lazy loader for document content.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.SpreedlyLoader(access_token, resource)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-47", "text": "class langchain.document_loaders.SpreedlyLoader(access_token, resource)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that fetches data from Spreedly API.\nParameters\naccess_token (str) \u2013 \nresource (str) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.StripeLoader(resource, access_token=None)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that fetches data from Stripe.\nParameters\nresource (str) 
\u2013 \naccess_token (Optional[str]) \u2013 \nReturn type\nNone\nload()[source]\uf0c1\nLoad data into document objects.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.TelegramChatApiLoader(chat_entity=None, api_id=None, api_hash=None, username=None, file_path='telegram_data.json')[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Telegram chat json directory dump.\nParameters\nchat_entity (Optional[EntityLike]) \u2013 \napi_id (Optional[int]) \u2013 \napi_hash (Optional[str]) \u2013 \nusername (Optional[str]) \u2013 \nfile_path (str) \u2013 \nasync fetch_data_from_telegram()[source]\uf0c1\nFetch data from Telegram API and save it as a JSON file.\nReturn type\nNone\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.TelegramChatFileLoader(path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Telegram chat json directory dump.\nParameters\npath (str) \u2013 \nload()[source]\uf0c1\nLoad documents.", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-48", "text": "Parameters\npath (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nlangchain.document_loaders.TelegramChatLoader\uf0c1\nalias of langchain.document_loaders.telegram.TelegramChatFileLoader\nclass langchain.document_loaders.TextLoader(file_path, encoding=None, autodetect_encoding=False)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoad text files.\nParameters\nfile_path (str) \u2013 Path to the file to load.\nencoding (Optional[str]) \u2013 File encoding to use. If None, the file will be loaded
with the default system encoding.\nautodetect_encoding (bool) \u2013 Whether to try to autodetect the file encoding\nif the specified encoding fails.\nload()[source]\uf0c1\nLoad from file path.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.ToMarkdownLoader(url, api_key)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads HTML to markdown using 2markdown.\nParameters\nurl (str) \u2013 \napi_key (str) \u2013 \nlazy_load()[source]\uf0c1\nLazily load the file.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad file.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.TomlLoader(source)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nA TOML document loader that inherits from the BaseLoader class.\nThis class can be initialized with either a single source file or a source\ndirectory containing TOML files.\nParameters\nsource (Union[str, pathlib.Path]) \u2013 \nload()[source]\uf0c1\nLoad and return all documents.\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"}
+{"id": "98c621fca5f3-49", "text": "load()[source]\uf0c1\nLoad and return all documents.\nReturn type\nList[langchain.schema.Document]\nlazy_load()[source]\uf0c1\nLazily load the TOML documents from the source file or directory.\nReturn type\nIterator[langchain.schema.Document]\nclass langchain.document_loaders.TrelloLoader(client, board_name, *, include_card_name=True, include_comments=True, include_checklist=True, card_filter='all', extra_metadata=('due_date', 'labels', 'list', 'closed'))[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nTrello loader. 
Reads all cards from a Trello board.\nParameters\nclient (TrelloClient) \u2013 \nboard_name (str) \u2013 \ninclude_card_name (bool) \u2013 \ninclude_comments (bool) \u2013 \ninclude_checklist (bool) \u2013 \ncard_filter (Literal['closed', 'open', 'all']) \u2013 \nextra_metadata (Tuple[str, ...]) \u2013 \nclassmethod from_credentials(board_name, *, api_key=None, token=None, **kwargs)[source]\uf0c1\nConvenience constructor that builds TrelloClient init param for you.\nParameters\nboard_name (str) \u2013 The name of the Trello board.\napi_key (Optional[str]) \u2013 Trello API key. Can also be specified as environment variable\nTRELLO_API_KEY.\ntoken (Optional[str]) \u2013 Trello token. Can also be specified as environment variable\nTRELLO_TOKEN.\ninclude_card_name \u2013 Whether to include the name of the card in the document.\ninclude_comments \u2013 Whether to include the comments on the card in the\ndocument.\ninclude_checklist \u2013 Whether to include the checklist on the card in the\ndocument.\ncard_filter \u2013 Filter on card status. 
Valid values are \u201cclosed\u201d, \u201copen\u201d,\n\u201call\u201d.", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-50", "text": "\u201call\u201d.\nextra_metadata \u2013 List of additional metadata fields to include as document\nmetadata. Valid values are \u201cdue_date\u201d, \u201clabels\u201d, \u201clist\u201d, \u201cclosed\u201d.\nkwargs (Any) \u2013 \nReturn type\nlangchain.document_loaders.trello.TrelloLoader\nload()[source]\uf0c1\nLoads all cards from the specified Trello board.\nYou can filter the cards, metadata and text included by using the optional\nparameters.\nReturns\nA list of documents, one for each card in the board.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.TwitterTweetLoader(auth_handler, twitter_users, number_tweets=100)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nTwitter tweets loader.\nReads tweets from the given Twitter user handles.\nFirst you need to go to\nhttps://developer.twitter.com/en/docs/twitter-api\n/getting-started/getting-access-to-the-twitter-api\nto get your token. 
And create a v2 version of the app.\nParameters\nauth_handler (Union[OAuthHandler, OAuth2BearerHandler]) \u2013 \ntwitter_users (Sequence[str]) \u2013 \nnumber_tweets (Optional[int]) \u2013 \nload()[source]\uf0c1\nLoad tweets.\nReturn type\nList[langchain.schema.Document]\nclassmethod from_bearer_token(oauth2_bearer_token, twitter_users, number_tweets=100)[source]\uf0c1\nCreate a TwitterTweetLoader from OAuth2 bearer token.\nParameters\noauth2_bearer_token (str) \u2013 \ntwitter_users (Sequence[str]) \u2013 \nnumber_tweets (Optional[int]) \u2013 \nReturn type\nlangchain.document_loaders.twitter.TwitterTweetLoader\nclassmethod from_secrets(access_token, access_token_secret, consumer_key, consumer_secret, twitter_users, number_tweets=100)[source]\uf0c1\nCreate a TwitterTweetLoader from access tokens and secrets.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-51", "text": "Create a TwitterTweetLoader from access tokens and secrets.\nParameters\naccess_token (str) \u2013 \naccess_token_secret (str) \u2013 \nconsumer_key (str) \u2013 \nconsumer_secret (str) \u2013 \ntwitter_users (Sequence[str]) \u2013 \nnumber_tweets (Optional[int]) \u2013 \nReturn type\nlangchain.document_loaders.twitter.TwitterTweetLoader\nclass langchain.document_loaders.UnstructuredAPIFileIOLoader(file, mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileIOLoader\nLoader that uses the unstructured web API to load file IO objects.\nParameters\nfile (Union[IO, Sequence[IO]]) \u2013 \nmode (str) \u2013 \nurl (str) \u2013 \napi_key (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredAPIFileLoader(file_path='', mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source]\uf0c1\nBases: 
langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses the unstructured web API to load files.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nurl (str) \u2013 \napi_key (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredCSVLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load CSV files.\nParameters\nfile_path (str) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-52", "text": "mode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredEPubLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load epub files.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredEmailLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load email files.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredExcelLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load Microsoft Excel files.\nParameters\nfile_path (str) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredFileIOLoader(file, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: 
langchain.document_loaders.unstructured.UnstructuredBaseLoader\nLoader that uses unstructured to load file IO objects.\nParameters\nfile (Union[IO, Sequence[IO]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredFileLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredBaseLoader", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-53", "text": "Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader\nLoader that uses unstructured to load files.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredHTMLLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load HTML files.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredImageLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load image files, such as PNGs and JPGs.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredMarkdownLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load markdown files.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredODTLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: 
langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load open office ODT files.\nParameters\nfile_path (str) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-54", "text": "mode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredPDFLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load PDF files.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredPowerPointLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load powerpoint files.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredRSTLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load RST files.\nParameters\nfile_path (str) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredRTFLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load rtf files.\nParameters\nfile_path (str) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredURLLoader(urls, continue_on_failure=True, mode='single', show_progress_bar=False, **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader", "source": 
"https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-55", "text": "Bases: langchain.document_loaders.base.BaseLoader\nLoader that uses unstructured to load HTML files.\nParameters\nurls (List[str]) \u2013 \ncontinue_on_failure (bool) \u2013 \nmode (str) \u2013 \nshow_progress_bar (bool) \u2013 \nunstructured_kwargs (Any) \u2013 \nload()[source]\uf0c1\nLoad file.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.UnstructuredWordDocumentLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load word documents.\nParameters\nfile_path (Union[str, List[str]]) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.UnstructuredXMLLoader(file_path, mode='single', **unstructured_kwargs)[source]\uf0c1\nBases: langchain.document_loaders.unstructured.UnstructuredFileLoader\nLoader that uses unstructured to load XML files.\nParameters\nfile_path (str) \u2013 \nmode (str) \u2013 \nunstructured_kwargs (Any) \u2013 \nclass langchain.document_loaders.WeatherDataLoader(client, places)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nWeather Reader.\nReads the forecast & current weather of any location using OpenWeatherMap\u2019s free\nAPI. 
Check out \u2018https://openweathermap.org/appid\u2019 for more on how to generate a free\nOpenWeatherMap API key.\nParameters\nclient (OpenWeatherMapAPIWrapper) \u2013 \nplaces (Sequence[str]) \u2013 \nReturn type\nNone\nclassmethod from_params(places, *, openweathermap_api_key=None)[source]\uf0c1\nParameters\nplaces (Sequence[str]) \u2013 \nopenweathermap_api_key (Optional[str]) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-56", "text": "openweathermap_api_key (Optional[str]) \u2013 \nReturn type\nlangchain.document_loaders.weather.WeatherDataLoader\nlazy_load()[source]\uf0c1\nLazily load weather data for the given locations.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad weather data for the given locations.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.WebBaseLoader(web_path, header_template=None, verify=True)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that uses urllib and beautiful soup to load webpages.\nParameters\nweb_path (Union[str, List[str]]) \u2013 \nheader_template (Optional[dict]) \u2013 \nverify (Optional[bool]) \u2013 \nrequests_per_second: int = 2\uf0c1\nMax number of concurrent requests to make.\ndefault_parser: str = 'html.parser'\uf0c1\nDefault parser to use for BeautifulSoup.\nrequests_kwargs: Dict[str, Any] = {}\uf0c1\nkwargs for requests\nbs_get_text_kwargs: Dict[str, Any] = {}\uf0c1\nkwargs for beautifulsoup4 get_text\nweb_paths: List[str]\uf0c1\nproperty web_path: str\uf0c1\nasync fetch_all(urls)[source]\uf0c1\nFetch all urls concurrently with rate limiting.\nParameters\nurls (List[str]) \u2013 \nReturn type\nAny\nscrape_all(urls, parser=None)[source]\uf0c1\nFetch all urls, then return soups for all results.\nParameters\nurls (List[str]) \u2013 \nparser (Optional[str]) \u2013 \nReturn type\nList[Any]\nscrape(parser=None)[source]\uf0c1\nScrape data from the webpage and 
return it in BeautifulSoup format.\nParameters\nparser (Optional[str]) \u2013 \nReturn type\nAny\nlazy_load()[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-57", "text": "Return type\nAny\nlazy_load()[source]\uf0c1\nLazy load text from the url(s) in web_path.\nReturn type\nIterator[langchain.schema.Document]\nload()[source]\uf0c1\nLoad text from the url(s) in web_path.\nReturn type\nList[langchain.schema.Document]\naload()[source]\uf0c1\nLoad text from the urls in web_path async into Documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.WhatsAppChatLoader(path)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads a WhatsApp messages text file.\nParameters\npath (str) \u2013 \nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.document_loaders.WikipediaLoader(query, lang='en', load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoads a query result from www.wikipedia.org into a list of Documents.\nThe hard limit on the number of downloaded Documents is 300 for now.\nEach wiki page represents one Document.\nParameters\nquery (str) \u2013 \nlang (str) \u2013 \nload_max_docs (Optional[int]) \u2013 \nload_all_available_meta (Optional[bool]) \u2013 \ndoc_content_chars_max (Optional[int]) \u2013 \nload()[source]\uf0c1\nLoads the query result from Wikipedia into a list of Documents.\nReturns\nA list of Document objects representing the loaded Wikipedia pages.\nReturn type\nList[Document]\nclass langchain.document_loaders.YoutubeAudioLoader(urls, save_dir)[source]\uf0c1\nBases: langchain.document_loaders.blob_loaders.schema.BlobLoader\nLoad YouTube urls as audio file(s).\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "98c621fca5f3-58", 
"text": "Load YouTube urls as audio file(s).\nParameters\nurls (List[str]) \u2013 \nsave_dir (str) \u2013 \nyield_blobs()[source]\uf0c1\nYield audio blobs for each url.\nReturn type\nIterable[langchain.document_loaders.blob_loaders.schema.Blob]\nclass langchain.document_loaders.YoutubeLoader(video_id, add_video_info=False, language='en', translation='en', continue_on_failure=False)[source]\uf0c1\nBases: langchain.document_loaders.base.BaseLoader\nLoader that loads Youtube transcripts.\nParameters\nvideo_id (str) \u2013 \nadd_video_info (bool) \u2013 \nlanguage (Union[str, Sequence[str]]) \u2013 \ntranslation (str) \u2013 \ncontinue_on_failure (bool) \u2013 \nstatic extract_video_id(youtube_url)[source]\uf0c1\nExtract video id from common YT urls.\nParameters\nyoutube_url (str) \u2013 \nReturn type\nstr\nclassmethod from_youtube_url(youtube_url, **kwargs)[source]\uf0c1\nGiven youtube URL, load video.\nParameters\nyoutube_url (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.document_loaders.youtube.YoutubeLoader\nload()[source]\uf0c1\nLoad documents.\nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/document_loaders.html"} +{"id": "6c0374edea1b-0", "text": "Experimental\uf0c1\nThis module contains experimental modules and reproductions of existing work using LangChain primitives.\nAutonomous agents\uf0c1\nHere, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.\nclass langchain.experimental.BabyAGI(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, task_list=None, task_creation_chain, task_prioritization_chain, execution_chain, task_id_counter=1, vectorstore, max_iterations=None)[source]\uf0c1\nBases: langchain.chains.base.Chain, pydantic.main.BaseModel\nController model for the BabyAGI agent.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ntask_list (collections.deque) \u2013 \ntask_creation_chain (langchain.chains.base.Chain) \u2013 \ntask_prioritization_chain (langchain.chains.base.Chain) \u2013 \nexecution_chain (langchain.chains.base.Chain) \u2013 \ntask_id_counter (int) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nmax_iterations (Optional[int]) \u2013 \nReturn type\nNone\nmodel Config[source]\uf0c1\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\uf0c1\nproperty input_keys: List[str]\uf0c1\nInput keys this chain expects.\nproperty output_keys: List[str]\uf0c1\nOutput keys this chain expects.\nget_next_task(result, task_description, objective)[source]\uf0c1\nGet the next task.\nParameters\nresult (str) \u2013 \ntask_description (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/experimental.html"} +{"id": "6c0374edea1b-1", "text": "Parameters\nresult (str) \u2013 \ntask_description (str) \u2013 \nobjective (str) \u2013 \nReturn type\nList[Dict]\nprioritize_tasks(this_task_id, objective)[source]\uf0c1\nPrioritize tasks.\nParameters\nthis_task_id (int) \u2013 \nobjective (str) \u2013 \nReturn type\nList[Dict]\nexecute_task(objective, task, k=5)[source]\uf0c1\nExecute a task.\nParameters\nobjective (str) \u2013 \ntask (str) \u2013 \nk (int) \u2013 \nReturn type\nstr\nclassmethod from_llm(llm, vectorstore, verbose=False, task_execution_chain=None, **kwargs)[source]\uf0c1\nInitialize the BabyAGI Controller.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nverbose (bool) \u2013 \ntask_execution_chain (Optional[langchain.chains.base.Chain]) \u2013 \nkwargs (Dict[str, 
Any]) \u2013 \nReturn type\nlangchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI\nclass langchain.experimental.AutoGPT(ai_name, memory, chain, output_parser, tools, feedback_tool=None, chat_history_memory=None)[source]\uf0c1\nBases: object\nAgent class for interacting with Auto-GPT.\nParameters\nai_name (str) \u2013 \nmemory (VectorStoreRetriever) \u2013 \nchain (LLMChain) \u2013 \noutput_parser (BaseAutoGPTOutputParser) \u2013 \ntools (List[BaseTool]) \u2013 \nfeedback_tool (Optional[HumanInputRun]) \u2013 \nchat_history_memory (Optional[BaseChatMessageHistory]) \u2013 \nGenerative agents\uf0c1\nHere, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.", "source": "https://api.python.langchain.com/en/latest/modules/experimental.html"} +{"id": "6c0374edea1b-2", "text": "class langchain.experimental.GenerativeAgent(*, name, age=None, traits='N/A', status, memory, llm, verbose=False, summary='', summary_refresh_seconds=3600, last_refreshed=None, daily_summaries=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nA character with memory and innate characteristics.\nParameters\nname (str) \u2013 \nage (Optional[int]) \u2013 \ntraits (str) \u2013 \nstatus (str) \u2013 \nmemory (langchain.experimental.generative_agents.memory.GenerativeAgentMemory) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nverbose (bool) \u2013 \nsummary (str) \u2013 \nsummary_refresh_seconds (int) \u2013 \nlast_refreshed (datetime.datetime) \u2013 \ndaily_summaries (List[str]) \u2013 \nReturn type\nNone\nattribute name: str [Required]\uf0c1\nThe character\u2019s name.\nattribute age: Optional[int] = None\uf0c1\nThe optional age of the character.\nattribute traits: str = 'N/A'\uf0c1\nPermanent traits to ascribe to the character.\nattribute status: str [Required]\uf0c1\nThe traits of the character you wish not to change.\nattribute memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory 
[Required]\uf0c1\nThe memory object that combines relevance, recency, and \u2018importance\u2019.\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nThe underlying language model.\nattribute summary: str = ''\uf0c1\nStateful self-summary generated via reflection on the character\u2019s memory.\nattribute summary_refresh_seconds: int = 3600\uf0c1\nHow frequently to re-generate the summary.\nattribute last_refreshed: datetime.datetime [Optional]\uf0c1\nThe last time the character\u2019s summary was regenerated.", "source": "https://api.python.langchain.com/en/latest/modules/experimental.html"} +{"id": "6c0374edea1b-3", "text": "The last time the character\u2019s summary was regenerated.\nattribute daily_summaries: List[str] [Optional]\uf0c1\nSummary of the events in the plan that the agent took.\nmodel Config[source]\uf0c1\nBases: object\nConfiguration for this pydantic object.\narbitrary_types_allowed = True\uf0c1\nsummarize_related_memories(observation)[source]\uf0c1\nSummarize memories that are most relevant to an observation.\nParameters\nobservation (str) \u2013 \nReturn type\nstr\ngenerate_reaction(observation, now=None)[source]\uf0c1\nReact to a given observation.\nParameters\nobservation (str) \u2013 \nnow (Optional[datetime.datetime]) \u2013 \nReturn type\nTuple[bool, str]\ngenerate_dialogue_response(observation, now=None)[source]\uf0c1\nReact to a given observation.\nParameters\nobservation (str) \u2013 \nnow (Optional[datetime.datetime]) \u2013 \nReturn type\nTuple[bool, str]\nget_summary(force_refresh=False, now=None)[source]\uf0c1\nReturn a descriptive summary of the agent.\nParameters\nforce_refresh (bool) \u2013 \nnow (Optional[datetime.datetime]) \u2013 \nReturn type\nstr\nget_full_header(force_refresh=False, now=None)[source]\uf0c1\nReturn a full header of the agent\u2019s status, summary, and current time.\nParameters\nforce_refresh (bool) \u2013 \nnow (Optional[datetime.datetime]) \u2013 \nReturn type\nstr", "source": 
"https://api.python.langchain.com/en/latest/modules/experimental.html"} +{"id": "6c0374edea1b-4", "text": "now (Optional[datetime.datetime]) \u2013 \nReturn type\nstr\nclass langchain.experimental.GenerativeAgentMemory(*, llm, memory_retriever, verbose=False, reflection_threshold=None, current_plan=[], importance_weight=0.15, aggregate_importance=0.0, max_tokens_limit=1200, queries_key='queries', most_recent_memories_token_key='recent_memories_token', add_memory_key='add_memory', relevant_memories_key='relevant_memories', relevant_memories_simple_key='relevant_memories_simple', most_recent_memories_key='most_recent_memories', now_key='now', reflecting=False)[source]\uf0c1\nBases: langchain.schema.BaseMemory\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nmemory_retriever (langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever) \u2013 \nverbose (bool) \u2013 \nreflection_threshold (Optional[float]) \u2013 \ncurrent_plan (List[str]) \u2013 \nimportance_weight (float) \u2013 \naggregate_importance (float) \u2013 \nmax_tokens_limit (int) \u2013 \nqueries_key (str) \u2013 \nmost_recent_memories_token_key (str) \u2013 \nadd_memory_key (str) \u2013 \nrelevant_memories_key (str) \u2013 \nrelevant_memories_simple_key (str) \u2013 \nmost_recent_memories_key (str) \u2013 \nnow_key (str) \u2013 \nreflecting (bool) \u2013 \nReturn type\nNone\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nThe core language model.\nattribute memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]\uf0c1\nThe retriever to fetch related memories.\nattribute reflection_threshold: Optional[float] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/experimental.html"} +{"id": "6c0374edea1b-5", "text": "attribute reflection_threshold: Optional[float] = None\uf0c1\nWhen aggregate_importance exceeds reflection_threshold, stop to reflect.\nattribute 
current_plan: List[str] = []\uf0c1\nThe current plan of the agent.\nattribute importance_weight: float = 0.15\uf0c1\nHow much weight to assign the memory importance.\nattribute aggregate_importance: float = 0.0\uf0c1\nTrack the sum of the \u2018importance\u2019 of recent memories.\nTriggers reflection when it reaches reflection_threshold.\npause_to_reflect(now=None)[source]\uf0c1\nReflect on recent observations and generate \u2018insights\u2019.\nParameters\nnow (Optional[datetime.datetime]) \u2013 \nReturn type\nList[str]\nadd_memories(memory_content, now=None)[source]\uf0c1\nAdd observations or memories to the agent\u2019s memory.\nParameters\nmemory_content (str) \u2013 \nnow (Optional[datetime.datetime]) \u2013 \nReturn type\nList[str]\nadd_memory(memory_content, now=None)[source]\uf0c1\nAdd an observation or memory to the agent\u2019s memory.\nParameters\nmemory_content (str) \u2013 \nnow (Optional[datetime.datetime]) \u2013 \nReturn type\nList[str]\nfetch_memories(observation, now=None)[source]\uf0c1\nFetch related memories.\nParameters\nobservation (str) \u2013 \nnow (Optional[datetime.datetime]) \u2013 \nReturn type\nList[langchain.schema.Document]\nproperty memory_variables: List[str]\uf0c1\nInput keys this memory class will load dynamically.\nload_memory_variables(inputs)[source]\uf0c1\nReturn key-value pairs given the text input to the chain.\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, str]\nsave_context(inputs, outputs)[source]\uf0c1\nSave the context of this model run to memory.\nParameters\ninputs (Dict[str, Any]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/experimental.html"} +{"id": "6c0374edea1b-6", "text": "Parameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, Any]) \u2013 \nReturn type\nNone\nclear()[source]\uf0c1\nClear memory contents.\nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/experimental.html"} +{"id": "1b035ff3a02e-0", "text": 
"Utilities\uf0c1\nGeneral utilities.\nclass langchain.utilities.ApifyWrapper(*, apify_client=None, apify_client_async=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around Apify.\nTo use, you should have the apify-client python package installed,\nand the environment variable APIFY_API_TOKEN set with your API key, or pass\napify_api_token as a named parameter to the constructor.\nParameters\napify_client (Any) \u2013 \napify_client_async (Any) \u2013 \nReturn type\nNone\nattribute apify_client: Any = None\uf0c1\nattribute apify_client_async: Any = None\uf0c1\nasync acall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]\uf0c1\nRun an Actor on the Apify platform and wait for results to be ready.\nParameters\nactor_id (str) \u2013 The ID or name of the Actor on the Apify platform.\nrun_input (Dict) \u2013 The input object of the Actor that you\u2019re trying to run.\ndataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to\nan instance of the Document class.\nbuild (str, optional) \u2013 Optionally specifies the actor build to run.\nIt can be either a build tag or build number.\nmemory_mbytes (int, optional) \u2013 Optional memory limit for the run,\nin megabytes.\ntimeout_secs (int, optional) \u2013 Optional timeout for the run, in seconds.\nReturns\nA loader that will fetch the records from theActor run\u2019s default dataset.\nReturn type\nApifyDatasetLoader\ncall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-1", "text": "Run an Actor on the Apify platform and wait for results to be ready.\nParameters\nactor_id (str) \u2013 The ID or name of the Actor on the Apify platform.\nrun_input (Dict) \u2013 The input object of the Actor that 
you\u2019re trying to run.\ndataset_mapping_function (Callable) \u2013 A function that takes a single\ndictionary (an Apify dataset item) and converts it to an\ninstance of the Document class.\nbuild (str, optional) \u2013 Optionally specifies the actor build to run.\nIt can be either a build tag or build number.\nmemory_mbytes (int, optional) \u2013 Optional memory limit for the run,\nin megabytes.\ntimeout_secs (int, optional) \u2013 Optional timeout for the run, in seconds.\nReturns\nA loader that will fetch the records from the Actor run\u2019s default dataset.\nReturn type\nApifyDatasetLoader\nclass langchain.utilities.ArxivAPIWrapper(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around ArxivAPI.\nTo use, you should have the arxiv python package installed.\nhttps://lukasschwab.me/arxiv.py/index.html\nThis wrapper will use the Arxiv API to conduct searches and\nfetch document summaries. 
By default, it will return the document summaries\nof the top-k results.\nIt limits the Document content by doc_content_chars_max.\nSet doc_content_chars_max=None if you don\u2019t want to limit the content size.\nParameters\ntop_k_results (int) \u2013 number of top-scored documents used for the arxiv tool", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"}
+{"id": "1b035ff3a02e-2", "text": "ARXIV_MAX_QUERY_LENGTH (int) \u2013 the cut limit on the query used for the arxiv tool.\nload_max_docs (int) \u2013 a limit to the number of loaded documents\nload_all_available_meta (bool) \u2013 \nif True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result),\nif False: the metadata gets only the most informative fields.\narxiv_search (Any) \u2013 \narxiv_exceptions (Any) \u2013 \ndoc_content_chars_max (Optional[int]) \u2013 \nReturn type\nNone\nattribute arxiv_exceptions: Any = None\uf0c1\nattribute doc_content_chars_max: Optional[int] = 4000\uf0c1\nattribute load_all_available_meta: bool = False\uf0c1\nattribute load_max_docs: int = 100\uf0c1\nattribute top_k_results: int = 3\uf0c1\nload(query)[source]\uf0c1\nRun Arxiv search and get the article texts plus the article meta information.\nSee https://lukasschwab.me/arxiv.py/index.html#Search\nReturns: a list of documents with the document.page_content in text format\nParameters\nquery (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nrun(query)[source]\uf0c1\nRun Arxiv search and get the article meta information.\nSee https://lukasschwab.me/arxiv.py/index.html#Search\nSee https://lukasschwab.me/arxiv.py/index.html#Result\nIt uses only the most informative fields of article meta information.\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.BashProcess(strip_newlines=False, return_err_output=False, persistent=False)[source]\uf0c1\nBases: object\nExecutes bash commands and returns the 
output.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-3", "text": "Bases: object\nExecutes bash commands and returns the output.\nParameters\nstrip_newlines (bool) \u2013 \nreturn_err_output (bool) \u2013 \npersistent (bool) \u2013 \nrun(commands)[source]\uf0c1\nRun commands and return final output.\nParameters\ncommands (Union[str, List[str]]) \u2013 \nReturn type\nstr\nprocess_output(output, command)[source]\uf0c1\nParameters\noutput (str) \u2013 \ncommand (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.BibtexparserWrapper[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around bibtexparser.\nTo use, you should have the bibtexparser python package installed.\nhttps://bibtexparser.readthedocs.io/en/master/\nThis wrapper will use bibtexparser to load a collection of references from\na bibtex file and fetch document summaries.\nReturn type\nNone\nget_metadata(entry, load_extra=False)[source]\uf0c1\nGet metadata for the given entry.\nParameters\nentry (Mapping[str, Any]) \u2013 \nload_extra (bool) \u2013 \nReturn type\nDict[str, Any]\nload_bibtex_entries(path)[source]\uf0c1\nLoad bibtex entries from the bibtex file at the given path.\nParameters\npath (str) \u2013 \nReturn type\nList[Dict[str, Any]]\nclass langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper for Bing Search API.\nIn order to set this up, follow instructions at:\nhttps://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-4", "text": "Parameters\nbing_subscription_key (str) \u2013 \nbing_search_url (str) \u2013 \nk (int) \u2013 \nReturn type\nNone\nattribute bing_search_url: str [Required]\uf0c1\nattribute bing_subscription_key: str [Required]\uf0c1\nattribute k: int = 
10\uf0c1\nresults(query, num_results)[source]\uf0c1\nRun query through BingSearch and return metadata.\nParameters\nquery (str) \u2013 The query to search for.\nnum_results (int) \u2013 The number of results to return.\nReturns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query)[source]\uf0c1\nRun query through BingSearch and parse result.\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.BraveSearchWrapper(*, api_key, search_kwargs=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nParameters\napi_key (str) \u2013 \nsearch_kwargs (dict) \u2013 \nReturn type\nNone\nattribute api_key: str [Required]\uf0c1\nattribute search_kwargs: dict [Optional]\uf0c1\nrun(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.DuckDuckGoSearchAPIWrapper(*, k=10, region='wt-wt', safesearch='moderate', time='y', max_results=5)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper for DuckDuckGo Search API.\nFree and does not require any setup\nParameters\nk (int) \u2013 \nregion (Optional[str]) \u2013 \nsafesearch (str) \u2013 \ntime (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-5", "text": "safesearch (str) \u2013 \ntime (Optional[str]) \u2013 \nmax_results (int) \u2013 \nReturn type\nNone\nattribute k: int = 10\uf0c1\nattribute max_results: int = 5\uf0c1\nattribute region: Optional[str] = 'wt-wt'\uf0c1\nattribute safesearch: str = 'moderate'\uf0c1\nattribute time: Optional[str] = 'y'\uf0c1\nget_snippets(query)[source]\uf0c1\nRun query through DuckDuckGo and return concatenated results.\nParameters\nquery (str) \u2013 \nReturn type\nList[str]\nresults(query, num_results)[source]\uf0c1\nRun query through DuckDuckGo and return metadata.\nParameters\nquery (str) \u2013 The query to search for.\nnum_results 
(int) \u2013 The number of results to return.\nReturns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.GooglePlacesAPIWrapper(*, gplaces_api_key=None, google_map_client=None, top_k_results=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around Google Places API.\nTo use, you should have the googlemaps python package installed, an API key for the google maps platform,\nand the environment variable \u2018\u2019GPLACES_API_KEY\u2019\u2019\nset with your API key, or pass \u2018gplaces_api_key\u2019\nas a named parameter to the constructor.\nBy default, this will return all the results on the input query. You can use the top_k_results argument to limit the number of results.\nExample\nfrom langchain import GooglePlacesAPIWrapper", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"}
+{"id": "1b035ff3a02e-6", "text": "Example\nfrom langchain import GooglePlacesAPIWrapper\ngplaceapi = GooglePlacesAPIWrapper()\nParameters\ngplaces_api_key (Optional[str]) \u2013 \ngoogle_map_client (Any) \u2013 \ntop_k_results (Optional[int]) \u2013 \nReturn type\nNone\nattribute gplaces_api_key: Optional[str] = None\uf0c1\nattribute top_k_results: Optional[int] = None\uf0c1\nfetch_place_details(place_id)[source]\uf0c1\nParameters\nplace_id (str) \u2013 \nReturn type\nOptional[str]\nformat_place_details(place_details)[source]\uf0c1\nParameters\nplace_details (Dict[str, Any]) \u2013 \nReturn type\nOptional[str]\nrun(query)[source]\uf0c1\nRun Places search and get k number of places that exist that match.\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.GoogleSearchAPIWrapper(*, search_engine=None, google_api_key=None, google_cse_id=None, k=10, siterestrict=False)[source]\uf0c1\nBases: 
pydantic.main.BaseModel\nWrapper for Google Search API.\nInstructions adapted from https://stackoverflow.com/questions/\n37083058/\nprogrammatically-searching-google-in-python-using-custom-search\nTODO: DOCS for using it\n1. Install google-api-python-client\n- If you don\u2019t already have a Google account, sign up.\n- If you have never created a Google APIs Console project,\nread the Managing Projects page and create a project in the Google API Console.\n- Install the library using pip install google-api-python-client\nThe current version of the library is 2.70.0 at this time\n2. To create an API key:\n- Navigate to the APIs & Services\u2192Credentials panel in Cloud Console.\n- Select Create credentials, then select API key from the drop-down menu.", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"}
+{"id": "1b035ff3a02e-7", "text": "- Select Create credentials, then select API key from the drop-down menu.\n- The API key created dialog box displays your newly created key.\n- You now have an API_KEY\n3. Setup Custom Search Engine so you can search the entire web\n- Create a custom search engine in this link.\n- In Sites to search, add any valid URL (e.g. www.stackoverflow.com).\n- That\u2019s all you have to fill up, the rest doesn\u2019t matter.\nIn the left-side menu, click Edit search engine \u2192 {your search engine name}\n\u2192 Setup Set Search the entire web to ON. Remove the URL you added from\nthe list of Sites to search.\n- Under Search engine ID you\u2019ll find the search-engine-ID.\n4. 
Enable the Custom Search API\n- Navigate to the APIs & Services\u2192Dashboard panel in Cloud Console.\n- Click Enable APIs and Services.\n- Search for Custom Search API and click on it.\n- Click Enable.\nURL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis\n.com\nParameters\nsearch_engine (Any) \u2013 \ngoogle_api_key (Optional[str]) \u2013 \ngoogle_cse_id (Optional[str]) \u2013 \nk (int) \u2013 \nsiterestrict (bool) \u2013 \nReturn type\nNone\nattribute google_api_key: Optional[str] = None\uf0c1\nattribute google_cse_id: Optional[str] = None\uf0c1\nattribute k: int = 10\uf0c1\nattribute siterestrict: bool = False\uf0c1\nresults(query, num_results)[source]\uf0c1\nRun query through GoogleSearch and return metadata.\nParameters\nquery (str) \u2013 The query to search for.\nnum_results (int) \u2013 The number of results to return.\nReturns\nsnippet - The description of the result.\ntitle - The title of the result.", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-8", "text": "Returns\nsnippet - The description of the result.\ntitle - The title of the result.\nlink - The link to the result.\nReturn type\nA list of dictionaries with the following keys\nrun(query)[source]\uf0c1\nRun query through GoogleSearch and parse result.\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.GoogleSerperAPIWrapper(*, k=10, gl='us', hl='en', type='search', tbs=None, serper_api_key=None, aiosession=None, result_key_for_type={'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around the Serper.dev Google Search API.\nYou can create a free API key at https://serper.dev.\nTo use, you should have the environment variable SERPER_API_KEY\nset with your API key, or pass serper_api_key as a named parameter\nto the constructor.\nExample\nfrom langchain import GoogleSerperAPIWrapper\ngoogle_serper = 
GoogleSerperAPIWrapper()\nParameters\nk (int) \u2013 \ngl (str) \u2013 \nhl (str) \u2013 \ntype (Literal['news', 'search', 'places', 'images']) \u2013 \ntbs (Optional[str]) \u2013 \nserper_api_key (Optional[str]) \u2013 \naiosession (Optional[aiohttp.client.ClientSession]) \u2013 \nresult_key_for_type (dict) \u2013 \nReturn type\nNone\nattribute aiosession: Optional[aiohttp.client.ClientSession] = None\uf0c1\nattribute gl: str = 'us'\uf0c1\nattribute hl: str = 'en'\uf0c1\nattribute k: int = 10\uf0c1\nattribute serper_api_key: Optional[str] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-9", "text": "attribute serper_api_key: Optional[str] = None\uf0c1\nattribute tbs: Optional[str] = None\uf0c1\nattribute type: Literal['news', 'search', 'places', 'images'] = 'search'\uf0c1\nasync aresults(query, **kwargs)[source]\uf0c1\nRun query through GoogleSearch.\nParameters\nquery (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nDict\nasync arun(query, **kwargs)[source]\uf0c1\nRun query through GoogleSearch and parse result async.\nParameters\nquery (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nresults(query, **kwargs)[source]\uf0c1\nRun query through GoogleSearch.\nParameters\nquery (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nDict\nrun(query, **kwargs)[source]\uf0c1\nRun query through GoogleSearch and parse result.\nParameters\nquery (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nclass langchain.utilities.GraphQLAPIWrapper(*, custom_headers=None, graphql_endpoint, gql_client=None, gql_function)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around GraphQL API.\nTo use, you should have the gql python package installed.\nThis wrapper will use the GraphQL API to conduct queries.\nParameters\ncustom_headers (Optional[Dict[str, str]]) \u2013 \ngraphql_endpoint (str) \u2013 \ngql_client (Any) \u2013 \ngql_function (Callable[[str], Any]) \u2013 \nReturn type\nNone\nattribute 
custom_headers: Optional[Dict[str, str]] = None\uf0c1\nattribute graphql_endpoint: str [Required]\uf0c1\nrun(query)[source]\uf0c1\nRun a GraphQL query and get the results.\nParameters\nquery (str) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-10", "text": "class langchain.utilities.JiraAPIWrapper(*, jira=None, confluence=None, jira_username=None, jira_api_token=None, jira_instance_url=None, operations=[{'mode': 'jql', 'name': 'JQL Query', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira jql API, useful when you need to search for Jira issues.\\n\u00a0\u00a0\u00a0 The input to this tool is a JQL query string, and will be passed into atlassian-python-api\\'s Jira `jql` function,\\n\u00a0\u00a0\u00a0 For example, to find all the issues in project \"Test\" assigned to the me, you would pass in the following string:\\n\u00a0\u00a0\u00a0 project = Test AND assignee = currentUser()\\n\u00a0\u00a0\u00a0 or to find issues with summaries that contain the word \"test\", you would pass in the following string:\\n\u00a0\u00a0\u00a0 summary ~ \\'test\\'\\n\u00a0\u00a0\u00a0 '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': \"\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api's Jira project API, \\n\u00a0\u00a0\u00a0 useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \\n\u00a0\u00a0\u00a0 there is no input to this tool.\\n\u00a0\u00a0\u00a0 \"}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira issue_create API, useful when you need to create a Jira issue. 
\\n\u00a0\u00a0\u00a0 The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\\'s Jira `issue_create` function.\\n\u00a0\u00a0\u00a0 For example, to create a low priority task called \"test issue\" with description \"test description\", you would pass in the following", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-11", "text": "low priority task called \"test issue\" with description \"test description\", you would pass in the following dictionary: \\n\u00a0\u00a0\u00a0 {{\"summary\": \"test issue\", \"description\": \"test description\", \"issuetype\": {{\"name\": \"Task\"}}, \"priority\": {{\"name\": \"Low\"}}}}\\n\u00a0\u00a0\u00a0 '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira API.\\n\u00a0\u00a0\u00a0 There are other dedicated tools for fetching all projects, and creating and searching for issues, \\n\u00a0\u00a0\u00a0 use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.\\n\u00a0\u00a0\u00a0 The input to this tool is line of python code that calls a function from atlassian-python-api\\'s Jira API\\n\u00a0\u00a0\u00a0 For example, to update the summary field of an issue, you would pass in the following string:\\n\u00a0\u00a0\u00a0 self.jira.update_issue_field(key, {{\"summary\": \"New summary\"}})\\n\u00a0\u00a0\u00a0 or to find out how many projects are in the Jira instance, you would pass in the following string:\\n\u00a0\u00a0\u00a0 self.jira.projects()\\n\u00a0\u00a0\u00a0 For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\\n\u00a0\u00a0\u00a0 '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\\'s Confluence \\natlassian-python-api API, useful when you 
need to create a Confluence page. The input to this tool is a dictionary \\nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\\'s Confluence `create_page` \\nfunction. For example, to create a page in the DEMO space titled \"This is the title\" with body \"This is the body. You", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-12", "text": "the DEMO space titled \"This is the title\" with body \"This is the body. You can use \\nHTML tags!\", you would pass in the following dictionary: {{\"space\": \"DEMO\", \"title\":\"This is the \\ntitle\",\"body\":\"This is the body. You can use HTML tags!\"}} '}])[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-13", "text": "Bases: pydantic.main.BaseModel\nWrapper for Jira API.\nParameters\njira (Any) \u2013 \nconfluence (Any) \u2013 \njira_username (Optional[str]) \u2013 \njira_api_token (Optional[str]) \u2013 \njira_instance_url (Optional[str]) \u2013 \noperations (List[Dict]) \u2013 \nReturn type\nNone\nattribute confluence: Any = None\uf0c1\nattribute jira_api_token: Optional[str] = None\uf0c1\nattribute jira_instance_url: Optional[str] = None\uf0c1\nattribute jira_username: Optional[str] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-14", "text": "attribute operations: List[Dict] = [{'mode': 'jql', 'name': 'JQL Query', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira jql API, useful when you need to search for Jira issues.\\n\u00a0\u00a0\u00a0 The input to this tool is a JQL query string, and will be passed into atlassian-python-api\\'s Jira `jql` function,\\n\u00a0\u00a0\u00a0 For example, to find all the issues in project \"Test\" assigned to the me, you would pass in the following string:\\n\u00a0\u00a0\u00a0 project = Test AND assignee = 
currentUser()\\n\u00a0\u00a0\u00a0 or to find issues with summaries that contain the word \"test\", you would pass in the following string:\\n\u00a0\u00a0\u00a0 summary ~ \\'test\\'\\n\u00a0\u00a0\u00a0 '}, {'mode': 'get_projects', 'name': 'Get Projects', 'description': \"\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api's Jira project API, \\n\u00a0\u00a0\u00a0 useful when you need to fetch all the projects the user has access to, find out how many projects there are, or as an intermediary step that involv searching by projects. \\n\u00a0\u00a0\u00a0 there is no input to this tool.\\n\u00a0\u00a0\u00a0 \"}, {'mode': 'create_issue', 'name': 'Create Issue', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira issue_create API, useful when you need to create a Jira issue. \\n\u00a0\u00a0\u00a0 The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api\\'s Jira `issue_create` function.\\n\u00a0\u00a0\u00a0 For example, to create a low priority task called \"test issue\" with description \"test description\", you would pass in the following dictionary: \\n\u00a0\u00a0\u00a0 {{\"summary\": \"test issue\", \"description\": \"test description\", \"issuetype\": {{\"name\":", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-15", "text": "\"test issue\", \"description\": \"test description\", \"issuetype\": {{\"name\": \"Task\"}}, \"priority\": {{\"name\": \"Low\"}}}}\\n\u00a0\u00a0\u00a0 '}, {'mode': 'other', 'name': 'Catch all Jira API call', 'description': '\\n\u00a0\u00a0\u00a0 This tool is a wrapper around atlassian-python-api\\'s Jira API.\\n\u00a0\u00a0\u00a0 There are other dedicated tools for fetching all projects, and creating and searching for issues, \\n\u00a0\u00a0\u00a0 use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira 
API.\\n\u00a0\u00a0\u00a0 The input to this tool is line of python code that calls a function from atlassian-python-api\\'s Jira API\\n\u00a0\u00a0\u00a0 For example, to update the summary field of an issue, you would pass in the following string:\\n\u00a0\u00a0\u00a0 self.jira.update_issue_field(key, {{\"summary\": \"New summary\"}})\\n\u00a0\u00a0\u00a0 or to find out how many projects are in the Jira instance, you would pass in the following string:\\n\u00a0\u00a0\u00a0 self.jira.projects()\\n\u00a0\u00a0\u00a0 For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html\\n\u00a0\u00a0\u00a0 '}, {'mode': 'create_page', 'name': 'Create confluence page', 'description': 'This tool is a wrapper around atlassian-python-api\\'s Confluence \\natlassian-python-api API, useful when you need to create a Confluence page. The input to this tool is a dictionary \\nspecifying the fields of the Confluence page, and will be passed into atlassian-python-api\\'s Confluence `create_page` \\nfunction. For example, to create a page in the DEMO space titled \"This is the title\" with body \"This is the body. You can use \\nHTML tags!\", you would pass in the following dictionary: {{\"space\": \"DEMO\",", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-16", "text": "you would pass in the following dictionary: {{\"space\": \"DEMO\", \"title\":\"This is the \\ntitle\",\"body\":\"This is the body. 
You can use HTML tags!\"}} '}]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-17", "text": "issue_create(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nstr\nlist()[source]\uf0c1\nReturn type\nList[Dict]\nother(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nstr\npage_create(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nstr\nparse_issues(issues)[source]\uf0c1\nParameters\nissues (Dict) \u2013 \nReturn type\nList[dict]\nparse_projects(projects)[source]\uf0c1\nParameters\nprojects (List[dict]) \u2013 \nReturn type\nList[dict]\nproject()[source]\uf0c1\nReturn type\nstr\nrun(mode, query)[source]\uf0c1\nParameters\nmode (str) \u2013 \nquery (str) \u2013 \nReturn type\nstr\nsearch(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.LambdaWrapper(*, lambda_client=None, function_name=None, awslambda_tool_name=None, awslambda_tool_description=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper for AWS Lambda SDK.\nDocs for using:\npip install boto3\nCreate a lambda function using the AWS Console or CLI\nRun aws configure and enter your AWS credentials\nParameters\nlambda_client (Any) \u2013 \nfunction_name (Optional[str]) \u2013 \nawslambda_tool_name (Optional[str]) \u2013 \nawslambda_tool_description (Optional[str]) \u2013 \nReturn type\nNone\nattribute awslambda_tool_description: Optional[str] = None\uf0c1\nattribute awslambda_tool_name: Optional[str] = None\uf0c1\nattribute function_name: Optional[str] = None\uf0c1\nrun(query)[source]\uf0c1\nInvoke Lambda function and parse result.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-18", "text": "run(query)[source]\uf0c1\nInvoke Lambda function and parse result.\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass 
langchain.utilities.MaxComputeAPIWrapper(client)[source]\uf0c1\nBases: object\nInterface for querying Alibaba Cloud MaxCompute tables.\nParameters\nclient (ODPS) \u2013 \nclassmethod from_params(endpoint, project, *, access_id=None, secret_access_key=None)[source]\uf0c1\nConvenience constructor that builds the odps.ODPS MaxCompute client from given parameters.\nParameters\nendpoint (str) \u2013 MaxCompute endpoint.\nproject (str) \u2013 A project is a basic organizational unit of MaxCompute, which is\nsimilar to a database.\naccess_id (Optional[str]) \u2013 MaxCompute access ID. Should be passed in directly or set as the\nenvironment variable MAX_COMPUTE_ACCESS_ID.\nsecret_access_key (Optional[str]) \u2013 MaxCompute secret access key. Should be passed in\ndirectly or set as the environment variable\nMAX_COMPUTE_SECRET_ACCESS_KEY.\nReturn type\nlangchain.utilities.max_compute.MaxComputeAPIWrapper\nlazy_query(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nIterator[dict]\nquery(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nList[dict]\nclass langchain.utilities.MetaphorSearchAPIWrapper(*, metaphor_api_key, k=10)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper for Metaphor Search API.\nParameters\nmetaphor_api_key (str) \u2013 \nk (int) \u2013 \nReturn type\nNone\nattribute k: int = 10\uf0c1\nattribute metaphor_api_key: str [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"}
+{"id": "1b035ff3a02e-19", "text": "attribute metaphor_api_key: str [Required]\uf0c1\nresults(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]\uf0c1\nRun query through Metaphor Search and return metadata.\nParameters\nquery (str) \u2013 The query to search for.\nnum_results (int) \u2013 The number of results to return.\ninclude_domains (Optional[List[str]]) \u2013 \nexclude_domains 
(Optional[List[str]]) \u2013 \nstart_crawl_date (Optional[str]) \u2013 \nend_crawl_date (Optional[str]) \u2013 \nstart_published_date (Optional[str]) \u2013 \nend_published_date (Optional[str]) \u2013 \nReturns\ntitle - The title of the\nurl - The url\nauthor - Author of the content, if applicable. Otherwise, None.\npublished_date - Estimated date published\nin YYYY-MM-DD format. Otherwise, None.\nReturn type\nA list of dictionaries with the following keys\nasync results_async(query, num_results, include_domains=None, exclude_domains=None, start_crawl_date=None, end_crawl_date=None, start_published_date=None, end_published_date=None)[source]\uf0c1\nGet results from the Metaphor Search API asynchronously.\nParameters\nquery (str) \u2013 \nnum_results (int) \u2013 \ninclude_domains (Optional[List[str]]) \u2013 \nexclude_domains (Optional[List[str]]) \u2013 \nstart_crawl_date (Optional[str]) \u2013 \nend_crawl_date (Optional[str]) \u2013 \nstart_published_date (Optional[str]) \u2013 \nend_published_date (Optional[str]) \u2013 \nReturn type\nList[Dict]\nclass langchain.utilities.OpenWeatherMapAPIWrapper(*, owm=None, openweathermap_api_key=None)[source]\uf0c1\nBases: pydantic.main.BaseModel", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-20", "text": "Bases: pydantic.main.BaseModel\nWrapper for OpenWeatherMap API using PyOWM.\nDocs for using:\nGo to OpenWeatherMap and sign up for an API key\nSave your API KEY into OPENWEATHERMAP_API_KEY env variable\npip install pyowm\nParameters\nowm (Any) \u2013 \nopenweathermap_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute openweathermap_api_key: Optional[str] = None\uf0c1\nattribute owm: Any = None\uf0c1\nrun(location)[source]\uf0c1\nGet the current weather information for a specified location.\nParameters\nlocation (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.PowerBIDataset(*, dataset_id, table_names, group_id=None, credential=None, token=None, 
impersonated_user_name=None, sample_rows_in_table_info=1, schemas=None, aiosession=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nCreate PowerBI engine from dataset ID and credential or token.\nUse either the credential or a supplied token to authenticate.\nIf both are supplied the credential is used to generate a token.\nThe impersonated_user_name is the UPN of a user to be impersonated.\nIf the model is not RLS enabled, this will be ignored.\nParameters\ndataset_id (str) \u2013 \ntable_names (List[str]) \u2013 \ngroup_id (Optional[str]) \u2013 \ncredential (Optional[TokenCredential]) \u2013 \ntoken (Optional[str]) \u2013 \nimpersonated_user_name (Optional[str]) \u2013 \nsample_rows_in_table_info (langchain.utilities.powerbi.ConstrainedIntValue) \u2013 \nschemas (Dict[str, str]) \u2013 \naiosession (Optional[aiohttp.client.ClientSession]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-21", "text": "aiosession (Optional[aiohttp.client.ClientSession]) \u2013 \nReturn type\nNone\nattribute aiosession: Optional[aiohttp.ClientSession] = None\uf0c1\nattribute credential: Optional[TokenCredential] = None\uf0c1\nattribute dataset_id: str [Required]\uf0c1\nattribute group_id: Optional[str] = None\uf0c1\nattribute impersonated_user_name: Optional[str] = None\uf0c1\nattribute sample_rows_in_table_info: int = 1\uf0c1\nConstraints\nexclusiveMinimum = 0\nmaximum = 10\nattribute schemas: Dict[str, str] [Optional]\uf0c1\nattribute table_names: List[str] [Required]\uf0c1\nattribute token: Optional[str] = None\uf0c1\nasync aget_table_info(table_names=None)[source]\uf0c1\nGet information about specified tables.\nParameters\ntable_names (Optional[Union[List[str], str]]) \u2013 \nReturn type\nstr\nasync arun(command)[source]\uf0c1\nExecute a DAX command and return the result asynchronously.\nParameters\ncommand (str) \u2013 \nReturn type\nAny\nget_schemas()[source]\uf0c1\nGet the available 
schemas.\nReturn type\nstr\nget_table_info(table_names=None)[source]\uf0c1\nGet information about specified tables.\nParameters\ntable_names (Optional[Union[List[str], str]]) \u2013 \nReturn type\nstr\nget_table_names()[source]\uf0c1\nGet names of tables available.\nReturn type\nIterable[str]\nrun(command)[source]\uf0c1\nExecute a DAX command and return a JSON string representing the results.\nParameters\ncommand (str) \u2013 \nReturn type\nAny\nproperty headers: Dict[str, str]\uf0c1\nGet the token.\nproperty request_url: str\uf0c1\nGet the request url.\nproperty table_info: str\uf0c1\nInformation about all tables in the database.", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"}
+{"id": "1b035ff3a02e-22", "text": "property table_info: str\uf0c1\nInformation about all tables in the database.\nclass langchain.utilities.PubMedAPIWrapper(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around PubMed API.\nThis wrapper will use the PubMed API to conduct searches and fetch\ndocument summaries. 
By default, it will return the document summaries\nof the top-k results of an input search.\nParameters\ntop_k_results (int) \u2013 number of the top-scored documents used for the PubMed tool\nload_max_docs (int) \u2013 a limit to the number of loaded documents\nload_all_available_meta (bool) \u2013 \nif True: the metadata of the loaded Documents gets all available meta info (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)\nif False: the metadata gets only the most informative fields.\ndoc_content_chars_max (int) \u2013 \nemail (str) \u2013 \nbase_url_esearch (str) \u2013 \nbase_url_efetch (str) \u2013 \nmax_retry (int) \u2013 \nsleep_time (float) \u2013 \nARXIV_MAX_QUERY_LENGTH (int) \u2013 \nReturn type\nNone\nattribute doc_content_chars_max: int = 2000\uf0c1\nattribute email: str = 'your_email@example.com'\uf0c1\nattribute load_all_available_meta: bool = False\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-23", "text": "attribute load_all_available_meta: bool = False\uf0c1\nattribute load_max_docs: int = 25\uf0c1\nattribute top_k_results: int = 3\uf0c1\nload(query)[source]\uf0c1\nSearch PubMed for documents matching the query.\nReturn a list of dictionaries containing the document metadata.\nParameters\nquery (str) \u2013 \nReturn type\nList[dict]\nload_docs(query)[source]\uf0c1\nParameters\nquery (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nretrieve_article(uid, webenv)[source]\uf0c1\nParameters\nuid (str) \u2013 \nwebenv (str) \u2013 \nReturn type\ndict\nrun(query)[source]\uf0c1\nRun PubMed search and get the article meta information.\nSee https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch\nIt uses only the most informative fields of article meta information.\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.PythonREPL(*, _globals=None, _locals=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nSimulates a standalone Python 
REPL.\nParameters\n_globals (Optional[Dict]) \u2013 \n_locals (Optional[Dict]) \u2013 \nReturn type\nNone\nattribute globals: Optional[Dict] [Optional] (alias '_globals')\uf0c1\nattribute locals: Optional[Dict] [Optional] (alias '_locals')\uf0c1\nrun(command)[source]\uf0c1\nRun command with own globals/locals and return anything printed.\nParameters\ncommand (str) \u2013 \nReturn type\nstr\npydantic settings langchain.utilities.SceneXplainAPIWrapper[source]\uf0c1\nBases: pydantic.env_settings.BaseSettings, pydantic.main.BaseModel\nWrapper for SceneXplain API.", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-24", "text": "Wrapper for SceneXplain API.\nIn order to set this up, you need an API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api)\nand create a new API key.\nShow JSON schema{\n \"title\": \"SceneXplainAPIWrapper\",\n \"description\": \"Wrapper for SceneXplain API.\\n\\nIn order to set this up, you need an API key for the SceneXplain API.\\nYou can obtain a key by following the steps below.\\n- Sign up for a free account at https://scenex.jina.ai/.\\n- Navigate to the API Access page (https://scenex.jina.ai/api)\\n and create a new API key.\",\n \"type\": \"object\",\n \"properties\": {\n \"scenex_api_key\": {\n \"title\": \"Scenex Api Key\",\n \"env\": \"SCENEX_API_KEY\",\n \"env_names\": \"{'scenex_api_key'}\",\n \"type\": \"string\"\n },\n \"scenex_api_url\": {\n \"title\": \"Scenex Api Url\",\n \"default\": \"https://us-central1-causal-diffusion.cloudfunctions.net/describe\",\n \"env_names\": \"{'scenex_api_url'}\",\n \"type\": \"string\"\n }\n },\n \"required\": [\n \"scenex_api_key\"\n ],\n \"additionalProperties\": false\n}\nFields\nscenex_api_key (str)\nscenex_api_url (str)", "source": 
"https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-25", "text": "Fields\nscenex_api_key (str)\nscenex_api_url (str)\nattribute scenex_api_key: str [Required]\uf0c1\nattribute scenex_api_url: str = 'https://us-central1-causal-diffusion.cloudfunctions.net/describe'\uf0c1\nrun(image)[source]\uf0c1\nRun SceneXplain image explainer.\nParameters\nimage (str) \u2013 \nReturn type\nstr\nvalidator validate_environment\u00a0 \u00bb\u00a0 all fields[source]\uf0c1\nValidate that the API key exists in the environment.\nParameters\nvalues (Dict) \u2013 \nReturn type\nDict\nclass langchain.utilities.SearxSearchWrapper(*, searx_host='', unsecure=False, params=None, headers=None, engines=[], categories=[], query_suffix='', k=10, aiosession=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper for Searx API.\nTo use, you need to provide the searx host by passing the named parameter\nsearx_host or exporting the environment variable SEARX_HOST.\nIn some situations you might want to disable SSL verification, for example\nif you are running searx locally. You can do this by passing the named parameter\nunsecure. 
You can also pass the host URL scheme as http to disable SSL.\nExample\nfrom langchain.utilities import SearxSearchWrapper\nsearx = SearxSearchWrapper(searx_host=\"http://localhost:8888\")\nExample with SSL disabled:\nfrom langchain.utilities import SearxSearchWrapper\n# note the unsecure parameter is not needed if you pass the url scheme as\n# http\nsearx = SearxSearchWrapper(searx_host=\"http://localhost:8888\",\n unsecure=True)\nParameters\nsearx_host (str) \u2013 \nunsecure (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-26", "text": "Parameters\nsearx_host (str) \u2013 \nunsecure (bool) \u2013 \nparams (dict) \u2013 \nheaders (Optional[dict]) \u2013 \nengines (Optional[List[str]]) \u2013 \ncategories (Optional[List[str]]) \u2013 \nquery_suffix (Optional[str]) \u2013 \nk (int) \u2013 \naiosession (Optional[Any]) \u2013 \nReturn type\nNone\nattribute aiosession: Optional[Any] = None\uf0c1\nattribute categories: Optional[List[str]] = []\uf0c1\nattribute engines: Optional[List[str]] = []\uf0c1\nattribute headers: Optional[dict] = None\uf0c1\nattribute k: int = 10\uf0c1\nattribute params: dict [Optional]\uf0c1\nattribute query_suffix: Optional[str] = ''\uf0c1\nattribute searx_host: str = ''\uf0c1\nattribute unsecure: bool = False\uf0c1\nasync aresults(query, num_results, engines=None, query_suffix='', **kwargs)[source]\uf0c1\nAsynchronously query with JSON results.\nUses aiohttp. 
See results for more info.\nParameters\nquery (str) \u2013 \nnum_results (int) \u2013 \nengines (Optional[List[str]]) \u2013 \nquery_suffix (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Dict]\nasync arun(query, engines=None, query_suffix='', **kwargs)[source]\uf0c1\nAsynchronous version of run.\nParameters\nquery (str) \u2013 \nengines (Optional[List[str]]) \u2013 \nquery_suffix (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nresults(query, num_results, engines=None, categories=None, query_suffix='', **kwargs)[source]\uf0c1\nRun query through Searx API and return the results with metadata.\nParameters\nquery (str) \u2013 The query to search for.", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-27", "text": "Parameters\nquery (str) \u2013 The query to search for.\nquery_suffix (Optional[str]) \u2013 Extra suffix appended to the query.\nnum_results (int) \u2013 Limit the number of results to return.\nengines (Optional[List[str]]) \u2013 List of engines to use for the query.\ncategories (Optional[List[str]]) \u2013 List of categories to use for the query.\n**kwargs \u2013 extra parameters to pass to the searx API.\nkwargs (Any) \u2013 \nReturns\n{snippet: The description of the result.\ntitle: The title of the result.\nlink: The link to the result.\nengines: The engines used for the result.\ncategory: Searx category of the result.\n}\nReturn type\nDict with the following keys\nrun(query, engines=None, categories=None, query_suffix='', **kwargs)[source]\uf0c1\nRun query through Searx API and parse results.\nYou can pass any other params to the searx query API.\nParameters\nquery (str) \u2013 The query to search for.\nquery_suffix (Optional[str]) \u2013 Extra suffix appended to the query.\nengines (Optional[List[str]]) \u2013 List of engines to use for the query.\ncategories (Optional[List[str]]) \u2013 List of categories to use for the query.\n**kwargs \u2013 extra parameters 
to pass to the searx API.\nkwargs (Any) \u2013 \nReturns\nThe result of the query.\nReturn type\nstr\nRaises\nValueError \u2013 If an error occurred with the query.\nExample\nThis will make a query to the qwant engine:\nfrom langchain.utilities import SearxSearchWrapper\nsearx = SearxSearchWrapper(searx_host=\"http://my.searx.host\")\nsearx.run(\"what is the weather in France ?\", engine=\"qwant\")", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-28", "text": "searx.run(\"what is the weather in France ?\", engine=\"qwant\")\n# the same result can be achieved using the `!` syntax of searx\n# to select the engine using `query_suffix`\nsearx.run(\"what is the weather in France ?\", query_suffix=\"!qwant\")\nclass langchain.utilities.SerpAPIWrapper(*, search_engine=None, params={'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}, serpapi_api_key=None, aiosession=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around SerpAPI.\nTo use, you should have the google-search-results python package installed,\nand the environment variable SERPAPI_API_KEY set with your API key, or pass\nserpapi_api_key as a named parameter to the constructor.\nExample\nfrom langchain import SerpAPIWrapper\nserpapi = SerpAPIWrapper()\nParameters\nsearch_engine (Any) \u2013 \nparams (dict) \u2013 \nserpapi_api_key (Optional[str]) \u2013 \naiosession (Optional[aiohttp.client.ClientSession]) \u2013 \nReturn type\nNone\nattribute aiosession: Optional[aiohttp.client.ClientSession] = None\uf0c1\nattribute params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}\uf0c1\nattribute serpapi_api_key: Optional[str] = None\uf0c1\nasync aresults(query)[source]\uf0c1\nUse aiohttp to run query through SerpAPI and return the results async.\nParameters\nquery (str) \u2013 \nReturn type\ndict\nasync arun(query, **kwargs)[source]\uf0c1\nRun query through SerpAPI and parse result 
async.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-29", "text": "Run query through SerpAPI and parse result async.\nParameters\nquery (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nget_params(query)[source]\uf0c1\nGet parameters for SerpAPI.\nParameters\nquery (str) \u2013 \nReturn type\nDict[str, str]\nresults(query)[source]\uf0c1\nRun query through SerpAPI and return the raw result.\nParameters\nquery (str) \u2013 \nReturn type\ndict\nrun(query, **kwargs)[source]\uf0c1\nRun query through SerpAPI and parse result.\nParameters\nquery (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nclass langchain.utilities.SparkSQL(spark_session=None, catalog=None, schema=None, ignore_tables=None, include_tables=None, sample_rows_in_table_info=3)[source]\uf0c1\nBases: object\nParameters\nspark_session (Optional[SparkSession]) \u2013 \ncatalog (Optional[str]) \u2013 \nschema (Optional[str]) \u2013 \nignore_tables (Optional[List[str]]) \u2013 \ninclude_tables (Optional[List[str]]) \u2013 \nsample_rows_in_table_info (int) \u2013 \nclassmethod from_uri(database_uri, engine_args=None, **kwargs)[source]\uf0c1\nCreate a remote Spark Session via Spark Connect.\nFor example: SparkSQL.from_uri(\u201csc://localhost:15002\u201d)\nParameters\ndatabase_uri (str) \u2013 \nengine_args (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.utilities.spark_sql.SparkSQL\nget_usable_table_names()[source]\uf0c1\nGet names of tables available.\nReturn type\nIterable[str]\nget_table_info(table_names=None)[source]\uf0c1\nParameters\ntable_names (Optional[List[str]]) \u2013 \nReturn type\nstr\nrun(command, fetch='all')[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-30", "text": "Return type\nstr\nrun(command, fetch='all')[source]\uf0c1\nParameters\ncommand (str) \u2013 \nfetch (str) \u2013 \nReturn 
type\nstr\nget_table_info_no_throw(table_names=None)[source]\uf0c1\nGet information about specified tables.\nFollows best practices as specified in: Rajkumar et al., 2022\n(https://arxiv.org/abs/2204.00498)\nIf sample_rows_in_table_info, the specified number of sample rows will be\nappended to each table description. This can increase performance as\ndemonstrated in the paper.\nParameters\ntable_names (Optional[List[str]]) \u2013 \nReturn type\nstr\nrun_no_throw(command, fetch='all')[source]\uf0c1\nExecute a SQL command and return a string representing the results.\nIf the statement returns rows, a string of the results is returned.\nIf the statement returns no rows, an empty string is returned.\nIf the statement throws an error, the error message is returned.\nParameters\ncommand (str) \u2013 \nfetch (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.TextRequestsWrapper(*, headers=None, aiosession=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nLightweight wrapper around the requests library.\nThe main purpose of this wrapper is to always return a text output.\nParameters\nheaders (Optional[Dict[str, str]]) \u2013 \naiosession (Optional[aiohttp.client.ClientSession]) \u2013 \nReturn type\nNone\nattribute aiosession: Optional[aiohttp.client.ClientSession] = None\uf0c1\nattribute headers: Optional[Dict[str, str]] = None\uf0c1\nasync adelete(url, **kwargs)[source]\uf0c1\nDELETE the URL and return the text asynchronously.\nParameters\nurl (str) \u2013 \nkwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-31", "text": "Parameters\nurl (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync aget(url, **kwargs)[source]\uf0c1\nGET the URL and return the text asynchronously.\nParameters\nurl (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apatch(url, data, **kwargs)[source]\uf0c1\nPATCH the URL and return the text asynchronously.\nParameters\nurl (str) \u2013 \ndata (Dict[str, 
Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apost(url, data, **kwargs)[source]\uf0c1\nPOST to the URL and return the text asynchronously.\nParameters\nurl (str) \u2013 \ndata (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync aput(url, data, **kwargs)[source]\uf0c1\nPUT the URL and return the text asynchronously.\nParameters\nurl (str) \u2013 \ndata (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndelete(url, **kwargs)[source]\uf0c1\nDELETE the URL and return the text.\nParameters\nurl (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nget(url, **kwargs)[source]\uf0c1\nGET the URL and return the text.\nParameters\nurl (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npatch(url, data, **kwargs)[source]\uf0c1\nPATCH the URL and return the text.\nParameters\nurl (str) \u2013 \ndata (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npost(url, data, **kwargs)[source]\uf0c1\nPOST to the URL and return the text.\nParameters\nurl (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-32", "text": "POST to the URL and return the text.\nParameters\nurl (str) \u2013 \ndata (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nput(url, data, **kwargs)[source]\uf0c1\nPUT the URL and return the text.\nParameters\nurl (str) \u2013 \ndata (Dict[str, Any]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nproperty requests: langchain.requests.Requests\uf0c1\nclass langchain.utilities.TwilioAPIWrapper(*, client=None, account_sid=None, auth_token=None, from_number=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nMessaging Client using Twilio.\nTo use, you should have the twilio python package installed,\nand the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and\nTWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as\nnamed parameters to the constructor.\nExample\nfrom langchain.utilities.twilio 
import TwilioAPIWrapper\ntwilio = TwilioAPIWrapper(\n account_sid=\"ACxxx\",\n auth_token=\"xxx\",\n from_number=\"+10123456789\"\n)\ntwilio.run('test', '+12484345508')\nParameters\nclient (Any) \u2013 \naccount_sid (Optional[str]) \u2013 \nauth_token (Optional[str]) \u2013 \nfrom_number (Optional[str]) \u2013 \nReturn type\nNone\nattribute account_sid: Optional[str] = None\uf0c1\nTwilio account string identifier.\nattribute auth_token: Optional[str] = None\uf0c1\nTwilio auth token.\nattribute from_number: Optional[str] = None\uf0c1\nA Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)\nformat, an", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-33", "text": "format, an\n[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),\nor a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)\nthat is enabled for the type of message you want to send. Phone numbers or\n[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from\nTwilio also work here. You cannot, for example, spoof messages from a private\ncell phone number. If you are using messaging_service_sid, this parameter\nmust be empty.\nrun(body, to)[source]\uf0c1\nRun body through Twilio and respond with message sid.\nParameters\nbody (str) \u2013 The text of the message you want to send. 
Can be up to 1,600\ncharacters in length.\nto (str) \u2013 The destination phone number in\n[E.164](https://www.twilio.com/docs/glossary/what-e164) format for\nSMS/MMS or\n[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)\nfor other 3rd-party channels.\nReturn type\nstr\nclass langchain.utilities.WikipediaAPIWrapper(*, wiki_client=None, top_k_results=3, lang='en', load_all_available_meta=False, doc_content_chars_max=4000)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper around WikipediaAPI.\nTo use, you should have the wikipedia python package installed.\nThis wrapper will use the Wikipedia API to conduct searches and\nfetch page summaries. By default, it will return the page summaries\nof the top-k results.\nIt limits the Document content by doc_content_chars_max.\nParameters\nwiki_client (Any) \u2013 \ntop_k_results (int) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-34", "text": "Parameters\nwiki_client (Any) \u2013 \ntop_k_results (int) \u2013 \nlang (str) \u2013 \nload_all_available_meta (bool) \u2013 \ndoc_content_chars_max (int) \u2013 \nReturn type\nNone\nattribute doc_content_chars_max: int = 4000\uf0c1\nattribute lang: str = 'en'\uf0c1\nattribute load_all_available_meta: bool = False\uf0c1\nattribute top_k_results: int = 3\uf0c1\nload(query)[source]\uf0c1\nRun Wikipedia search and get the article text plus the meta information.\nSee\nReturns: a list of documents.\nParameters\nquery (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nrun(query)[source]\uf0c1\nRun Wikipedia search and get page summaries.\nParameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.WolframAlphaAPIWrapper(*, wolfram_client=None, wolfram_alpha_appid=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper for Wolfram Alpha.\nDocs for using:\nGo to wolfram alpha and sign up for a developer account\nCreate an app and get your APP ID\nSave your APP 
ID into WOLFRAM_ALPHA_APPID env variable\npip install wolframalpha\nParameters\nwolfram_client (Any) \u2013 \nwolfram_alpha_appid (Optional[str]) \u2013 \nReturn type\nNone\nattribute wolfram_alpha_appid: Optional[str] = None\uf0c1\nrun(query)[source]\uf0c1\nRun query through WolframAlpha and parse result.\nParameters\nquery (str) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-35", "text": "Parameters\nquery (str) \u2013 \nReturn type\nstr\nclass langchain.utilities.ZapierNLAWrapper(*, zapier_nla_api_key, zapier_nla_oauth_access_token, zapier_nla_api_base='https://nla.zapier.com/api/v1/')[source]\uf0c1\nBases: pydantic.main.BaseModel\nWrapper for Zapier NLA.\nFull docs here: https://nla.zapier.com/api/v1/docs\nNote: this wrapper currently only implements the api_key auth method for\ntesting and server-side production use cases (using the developer\u2019s connected\naccounts on Zapier.com).\nFor use-cases where LangChain + Zapier NLA is powering a user-facing application,\nand LangChain needs access to the end-user\u2019s connected accounts on Zapier.com,\nyou\u2019ll need to use oauth. Review the full docs above and reach out to\nnla@zapier.com for developer support.\nParameters\nzapier_nla_api_key (str) \u2013 \nzapier_nla_oauth_access_token (str) \u2013 \nzapier_nla_api_base (str) \u2013 \nReturn type\nNone\nattribute zapier_nla_api_base: str = 'https://nla.zapier.com/api/v1/'\uf0c1\nattribute zapier_nla_api_key: str [Required]\uf0c1\nattribute zapier_nla_oauth_access_token: str [Required]\uf0c1\nlist()[source]\uf0c1\nReturns a list of all exposed (enabled) actions associated with the\ncurrent user (associated with the set api_key). Change your exposed\nactions here: https://nla.zapier.com/demo/start/\nThe return list can be empty if no actions are exposed. 
Otherwise it will contain\na list of action objects:\n[{\u201cid\u201d: str,\n\u201cdescription\u201d: str,\n\u201cparams\u201d: Dict[str, str]\n}]", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-36", "text": "\u201cdescription\u201d: str,\n\u201cparams\u201d: Dict[str, str]\n}]\nparams will always contain an instructions key, the only required\nparam. All others are optional and, if provided, will override any AI guesses\n(see \u201cunderstanding the AI guessing flow\u201d here:\nhttps://nla.zapier.com/api/v1/docs)\nReturn type\nList[Dict]\nlist_as_str()[source]\uf0c1\nSame as list, but returns a stringified version of the JSON for\ninserting back into an LLM.\nReturn type\nstr\npreview(action_id, instructions, params=None)[source]\uf0c1\nSame as run, but instead of actually executing the action, will\ninstead return a preview of params that have been guessed by the AI in\ncase you need to explicitly review before executing.\nParameters\naction_id (str) \u2013 \ninstructions (str) \u2013 \nparams (Optional[Dict]) \u2013 \nReturn type\nDict\npreview_as_str(*args, **kwargs)[source]\uf0c1\nSame as preview, but returns a stringified version of the JSON for\ninserting back into an LLM.\nReturn type\nstr\nrun(action_id, instructions, params=None)[source]\uf0c1\nExecutes an action that is identified by action_id, must be exposed\n(enabled) by the current user (associated with the set api_key). 
Change\nyour exposed actions here: https://nla.zapier.com/demo/start/\nThe return JSON is guaranteed to be less than ~500 words (350\ntokens), making it safe to inject into the prompt of another LLM\ncall.\nParameters\naction_id (str) \u2013 \ninstructions (str) \u2013 \nparams (Optional[Dict]) \u2013 \nReturn type\nDict\nrun_as_str(*args, **kwargs)[source]\uf0c1\nSame as run, but returns a stringified version of the JSON for", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "1b035ff3a02e-37", "text": "Same as run, but returns a stringified version of the JSON for\ninserting back into an LLM.\nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/utilities.html"} +{"id": "c018b0edb28a-0", "text": "Vector Stores\uf0c1\nWrappers on top of vector stores.\nclass langchain.vectorstores.AlibabaCloudOpenSearch(embedding, config, **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nAlibaba Cloud OpenSearch Vector Store\nParameters\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nconfig (langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nkwargs (Any) \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search(query, k=4, search_filter=None, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nsearch_filter (Optional[Dict[str, Any]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_relevance_scores(query, 
k=4, search_filter=None, **kwargs)[source]\uf0c1\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery (str) \u2013 input text\nk (int) \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 and 1 to", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-1", "text": "score_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nsearch_filter (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Tuples of (doc, similarity_score)\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_by_vector(embedding, k=4, search_filter=None, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nsearch_filter (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type\nList[langchain.schema.Document]\ninner_embedding_query(embedding, search_filter=None, k=4)[source]\uf0c1\nParameters\nembedding (List[float]) \u2013 \nsearch_filter (Optional[Dict[str, Any]]) \u2013 \nk (int) \u2013 \nReturn type\nDict[str, Any]\ncreate_results(json_result)[source]\uf0c1\nParameters\njson_result (Dict[str, Any]) \u2013 \nReturn type\nList[langchain.schema.Document]\ncreate_results_with_score(json_result)[source]\uf0c1\nParameters\njson_result (Dict[str, Any]) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nclassmethod from_texts(texts, embedding, metadatas=None, config=None, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-2", "text": "metadatas (Optional[List[dict]]) \u2013 \nconfig (Optional[langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch\nclassmethod from_documents(documents, embedding, ids=None, config=None, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from documents and embeddings.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nids (Optional[List[str]]) \u2013 \nconfig (Optional[langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch\nclass langchain.vectorstores.AlibabaCloudOpenSearchSettings(endpoint, instance_id, username, password, 
datasource_name, embedding_index_name, field_name_mapping)[source]\uf0c1\nBases: object\nOpensearch Client Configuration\nAttribute:\nendpoint (str) : The endpoint of the opensearch instance. You can find it\nin the console of Alibaba Cloud OpenSearch.\ninstance_id (str) : The identifier of the opensearch instance. You can find\nit in the console of Alibaba Cloud OpenSearch.\ndatasource_name (str): The name of the data source specified when creating it.\nusername (str) : The username specified when purchasing the instance.\npassword (str) : The password specified when purchasing the instance.\nembedding_index_name (str) : The name of the vector attribute specified\nwhen configuring the instance attributes.\nfield_name_mapping (Dict) : The field name mapping between the opensearch\nvector store and the opensearch instance configuration table field names:\n{", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-3", "text": "vector store and the opensearch instance configuration table field names:\n{\n\u2018id\u2019: \u2018The id field name map of index document.\u2019,\n\u2018document\u2019: \u2018The text field name map of index document.\u2019,\n\u2018embedding\u2019: \u2018In the embedding field of the opensearch instance,\nthe values must be in float16 multivalue type and separated by commas.\u2019,\n\u2018metadata_field_x\u2019: \u2018Metadata field mapping includes the mapped\nfield name and operator in the mapping value, separated by a comma\nbetween the mapped field name and the operator.\u2019,\n}\nParameters\nendpoint (str) \u2013 \ninstance_id (str) \u2013 \nusername (str) \u2013 \npassword (str) \u2013 \ndatasource_name (str) \u2013 \nembedding_index_name (str) \u2013 \nfield_name_mapping (Dict[str, str]) \u2013 \nReturn type\nNone\nendpoint: str\uf0c1\ninstance_id: str\uf0c1\nusername: str\uf0c1\npassword: str\uf0c1\ndatasource_name: str\uf0c1\nembedding_index_name: str\uf0c1\nfield_name_mapping: Dict[str, str] = 
{'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata_field_x': 'metadata_field_x,operator'}\uf0c1\nclass langchain.vectorstores.AnalyticDB(connection_string, embedding_function, embedding_dimension=1536, collection_name='langchain_document', pre_delete_collection=False, logger=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nVectorStore implementation using AnalyticDB.\nAnalyticDB is a distributed, cloud-native database with full PostgreSQL syntax.\n- connection_string is a postgres connection string.\n- embedding_function is any embedding function implementing\nlangchain.embeddings.base.Embeddings interface.\ncollection_name is the name of the collection to use. (default: langchain)", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-4", "text": "collection_name is the name of the collection to use. (default: langchain)\nNOTE: This is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist),\nso make sure the user has the right permissions to create tables.\npre_delete_collection: if True, will delete the collection if it exists. (default: False)\n- Useful for testing.\nParameters\nconnection_string (str) \u2013 \nembedding_function (Embeddings) \u2013 \nembedding_dimension (int) \u2013 \ncollection_name (str) \u2013 \npre_delete_collection (bool) \u2013 \nlogger (Optional[logging.Logger]) \u2013 \nReturn type\nNone\ncreate_table_if_not_exists()[source]\uf0c1\nReturn type\nNone\ncreate_collection()[source]\uf0c1\nReturn type\nNone\ndelete_collection()[source]\uf0c1\nReturn type\nNone\nadd_texts(texts, metadatas=None, ids=None, batch_size=500, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the 
texts.\nkwargs (Any) \u2013 vectorstore specific parameters\nids (Optional[List[str]]) \u2013 \nbatch_size (int) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search(query, k=4, filter=None, **kwargs)[source]\uf0c1\nRun similarity search with AnalyticDB with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-5", "text": "k (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_score(query, k=4, filter=None)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_with_score_by_vector(embedding, k=4, filter=None)[source]\uf0c1\nParameters\nembedding (List[float]) \u2013 \nk (int) \u2013 \nfilter (Optional[dict]) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. 
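AnalyticDB's similarity_search above combines a top-k vector search with an optional metadata filter. The semantics can be sketched with a small in-memory stand-in (illustrative only; the real implementation pushes both the filter and the ranking down into the database):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def filtered_similarity_search(query_vec, docs, k=4, filter=None):
    """docs: list of (text, vector, metadata) triples.

    Keep only docs whose metadata matches every key in `filter`,
    then return the k most similar (text, metadata) pairs.
    """
    candidates = [
        (text, vec, meta)
        for text, vec, meta in docs
        if filter is None or all(meta.get(key) == val for key, val in filter.items())
    ]
    candidates.sort(key=lambda d: cosine_similarity(query_vec, d[1]), reverse=True)
    return [(text, meta) for text, _, meta in candidates[:k]]
```

Note that the filter is applied before ranking, so a filtered query can return fewer than k documents.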
Defaults to None.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, embedding_dimension=1536, collection_name='langchain_document', ids=None, pre_delete_collection=False, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-6", "text": "Return VectorStore initialized from texts and embeddings.\nPostgres Connection string is required\nEither pass it as a parameter\nor set the PG_CONNECTION_STRING environment variable.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nembedding_dimension (int) \u2013 \ncollection_name (str) \u2013 \nids (Optional[List[str]]) \u2013 \npre_delete_collection (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.analyticdb.AnalyticDB\nclassmethod get_connection_string(kwargs)[source]\uf0c1\nParameters\nkwargs (Dict[str, Any]) \u2013 \nReturn type\nstr\nclassmethod from_documents(documents, embedding, embedding_dimension=1536, collection_name='langchain_document', ids=None, pre_delete_collection=False, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from documents and embeddings.\nPostgres Connection string is required\nEither pass it as a parameter\nor set the PG_CONNECTION_STRING environment variable.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nembedding_dimension (int) \u2013 \ncollection_name (str) \u2013 \nids (Optional[List[str]]) \u2013 \npre_delete_collection (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.analyticdb.AnalyticDB\nclassmethod connection_string_from_db_params(driver, host, port, database, user, password)[source]\uf0c1\nReturn connection string from database 
parameters.\nParameters\ndriver (str) \u2013 \nhost (str) \u2013 \nport (int) \u2013 \ndatabase (str) \u2013 \nuser (str) \u2013 \npassword (str) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-7", "text": "user (str) \u2013 \npassword (str) \u2013 \nReturn type\nstr\nclass langchain.vectorstores.Annoy(embedding_function, index, metric, docstore, index_to_docstore_id)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Annoy vector database.\nTo use, you should have the annoy python package installed.\nExample\nfrom langchain import Annoy\ndb = Annoy(embedding_function, index, docstore, index_to_docstore_id)\nParameters\nembedding_function (Callable) \u2013 \nindex (Any) \u2013 \nmetric (str) \u2013 \ndocstore (Docstore) \u2013 \nindex_to_docstore_id (Dict[int, str]) \u2013 \nadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nkwargs (Any) \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nprocess_index_results(idxs, dists)[source]\uf0c1\nTurns annoy results into a list of documents and scores.\nParameters\nidxs (List[int]) \u2013 List of indices of the documents in the index.\ndists (List[float]) \u2013 List of distances of the documents in the index.\nReturns\nList of Documents and scores.\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_with_score_by_vector(embedding, k=4, search_k=- 1)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": 
"c018b0edb28a-8", "text": "Parameters\nquery \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nsearch_k (int) \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nembedding (List[float]) \u2013 \nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_with_score_by_index(docstore_index, k=4, search_k=- 1)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nsearch_k (int) \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\ndocstore_index (int) \u2013 \nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_with_score(query, k=4, search_k=- 1)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nsearch_k (int) \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_by_vector(embedding, k=4, search_k=- 1, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-9", "text": "Parameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nsearch_k (int) \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the embedding.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_by_index(docstore_index, k=4, search_k=- 1, **kwargs)[source]\uf0c1\nReturn docs most similar to docstore_index.\nParameters\ndocstore_index (int) \u2013 Index of document in docstore\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nsearch_k (int) \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the embedding.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search(query, k=4, search_k=- 1, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nsearch_k (int) \u2013 inspect up to search_k nodes which defaults\nto n_trees * n if not provided\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-10", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, metric='angular', trees=100, n_jobs=- 1, **kwargs)[source]\uf0c1\nConstruct Annoy wrapper from raw documents.\nParameters\ntexts (List[str]) \u2013 List of documents to index.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-11", "text": "Parameters\ntexts (List[str]) \u2013 List of documents to index.\nembedding (langchain.embeddings.base.Embeddings) \u2013 Embedding function to use.\nmetadatas (Optional[List[dict]]) \u2013 List of metadata dictionaries to associate with documents.\nmetric (str) \u2013 Metric to use for indexing. Defaults to \u201cangular\u201d.\ntrees (int) \u2013 Number of trees to use for indexing. Defaults to 100.\nn_jobs (int) \u2013 Number of jobs to use for indexing. 
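The max_marginal_relevance_search methods described above greedily trade off relevance against redundancy. A small pure-Python sketch of that selection rule (an illustrative re-implementation, not Annoy's actual code; `fetch_k` caps the candidate pool and `lambda_mult` weights relevance against diversity):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def mmr_select(query, docs, k=4, fetch_k=20, lambda_mult=0.5):
    """Return indices of docs picked by maximal marginal relevance."""
    # Restrict attention to the fetch_k candidates most similar to the query.
    by_similarity = sorted(range(len(docs)), key=lambda i: cosine(query, docs[i]), reverse=True)
    candidates = by_similarity[:fetch_k]
    selected = []
    while candidates and len(selected) < k:
        def mmr_score(i):
            relevance = cosine(query, docs[i])
            # Penalize similarity to anything already chosen.
            redundancy = max((cosine(docs[i], docs[j]) for j in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With lambda_mult=1.0 this degenerates to a plain similarity ranking; lower values let a dissimilar document displace a near-duplicate of an already-selected one.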
Defaults to -1.\nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.annoy.Annoy\nThis is a user friendly interface that:\nEmbeds documents.\nCreates an in memory docstore\nInitializes the Annoy database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Annoy\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nindex = Annoy.from_texts(texts, embeddings)\nclassmethod from_embeddings(text_embeddings, embedding, metadatas=None, metric='angular', trees=100, n_jobs=- 1, **kwargs)[source]\uf0c1\nConstruct Annoy wrapper from embeddings.\nParameters\ntext_embeddings (List[Tuple[str, List[float]]]) \u2013 List of tuples of (text, embedding)\nembedding (langchain.embeddings.base.Embeddings) \u2013 Embedding function to use.\nmetadatas (Optional[List[dict]]) \u2013 List of metadata dictionaries to associate with documents.\nmetric (str) \u2013 Metric to use for indexing. Defaults to \u201cangular\u201d.\ntrees (int) \u2013 Number of trees to use for indexing. Defaults to 100.\nn_jobs (int) \u2013 Number of jobs to use for indexing. 
Defaults to -1\nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.annoy.Annoy", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-12", "text": "Return type\nlangchain.vectorstores.annoy.Annoy\nThis is a user friendly interface that:\nCreates an in memory docstore with provided embeddings\nInitializes the Annoy database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import Annoy\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)\ntext_embedding_pairs = list(zip(texts, text_embeddings))\ndb = Annoy.from_embeddings(text_embedding_pairs, embeddings)\nsave_local(folder_path, prefault=False)[source]\uf0c1\nSave Annoy index, docstore, and index_to_docstore_id to disk.\nParameters\nfolder_path (str) \u2013 folder path to save index, docstore,\nand index_to_docstore_id to.\nprefault (bool) \u2013 Whether to pre-load the index into memory.\nReturn type\nNone\nclassmethod load_local(folder_path, embeddings)[source]\uf0c1\nLoad Annoy index, docstore, and index_to_docstore_id to disk.\nParameters\nfolder_path (str) \u2013 folder path to load index, docstore,\nand index_to_docstore_id from.\nembeddings (langchain.embeddings.base.Embeddings) \u2013 Embeddings to use when generating queries.\nReturn type\nlangchain.vectorstores.annoy.Annoy\nclass langchain.vectorstores.AtlasDB(name, embedding_function=None, api_key=None, description='A description for your project', is_public=True, reset_project_if_exists=False)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Atlas: Nomic\u2019s neural database and rhizomatic instrument.\nTo use, you should have the nomic python package installed.\nExample\nfrom langchain.vectorstores import AtlasDB", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-13", "text": "Example\nfrom 
langchain.vectorstores import AtlasDB\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = AtlasDB(\"my_project\", embeddings.embed_query)\nParameters\nname (str) \u2013 \nembedding_function (Optional[Embeddings]) \u2013 \napi_key (Optional[str]) \u2013 \ndescription (str) \u2013 \nis_public (bool) \u2013 \nreset_project_if_exists (bool) \u2013 \nReturn type\nNone\nadd_texts(texts, metadatas=None, ids=None, refresh=True, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]]) \u2013 An optional list of ids.\nrefresh (bool) \u2013 Whether or not to refresh indices with the updated data.\nDefault True.\nkwargs (Any) \u2013 \nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\ncreate_index(**kwargs)[source]\uf0c1\nCreates an index in your project.\nSee\nhttps://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index\nfor full detail.\nParameters\nkwargs (Any) \u2013 \nReturn type\nAny\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nRun similarity search with AtlasDB\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. 
Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of documents most similar to the query text.\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-14", "text": "List of documents most similar to the query text.\nReturn type\nList[Document]\nclassmethod from_texts(texts, embedding=None, metadatas=None, ids=None, name=None, api_key=None, description='A description for your project', is_public=True, reset_project_if_exists=False, index_kwargs=None, **kwargs)[source]\uf0c1\nCreate an AtlasDB vectorstore from raw documents.\nParameters\ntexts (List[str]) \u2013 The list of texts to ingest.\nname (str) \u2013 Name of the project to create.\napi_key (str) \u2013 Your nomic API key.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\nids (Optional[List[str]]) \u2013 Optional list of document IDs. If None,\nids will be auto-created.\ndescription (str) \u2013 A description for your project.\nis_public (bool) \u2013 Whether your project is publicly accessible.\nTrue by default.\nreset_project_if_exists (bool) \u2013 Whether to reset this project if it\nalready exists. 
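from_texts above auto-creates document IDs when `ids` is None. A minimal sketch of that pattern (a hypothetical helper; AtlasDB's real ID scheme is not specified here):

```python
import uuid

def ensure_ids(texts, ids=None):
    """Return one unique ID per text, auto-creating them when `ids` is None."""
    if ids is None:
        # One random hex ID per document.
        return [uuid.uuid4().hex for _ in texts]
    if len(ids) != len(texts):
        raise ValueError("ids and texts must have the same length")
    return ids
```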
Default False.\nGenerally useful during development and testing.\nindex_kwargs (Optional[dict]) \u2013 Dict of kwargs for index creation.\nSee https://docs.nomic.ai/atlas_api.html\nkwargs (Any) \u2013 \nReturns\nNomic\u2019s neural database and finest rhizomatic instrument\nReturn type\nAtlasDB\nclassmethod from_documents(documents, embedding=None, ids=None, name=None, api_key=None, persist_directory=None, description='A description for your project', is_public=True, reset_project_if_exists=False, index_kwargs=None, **kwargs)[source]\uf0c1\nCreate an AtlasDB vectorstore from a list of documents.\nParameters\nname (str) \u2013 Name of the collection to create.\napi_key (str) \u2013 Your nomic API key.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-15", "text": "api_key (str) \u2013 Your nomic API key.\ndocuments (List[Document]) \u2013 List of documents to add to the vectorstore.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nids (Optional[List[str]]) \u2013 Optional list of document IDs. If None,\nids will be auto-created.\ndescription (str) \u2013 A description for your project.\nis_public (bool) \u2013 Whether your project is publicly accessible.\nTrue by default.\nreset_project_if_exists (bool) \u2013 Whether to reset this project if\nit already exists. 
Default False.\nGenerally useful during development and testing.\nindex_kwargs (Optional[dict]) \u2013 Dict of kwargs for index creation.\nSee https://docs.nomic.ai/atlas_api.html\npersist_directory (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturns\nNomic\u2019s neural database and finest rhizomatic instrument\nReturn type\nAtlasDB\nclass langchain.vectorstores.AwaDB(table_name='langchain_awadb', embedding_model=None, log_and_data_dir=None, client=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nInterface implemented by AwaDB vector stores.\nParameters\ntable_name (str) \u2013 \nembedding_model (Optional[Embeddings]) \u2013 \nlog_and_data_dir (Optional[str]) \u2013 \nclient (Optional[awadb.Client]) \u2013 \nReturn type\nNone\nadd_texts(texts, metadatas=None, is_duplicate_texts=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\n:param texts: Iterable of strings to add to the vectorstore.\n:param metadatas: Optional list of metadatas associated with the texts.\n:param is_duplicate_texts: Optional flag indicating whether to allow duplicate texts.\n:param kwargs: vectorstore specific parameters.\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-16", "text": ":param kwargs: vectorstore specific parameters.\nReturns\nList of ids from adding the texts into the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nis_duplicate_texts (Optional[bool]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nload_local(table_name, **kwargs)[source]\uf0c1\nParameters\ntable_name (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nbool\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_score(query, k=4, **kwargs)[source]\uf0c1\nReturn docs and 
relevance scores, normalized on a scale from 0 to 1.\n0 is dissimilar, 1 is most similar.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_with_relevance_scores(query, k=4, **kwargs)[source]\uf0c1\nReturn docs and relevance scores, normalized on a scale from 0 to 1.\n0 is dissimilar, 1 is most similar.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_by_vector(embedding=None, k=4, scores=None, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (Optional[List[float]]) \u2013 Embedding to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-17", "text": "Parameters\nembedding (Optional[List[float]]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nscores (Optional[list]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type\nList[langchain.schema.Document]\ncreate_table(table_name, **kwargs)[source]\uf0c1\nCreate a new table.\nParameters\ntable_name (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nbool\nuse(table_name, **kwargs)[source]\uf0c1\nUse the specified table. 
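The similarity_search_with_relevance_scores methods above return scores normalized to a [0, 1] scale, where 0 is dissimilar and 1 is most similar. One plausible normalization, assuming the raw metric is a cosine distance (stores differ, so this is only an illustration, not AwaDB's actual formula):

```python
def relevance_from_cosine_distance(distance):
    """Map a cosine distance in [0, 2] onto a relevance score in [0, 1]."""
    if not 0.0 <= distance <= 2.0:
        raise ValueError("cosine distance must lie in [0, 2]")
    # distance 0 (same direction) -> 1.0; distance 2 (opposite) -> 0.0
    return 1.0 - distance / 2.0
```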
If you don\u2019t know the table names, invoke list_tables.\nParameters\ntable_name (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nbool\nlist_tables(**kwargs)[source]\uf0c1\nList all the tables created by the client.\nParameters\nkwargs (Any) \u2013 \nReturn type\nList[str]\nget_current_table(**kwargs)[source]\uf0c1\nGet the current table.\nParameters\nkwargs (Any) \u2013 \nReturn type\nstr\nclassmethod from_texts(texts, embedding=None, metadatas=None, table_name='langchain_awadb', logging_and_data_dir=None, client=None, **kwargs)[source]\uf0c1\nCreate an AwaDB vectorstore from raw documents.\nParameters\ntexts (List[str]) \u2013 List of texts to add to the table.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\ntable_name (str) \u2013 Name of the table to create.\nlogging_and_data_dir (Optional[str]) \u2013 Directory of logging and persistence.\nclient (Optional[awadb.Client]) \u2013 AwaDB client.\nkwargs (Any) \u2013 \nReturns", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-18", "text": "kwargs (Any) \u2013 \nReturns\nAwaDB vectorstore.\nReturn type\nAwaDB\nclassmethod from_documents(documents, embedding=None, table_name='langchain_awadb', logging_and_data_dir=None, client=None, **kwargs)[source]\uf0c1\nCreate an AwaDB vectorstore from a list of documents.\nIf a logging_and_data_dir is specified, the table will be persisted there.\nParameters\ndocuments (List[Document]) \u2013 List of documents to add to the vectorstore.\nembedding (Optional[Embeddings]) \u2013 Embedding function. 
Defaults to None.\ntable_name (str) \u2013 Name of the table to create.\nlogging_and_data_dir (Optional[str]) \u2013 Directory to persist the table.\nclient (Optional[awadb.Client]) \u2013 AwaDB client\nkwargs (Any) \u2013 \nReturns\nAwaDB vectorstore.\nReturn type\nAwaDB\nclass langchain.vectorstores.AzureSearch(azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type='hybrid', semantic_configuration_name=None, semantic_query_language='en-us', **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nParameters\nazure_search_endpoint (str) \u2013 \nazure_search_key (str) \u2013 \nindex_name (str) \u2013 \nembedding_function (Callable) \u2013 \nsearch_type (str) \u2013 \nsemantic_configuration_name (Optional[str]) \u2013 \nsemantic_query_language (str) \u2013 \nkwargs (Any) \u2013 \nadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nAdd texts data to an existing index.\nParameters\ntexts (Iterable[str]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-19", "text": "similarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nvector_search(query, k=4, **kwargs)[source]\uf0c1\nReturns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. 
Default is 4.\nkwargs (Any) \u2013 \nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nvector_search_with_score(query, k=4, filters=None)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilters (Optional[str]) \u2013 \nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nhybrid_search(query, k=4, **kwargs)[source]\uf0c1\nReturns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nkwargs (Any) \u2013 \nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nhybrid_search_with_score(query, k=4, filters=None)[source]\uf0c1\nReturn docs most similar to query with a hybrid query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-20", "text": "Parameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilters (Optional[str]) \u2013 \nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsemantic_hybrid_search(query, k=4, **kwargs)[source]\uf0c1\nReturns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. 
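The hybrid_search methods above merge a keyword-based ranking with a vector-based ranking. One widely used rule for fusing two ranked lists is reciprocal rank fusion; the sketch below is a generic illustration and does not claim to reproduce AzureSearch's exact scoring:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc ids; k dampens the weight of top ranks."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1 / (k + rank) with 1-based ranks.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)
```

A document ranked well by both signals outscores one ranked first by only a single signal, which is what makes the fusion "hybrid" rather than a simple concatenation.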
Default is 4.\nkwargs (Any) \u2013 \nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nsemantic_hybrid_search_with_score(query, k=4, filters=None)[source]\uf0c1\nReturn docs most similar to query with a hybrid query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilters (Optional[str]) \u2013 \nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nclassmethod from_texts(texts, embedding, metadatas=None, azure_search_endpoint='', azure_search_key='', index_name='langchain-index', **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nazure_search_endpoint (str) \u2013 \nazure_search_key (str) \u2013 \nindex_name (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.azuresearch.AzureSearch", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-21", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.azuresearch.AzureSearch\nclass langchain.vectorstores.Cassandra(embedding, session, keyspace, table_name, ttl_seconds=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Cassandra embeddings platform.\nThere is no notion of a default table name, since each embedding\nfunction implies its own vector dimension, which is part of the schema.\nExample\nfrom langchain.vectorstores import Cassandra\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nsession = ...\nkeyspace = 'my_keyspace'\nvectorstore = Cassandra(embeddings, session, keyspace, 'my_doc_archive')\nParameters\nembedding (Embeddings) \u2013 \nsession (Session) \u2013 \nkeyspace (str) 
\u2013 \ntable_name (str) \u2013 \nttl_seconds (int | None) \u2013 \nReturn type\nNone\ndelete_collection()[source]\uf0c1\nJust an alias for clear\n(to better align with other VectorStore implementations).\nReturn type\nNone\nclear()[source]\uf0c1\nEmpty the collection.\nReturn type\nNone\ndelete_by_document_id(document_id)[source]\uf0c1\nParameters\ndocument_id (str) \u2013 \nReturn type\nNone\nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\nkwargs (Any) \u2013 \nReturns\nList of IDs of the added texts.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-22", "text": "Returns\nList of IDs of the added texts.\nReturn type\nList[str]\nsimilarity_search_with_score_id_by_vector(embedding, k=4)[source]\uf0c1\nReturn docs most similar to embedding vector.\nNo support for filter query (on metadata) along with vector search.\nParameters\nembedding (str) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nReturns\nList of (Document, score, id), the most similar to the query vector.\nReturn type\nList[Tuple[langchain.schema.Document, float, str]]\nsimilarity_search_with_score_id(query, k=4, **kwargs)[source]\uf0c1\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float, str]]\nsimilarity_search_with_score_by_vector(embedding, k=4)[source]\uf0c1\nReturn docs most similar to embedding vector.\nNo support for filter query (on metadata) along with vector search.\nParameters\nembedding (str) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of (Document, score), the most similar to the query vector.\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nsimilarity_search_by_vector(embedding, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-23", "text": "Return docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_score(query, k=4, **kwargs)[source]\uf0c1\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param embedding: Embedding to look up documents similar to.\n:param k: Number of Documents to return.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nReturns\nList of Documents selected by maximal marginal relevance.\nParameters\nembedding (List[float]) \u2013 \nk (int) \u2013 \nfetch_k (int) \u2013 \nlambda_mult (float) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-24", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum 
diversity and 1 to minimum diversity.\nOptional.\nReturns\nList of Documents selected by maximal marginal relevance.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nfetch_k (int) \u2013 \nlambda_mult (float) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]\uf0c1\nCreate a Cassandra vectorstore from raw texts.\nNo support for specifying text IDs\nReturns\na Cassandra vectorstore.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.cassandra.CVST\nclassmethod from_documents(documents, embedding, **kwargs)[source]\uf0c1\nCreate a Cassandra vectorstore from a document list.\nNo support for specifying text IDs\nReturns\na Cassandra vectorstore.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.cassandra.CVST\nclass langchain.vectorstores.Chroma(collection_name='langchain', embedding_function=None, persist_directory=None, client_settings=None, collection_metadata=None, client=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-25", "text": "Bases: langchain.vectorstores.base.VectorStore\nWrapper around ChromaDB embeddings platform.\nTo use, you should have the chromadb python package installed.\nExample\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = Chroma(\"langchain_store\", embeddings)\nParameters\ncollection_name (str) \u2013 \nembedding_function (Optional[Embeddings]) \u2013 \npersist_directory (Optional[str]) \u2013 \nclient_settings 
(Optional[chromadb.config.Settings]) \u2013 \ncollection_metadata (Optional[Dict]) \u2013 \nclient (Optional[chromadb.Client]) \u2013 \nReturn type\nNone\nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\nkwargs (Any) \u2013 \nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nsimilarity_search(query, k=4, filter=None, **kwargs)[source]\uf0c1\nRun similarity search with Chroma.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nkwargs (Any) \u2013 \nReturns\nList of documents most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-26", "text": "Return docs most similar to embedding vector.\n:param embedding: Embedding to look up documents similar to.\n:type embedding: str\n:param k: Number of Documents to return. Defaults to 4.\n:type k: int\n:param filter: Filter by metadata. 
Defaults to None.\n:type filter: Optional[Dict[str, str]]\nReturns\nList of Documents most similar to the query vector.\nParameters\nembedding (List[float]) \u2013 \nk (int) \u2013 \nfilter (Optional[Dict[str, str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_score(query, k=4, filter=None, **kwargs)[source]\uf0c1\nRun similarity search with Chroma with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nkwargs (Any) \u2013 \nReturns\nList of documents most similar to\nthe query text and cosine distance in float for each.\nLower score represents more similarity.\nReturn type\nList[Tuple[Document, float]]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-27", "text": "lambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. 
Defaults to None.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\ndelete_collection()[source]\uf0c1\nDelete the collection.\nReturn type\nNone\nget(ids=None, where=None, limit=None, offset=None, where_document=None, include=None)[source]\uf0c1\nGets the collection.\nParameters\nids (Optional[OneOrMany[ID]]) \u2013 The ids of the embeddings to get. Optional.\nwhere (Optional[Where]) \u2013 A Where type dict used to filter results by.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-28", "text": "where (Optional[Where]) \u2013 A Where type dict used to filter results by.\nE.g. {\u201ccolor\u201d : \u201cred\u201d, \u201cprice\u201d: 4.20}. Optional.\nlimit (Optional[int]) \u2013 The number of documents to return. Optional.\noffset (Optional[int]) \u2013 The offset to start returning results from.\nUseful for paging results with limit. 
Optional.\nwhere_document (Optional[WhereDocument]) \u2013 A WhereDocument type dict used to filter by the documents.\nE.g. {$contains: {\u201ctext\u201d: \u201chello\u201d}}. Optional.\ninclude (Optional[List[str]]) \u2013 A list of what to include in the results.\nCan contain \u201cembeddings\u201d, \u201cmetadatas\u201d, \u201cdocuments\u201d.\nIds are always included.\nDefaults to [\u201cmetadatas\u201d, \u201cdocuments\u201d]. Optional.\nReturn type\nDict[str, Any]\npersist()[source]\uf0c1\nPersist the collection.\nThis can be used to explicitly persist the data to disk.\nIt will also be called automatically when the object is destroyed.\nReturn type\nNone\nupdate_document(document_id, document)[source]\uf0c1\nUpdate a document in the collection.\nParameters\ndocument_id (str) \u2013 ID of the document to update.\ndocument (Document) \u2013 Document to update.\nReturn type\nNone\nclassmethod from_texts(texts, embedding=None, metadatas=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)[source]\uf0c1\nCreate a Chroma vectorstore from a raw documents.\nIf a persist_directory is specified, the collection will be persisted there.\nOtherwise, the data will be ephemeral in-memory.\nParameters\ntexts (List[str]) \u2013 List of texts to add to the collection.\ncollection_name (str) \u2013 Name of the collection to create.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-29", "text": "collection_name (str) \u2013 Name of the collection to create.\npersist_directory (Optional[str]) \u2013 Directory to persist the collection.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\nids (Optional[List[str]]) \u2013 List of document IDs. 
Defaults to None.\nclient_settings (Optional[chromadb.config.Settings]) \u2013 Chroma client settings\nclient (Optional[chromadb.Client]) \u2013 \nkwargs (Any) \u2013 \nReturns\nChroma vectorstore.\nReturn type\nChroma\nclassmethod from_documents(documents, embedding=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)[source]\uf0c1\nCreate a Chroma vectorstore from a list of documents.\nIf a persist_directory is specified, the collection will be persisted there.\nOtherwise, the data will be ephemeral in-memory.\nParameters\ncollection_name (str) \u2013 Name of the collection to create.\npersist_directory (Optional[str]) \u2013 Directory to persist the collection.\nids (Optional[List[str]]) \u2013 List of document IDs. Defaults to None.\ndocuments (List[Document]) \u2013 List of documents to add to the vectorstore.\nembedding (Optional[Embeddings]) \u2013 Embedding function. Defaults to None.\nclient_settings (Optional[chromadb.config.Settings]) \u2013 Chroma client settings\nclient (Optional[chromadb.Client]) \u2013 \nkwargs (Any) \u2013 \nReturns\nChroma vectorstore.\nReturn type\nChroma\ndelete(ids)[source]\uf0c1\nDelete by vector IDs.\nParameters\nids (List[str]) \u2013 List of ids to delete.\nReturn type\nNone\nclass langchain.vectorstores.Clickhouse(embedding, config=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-30", "text": "Bases: langchain.vectorstores.base.VectorStore\nWrapper around ClickHouse vector database\nYou need a clickhouse-connect python package, and a valid account\nto connect to ClickHouse.\nClickHouse can not only search with simple vector indexes,\nit also supports complex query with multiple conditions,\nconstraints and even sub-queries.\nFor more information, please visit[ClickHouse official site](https://clickhouse.com/clickhouse)\nParameters\nembedding (Embeddings) \u2013 \nconfig 
(Optional[ClickhouseSettings]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nescape_str(value)[source]\uf0c1\nParameters\nvalue (str) \u2013 \nReturn type\nstr\nadd_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source]\uf0c1\nInsert more texts through the embeddings and add to the VectorStore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the VectorStore.\nids (Optional[Iterable[str]]) \u2013 Optional list of ids to associate with the texts.\nbatch_size (int) \u2013 Batch size of insertion\nmetadata \u2013 Optional column data to be inserted\nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the VectorStore.\nReturn type\nList[str]\nclassmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]\uf0c1\nCreate ClickHouse wrapper with existing texts\nParameters\nembedding_function (Embeddings) \u2013 Function to extract text embedding\ntexts (Iterable[str]) \u2013 List or tuple of strings to be added\nconfig (ClickHouseSettings, Optional) \u2013 ClickHouse configuration\ntext_ids (Optional[Iterable], optional) \u2013 IDs for the texts.\nDefaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-31", "text": "Defaults to None.\nbatch_size (int, optional) \u2013 Batchsize when transmitting data to ClickHouse.\nDefaults to 32.\nmetadata (List[dict], optional) \u2013 metadata to texts. 
Defaults to None.\ninto (Other keyword arguments will pass) \u2013 [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[Dict[Any, Any]]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nClickHouse Index\nReturn type\nlangchain.vectorstores.clickhouse.Clickhouse\nsimilarity_search(query, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with ClickHouse\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nkwargs (Any) \u2013 \nReturns\nList of Documents\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with ClickHouse by vectors\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-32", "text": "of SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. 
The default name for it is metadata.\nembedding (List[float]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of (Document, similarity)\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with ClickHouse\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end-user to fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nkwargs (Any) \u2013 \nReturns\nList of documents\nReturn type\nList[Document]\ndrop()[source]\uf0c1\nHelper function: Drop data\nReturn type\nNone\nproperty metadata_column: str\uf0c1\npydantic settings langchain.vectorstores.ClickhouseSettings[source]\uf0c1\nBases: pydantic.env_settings.BaseSettings\nClickHouse Client Configuration\nAttribute:\nclickhouse_host (str)An URL to connect to MyScale backend.Defaults to \u2018localhost\u2019.\nclickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\nusername (str) : Username to login. Defaults to None.\npassword (str) : Password to login. Defaults to None.\nindex_type (str): index type string.\nindex_param (list): index build parameter.\nindex_query_params(dict): index query parameters.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-33", "text": "index_param (list): index build parameter.\nindex_query_params(dict): index query parameters.\ndatabase (str) : Database name to find the table. 
Defaults to \u2018default\u2019.\ntable (str) : Table name to operate on.\nDefaults to \u2018vector_table\u2019.\nmetric (str)Metric to compute distance,supported are (\u2018angular\u2019, \u2018euclidean\u2019, \u2018manhattan\u2019, \u2018hamming\u2019,\n\u2018dot\u2019). Defaults to \u2018angular\u2019.\nhttps://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\ncolumn_map (Dict)Column type map to project column name onto langchainsemantics. Must have keys: text, id, vector,\nmust be same size to number of columns. For example:\n.. code-block:: python\n{\u2018id\u2019: \u2018text_id\u2019,\n\u2018uuid\u2019: \u2018global_unique_id\u2019\n\u2018embedding\u2019: \u2018text_embedding\u2019,\n\u2018document\u2019: \u2018text_plain\u2019,\n\u2018metadata\u2019: \u2018metadata_dictionary_in_json\u2019,\n}\nDefaults to identity map.\nShow JSON schema{\n \"title\": \"ClickhouseSettings\",", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-34", "text": "Show JSON schema{\n \"title\": \"ClickhouseSettings\",\n \"description\": \"ClickHouse Client Configuration\\n\\nAttribute:\\n clickhouse_host (str) : An URL to connect to MyScale backend.\\n Defaults to 'localhost'.\\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\\n username (str) : Username to login. Defaults to None.\\n password (str) : Password to login. Defaults to None.\\n index_type (str): index type string.\\n index_param (list): index build parameter.\\n index_query_params(dict): index query parameters.\\n database (str) : Database name to find the table. Defaults to 'default'.\\n table (str) : Table name to operate on.\\n Defaults to 'vector_table'.\\n metric (str) : Metric to compute distance,\\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\\n 'dot'). 
Defaults to 'angular'.\\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\\n\\n column_map (Dict) : Column type map to project column name onto langchain\\n semantics. Must have keys: `text`, `id`, `vector`,\\n must be same size to number of columns. For example:\\n .. code-block:: python\\n\\n {\\n 'id': 'text_id',\\n 'uuid': 'global_unique_id'\\n 'embedding': 'text_embedding',\\n 'document': 'text_plain',\\n 'metadata': 'metadata_dictionary_in_json',\\n }\\n\\n Defaults to identity map.\",\n \"type\": \"object\",\n \"properties\": {\n \"host\": {", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-35", "text": "\"type\": \"object\",\n \"properties\": {\n \"host\": {\n \"title\": \"Host\",\n \"default\": \"localhost\",\n \"env_names\": \"{'clickhouse_host'}\",\n \"type\": \"string\"\n },\n \"port\": {\n \"title\": \"Port\",\n \"default\": 8123,\n \"env_names\": \"{'clickhouse_port'}\",\n \"type\": \"integer\"\n },\n \"username\": {\n \"title\": \"Username\",\n \"env_names\": \"{'clickhouse_username'}\",\n \"type\": \"string\"\n },\n \"password\": {\n \"title\": \"Password\",\n \"env_names\": \"{'clickhouse_password'}\",\n \"type\": \"string\"\n },\n \"index_type\": {\n \"title\": \"Index Type\",\n \"default\": \"annoy\",\n \"env_names\": \"{'clickhouse_index_type'}\",\n \"type\": \"string\"\n },\n \"index_param\": {\n \"title\": \"Index Param\",\n \"default\": [\n \"'L2Distance'\",\n 100\n ],\n \"env_names\": \"{'clickhouse_index_param'}\",\n \"anyOf\": [\n {\n \"type\": \"array\",\n \"items\": {}\n },\n {\n \"type\": \"object\"\n }\n ]\n },\n \"index_query_params\": {\n \"title\": \"Index Query Params\",\n \"default\": {},\n \"env_names\": \"{'clickhouse_index_query_params'}\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-36", "text": 
"\"type\": \"string\"\n }\n },\n \"column_map\": {\n \"title\": \"Column Map\",\n \"default\": {\n \"id\": \"id\",\n \"uuid\": \"uuid\",\n \"document\": \"document\",\n \"embedding\": \"embedding\",\n \"metadata\": \"metadata\"\n },\n \"env_names\": \"{'clickhouse_column_map'}\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"database\": {\n \"title\": \"Database\",\n \"default\": \"default\",\n \"env_names\": \"{'clickhouse_database'}\",\n \"type\": \"string\"\n },\n \"table\": {\n \"title\": \"Table\",\n \"default\": \"langchain\",\n \"env_names\": \"{'clickhouse_table'}\",\n \"type\": \"string\"\n },\n \"metric\": {\n \"title\": \"Metric\",\n \"default\": \"angular\",\n \"env_names\": \"{'clickhouse_metric'}\",\n \"type\": \"string\"\n }\n },\n \"additionalProperties\": false\n}\nConfig\nenv_file: str = .env\nenv_file_encoding: str = utf-8\nenv_prefix: str = clickhouse_\nFields\ncolumn_map (Dict[str, str])\ndatabase (str)\nhost (str)\nindex_param (Optional[Union[List, Dict]])\nindex_query_params (Dict[str, str])\nindex_type (str)\nmetric (str)\npassword (Optional[str])\nport (int)\ntable (str)\nusername (Optional[str])", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-37", "text": "port (int)\ntable (str)\nusername (Optional[str])\nattribute column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}\uf0c1\nattribute database: str = 'default'\uf0c1\nattribute host: str = 'localhost'\uf0c1\nattribute index_param: Optional[Union[List, Dict]] = [\"'L2Distance'\", 100]\uf0c1\nattribute index_query_params: Dict[str, str] = {}\uf0c1\nattribute index_type: str = 'annoy'\uf0c1\nattribute metric: str = 'angular'\uf0c1\nattribute password: Optional[str] = None\uf0c1\nattribute port: int = 8123\uf0c1\nattribute table: str = 'langchain'\uf0c1\nattribute username: Optional[str] = None\uf0c1\nclass 
langchain.vectorstores.DeepLake(dataset_path='./deeplake/', token=None, embedding_function=None, read_only=False, ingestion_batch_size=1000, num_workers=0, verbose=True, exec_option='python', **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Deep Lake, a data lake for deep learning applications.\nWe integrated deeplake\u2019s similarity search and filtering for fast prototyping,\nNow, it supports Tensor Query Language (TQL) for production use cases\nover billion rows.\nWhy Deep Lake?\nNot only stores embeddings, but also the original data with version control.\nServerless, doesn\u2019t require another service and can be used with majorcloud providers (S3, GCS, etc.)\nMore than just a multi-modal vector store. You can use the datasetto fine-tune your own LLM models.\nTo use, you should have the deeplake python package installed.\nExample", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-38", "text": "To use, you should have the deeplake python package installed.\nExample\nfrom langchain.vectorstores import DeepLake\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = DeepLake(\"langchain_store\", embeddings.embed_query)\nParameters\ndataset_path (str) \u2013 \ntoken (Optional[str]) \u2013 \nembedding_function (Optional[Embeddings]) \u2013 \nread_only (bool) \u2013 \ningestion_batch_size (int) \u2013 \nnum_workers (int) \u2013 \nverbose (bool) \u2013 \nexec_option (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nExamples\n>>> ids = deeplake_vectorstore.add_texts(\n... texts = ,\n... metadatas = ,\n... ids = ,\n... 
)\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\n**kwargs \u2013 other optional keyword arguments.\nkwargs (Any) \u2013 \nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nExamples\n>>> # Search using an embedding\n>>> data = vector_store.similarity_search(\n... query=,\n... k=,\n... exec_option=,\n... )\n>>> # Run tql search:", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-39", "text": "... )\n>>> # Run tql search:\n>>> data = vector_store.tql_search(\n... tql_query=\"SELECT * WHERE id == \",\n... exec_option=\"compute_engine\",\n... )\nParameters\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nquery (str) \u2013 Text to look up similar documents.\n**kwargs \u2013 Additional keyword arguments include:\nembedding (Callable): Embedding function to use. Defaults to None.\ndistance_metric (str): \u2018L2\u2019 for Euclidean, \u2018L1\u2019 for Nuclear, \u2018max\u2019\nfor L-infinity, \u2018cos\u2019 for cosine, \u2018dot\u2019 for dot product.\nDefaults to \u2018L2\u2019.\nfilter (Union[Dict, Callable], optional): Additional filterbefore embedding search.\n- Dict: Key-value search on tensors of htype json,\n(sample must satisfy all key-value filters)\nDict = {\u201ctensor_1\u201d: {\u201ckey\u201d: value}, \u201ctensor_2\u201d: {\u201ckey\u201d: value}}\nFunction: Compatible with deeplake.filter.\nDefaults to None.\nexec_option (str): Supports 3 ways to perform searching.\u2019python\u2019, \u2018compute_engine\u2019, or \u2018tensor_db\u2019. 
Defaults to \u2018python\u2019.\n- \u2018python\u2019: Pure-python implementation for the client.\nWARNING: not recommended for big datasets.\n\u2019compute_engine\u2019: C++ implementation of the Compute Engine forthe client. Not for in-memory or local datasets.\n\u2019tensor_db\u2019: Managed Tensor Database for storage and query.Only for data in Deep Lake Managed Database.\nUse runtime = {\u201cdb_engine\u201d: True} during dataset creation.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding, k=4, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-40", "text": "similarity_search_by_vector(embedding, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nExamples\n>>> # Search using an embedding\n>>> data = vector_store.similarity_search_by_vector(\n... embedding=,\n... k=,\n... exec_option=,\n... )\nParameters\nembedding (Union[List[float], np.ndarray]) \u2013 Embedding to find similar docs.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 Additional keyword arguments including:\nfilter (Union[Dict, Callable], optional):\nAdditional filter before embedding search.\n- Dict - Key-value search on tensors of htype json. True\nif all key-value filters are satisfied.\nDict = {\u201ctensor_name_1\u201d: {\u201ckey\u201d: value},\n\u201dtensor_name_2\u201d: {\u201ckey\u201d: value}}\nFunction - Any function compatible withdeeplake.filter.\nDefaults to None.\nexec_option (str): Options for search execution include\u201dpython\u201d, \u201ccompute_engine\u201d, or \u201ctensor_db\u201d. Defaults to\n\u201cpython\u201d.\n- \u201cpython\u201d - Pure-python implementation running on the client.\nCan be used for data stored anywhere. 
WARNING: using this\noption with big datasets is discouraged due to potential\nmemory issues.\n\u201dcompute_engine\u201d - Performant C++ implementation of the DeepLake Compute Engine. Runs on the client and can be used for\nany data stored in or connected to Deep Lake. It cannot be\nused with in-memory or local datasets.\n\u201dtensor_db\u201d - Performant, fully-hosted Managed Tensor Database.Responsible for storage and query execution. Only available\nfor data stored in the Deep Lake Managed Database.\nTo store datasets in this database, specify\nruntime = {\u201cdb_engine\u201d: True} during dataset creation.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-41", "text": "runtime = {\u201cdb_engine\u201d: True} during dataset creation.\ndistance_metric (str): L2 for Euclidean, L1 for Nuclear,max for L-infinity distance, cos for cosine similarity,\n\u2018dot\u2019 for dot product. Defaults to L2.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type\nList[Document]\nsimilarity_search_with_score(query, k=4, **kwargs)[source]\uf0c1\nRun similarity search with Deep Lake with distance returned.\nExamples:\n>>> data = vector_store.similarity_search_with_score(\n\u2026 query=,\n\u2026 embedding=\n\u2026 k=,\n\u2026 exec_option=,\n\u2026 )\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\n**kwargs \u2013 Additional keyword arguments. Some of these arguments are:\ndistance_metric: L2 for Euclidean, L1 for Nuclear, max L-infinity\ndistance, cos for cosine similarity, \u2018dot\u2019 for dot product.\nDefaults to L2.\nfilter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.embedding_function (Callable): Embedding function to use. Defaults\nto None.\nexec_option (str): DeepLakeVectorStore supports 3 ways to performsearching. 
It could be either \u201cpython\u201d, \u201ccompute_engine\u201d or\n\u201ctensor_db\u201d. Defaults to \u201cpython\u201d.\n- \u201cpython\u201d - Pure-python implementation running on the client.\nCan be used for data stored anywhere. WARNING: using this\noption with big datasets is discouraged due to potential\nmemory issues.\n\u201dcompute_engine\u201d - Performant C++ implementation of the DeepLake Compute Engine. Runs on the client and can be used for", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-42", "text": "any data stored in or connected to Deep Lake. It cannot be used\nwith in-memory or local datasets.\n\u201dtensor_db\u201d - Performant, fully-hosted Managed Tensor Database.Responsible for storage and query execution. Only available for\ndata stored in the Deep Lake Managed Database. To store datasets\nin this database, specify runtime = {\u201cdb_engine\u201d: True}\nduring dataset creation.\nkwargs (Any) \u2013 \nReturns\nList of documents most similar to the querytext with distance in float.\nReturn type\nList[Tuple[Document, float]]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance. Maximal marginal\nrelevance optimizes for similarity to query AND diversity among selected docs.\nExamples:\n>>> data = vector_store.max_marginal_relevance_search_by_vector(\n\u2026 embedding=,\n\u2026 fetch_k=,\n\u2026 k=,\n\u2026 exec_option=,\n\u2026 )\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch for MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 determining the degree of diversity.\n0 corresponds to max diversity and 1 to min diversity. 
Defaults to 0.5.\nexec_option (str) \u2013 DeepLakeVectorStore supports 3 ways for searching.\nCould be \u201cpython\u201d, \u201ccompute_engine\u201d or \u201ctensor_db\u201d. Defaults to\n\u201cpython\u201d.\n- \u201cpython\u201d - Pure-python implementation running on the client.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-43", "text": "\u201cpython\u201d.\n- \u201cpython\u201d - Pure-python implementation running on the client.\nCan be used for data stored anywhere. WARNING: using this\noption with big datasets is discouraged due to potential\nmemory issues.\n\u201dcompute_engine\u201d - Performant C++ implementation of the DeepLake Compute Engine. Runs on the client and can be used for\nany data stored in or connected to Deep Lake. It cannot be used\nwith in-memory or local datasets.\n\u201dtensor_db\u201d - Performant, fully-hosted Managed Tensor Database.Responsible for storage and query execution. Only available for\ndata stored in the Deep Lake Managed Database. To store datasets\nin this database, specify runtime = {\u201cdb_engine\u201d: True}\nduring dataset creation.\n**kwargs \u2013 Additional keyword arguments.\nkwargs (Any) \u2013 \nReturns\nList[Documents] - A list of documents.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)[source]\uf0c1\nReturn docs selected using maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nExamples:\n>>> # Search using an embedding\n>>> data = vector_store.max_marginal_relevance_search(\n\u2026 query = ,\n\u2026 embedding_function = ,\n\u2026 k = ,\n\u2026 exec_option = ,\n\u2026 )\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents for MMR algorithm.\nlambda_mult (float) \u2013 Value between 0 and 1. 0 corresponds", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-44", "text": "lambda_mult (float) \u2013 Value between 0 and 1. 0 corresponds\nto maximum diversity and 1 to minimum.\nDefaults to 0.5.\nexec_option (str) \u2013 Supports 3 ways to perform searching.\n- \u201cpython\u201d - Pure-python implementation running on the client.\nCan be used for data stored anywhere. WARNING: using this\noption with big datasets is discouraged due to potential\nmemory issues.\n\u201dcompute_engine\u201d - Performant C++ implementation of the DeepLake Compute Engine. Runs on the client and can be used for\nany data stored in or connected to Deep Lake. It cannot be\nused with in-memory or local datasets.\n\u201dtensor_db\u201d - Performant, fully-hosted Managed Tensor Database. Responsible for storage and query execution. Only available\nfor data stored in the Deep Lake Managed Database. 
To store\ndatasets in this database, specify\nruntime = {\u201cdb_engine\u201d: True} during dataset creation.\n**kwargs \u2013 Additional keyword arguments\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nRaises\nValueError \u2013 when MMR search is on but embedding function is\n not specified.\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding=None, metadatas=None, ids=None, dataset_path='./deeplake/', **kwargs)[source]\uf0c1\nCreate a Deep Lake dataset from raw documents.\nIf a dataset_path is specified, the dataset will be persisted in that location,\notherwise by default at ./deeplake\nExamples:\n>>> # Search using an embedding\n>>> vector_store = DeepLake.from_texts(\n\u2026 texts = ,\n\u2026 embedding_function = ,\n\u2026 k = ,\n\u2026 exec_option = ,", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-45", "text": "\u2026 exec_option = ,\n\u2026 )\nParameters\ndataset_path (str) \u2013 \nThe full path to the dataset. Can be:\nDeep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets,\nensure that you are logged in to Deep Lake\n(use \u2018activeloop login\u2019 from command line)\nAWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment\nGoogle Cloud Storage path of the form gcs://bucketname/path/to/dataset Credentials are required\nin either the environment\nLocal file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset.\nIn-memory path of the form mem://path/to/dataset which doesn\u2019t save the dataset, but keeps it in memory instead.\nShould be used only for testing as it does not persist.\ntexts (List[Document]) \u2013 List of documents to add.\nembedding (Optional[Embeddings]) \u2013 Embedding function. 
Defaults to None.\nNote, in other places, it is called embedding_function.\nmetadatas (Optional[List[dict]]) \u2013 List of metadatas. Defaults to None.\nids (Optional[List[str]]) \u2013 List of document IDs. Defaults to None.\n**kwargs \u2013 Additional keyword arguments.\nkwargs (Any) \u2013 \nReturns\nDeep Lake dataset.\nReturn type\nDeepLake\nRaises\nValueError \u2013 If \u2018embedding\u2019 is provided in kwargs. This is deprecated,\n please use embedding_function instead.\ndelete(ids=None, filter=None, delete_all=None)[source]\uf0c1\nDelete the entities in the dataset.\nParameters\nids (Optional[List[str]], optional) \u2013 The document_ids to delete.\nDefaults to None.\nfilter (Optional[Dict[str, str]], optional) \u2013 The filter to delete by.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-46", "text": "filter (Optional[Dict[str, str]], optional) \u2013 The filter to delete by.\nDefaults to None.\ndelete_all (Optional[bool], optional) \u2013 Whether to drop the dataset.\nDefaults to None.\nReturns\nWhether the delete operation was successful.\nReturn type\nbool\nclassmethod force_delete_by_path(path)[source]\uf0c1\nForce delete dataset by path.\nParameters\npath (str) \u2013 path of the dataset to delete.\nRaises\nValueError \u2013 if deeplake is not installed.\nReturn type\nNone\ndelete_dataset()[source]\uf0c1\nDelete the collection.\nReturn type\nNone\nclass langchain.vectorstores.DocArrayHnswSearch(doc_index, embedding)[source]\uf0c1\nBases: langchain.vectorstores.docarray.base.DocArrayIndex\nWrapper around HnswLib storage.\nTo use it, you should have the docarray package with version >=0.32.0 installed.\nYou can install it with pip install \u201clangchain[docarray]\u201d.\nParameters\ndoc_index (BaseDocIndex) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nclassmethod from_params(embedding, work_dir, n_dim, dist_metric='cosine', max_elements=1024, index=True, 
ef_construction=200, ef=10, M=16, allow_replace_deleted=True, num_threads=1, **kwargs)[source]\uf0c1\nInitialize DocArrayHnswSearch store.\nParameters\nembedding (Embeddings) \u2013 Embedding function.\nwork_dir (str) \u2013 path to the location where all the data will be stored.\nn_dim (int) \u2013 dimension of an embedding.\ndist_metric (str) \u2013 Distance metric for DocArrayHnswSearch can be one of:\n\u201ccosine\u201d, \u201cip\u201d, and \u201cl2\u201d. Defaults to \u201ccosine\u201d.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-47", "text": "\u201ccosine\u201d, \u201cip\u201d, and \u201cl2\u201d. Defaults to \u201ccosine\u201d.\nmax_elements (int) \u2013 Maximum number of vectors that can be stored.\nDefaults to 1024.\nindex (bool) \u2013 Whether an index should be built for this field.\nDefaults to True.\nef_construction (int) \u2013 defines a construction time/accuracy trade-off.\nDefaults to 200.\nef (int) \u2013 parameter controlling query time/accuracy trade-off.\nDefaults to 10.\nM (int) \u2013 parameter that defines the maximum number of outgoing\nconnections in the graph. Defaults to 16.\nallow_replace_deleted (bool) \u2013 Enables replacing of deleted elements\nwith new added ones. Defaults to True.\nnum_threads (int) \u2013 Sets the number of cpu threads to use. 
Defaults to 1.\n**kwargs \u2013 Other keyword arguments to be passed to the get_doc_cls method.\nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.docarray.hnsw.DocArrayHnswSearch\nclassmethod from_texts(texts, embedding, metadatas=None, work_dir=None, n_dim=None, **kwargs)[source]\uf0c1\nCreate a DocArrayHnswSearch store and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[dict]]) \u2013 Metadata for each text if it exists.\nDefaults to None.\nwork_dir (str) \u2013 path to the location where all the data will be stored.\nn_dim (int) \u2013 dimension of an embedding.\n**kwargs \u2013 Other keyword arguments to be passed to the __init__ method.\nkwargs (Any) \u2013 \nReturns\nDocArrayHnswSearch Vector Store\nReturn type\nlangchain.vectorstores.docarray.hnsw.DocArrayHnswSearch", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-48", "text": "Return type\nlangchain.vectorstores.docarray.hnsw.DocArrayHnswSearch\nclass langchain.vectorstores.DocArrayInMemorySearch(doc_index, embedding)[source]\uf0c1\nBases: langchain.vectorstores.docarray.base.DocArrayIndex\nWrapper around in-memory storage for exact search.\nTo use it, you should have the docarray package with version >=0.32.0 installed.\nYou can install it with pip install \u201clangchain[docarray]\u201d.\nParameters\ndoc_index (BaseDocIndex) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nclassmethod from_params(embedding, metric='cosine_sim', **kwargs)[source]\uf0c1\nInitialize DocArrayInMemorySearch store.\nParameters\nembedding (Embeddings) \u2013 Embedding function.\nmetric (str) \u2013 metric for exact nearest-neighbor search.\nCan be one of: \u201ccosine_sim\u201d, \u201ceuclidean_dist\u201d and \u201csqeuclidean_dist\u201d.\nDefaults to \u201ccosine_sim\u201d.\n**kwargs \u2013 Other keyword arguments to be passed to the get_doc_cls 
method.\nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch\nclassmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]\uf0c1\nCreate a DocArrayInMemorySearch store and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[Dict[Any, Any]]]) \u2013 Metadata for each text\nif it exists. Defaults to None.\nmetric (str) \u2013 metric for exact nearest-neighbor search.\nCan be one of: \u201ccosine_sim\u201d, \u201ceuclidean_dist\u201d and \u201csqeuclidean_dist\u201d.\nDefaults to \u201ccosine_sim\u201d.\nkwargs (Any) \u2013 \nReturns", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-49", "text": "Defaults to \u201ccosine_sim\u201d.\nkwargs (Any) \u2013 \nReturns\nDocArrayInMemorySearch Vector Store\nReturn type\nlangchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch\nclass langchain.vectorstores.ElasticVectorSearch(elasticsearch_url, index_name, embedding, *, ssl_verify=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore, abc.ABC\nWrapper around Elasticsearch as a vector database.\nTo connect to an Elasticsearch instance that does not require\nlogin credentials, pass the Elasticsearch URL and index name along with the\nembedding object to the constructor.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembedding = OpenAIEmbeddings()\nelastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=\"http://localhost:9200\",\n index_name=\"test_index\",\n embedding=embedding\n)\nTo connect to an Elasticsearch instance that requires login credentials,\nincluding Elastic Cloud, use the Elasticsearch URL format\nhttps://username:password@es_host:9243. 
For example, to connect to Elastic\nCloud, create the Elasticsearch URL with the required authentication details and\npass it to the ElasticVectorSearch constructor as the named parameter\nelasticsearch_url.\nYou can obtain your Elastic Cloud URL and login credentials by logging in to the\nElastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\nnavigating to the \u201cDeployments\u201d page.\nTo obtain your Elastic Cloud password for the default \u201celastic\u201d user:\nLog in to the Elastic Cloud console at https://cloud.elastic.co\nGo to \u201cSecurity\u201d > \u201cUsers\u201d\nLocate the \u201celastic\u201d user and click \u201cEdit\u201d\nClick \u201cReset password\u201d\nFollow the prompts to reset the password\nThe format for Elastic Cloud URLs is", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-50", "text": "Follow the prompts to reset the password\nThe format for Elastic Cloud URLs is\nhttps://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembedding = OpenAIEmbeddings()\nelastic_host = \"cluster_id.region_id.gcp.cloud.es.io\"\nelasticsearch_url = f\"https://username:password@{elastic_host}:9243\"\nelastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=elasticsearch_url,\n index_name=\"test_index\",\n embedding=embedding\n)\nParameters\nelasticsearch_url (str) \u2013 The URL for the Elasticsearch instance.\nindex_name (str) \u2013 The name of the Elasticsearch index for the embeddings.\nembedding (Embeddings) \u2013 An object that provides the ability to embed text.\nIt should be an instance of a class that subclasses the Embeddings\nabstract base class, such as OpenAIEmbeddings()\nssl_verify (Optional[Dict[str, Any]]) \u2013 \nRaises\nValueError \u2013 If the elasticsearch python package is not installed.\nadd_texts(texts, metadatas=None, 
refresh_indices=True, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nrefresh_indices (bool) \u2013 bool to refresh ElasticSearch indices\nids (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search(query, k=4, filter=None, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-51", "text": "Return docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_score(query, k=4, filter=None, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return. 
Defaults to 4.\nReturns\nList of Documents most similar to the query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nfilter (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nclassmethod from_texts(texts, embedding, metadatas=None, elasticsearch_url=None, index_name=None, refresh_indices=True, **kwargs)[source]\uf0c1\nConstruct ElasticVectorSearch wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in the Elasticsearch instance.\nAdds the documents to the newly created Elasticsearch index.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import ElasticVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nelastic_vector_search = ElasticVectorSearch.from_texts(\n texts,\n embeddings,\n elasticsearch_url=\"http://localhost:9200\"\n)\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nelasticsearch_url (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-52", "text": "elasticsearch_url (Optional[str]) \u2013 \nindex_name (Optional[str]) \u2013 \nrefresh_indices (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.elastic_vector_search.ElasticVectorSearch\ncreate_index(client, index_name, mapping)[source]\uf0c1\nParameters\nclient (Any) \u2013 \nindex_name (str) \u2013 \nmapping (Dict) \u2013 \nReturn type\nNone\nclient_search(client, index_name, script_query, size)[source]\uf0c1\nParameters\nclient (Any) \u2013 \nindex_name (str) \u2013 \nscript_query (Dict) \u2013 \nsize (int) \u2013 \nReturn type\nAny\ndelete(ids)[source]\uf0c1\nDelete by vector IDs.\nParameters\nids (List[str]) \u2013 List of ids to delete.\nReturn type\nNone\nclass 
langchain.vectorstores.FAISS(embedding_function, index, docstore, index_to_docstore_id, relevance_score_fn=, normalize_L2=False)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around FAISS vector database.\nTo use, you should have the faiss python package installed.\nExample\nfrom langchain import FAISS\nfaiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)\nParameters\nembedding_function (Callable) \u2013 \nindex (Any) \u2013 \ndocstore (Docstore) \u2013 \nindex_to_docstore_id (Dict[int, str]) \u2013 \nrelevance_score_fn (Optional[Callable[[float], float]]) \u2013 \nnormalize_L2 (bool) \u2013 \nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-53", "text": "Run more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nids (Optional[List[str]]) \u2013 Optional list of unique IDs.\nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nadd_embeddings(text_embeddings, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntext_embeddings (Iterable[Tuple[str, List[float]]]) \u2013 Iterable pairs of string and embedding to\nadd to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nids (Optional[List[str]]) \u2013 Optional list of unique IDs.\nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search_with_score_by_vector(embedding, k=4, filter=None, fetch_k=20, 
**kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nembedding (List[float]) \u2013 Embedding vector to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, Any]]) \u2013 Filter by metadata. Defaults to None.\nfetch_k (int) \u2013 (Optional[int]) Number of Documents to fetch before filtering.\nDefaults to 20.\n**kwargs \u2013 kwargs to be passed to similarity search. Can include:\nscore_threshold: Optional, a floating point value between 0 and 1 to\nfilter the resulting set of retrieved docs\nkwargs (Any) \u2013 \nReturns
Defaults to None.\nfetch_k (int) \u2013 (Optional[int]) Number of Documents to fetch before filtering.\nDefaults to 20.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the embedding.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search(query, k=4, filter=None, fetch_k=20, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-55", "text": "Return docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, Any]]) \u2013 (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\nfetch_k (int) \u2013 (Optional[int]) Number of Documents to fetch before filtering.\nDefaults to 20.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch before filtering to\npass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[Dict[str, Any]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-56", "text": "Maximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch before filtering (if needed) to\npass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[Dict[str, Any]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nmerge_from(target)[source]\uf0c1\nMerge another FAISS object with the current one.\nAdd the target FAISS to the current one.\nParameters\ntarget (langchain.vectorstores.faiss.FAISS) \u2013 FAISS object you wish to merge into the current one\nReturns\nNone.\nReturn type\nNone\nclassmethod from_texts(texts, embedding, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nConstruct FAISS wrapper from raw documents.\nThis is a user friendly interface that:\nEmbeds documents.\nCreates an in memory docstore\nInitializes the FAISS database\nThis is intended to be a quick way to get started.\nExample\nfrom langchain import FAISS\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nfaiss = FAISS.from_texts(texts, embeddings)\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nids (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-57", "text": "ids (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.faiss.FAISS\nclassmethod from_embeddings(text_embeddings, embedding, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nConstruct FAISS wrapper from raw documents.\nThis is a user friendly interface that:\nEmbeds documents.\nCreates an in memory docstore\nInitializes the FAISS database\nThis is intended to be a quick way to get started.\nExample\nfrom 
langchain import FAISS\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)\ntext_embedding_pairs = list(zip(texts, text_embeddings))\nfaiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)\nParameters\ntext_embeddings (List[Tuple[str, List[float]]]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nids (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.faiss.FAISS\nsave_local(folder_path, index_name='index')[source]\uf0c1\nSave FAISS index, docstore, and index_to_docstore_id to disk.\nParameters\nfolder_path (str) \u2013 folder path to save index, docstore,\nand index_to_docstore_id to.\nindex_name (str) \u2013 for saving with a specific index file name\nReturn type\nNone\nclassmethod load_local(folder_path, embeddings, index_name='index')[source]\uf0c1\nLoad FAISS index, docstore, and index_to_docstore_id from disk.\nParameters\nfolder_path (str) \u2013 folder path to load index, docstore,\nand index_to_docstore_id from.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-58", "text": "and index_to_docstore_id from.\nembeddings (langchain.embeddings.base.Embeddings) \u2013 Embeddings to use when generating queries\nindex_name (str) \u2013 for saving with a specific index file name\nReturn type\nlangchain.vectorstores.faiss.FAISS\nclass langchain.vectorstores.Hologres(connection_string, embedding_function, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, logger=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nVectorStore implementation using Hologres.\nconnection_string is a hologres connection string.\nembedding_function any embedding function implementinglangchain.embeddings.base.Embeddings interface.\nndims is the number of dimensions of the embedding output.\ntable_name is the 
name of the table to store embeddings and data. (default: langchain_pg_embedding)\n- NOTE: The table will be created when initializing the store (if it does not exist),\nso make sure the user has the right permissions to create tables.\npre_delete_table if True, will delete the table if it exists. (default: False)\n- Useful for testing.\nParameters\nconnection_string (str) \u2013 \nembedding_function (Embeddings) \u2013 \nndims (int) \u2013 \ntable_name (str) \u2013 \npre_delete_table (bool) \u2013 \nlogger (Optional[logging.Logger]) \u2013 \nReturn type\nNone\ncreate_vector_extension()[source]\uf0c1\nReturn type\nNone\ncreate_table()[source]\uf0c1\nReturn type\nNone\nadd_embeddings(texts, embeddings, metadatas, ids, **kwargs)[source]\uf0c1\nAdd embeddings to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nembeddings (List[List[float]]) \u2013 List of list of embedding vectors.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-59", "text": "embeddings (List[List[float]]) \u2013 List of list of embedding vectors.\nmetadatas (List[dict]) \u2013 List of metadatas associated with the texts.\nkwargs (Any) \u2013 vectorstore specific parameters\nids (List[str]) \u2013 \nReturn type\nNone\nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nkwargs (Any) \u2013 vectorstore specific parameters\nids (Optional[List[str]]) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search(query, k=4, filter=None, **kwargs)[source]\uf0c1\nRun similarity search with Hologres with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 
Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. Defaults to None.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-60", "text": "Returns\nList of Documents most similar to the query vector.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_score(query, k=4, filter=None)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata. 
Defaults to None.\nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_with_score_by_vector(embedding, k=4, filter=None)[source]\uf0c1\nParameters\nembedding (List[float]) \u2013 \nk (int) \u2013 \nfilter (Optional[dict]) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nclassmethod from_texts(texts, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nPostgres connection string is required\n\u201cEither pass it as a parameter\nor set the HOLOGRES_CONNECTION_STRING environment variable.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nndims (int) \u2013 \ntable_name (str) \u2013 \nids (Optional[List[str]]) \u2013 \npre_delete_table (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.hologres.Hologres", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-61", "text": "Return type\nlangchain.vectorstores.hologres.Hologres\nclassmethod from_embeddings(text_embeddings, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source]\uf0c1\nConstruct Hologres wrapper from raw documents and pre-generated embeddings.\nReturn VectorStore initialized from documents and embeddings.\nPostgres connection string is required\n\u201cEither pass it as a parameter\nor set the HOLOGRES_CONNECTION_STRING environment variable.\nExample\nfrom langchain import Hologres\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\ntext_embeddings = embeddings.embed_documents(texts)\ntext_embedding_pairs = list(zip(texts, text_embeddings))\nfaiss = 
Hologres.from_embeddings(text_embedding_pairs, embeddings)\nParameters\ntext_embeddings (List[Tuple[str, List[float]]]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nndims (int) \u2013 \ntable_name (str) \u2013 \nids (Optional[List[str]]) \u2013 \npre_delete_table (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.hologres.Hologres\nclassmethod from_existing_index(embedding, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, **kwargs)[source]\uf0c1\nGet instance of an existing Hologres store. This method will\nreturn the instance of the store without inserting any new\nembeddings\nParameters\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nndims (int) \u2013 \ntable_name (str) \u2013 \npre_delete_table (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.hologres.Hologres", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-62", "text": "Return type\nlangchain.vectorstores.hologres.Hologres\nclassmethod get_connection_string(kwargs)[source]\uf0c1\nParameters\nkwargs (Dict[str, Any]) \u2013 \nReturn type\nstr\nclassmethod from_documents(documents, embedding, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_collection=False, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from documents and embeddings.\nPostgres connection string is required\n\u201cEither pass it as a parameter\nor set the HOLOGRES_CONNECTION_STRING environment variable.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nndims (int) \u2013 \ntable_name (str) \u2013 \nids (Optional[List[str]]) \u2013 \npre_delete_collection (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.hologres.Hologres\nclassmethod connection_string_from_db_params(host, port, database, user, 
password)[source]\uf0c1\nReturn connection string from database parameters.\nParameters\nhost (str) \u2013 \nport (int) \u2013 \ndatabase (str) \u2013 \nuser (str) \u2013 \npassword (str) \u2013 \nReturn type\nstr\nclass langchain.vectorstores.LanceDB(connection, embedding, vector_key='vector', id_key='id', text_key='text')[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around LanceDB vector database.\nTo use, you should have the lancedb python package installed.\nExample\ndb = lancedb.connect('./lancedb')\ntable = db.open_table('my_table')\nvectorstore = LanceDB(table, embedding_function)\nvectorstore.add_texts(['text1', 'text2'])\nresult = vectorstore.similarity_search('text1')\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-63", "text": "result = vectorstore.similarity_search('text1')\nParameters\nconnection (Any) \u2013 \nembedding (Embeddings) \u2013 \nvector_key (Optional[str]) \u2013 \nid_key (Optional[str]) \u2013 \ntext_key (Optional[str]) \u2013 \nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nTurn texts into embeddings and add them to the database.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nids (Optional[List[str]]) \u2013 Optional list of ids to associate with the texts.\nkwargs (Any) \u2013 \nReturns\nList of ids of the added texts.\nReturn type\nList[str]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn documents most similar to the query.\nParameters\nquery (str) \u2013 String to query the vectorstore with.\nk (int) \u2013 Number of documents to return.\nkwargs (Any) \u2013 \nReturns\nList of documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, connection=None, vector_key='vector', id_key='id', 
text_key='text', **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nconnection (Any) \u2013 \nvector_key (Optional[str]) \u2013 \nid_key (Optional[str]) \u2013 \ntext_key (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.lancedb.LanceDB", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-64", "text": "Return type\nlangchain.vectorstores.lancedb.LanceDB\nclass langchain.vectorstores.MatchingEngine(project_id, index, endpoint, embedding, gcs_client, gcs_bucket_name, credentials=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nVertex Matching Engine implementation of the vector store.\nWhile the embeddings are stored in the Matching Engine, the embedded\ndocuments will be stored in GCS.\nAn existing Index and corresponding Endpoint are preconditions for\nusing this module.\nSee usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb\nNote that this implementation is mostly meant for reading if you are\nplanning to do a real time implementation. 
While reading is a real time\noperation, updating the index takes close to one hour.\nParameters\nproject_id (str) \u2013 \nindex (MatchingEngineIndex) \u2013 \nendpoint (MatchingEngineIndexEndpoint) \u2013 \nembedding (Embeddings) \u2013 \ngcs_client (storage.Client) \u2013 \ngcs_bucket_name (str) \u2013 \ncredentials (Optional[Credentials]) \u2013 \nadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nkwargs (Any) \u2013 vectorstore specific parameters.\nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 The string that will be used to search for similar documents.\nk (int) \u2013 The amount of neighbors that will be retrieved.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-65", "text": "k (int) \u2013 The amount of neighbors that will be retrieved.\nkwargs (Any) \u2013 \nReturns\nA list of k matching documents.\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]\uf0c1\nUse from components instead.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.matching_engine.MatchingEngine\nclassmethod from_components(project_id, region, gcs_bucket_name, index_id, endpoint_id, credentials_path=None, embedding=None)[source]\uf0c1\nTakes the object creation out of the constructor.\nParameters\nproject_id (str) \u2013 The GCP project id.\nregion (str) \u2013 The default location making the API 
calls. It must have\nregional. (the same location as the GCS bucket and must be) \u2013 \ngcs_bucket_name (str) \u2013 The location where the vectors will be stored in\ncreated. (order for the index to be) \u2013 \nindex_id (str) \u2013 The id of the created index.\nendpoint_id (str) \u2013 The id of the created endpoint.\ncredentials_path (Optional[str]) \u2013 (Optional) The path of the Google credentials on\nsystem. (the local file) \u2013 \nembedding (Optional[langchain.embeddings.base.Embeddings]) \u2013 The Embeddings that will be used for\ntexts. (embedding the) \u2013 \nReturns\nA configured MatchingEngine with the texts added to the index.\nReturn type\nlangchain.vectorstores.matching_engine.MatchingEngine", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-66", "text": "Return type\nlangchain.vectorstores.matching_engine.MatchingEngine\nclass langchain.vectorstores.Milvus(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around the Milvus vector database.\nParameters\nembedding_function (Embeddings) \u2013 \ncollection_name (str) \u2013 \nconnection_args (Optional[dict[str, Any]]) \u2013 \nconsistency_level (str) \u2013 \nindex_params (Optional[dict]) \u2013 \nsearch_params (Optional[dict]) \u2013 \ndrop_old (Optional[bool]) \u2013 \nadd_texts(texts, metadatas=None, timeout=None, batch_size=1000, **kwargs)[source]\uf0c1\nInsert text data into Milvus.\nInserting data when the collection has not been made yet will result\nin creating a new Collection. The data of the first entity decides\nthe schema of the new collection, the dim is extracted from the first\nembedding and the columns are decided by the first metadata dict.\nMetadata keys will need to be present for all inserted values. 
At\nthe moment there is no None equivalent in Milvus.\nParameters\ntexts (Iterable[str]) \u2013 The texts to embed, it is assumed\nthat they all fit in memory.\nmetadatas (Optional[List[dict]]) \u2013 Metadata dicts attached to each of\nthe texts. Defaults to None.\ntimeout (Optional[int]) \u2013 Timeout for each batch insert. Defaults\nto None.\nbatch_size (int, optional) \u2013 Batch size to use for insertion.\nDefaults to 1000.\nkwargs (Any) \u2013 \nRaises\nMilvusException \u2013 Failure to add texts\nReturns\nThe resulting keys for each inserted element.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-67", "text": "Returns\nThe resulting keys for each inserted element.\nReturn type\nList[str]\nsimilarity_search(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source]\uf0c1\nPerform a similarity search against the query string.\nParameters\nquery (str) \u2013 The text to search.\nk (int, optional) \u2013 How many results to return. Defaults to 4.\nparam (dict, optional) \u2013 The search params for the index type.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs (Any) \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source]\uf0c1\nPerform a similarity search against the query string.\nParameters\nembedding (List[float]) \u2013 The embedding vector to search.\nk (int, optional) \u2013 How many results to return. Defaults to 4.\nparam (dict, optional) \u2013 The search params for the index type.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. 
Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs (Any) \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nsimilarity_search_with_score(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source]\uf0c1\nPerform a search on a query string and return results with score.\nFor more information about the search parameters, take a look at the pymilvus\ndocumentation found here:", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-68", "text": "documentation found here:\nhttps://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\nParameters\nquery (str) \u2013 The text being searched.\nk (int, optional) \u2013 The amount of results to return. Defaults to 4.\nparam (dict) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs (Any) \u2013 Collection.search() keyword arguments.\nReturn type\nList[float], List[Tuple[Document, any, any]]\nsimilarity_search_with_score_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source]\uf0c1\nPerform a search on a query string and return results with score.\nFor more information about the search parameters, take a look at the pymilvus\ndocumentation found here:\nhttps://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\nParameters\nembedding (List[float]) \u2013 The embedding vector being searched.\nk (int, optional) \u2013 The amount of results to return. Defaults to 4.\nparam (dict) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. 
Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs (Any) \u2013 Collection.search() keyword arguments.\nReturns\nResult doc and score.\nReturn type\nList[Tuple[Document, float]]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-69", "text": "Returns\nResult doc and score.\nReturn type\nList[Tuple[Document, float]]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source]\uf0c1\nPerform a search and return results that are reordered by MMR.\nParameters\nquery (str) \u2013 The text being searched.\nk (int, optional) \u2013 How many results to give. Defaults to 4.\nfetch_k (int, optional) \u2013 Total results to select k from.\nDefaults to 20.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5\nparam (dict, optional) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs (Any) \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source]\uf0c1\nPerform a search and return results that are reordered by MMR.\nParameters\nembedding (str) \u2013 The embedding vector being searched.\nk (int, optional) \u2013 How many results to give. 
Defaults to 4.\nfetch_k (int, optional) \u2013 Total results to select k from.\nDefaults to 20.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-70", "text": "lambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5\nparam (dict, optional) \u2013 The search params for the specified index.\nDefaults to None.\nexpr (str, optional) \u2013 Filtering expression. Defaults to None.\ntimeout (int, optional) \u2013 How long to wait before timeout error.\nDefaults to None.\nkwargs (Any) \u2013 Collection.search() keyword arguments.\nReturns\nDocument results for search.\nReturn type\nList[Document]\nclassmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source]\uf0c1\nCreate a Milvus collection, indexes it with HNSW, and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[dict]]) \u2013 Metadata for each text if it exists.\nDefaults to None.\ncollection_name (str, optional) \u2013 Collection name to use. Defaults to\n\u201cLangChainCollection\u201d.\nconnection_args (dict[str, Any], optional) \u2013 Connection args to use. Defaults\nto DEFAULT_MILVUS_CONNECTION.\nconsistency_level (str, optional) \u2013 Which consistency level to use. Defaults\nto \u201cSession\u201d.\nindex_params (Optional[dict], optional) \u2013 Which index_params to use. 
Defaults\nto None.\nsearch_params (Optional[dict], optional) \u2013 Which search params to use.\nDefaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-71", "text": "Defaults to None.\ndrop_old (Optional[bool], optional) \u2013 Whether to drop the collection with\nthat name if it exists. Defaults to False.\nkwargs (Any) \u2013 \nReturns\nMilvus Vector Store\nReturn type\nMilvus\nclass langchain.vectorstores.Zilliz(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source]\uf0c1\nBases: langchain.vectorstores.milvus.Milvus\nParameters\nembedding_function (Embeddings) \u2013 \ncollection_name (str) \u2013 \nconnection_args (Optional[dict[str, Any]]) \u2013 \nconsistency_level (str) \u2013 \nindex_params (Optional[dict]) \u2013 \nsearch_params (Optional[dict]) \u2013 \ndrop_old (Optional[bool]) \u2013 \nclassmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source]\uf0c1\nCreate a Zilliz collection, indexes it with HNSW, and insert data.\nParameters\ntexts (List[str]) \u2013 Text data.\nembedding (Embeddings) \u2013 Embedding function.\nmetadatas (Optional[List[dict]]) \u2013 Metadata for each text if it exists.\nDefaults to None.\ncollection_name (str, optional) \u2013 Collection name to use. Defaults to\n\u201cLangChainCollection\u201d.\nconnection_args (dict[str, Any], optional) \u2013 Connection args to use. Defaults\nto DEFAULT_MILVUS_CONNECTION.\nconsistency_level (str, optional) \u2013 Which consistency level to use. 
Defaults\nto \u201cSession\u201d.\nindex_params (Optional[dict], optional) \u2013 Which index_params to use.\nDefaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-72", "text": "Defaults to None.\nsearch_params (Optional[dict], optional) \u2013 Which search params to use.\nDefaults to None.\ndrop_old (Optional[bool], optional) \u2013 Whether to drop the collection with\nthat name if it exists. Defaults to False.\nkwargs (Any) \u2013 \nReturns\nZilliz Vector Store\nReturn type\nZilliz\nclass langchain.vectorstores.SingleStoreDB(embedding, *, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nThis class serves as a Pythonic interface to the SingleStore DB database.\nThe prerequisite for using this class is the installation of the singlestoredb\nPython package.\nThe SingleStoreDB vectorstore can be created by providing an embedding function and\nthe relevant parameters for the database connection, connection pool, and\noptionally, the names of the table and the fields to use.\nParameters\nembedding (Embeddings) \u2013 \ndistance_strategy (DistanceStrategy) \u2013 \ntable_name (str) \u2013 \ncontent_field (str) \u2013 \nmetadata_field (str) \u2013 \nvector_field (str) \u2013 \npool_size (int) \u2013 \nmax_overflow (int) \u2013 \ntimeout (float) \u2013 \nkwargs (Any) \u2013 \nvector_field\uf0c1\nPass the rest of the kwargs to the connection.\nconnection_kwargs\uf0c1\nAdd program name and version to connection attributes.\nadd_texts(texts, metadatas=None, embeddings=None, **kwargs)[source]\uf0c1\nAdd more texts to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings/text to add to the vectorstore.", "source": 
"https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-73", "text": "Parameters\ntexts (Iterable[str]) \u2013 Iterable of strings/text to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nDefaults to None.\nembeddings (Optional[List[List[float]]], optional) \u2013 Optional pre-generated\nembeddings. Defaults to None.\nkwargs (Any) \u2013 \nReturns\nempty list\nReturn type\nList[str]\nsimilarity_search(query, k=4, filter=None, **kwargs)[source]\uf0c1\nReturns the most similar indexed documents to the query text.\nUses cosine similarity.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nfilter (dict) \u2013 A dictionary of metadata fields and values to filter by.\nkwargs (Any) \u2013 \nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nExamples\nsimilarity_search_with_score(query, k=4, filter=None)[source]\uf0c1\nReturn docs most similar to query. Uses cosine similarity.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfilter (Optional[dict]) \u2013 A dictionary of metadata fields and values to filter by.\nDefaults to None.\nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nclassmethod from_texts(texts, embedding, metadatas=None, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-74", "text": "Create a SingleStoreDB vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new table for the embeddings in SingleStoreDB.\nAdds the documents to the newly created table.\nThis is intended to be a quick way to get started.\n.. rubric:: Example\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \ndistance_strategy (langchain.vectorstores.singlestoredb.DistanceStrategy) \u2013 \ntable_name (str) \u2013 \ncontent_field (str) \u2013 \nmetadata_field (str) \u2013 \nvector_field (str) \u2013 \npool_size (int) \u2013 \nmax_overflow (int) \u2013 \ntimeout (float) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.singlestoredb.SingleStoreDB\nas_retriever(**kwargs)[source]\uf0c1\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.singlestoredb.SingleStoreDBRetriever\nclass langchain.vectorstores.Clarifai(user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Clarifai AI platform\u2019s vector store.\nTo use, you should have the clarifai python package installed.\nExample\nfrom langchain.vectorstores import Clarifai\nfrom langchain.embeddings.openai import 
OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = Clarifai(\"langchain_store\", embeddings.embed_query)\nParameters\nuser_id (Optional[str]) \u2013 \napp_id (Optional[str]) \u2013 \npat (Optional[str]) \u2013 \nnumber_of_docs (Optional[int]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-75", "text": "pat (Optional[str]) \u2013 \nnumber_of_docs (Optional[int]) \u2013 \napi_base (Optional[str]) \u2013 \nReturn type\nNone\nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nAdd texts to the Clarifai vectorstore. This will push the text\nto a Clarifai application.\nThe application uses a base workflow that creates and stores an embedding for each text.\nMake sure you are using a base workflow that is compatible with text\n(such as Language Understanding).\nParameters\ntexts (Iterable[str]) \u2013 Texts to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nids (Optional[List[str]], optional) \u2013 Optional list of IDs.\nkwargs (Any) \u2013 \nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nsimilarity_search_with_score(query, k=4, filter=None, namespace=None, **kwargs)[source]\uf0c1\nRun similarity search with score using Clarifai.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[Dict[str, str]]) \u2013 Filter by metadata.\nNone. (Defaults to) \u2013 \nnamespace (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of documents most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nRun similarity search using Clarifai.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query and score for each", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-76", "text": "Returns\nList of Documents most similar to the query and score for each\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding=None, metadatas=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source]\uf0c1\nCreate a Clarifai vectorstore from a list of texts.\nParameters\nuser_id (str) \u2013 User ID.\napp_id (str) \u2013 App ID.\ntexts (List[str]) \u2013 List of texts to add.\npat (Optional[str]) \u2013 Personal access token. Defaults to None.\nnumber_of_docs (Optional[int]) \u2013 Number of documents to return\nNone. (Defaults to) \u2013 \napi_base (Optional[str]) \u2013 API base. Defaults to None.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas.\nNone. \u2013 \nembedding (Optional[langchain.embeddings.base.Embeddings]) \u2013 \nkwargs (Any) \u2013 \nReturns\nClarifai vectorstore.\nReturn type\nClarifai\nclassmethod from_documents(documents, embedding=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source]\uf0c1\nCreate a Clarifai vectorstore from a list of documents.\nParameters\nuser_id (str) \u2013 User ID.\napp_id (str) \u2013 App ID.\ndocuments (List[Document]) \u2013 List of documents to add.\npat (Optional[str]) \u2013 Personal access token. Defaults to None.\nnumber_of_docs (Optional[int]) \u2013 Number of documents to return\nNone. (during vector search. Defaults to) \u2013 \napi_base (Optional[str]) \u2013 API base. 
Defaults to None.\nembedding (Optional[langchain.embeddings.base.Embeddings]) \u2013 \nkwargs (Any) \u2013 \nReturns", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-77", "text": "kwargs (Any) \u2013 \nReturns\nClarifai vectorstore.\nReturn type\nClarifai\nclass langchain.vectorstores.OpenSearchVectorSearch(opensearch_url, index_name, embedding_function, **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around OpenSearch as a vector database.\nExample\nfrom langchain import OpenSearchVectorSearch\nopensearch_vector_search = OpenSearchVectorSearch(\n \"http://localhost:9200\",\n \"embeddings\",\n embedding_function\n)\nParameters\nopensearch_url (str) \u2013 \nindex_name (str) \u2013 \nembedding_function (Embeddings) \u2013 \nkwargs (Any) \u2013 \nadd_texts(texts, metadatas=None, ids=None, bulk_size=500, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nids (Optional[List[str]]) \u2013 Optional list of ids to associate with the texts.\nbulk_size (int) \u2013 Bulk API request count; Default: 500\nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nOptional Args:vector_field: Document field embeddings are stored in. Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. 
Defaults\nto \u201ctext\u201d.\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nBy default, supports Approximate Search.\nAlso supports Script Scoring and Painless Scripting.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-78", "text": "Also supports Script Scoring and Painless Scripting.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nOptional Args:vector_field: Document field embeddings are stored in. Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. Defaults\nto \u201ctext\u201d.\nmetadata_field: Document field that metadata is stored in. Defaults to\n\u201cmetadata\u201d.\nCan be set to a special value \u201c*\u201d to include the entire document.\nOptional Args for Approximate Search:search_type: \u201capproximate_search\u201d; default: \u201capproximate_search\u201d\nboolean_filter: A Boolean filter consists of a Boolean query that\ncontains a k-NN query and a filter.\nsubquery_clause: Query clause on the knn vector field; default: \u201cmust\u201d\nlucene_filter: the Lucene algorithm decides whether to perform an exact\nk-NN search with pre-filtering or an approximate search with modified\npost-filtering.\nOptional Args for Script Scoring Search:search_type: \u201cscript_scoring\u201d; default: \u201capproximate_search\u201d\nspace_type: \u201cl2\u201d, \u201cl1\u201d, \u201clinf\u201d, \u201ccosinesimil\u201d, \u201cinnerproduct\u201d,\n\u201chammingbit\u201d; default: \u201cl2\u201d\npre_filter: script_score query to pre-filter documents before identifying\nnearest neighbors; default: {\u201cmatch_all\u201d: {}}\nOptional Args for Painless Scripting Search:search_type: 
\u201cpainless_scripting\u201d; default: \u201capproximate_search\u201d\nspace_type: \u201cl2Squared\u201d, \u201cl1Norm\u201d, \u201ccosineSimilarity\u201d; default: \u201cl2Squared\u201d", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-79", "text": "pre_filter: script_score query to pre-filter documents before identifying\nnearest neighbors; default: {\u201cmatch_all\u201d: {}}\nsimilarity_search_with_score(query, k=4, **kwargs)[source]\uf0c1\nReturn docs and their scores most similar to query.\nBy default, supports Approximate Search.\nAlso supports Script Scoring and Painless Scripting.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of Documents along with their scores most similar to the query.\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nOptional Args:same as similarity_search\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nDefaults to 20.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nlist[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, bulk_size=500, **kwargs)[source]\uf0c1\nConstruct OpenSearchVectorSearch wrapper from raw documents.\nExample", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-80", "text": "Construct OpenSearchVectorSearch wrapper from raw documents.\nExample\nfrom langchain import OpenSearchVectorSearch\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nopensearch_vector_search = OpenSearchVectorSearch.from_texts(\n texts,\n embeddings,\n opensearch_url=\"http://localhost:9200\"\n)\nOpenSearch by default supports Approximate Search powered by nmslib, faiss\nand lucene engines recommended for large datasets. Also supports brute force\nsearch through Script Scoring and Painless Scripting.\nOptional Args:vector_field: Document field embeddings are stored in. Defaults to\n\u201cvector_field\u201d.\ntext_field: Document field the text of the document is stored in. Defaults\nto \u201ctext\u201d.\nOptional Keyword Args for Approximate Search:engine: \u201cnmslib\u201d, \u201cfaiss\u201d, \u201clucene\u201d; default: \u201cnmslib\u201d\nspace_type: \u201cl2\u201d, \u201cl1\u201d, \u201ccosinesimil\u201d, \u201clinf\u201d, \u201cinnerproduct\u201d; default: \u201cl2\u201d\nef_search: Size of the dynamic list used during k-NN searches. 
Higher values\nlead to more accurate but slower searches; default: 512\nef_construction: Size of the dynamic list used during k-NN graph creation.\nHigher values lead to more accurate graph but slower indexing speed;\ndefault: 512\nm: Number of bidirectional links created for each new element. Large impact\non memory consumption. Between 2 and 100; default: 16\nKeyword Args for Script Scoring or Painless Scripting:is_appx_search: False\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nbulk_size (int) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-81", "text": "bulk_size (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch\nclass langchain.vectorstores.MongoDBAtlasVectorSearch(collection, embedding, *, index_name='default', text_key='text', embedding_key='embedding')[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around MongoDB Atlas Vector Search.\nTo use, you should have both:\n- the pymongo python package installed\n- a connection string associated with a MongoDB Atlas Cluster having deployed an\nAtlas Search index\nExample\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom pymongo import MongoClient\nmongo_client = MongoClient(\"\")\ncollection = mongo_client[\"\"][\"\"]\nembeddings = OpenAIEmbeddings()\nvectorstore = MongoDBAtlasVectorSearch(collection, embeddings)\nParameters\ncollection (Collection[MongoDBDocumentType]) \u2013 \nembedding (Embeddings) \u2013 \nindex_name (str) \u2013 \ntext_key (str) \u2013 \nembedding_key (str) \u2013 \nclassmethod from_connection_string(connection_string, namespace, embedding, **kwargs)[source]\uf0c1\nParameters\nconnection_string (str) \u2013 \nnamespace (str) \u2013 
\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch\nadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[Dict[str, Any]]]) \u2013 Optional list of metadatas associated with the texts.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-82", "text": "kwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList\nsimilarity_search_with_score(query, *, k=4, pre_filter=None, post_filter_pipeline=None)[source]\uf0c1\nReturn MongoDB documents most similar to query, along with scores.\nUse the knnBeta Operator available in MongoDB Atlas Search\nThis feature is in early access and available only for evaluation purposes, to\nvalidate functionality, and to gather feedback from a small closed group of\nearly access users. It is not recommended for production deployments as we\nmay introduce breaking changes.\nFor more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Optional Number of Documents to return. 
Defaults to 4.\npre_filter (Optional[dict]) \u2013 Optional Dictionary of argument(s) to prefilter on document\nfields.\npost_filter_pipeline (Optional[List[Dict]]) \u2013 Optional Pipeline of MongoDB aggregation stages\nfollowing the knnBeta search.\nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search(query, k=4, pre_filter=None, post_filter_pipeline=None, **kwargs)[source]\uf0c1\nReturn MongoDB documents most similar to query.\nUse the knnBeta Operator available in MongoDB Atlas Search\nThis feature is in early access and available only for evaluation purposes, to\nvalidate functionality, and to gather feedback from a small closed group of\nearly access users. It is not recommended for production deployments as we may\nintroduce breaking changes.\nFor more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\nParameters\nquery (str) \u2013 Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-83", "text": "Parameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Optional Number of Documents to return. 
Defaults to 4.\npre_filter (Optional[dict]) \u2013 Optional Dictionary of argument(s) to prefilter on document\nfields.\npost_filter_pipeline (Optional[List[Dict]]) \u2013 Optional Pipeline of MongoDB aggregation stages\nfollowing the knnBeta search.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, collection=None, **kwargs)[source]\uf0c1\nConstruct MongoDBAtlasVectorSearch wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nAdds the documents to a provided MongoDB Atlas Vector Search index (Lucene)\nThis is intended to be a quick way to get started.\nExample\nParameters\ntexts (List[str]) \u2013 \nembedding (Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \ncollection (Optional[Collection[MongoDBDocumentType]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nMongoDBAtlasVectorSearch\nclass langchain.vectorstores.MyScale(embedding, config=None, **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around MyScale vector database\nYou need the clickhouse-connect python package, and a valid account\nto connect to MyScale.\nMyScale can not only search with simple vector indexes,\nit also supports complex queries with multiple conditions,\nconstraints and even sub-queries.\nFor more information, please visit the [myscale official site](https://docs.myscale.com/en/overview/)\nParameters\nembedding (Embeddings) \u2013 \nconfig (Optional[MyScaleSettings]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-84", "text": "kwargs (Any) \u2013 \nReturn type\nNone\nescape_str(value)[source]\uf0c1\nParameters\nvalue (str) \u2013 \nReturn type\nstr\nadd_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings 
and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nids (Optional[Iterable[str]]) \u2013 Optional list of ids to associate with the texts.\nbatch_size (int) \u2013 Batch size of insertion\nmetadata \u2013 Optional column data to be inserted\nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nclassmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]\uf0c1\nCreate Myscale wrapper with existing texts\nParameters\nembedding_function (Embeddings) \u2013 Function to extract text embedding\ntexts (Iterable[str]) \u2013 List or tuple of strings to be added\nconfig (MyScaleSettings, Optional) \u2013 Myscale configuration\ntext_ids (Optional[Iterable], optional) \u2013 IDs for the texts.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batchsize when transmitting data to MyScale.\nDefaults to 32.\nmetadata (List[dict], optional) \u2013 metadata to texts. Defaults to None.\ninto (Other keyword arguments will pass) \u2013 [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[Dict[Any, Any]]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nMyScale Index\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-85", "text": "kwargs (Any) \u2013 \nReturns\nMyScale Index\nReturn type\nlangchain.vectorstores.myscale.MyScale\nsimilarity_search(query, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with MyScale\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. 
Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end users fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nkwargs (Any) \u2013 \nReturns\nList of Documents\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with MyScale by vector\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end users fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nembedding (List[float]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of (Document, similarity)\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with MyScale\nParameters\nquery (str) \u2013 query string", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-86", "text": "Perform a similarity search with MyScale\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let end users fill this and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. 
The default name for it is metadata.\nkwargs (Any) \u2013 \nReturns\nList of documents most similar to the query text\nand cosine distance in float for each.\nLower score represents more similarity.\nReturn type\nList[Document]\ndrop()[source]\uf0c1\nHelper function: Drop data\nReturn type\nNone\nproperty metadata_column: str\uf0c1\npydantic settings langchain.vectorstores.MyScaleSettings[source]\uf0c1\nBases: pydantic.env_settings.BaseSettings\nMyScale Client Configuration\nAttribute:\nmyscale_host (str) : An URL to connect to MyScale backend. Defaults to \u2018localhost\u2019.\nmyscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\nusername (str) : Username to login. Defaults to None.\npassword (str) : Password to login. Defaults to None.\nindex_type (str): index type string.\nindex_param (dict): index build parameter.\ndatabase (str) : Database name to find the table. Defaults to \u2018default\u2019.\ntable (str) : Table name to operate on.\nDefaults to \u2018vector_table\u2019.\nmetric (str) : Metric to compute distance, supported are (\u2018l2\u2019, \u2018cosine\u2019, \u2018ip\u2019). Defaults to \u2018cosine\u2019.\ncolumn_map (Dict) : Column type map to project column name onto langchain semantics. Must have keys: text, id, vector,", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-87", "text": "must be same size to number of columns. For example:\n.. code-block:: python\n{\u2018id\u2019: \u2018text_id\u2019,\n\u2018vector\u2019: \u2018text_embedding\u2019,\n\u2018text\u2019: \u2018text_plain\u2019,\n\u2018metadata\u2019: \u2018metadata_dictionary_in_json\u2019,\n}\nDefaults to identity map.\nShow JSON schema{\n \"title\": \"MyScaleSettings\",\n \"description\": \"MyScale Client Configuration\\n\\nAttribute:\\n myscale_host (str) : An URL to connect to MyScale backend.\\n Defaults to 'localhost'.\\n myscale_port (int) : URL port to connect with HTTP. 
Defaults to 8443.\\n username (str) : Username to login. Defaults to None.\\n password (str) : Password to login. Defaults to None.\\n index_type (str): index type string.\\n index_param (dict): index build parameter.\\n database (str) : Database name to find the table. Defaults to 'default'.\\n table (str) : Table name to operate on.\\n Defaults to 'vector_table'.\\n metric (str) : Metric to compute distance,\\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\\n column_map (Dict) : Column type map to project column name onto langchain\\n semantics. Must have keys: `text`, `id`, `vector`,\\n must be same size to number of columns. For example:\\n .. code-block:: python\\n\\n {\\n 'id': 'text_id',\\n 'vector': 'text_embedding',\\n 'text': 'text_plain',\\n 'metadata': 'metadata_dictionary_in_json',\\n }\\n\\n Defaults to identity map.\",\n \"type\": \"object\",\n \"properties\": {", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-88", "text": "\"type\": \"object\",\n \"properties\": {\n \"host\": {\n \"title\": \"Host\",\n \"default\": \"localhost\",\n \"env_names\": \"{'myscale_host'}\",\n \"type\": \"string\"\n },\n \"port\": {\n \"title\": \"Port\",\n \"default\": 8443,\n \"env_names\": \"{'myscale_port'}\",\n \"type\": \"integer\"\n },\n \"username\": {\n \"title\": \"Username\",\n \"env_names\": \"{'myscale_username'}\",\n \"type\": \"string\"\n },\n \"password\": {\n \"title\": \"Password\",\n \"env_names\": \"{'myscale_password'}\",\n \"type\": \"string\"\n },\n \"index_type\": {\n \"title\": \"Index Type\",\n \"default\": \"IVFFLAT\",\n \"env_names\": \"{'myscale_index_type'}\",\n \"type\": \"string\"\n },\n \"index_param\": {\n \"title\": \"Index Param\",\n \"env_names\": \"{'myscale_index_param'}\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"column_map\": {\n \"title\": \"Column Map\",\n \"default\": {\n \"id\": \"id\",\n \"text\": \"text\",\n 
\"vector\": \"vector\",\n \"metadata\": \"metadata\"\n },\n \"env_names\": \"{'myscale_column_map'}\",\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\"\n }\n },\n \"database\": {", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-89", "text": "\"type\": \"string\"\n }\n },\n \"database\": {\n \"title\": \"Database\",\n \"default\": \"default\",\n \"env_names\": \"{'myscale_database'}\",\n \"type\": \"string\"\n },\n \"table\": {\n \"title\": \"Table\",\n \"default\": \"langchain\",\n \"env_names\": \"{'myscale_table'}\",\n \"type\": \"string\"\n },\n \"metric\": {\n \"title\": \"Metric\",\n \"default\": \"cosine\",\n \"env_names\": \"{'myscale_metric'}\",\n \"type\": \"string\"\n }\n },\n \"additionalProperties\": false\n}\nConfig\nenv_file: str = .env\nenv_file_encoding: str = utf-8\nenv_prefix: str = myscale_\nFields\ncolumn_map (Dict[str, str])\ndatabase (str)\nhost (str)\nindex_param (Optional[Dict[str, str]])\nindex_type (str)\nmetric (str)\npassword (Optional[str])\nport (int)\ntable (str)\nusername (Optional[str])\nattribute column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}\uf0c1\nattribute database: str = 'default'\uf0c1\nattribute host: str = 'localhost'\uf0c1\nattribute index_param: Optional[Dict[str, str]] = None\uf0c1\nattribute index_type: str = 'IVFFLAT'\uf0c1\nattribute metric: str = 'cosine'\uf0c1\nattribute password: Optional[str] = None\uf0c1\nattribute port: int = 8443\uf0c1\nattribute table: str = 'langchain'\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-90", "text": "attribute table: str = 'langchain'\uf0c1\nattribute username: Optional[str] = None\uf0c1\nclass langchain.vectorstores.Pinecone(index, embedding_function, text_key, namespace=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Pinecone vector 
database.\nTo use, you should have the pinecone-client python package installed.\nExample\nfrom langchain.vectorstores import Pinecone\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nimport pinecone\n# The environment should be the one specified next to the API key\n# in your Pinecone console\npinecone.init(api_key=\"***\", environment=\"...\")\nindex = pinecone.Index(\"langchain-demo\")\nembeddings = OpenAIEmbeddings()\nvectorstore = Pinecone(index, embeddings.embed_query, \"text\")\nParameters\nindex (Any) \u2013 \nembedding_function (Callable) \u2013 \ntext_key (str) \u2013 \nnamespace (Optional[str]) \u2013 \nadd_texts(texts, metadatas=None, ids=None, namespace=None, batch_size=32, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nids (Optional[List[str]]) \u2013 Optional list of ids to associate with the texts.\nnamespace (Optional[str]) \u2013 Optional pinecone namespace to add the texts to.\nbatch_size (int) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search_with_score(query, k=4, filter=None, namespace=None)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-91", "text": "Return pinecone documents most similar to query, along with scores.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[dict]) \u2013 Dictionary of argument(s) to filter on metadata\nnamespace (Optional[str]) \u2013 Namespace to search in. 
Default will search in \u2018\u2019 namespace.\nReturns\nList of Documents most similar to the query, along with a score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search(query, k=4, filter=None, namespace=None, **kwargs)[source]\uf0c1\nReturn pinecone documents most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[dict]) \u2013 Dictionary of argument(s) to filter on metadata\nnamespace (Optional[str]) \u2013 Namespace to search in. Default will search in \u2018\u2019 namespace.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, namespace=None, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-92", "text": "lambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[dict]) \u2013 \nnamespace (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, namespace=None, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nfilter (Optional[dict]) \u2013 \nnamespace (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, ids=None, batch_size=32, text_key='text', index_name=None, namespace=None, **kwargs)[source]\uf0c1\nConstruct Pinecone wrapper from raw documents.\nThis is a user friendly interface that:\nEmbeds documents.\nAdds the documents to a provided Pinecone index\nThis is intended to be a quick way to get started.\nExample", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-93", "text": "This is intended to be a quick way to get started.\nExample\nfrom langchain import Pinecone\nfrom langchain.embeddings import OpenAIEmbeddings\nimport pinecone\n# The environment should be the one specified next to the API key\n# in your Pinecone console\npinecone.init(api_key=\"***\", environment=\"...\")\nembeddings = OpenAIEmbeddings()\npinecone = Pinecone.from_texts(\n texts,\n embeddings,\n index_name=\"langchain-demo\"\n)\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nids (Optional[List[str]]) \u2013 \nbatch_size (int) \u2013 \ntext_key (str) \u2013 \nindex_name (Optional[str]) \u2013 \nnamespace (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.pinecone.Pinecone\nclassmethod from_existing_index(index_name, embedding, text_key='text', namespace=None)[source]\uf0c1\nLoad pinecone vectorstore from index name.\nParameters\nindex_name (str) \u2013 \nembedding 
(langchain.embeddings.base.Embeddings) \u2013 \ntext_key (str) \u2013 \nnamespace (Optional[str]) \u2013 \nReturn type\nlangchain.vectorstores.pinecone.Pinecone\ndelete(ids)[source]\uf0c1\nDelete by vector IDs.\nParameters\nids (List[str]) \u2013 List of ids to delete.\nReturn type\nNone\nclass langchain.vectorstores.Qdrant(client, collection_name, embeddings=None, content_payload_key='page_content', metadata_payload_key='metadata', embedding_function=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Qdrant vector database.\nTo use you should have the qdrant-client package installed.\nExample", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-94", "text": "To use you should have the qdrant-client package installed.\nExample\nfrom qdrant_client import QdrantClient\nfrom langchain import Qdrant\nclient = QdrantClient()\ncollection_name = \"MyCollection\"\nqdrant = Qdrant(client, collection_name, embedding_function)\nParameters\nclient (Any) \u2013 \ncollection_name (str) \u2013 \nembeddings (Optional[Embeddings]) \u2013 \ncontent_payload_key (str) \u2013 \nmetadata_payload_key (str) \u2013 \nembedding_function (Optional[Callable]) \u2013 \nCONTENT_KEY = 'page_content'\uf0c1\nMETADATA_KEY = 'metadata'\uf0c1\nadd_texts(texts, metadatas=None, ids=None, batch_size=64, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nids (Optional[Sequence[str]]) \u2013 Optional list of ids to associate with the texts. 
Ids have to be\nuuid-like strings.\nbatch_size (int) \u2013 How many vectors upload per-request.\nDefault: 64\nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search(query, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[MetadataFilter]) \u2013 Filter by metadata. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-95", "text": "filter (Optional[MetadataFilter]) \u2013 Filter by metadata. Defaults to None.\nsearch_params (Optional[common_types.SearchParams]) \u2013 Additional search params\noffset (int) \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold (Optional[float]) \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.\nconsistency (Optional[common_types.ReadConsistency]) \u2013 Read consistency of the search. 
Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should be present in all\nqueried replicas\n\u2019majority\u2019 - query all replicas, but return values present in the majority of replicas\n\u2019quorum\u2019 - query the majority of replicas, return values present in all of them\n\u2019all\u2019 - query all replicas, and return values present in all replicas\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[Document]\nsimilarity_search_with_score(query, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[MetadataFilter]) \u2013 Filter by metadata. Defaults to None.\nsearch_params (Optional[common_types.SearchParams]) \u2013 Additional search params\noffset (int) \u2013 Offset of the first result to return.\nMay be used to paginate results.
Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should be present in all\nqueried replicas\n\u2019majority\u2019 - query all replicas, but return values present in the majority of replicas\n\u2019quorum\u2019 - query the majority of replicas, return values present in all of them\n\u2019all\u2019 - query all replicas, and return values present in all replicas\nkwargs (Any) \u2013 \nReturns\nList of documents most similar to the query text and cosine\ndistance in float for each.\nLower score represents more similarity.\nReturn type\nList[Tuple[Document, float]]\nsimilarity_search_by_vector(embedding, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding vector to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[MetadataFilter]) \u2013 Filter by metadata. Defaults to None.\nsearch_params (Optional[common_types.SearchParams]) \u2013 Additional search params\noffset (int) \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-97", "text": "May be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold (Optional[float]) \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.\nconsistency (Optional[common_types.ReadConsistency]) \u2013 Read consistency of the search. 
Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should be present in all\nqueried replicas\n\u2019majority\u2019 - query all replicas, but return values present in the majority of replicas\n\u2019quorum\u2019 - query the majority of replicas, return values present in all of them\n\u2019all\u2019 - query all replicas, and return values present in all replicas\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[Document]\nsimilarity_search_with_score_by_vector(embedding, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding vector to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfilter (Optional[MetadataFilter]) \u2013 Filter by metadata. Defaults to None.\nsearch_params (Optional[common_types.SearchParams]) \u2013 Additional search params\noffset (int) \u2013 Offset of the first result to return.\nMay be used to paginate results.\nNote: large offset values may cause performance issues.\nscore_threshold (Optional[float]) \u2013 Define a minimal score threshold for the result.\nIf defined, less similar results will not be returned.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-98", "text": "If defined, less similar results will not be returned.\nScore of the returned result might be higher or smaller than the\nthreshold depending on the Distance function used.\nE.g. for cosine similarity only higher scores will be returned.\nconsistency (Optional[common_types.ReadConsistency]) \u2013 Read consistency of the search. 
Defines how many replicas should be\nqueried before returning the result.\nValues:\n- int - number of replicas to query, values should be present in all\nqueried replicas\n\u2019majority\u2019 - query all replicas, but return values present in the majority of replicas\n\u2019quorum\u2019 - query the majority of replicas, return values present in all of them\n\u2019all\u2019 - query all replicas, and return values present in all replicas\nkwargs (Any) \u2013 \nReturns\nList of documents most similar to the query text and cosine\ndistance in float for each.\nLower score represents more similarity.\nReturn type\nList[Tuple[Document, float]]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nDefaults to 20.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-99", "text": "Return type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, ids=None, location=None, url=None, port=6333, grpc_port=6334, prefer_grpc=False, https=None, api_key=None, prefix=None, timeout=None, host=None, path=None, collection_name=None, distance_func='Cosine', content_payload_key='page_content', metadata_payload_key='metadata', batch_size=64, shard_number=None, replication_factor=None, write_consistency_factor=None, on_disk_payload=None, hnsw_config=None, optimizers_config=None, wal_config=None, quantization_config=None, init_from=None, **kwargs)[source]\uf0c1\nConstruct Qdrant wrapper from a list of texts.\nParameters\ntexts (List[str]) \u2013 A list of texts to be indexed in Qdrant.\nembedding (Embeddings) \u2013 A subclass of Embeddings, responsible for text vectorization.\nmetadatas (Optional[List[dict]]) \u2013 An optional list of metadata. If provided it has to be of the same\nlength as a list of texts.\nids (Optional[Sequence[str]]) \u2013 Optional list of ids to associate with the texts. Ids have to be\nuuid-like strings.\nlocation (Optional[str]) \u2013 If :memory: - use in-memory Qdrant instance.\nIf str - use it as a url parameter.\nIf None - fallback to relying on host and port parameters.\nurl (Optional[str]) \u2013 either host or str of \u201cOptional[scheme], host, Optional[port],\nOptional[prefix]\u201d. 
Default: None\nport (Optional[int]) \u2013 Port of the REST API interface. Default: 6333\ngrpc_port (int) \u2013 Port of the gRPC interface. Default: 6334\nprefer_grpc (bool) \u2013 If true - use gRPC interface whenever possible in custom methods.\nDefault: False", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-100", "text": "Default: False\nhttps (Optional[bool]) \u2013 If true - use HTTPS(SSL) protocol. Default: None\napi_key (Optional[str]) \u2013 API key for authentication in Qdrant Cloud. Default: None\nprefix (Optional[str]) \u2013 If not None - add prefix to the REST URL path.\nExample: service/v1 will result in\nhttp://localhost:6333/service/v1/{qdrant-endpoint} for REST API.\nDefault: None\ntimeout (Optional[float]) \u2013 Timeout for REST and gRPC API requests.\nDefault: 5.0 seconds for REST and unlimited for gRPC\nhost (Optional[str]) \u2013 Host name of Qdrant service. If url and host are None, set to\n\u2018localhost\u2019. Default: None\npath (Optional[str]) \u2013 Path in which the vectors will be stored while using local mode.\nDefault: None\ncollection_name (Optional[str]) \u2013 Name of the Qdrant collection to be used. If not provided,\nit will be created randomly. Default: None\ndistance_func (str) \u2013 Distance function. One of: \u201cCosine\u201d / \u201cEuclid\u201d / \u201cDot\u201d.\nDefault: \u201cCosine\u201d\ncontent_payload_key (str) \u2013 A payload key used to store the content of the document.\nDefault: \u201cpage_content\u201d\nmetadata_payload_key (str) \u2013 A payload key used to store the metadata of the document.\nDefault: \u201cmetadata\u201d\nbatch_size (int) \u2013 How many vectors to upload per request.\nDefault: 64\nshard_number (Optional[int]) \u2013 Number of shards in collection. Default is 1, minimum is 1.\nreplication_factor (Optional[int]) \u2013 Replication factor for collection. 
Default is 1, minimum is 1.\nDefines how many copies of each shard will be created.\nHas effect only in distributed mode.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-101", "text": "Defines how many copies of each shard will be created.\nHas effect only in distributed mode.\nwrite_consistency_factor (Optional[int]) \u2013 Write consistency factor for collection. Default is 1, minimum is 1.\nDefines how many replicas should apply the operation for us to consider\nit successful. Increasing this number will make the collection more\nresilient to inconsistencies, but will also make it fail if not enough\nreplicas are available.\nDoes not have any performance impact.\nHas effect only in distributed mode.\non_disk_payload (Optional[bool]) \u2013 If true - point's payload will not be stored in memory.\nIt will be read from the disk every time it is requested.\nThis setting saves RAM by (slightly) increasing the response time.\nNote: those payload values that are involved in filtering and are\nindexed - remain in RAM.\nhnsw_config (Optional[common_types.HnswConfigDiff]) \u2013 Params for HNSW index\noptimizers_config (Optional[common_types.OptimizersConfigDiff]) \u2013 Params for optimizer\nwal_config (Optional[common_types.WalConfigDiff]) \u2013 Params for Write-Ahead-Log\nquantization_config (Optional[common_types.QuantizationConfig]) \u2013 Params for quantization, if None - quantization will be disabled\ninit_from (Optional[common_types.InitFrom]) \u2013 Use data stored in another collection to initialize this collection\n**kwargs \u2013 Additional arguments passed directly into REST client initialization\nkwargs (Any) \u2013 \nReturn type\nQdrant\nThis is a user-friendly interface that:\n1. Creates embeddings, one for each text\n2. 
Initializes the Qdrant database as an in-memory docstore by default\n(and overridable to a remote docstore)\n3. Adds the text embeddings to the Qdrant database\nThis is intended to be a quick way to get started.\nExample", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-102", "text": "This is intended to be a quick way to get started.\nExample\nfrom langchain import Qdrant\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nqdrant = Qdrant.from_texts(texts, embeddings, \"localhost\")\nclass langchain.vectorstores.Redis(redis_url, index_name, embedding_function, content_key='content', metadata_key='metadata', vector_key='content_vector', relevance_score_fn=, **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Redis vector database.\nTo use, you should have the redis python package installed.\nExample\nfrom langchain.vectorstores import Redis\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nvectorstore = Redis(\n redis_url=\"redis://username:password@localhost:6379\",\n index_name=\"my-index\",\n embedding_function=embeddings.embed_query,\n)\nParameters\nredis_url (str) \u2013 \nindex_name (str) \u2013 \nembedding_function (Callable) \u2013 \ncontent_key (str) \u2013 \nmetadata_key (str) \u2013 \nvector_key (str) \u2013 \nrelevance_score_fn (Optional[Callable[[float], float]]) \u2013 \nkwargs (Any) \u2013 \nadd_texts(texts, metadatas=None, embeddings=None, batch_size=1000, **kwargs)[source]\uf0c1\nAdd more texts to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings/text to add to the vectorstore.\nmetadatas (Optional[List[dict]], optional) \u2013 Optional list of metadatas.\nDefaults to None.\nembeddings (Optional[List[List[float]]], optional) \u2013 Optional pre-generated\nembeddings. 
Defaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-103", "text": "embeddings. Defaults to None.\nkeys (List[str]) or ids (List[str]) \u2013 Identifiers of entries.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batch size to use for writes. Defaults to 1000.\nkwargs (Any) \u2013 \nReturns\nList of ids added to the vectorstore\nReturn type\nList[str]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nkwargs (Any) \u2013 \nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]\nsimilarity_search_limit_score(query, k=4, score_threshold=0.2, **kwargs)[source]\uf0c1\nReturns the most similar indexed documents to the query text within the\nscore_threshold range.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk (int) \u2013 The number of documents to return. Default is 4.\nscore_threshold (float) \u2013 The minimum matching score required for a document\nto be considered a match. 
Defaults to 0.2.\nBecause the similarity calculation algorithm is based on cosine\nsimilarity, the smaller the angle, the higher the similarity.\nkwargs (Any) \u2013 \nReturns\nA list of documents that are most similar to the query text,\nincluding the match score for each document.\nReturn type\nList[Document]\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nscore_threshold (float) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-104", "text": "k (int) \u2013 \nscore_threshold (float) \u2013 \nkwargs (Any) \u2013 \nNote\nIf there are no documents that satisfy the score_threshold value,\nan empty list is returned.\nsimilarity_search_with_score(query, k=4)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nclassmethod from_texts_return_keys(texts, embedding, metadatas=None, index_name=None, content_key='content', metadata_key='metadata', vector_key='content_vector', distance_metric='COSINE', **kwargs)[source]\uf0c1\nCreate a Redis vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in Redis.\nAdds the documents to the newly created Redis index.\nReturns the keys of the newly created documents.\nThis is intended to be a quick way to get started.\n.. 
rubric:: Example\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nindex_name (Optional[str]) \u2013 \ncontent_key (str) \u2013 \nmetadata_key (str) \u2013 \nvector_key (str) \u2013 \ndistance_metric (Literal['COSINE', 'IP', 'L2']) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTuple[langchain.vectorstores.redis.Redis, List[str]]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-105", "text": "Return type\nTuple[langchain.vectorstores.redis.Redis, List[str]]\nclassmethod from_texts(texts, embedding, metadatas=None, index_name=None, content_key='content', metadata_key='metadata', vector_key='content_vector', **kwargs)[source]\uf0c1\nCreate a Redis vectorstore from raw documents.\nThis is a user-friendly interface that:\nEmbeds documents.\nCreates a new index for the embeddings in Redis.\nAdds the documents to the newly created Redis index.\nThis is intended to be a quick way to get started.\n.. 
rubric:: Example\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nindex_name (Optional[str]) \u2013 \ncontent_key (str) \u2013 \nmetadata_key (str) \u2013 \nvector_key (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.redis.Redis\nstatic delete(ids, **kwargs)[source]\uf0c1\nDelete a Redis entry.\nParameters\nids (List[str]) \u2013 List of ids (keys) to delete.\nkwargs (Any) \u2013 \nReturns\nWhether or not the deletions were successful.\nReturn type\nbool\nstatic drop_index(index_name, delete_documents, **kwargs)[source]\uf0c1\nDrop a Redis search index.\nParameters\nindex_name (str) \u2013 Name of the index to drop.\ndelete_documents (bool) \u2013 Whether to drop the associated documents.\nkwargs (Any) \u2013 \nReturns\nWhether or not the drop was successful.\nReturn type\nbool\nclassmethod from_existing_index(embedding, index_name, content_key='content', metadata_key='metadata', vector_key='content_vector', **kwargs)[source]\uf0c1\nConnect to an existing Redis index.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-106", "text": "Connect to an existing Redis index.\nParameters\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nindex_name (str) \u2013 \ncontent_key (str) \u2013 \nmetadata_key (str) \u2013 \nvector_key (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.redis.Redis\nas_retriever(**kwargs)[source]\uf0c1\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.redis.RedisVectorStoreRetriever\nclass langchain.vectorstores.Rockset(client, embeddings, collection_name, text_key, embedding_key)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Rockset vector database.\nTo use, you should have the rockset python package installed. 
Note that to use\nthis, the collection being used must already exist in your Rockset instance.\nYou must also ensure you use a Rockset ingest transformation to apply\nVECTOR_ENFORCE on the column being used to store embedding_key in the\ncollection.\nSee: https://rockset.com/blog/introducing-vector-search-on-rockset/ for more details\nEverything below assumes commons Rockset workspace.\nTODO: Add support for workspace args.\nExample\nfrom langchain.vectorstores import Rockset\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nimport rockset\n# Make sure you use the right host (region) for your Rockset instance\n# and APIKEY has both read-write access to your collection.\nrs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key=\"***\")\ncollection_name = \"langchain_demo\"\nembeddings = OpenAIEmbeddings()\nvectorstore = Rockset(rs, collection_name, embeddings,\n \"description\", \"description_embedding\")\nParameters\nclient (Any) \u2013 \nembeddings (Embeddings) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-107", "text": "Parameters\nclient (Any) \u2013 \nembeddings (Embeddings) \u2013 \ncollection_name (str) \u2013 \ntext_key (str) \u2013 \nembedding_key (str) \u2013 \nadd_texts(texts, metadatas=None, ids=None, batch_size=32, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore\nArgs:\ntexts: Iterable of strings to add to the vectorstore.\nmetadatas: Optional list of metadatas associated with the texts.\nids: Optional list of ids to associate with the texts.\nbatch_size: Send documents in batches to rockset.\nReturns\nList of ids from adding the texts into the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nids (Optional[List[str]]) \u2013 \nbatch_size (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nclassmethod from_texts(texts, embedding, metadatas=None, client=None, 
collection_name='', text_key='', embedding_key='', ids=None, batch_size=32, **kwargs)[source]\uf0c1\nCreate Rockset wrapper with existing texts.\nThis is intended as a quicker way to get started.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nclient (Any) \u2013 \ncollection_name (str) \u2013 \ntext_key (str) \u2013 \nembedding_key (str) \u2013 \nids (Optional[List[str]]) \u2013 \nbatch_size (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.rocksetdb.Rockset", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-108", "text": "Return type\nlangchain.vectorstores.rocksetdb.Rockset\nclass DistanceFunction(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\uf0c1\nBases: enum.Enum\nCOSINE_SIM = 'COSINE_SIM'\uf0c1\nEUCLIDEAN_DIST = 'EUCLIDEAN_DIST'\uf0c1\nDOT_PRODUCT = 'DOT_PRODUCT'\uf0c1\norder_by()[source]\uf0c1\nReturn type\nstr\nsimilarity_search_with_relevance_scores(query, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with Rockset\nParameters\nquery (str) \u2013 Text to look up documents similar to.\ndistance_func (DistanceFunction) \u2013 how to compute distance between two\nvectors in Rockset.\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 Metadata filters supplied as a\nSQL where condition string. Defaults to None.\neg. 
\u201cprice<=70.0 AND brand=\u2019Nintendo\u2019\u201d\nNOTE \u2013 Please do not let the end user fill this in, and always be aware\nof SQL injection.\nkwargs (Any) \u2013 \nReturns\nList of documents with their relevance score\nReturn type\nList[Tuple[Document, float]]\nsimilarity_search(query, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source]\uf0c1\nSame as similarity_search_with_relevance_scores but\ndoesn\u2019t return the scores.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \ndistance_func (DistanceFunction) \u2013 \nwhere_str (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Document]
langchain.vectorstores.base.VectorStore\nA simple in-memory vector store based on the scikit-learn library\nNearestNeighbors implementation.\nParameters\nembedding (langchain.embeddings.base.Embeddings) \u2013 \npersist_path (Optional[str]) \u2013 \nserializer (Literal['json', 'bson', 'parquet']) \u2013 \nmetric (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\npersist()[source]\uf0c1\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-110", "text": "Return type\nNone\npersist()[source]\uf0c1\nReturn type\nNone\nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nkwargs (Any) \u2013 vectorstore specific parameters\nids (Optional[List[str]]) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search_with_score(query, *, k=4, **kwargs)[source]\uf0c1\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param embedding: Embedding to look up documents similar to.\n:param k: Number of Documents to return. 
Defaults to 4.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-111", "text": "to maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nParameters\nembedding (List[float]) \u2013 \nk (int) \u2013 \nfetch_k (int) \u2013 \nlambda_mult (float) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\n:param query: Text to look up documents similar to.\n:param k: Number of Documents to return. 
Defaults to 4.\n:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n:param lambda_mult: Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nReturns\nList of Documents selected by maximal marginal relevance.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nfetch_k (int) \u2013 \nlambda_mult (float) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding, metadatas=None, ids=None, persist_path=None, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nids (Optional[List[str]]) \u2013 \npersist_path (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-112", "text": "persist_path (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.sklearn.SKLearnVectorStore\nclass langchain.vectorstores.StarRocks(embedding, config=None, **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around StarRocks vector database\nYou need a pymysql python package, and a valid account\nto connect to StarRocks.\nRight now StarRocks has only implemented cosine_similarity function to\ncompute distance between two vectors. 
And there is no vector index right now,\nso we have to iterate all vectors and compute spatial distance.\nFor more information, please visit [StarRocks official site](https://www.starrocks.io/)\n[StarRocks github](https://github.com/StarRocks/starrocks)\nParameters\nembedding (Embeddings) \u2013 \nconfig (Optional[StarRocksSettings]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nescape_str(value)[source]\uf0c1\nParameters\nvalue (str) \u2013 \nReturn type\nstr\nadd_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source]\uf0c1\nInsert more texts through the embeddings and add to the VectorStore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the VectorStore.\nids (Optional[Iterable[str]]) \u2013 Optional list of ids to associate with the texts.\nbatch_size (int) \u2013 Batch size of insertion\nmetadata \u2013 Optional column data to be inserted\nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the VectorStore.\nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-113", "text": "List of ids from adding the texts into the VectorStore.\nReturn type\nList[str]\nclassmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]\uf0c1\nCreate StarRocks wrapper with existing texts\nParameters\nembedding_function (Embeddings) \u2013 Function to extract text embedding\ntexts (Iterable[str]) \u2013 List or tuple of strings to be added\nconfig (StarRocksSettings, Optional) \u2013 StarRocks configuration\ntext_ids (Optional[Iterable], optional) \u2013 IDs for the texts.\nDefaults to None.\nbatch_size (int, optional) \u2013 Batch size when transmitting data to StarRocks.\nDefaults to 32.\nmetadata (List[dict], optional) \u2013 metadata to texts. 
Defaults to None.\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[Dict[Any, Any]]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nStarRocks Index\nReturn type\nlangchain.vectorstores.starrocks.StarRocks\nsimilarity_search(query, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with StarRocks\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this in, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nkwargs (Any) \u2013 \nReturns\nList of Documents\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-114", "text": "Returns\nList of Documents\nReturn type\nList[Document]\nsimilarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with StarRocks by vectors\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this in, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nembedding (List[float]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of (Document, similarity)\nReturn type\nList[Document]\nsimilarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source]\uf0c1\nPerform a similarity search with StarRocks\nParameters\nquery (str) \u2013 query string\nk (int, optional) \u2013 Top K neighbors to retrieve. 
Defaults to 4.\nwhere_str (Optional[str], optional) \u2013 where condition string.\nDefaults to None.\nNOTE \u2013 Please do not let the end user fill this in, and always be aware\nof SQL injection. When dealing with metadatas, remember to\nuse {self.metadata_column}.attribute instead of attribute\nalone. The default name for it is metadata.\nkwargs (Any) \u2013 \nReturns\nList of documents\nReturn type\nList[Document]\ndrop()[source]\uf0c1\nHelper function: Drop data\nReturn type\nNone\nproperty metadata_column: str\uf0c1\nclass langchain.vectorstores.SupabaseVectorStore(client, embedding, table_name, query_name=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore
Optional list of metadatas associated with the texts.\nkwargs (Any) \u2013 vectorstore specific parameters\nids (Optional[List[str]]) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nclassmethod from_texts(texts, embedding, metadatas=None, client=None, table_name='documents', query_name='match_documents', ids=None, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (Embeddings) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-116", "text": "Parameters\ntexts (List[str]) \u2013 \nembedding (Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nclient (Optional[supabase.client.Client]) \u2013 \ntable_name (Optional[str]) \u2013 \nquery_name (Union[str, None]) \u2013 \nids (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nSupabaseVectorStore\nadd_vectors(vectors, documents, ids)[source]\uf0c1\nParameters\nvectors (List[List[float]]) \u2013 \ndocuments (List[langchain.schema.Document]) \u2013 \nids (List[str]) \u2013 \nReturn type\nList[str]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nsimilarity_search_by_vector(embedding, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_relevance_scores(query, k=4, **kwargs)[source]\uf0c1\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery (str) \u2013 input text\nk (int) \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. Should include:", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-117", "text": "**kwargs \u2013 kwargs to be passed to similarity search. Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nkwargs (Any) \u2013 \nReturns\nList of Tuples of (doc, similarity_score)\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_by_vector_with_relevance_scores(query, k)[source]\uf0c1\nParameters\nquery (List[float]) \u2013 \nk (int) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search_by_vector_returning_embeddings(query, k)[source]\uf0c1\nParameters\nquery (List[float]) \u2013 \nk (int) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-118", "text": "Return type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search requires that query_name returns matched\nembeddings alongside the match documents. 
The following function\ndemonstrates how to do this:\n```sql\nCREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),\nmatch_count int)\nRETURNS TABLE(id bigint,\ncontent text,\nmetadata jsonb,\nembedding vector(1536),\nsimilarity float)\nLANGUAGE plpgsql\nAS $$\n#variable_conflict use_column\nBEGIN\nRETURN QUERY\nSELECT\nid,\ncontent,\nmetadata,\nembedding,\n1 - (docstore.embedding <=> query_embedding) AS similarity\nFROM docstore\nORDER BY docstore.embedding <=> query_embedding\nLIMIT match_count;\nEND;\n$$;\n```\ndelete(ids)[source]\uf0c1\nDelete by vector IDs.\nParameters\nids (List[str]) \u2013 List of ids to delete.\nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-119", "text": "Parameters\nids (List[str]) \u2013 List of ids to delete.\nReturn type\nNone\nclass langchain.vectorstores.Tair(embedding_function, url, index_name, content_key='content', metadata_key='metadata', search_params=None, **kwargs)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Tair Vector store.\nParameters\nembedding_function (Embeddings) \u2013 \nurl (str) \u2013 \nindex_name (str) \u2013 \ncontent_key (str) \u2013 \nmetadata_key (str) \u2013 \nsearch_params (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \ncreate_index_if_not_exist(dim, distance_type, index_type, data_type, **kwargs)[source]\uf0c1\nParameters\ndim (int) \u2013 \ndistance_type (str) \u2013 \nindex_type (str) \u2013 \ndata_type (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nbool\nadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nAdd text data to an existing index.\nParameters\ntexts (Iterable[str]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturns the most similar indexed documents to the query text.\nParameters\nquery (str) \u2013 The query text for which to find similar documents.\nk 
(int) \u2013 The number of documents to return. Default is 4.\nkwargs (Any) \u2013 \nReturns\nA list of documents that are most similar to the query text.\nReturn type\nList[Document]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-120", "text": "Return type\nList[Document]\nclassmethod from_texts(texts, embedding, metadatas=None, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nindex_name (str) \u2013 \ncontent_key (str) \u2013 \nmetadata_key (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.tair.Tair\nclassmethod from_documents(documents, embedding, metadatas=None, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source]\uf0c1\nReturn VectorStore initialized from documents and embeddings.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nindex_name (str) \u2013 \ncontent_key (str) \u2013 \nmetadata_key (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.tair.Tair\nstatic drop_index(index_name='langchain', **kwargs)[source]\uf0c1\nDrop an existing index.\nParameters\nindex_name (str) \u2013 Name of the index to drop.\nkwargs (Any) \u2013 \nReturns\nTrue if the index is dropped successfully.\nReturn type\nbool\nclassmethod from_existing_index(embedding, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source]\uf0c1\nConnect to an existing Tair index.\nParameters\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nindex_name (str) \u2013 \ncontent_key (str) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-121", "text": "index_name (str) \u2013 \ncontent_key (str) \u2013 \nmetadata_key (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.tair.Tair\nclass langchain.vectorstores.Tigris(client, embeddings, index_name)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nParameters\nclient (TigrisClient) \u2013 \nembeddings (Embeddings) \u2013 \nindex_name (str) \u2013 \nproperty search_index: TigrisVectorStore\uf0c1\nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nids (Optional[List[str]]) \u2013 Optional list of ids for documents.\nIds will be autogenerated if not provided.\nkwargs (Any) \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search(query, k=4, filter=None, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nfilter (Optional[TigrisFilter]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Document]\nsimilarity_search_with_score(query, k=4, filter=None)[source]\uf0c1\nRun similarity search with Chroma with distance.\nParameters\nquery (str) \u2013 Query text to search for.\nk (int) \u2013 Number of results to return. Defaults to 4.\nfilter (Optional[TigrisFilter]) \u2013 Filter by metadata. Defaults to None.\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-122", "text": "filter (Optional[TigrisFilter]) \u2013 Filter by metadata. 
Defaults to None.\nReturns\nList of documents most similar to the query text with distance in float.\nReturn type\nList[Tuple[Document, float]]\nclassmethod from_texts(texts, embedding, metadatas=None, ids=None, client=None, index_name=None, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nids (Optional[List[str]]) \u2013 \nclient (Optional[TigrisClient]) \u2013 \nindex_name (Optional[str]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTigris\nclass langchain.vectorstores.Typesense(typesense_client, embedding, *, typesense_collection_name=None, text_key='text')[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Typesense vector search.\nTo use, you should have the typesense python package installed.\nExample\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Typesense\nimport typesense\nnode = {\n \"host\": \"localhost\", # For Typesense Cloud use xxx.a1.typesense.net\n \"port\": \"8108\", # For Typesense Cloud use 443\n \"protocol\": \"http\" # For Typesense Cloud use https\n}\ntypesense_client = typesense.Client(\n {\n \"nodes\": [node],\n \"api_key\": \"\",\n \"connection_timeout_seconds\": 2\n }\n)\ntypesense_collection_name = \"langchain-memory\"\nembedding = OpenAIEmbeddings()\nvectorstore = Typesense(\n typesense_client=typesense_client,\n embedding=embedding,", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-123", "text": "typesense_client=typesense_client,\n embedding=embedding,\n typesense_collection_name=typesense_collection_name,\n text_key=\"text\",\n)\nParameters\ntypesense_client (Client) \u2013 \nembedding (Embeddings) \u2013 \ntypesense_collection_name (Optional[str]) \u2013 \ntext_key (str) \u2013 \nadd_texts(texts, metadatas=None, ids=None, **kwargs)[source]\uf0c1\nRun more texts 
through the embedding and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nids (Optional[List[str]]) \u2013 Optional list of ids to associate with the texts.\nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search_with_score(query, k=10, filter='')[source]\uf0c1\nReturn typesense documents most similar to query, along with scores.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 10.\nMinimum 10 results would be returned.\nfilter (Optional[str]) \u2013 typesense filter_by expression to filter documents on\nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search(query, k=10, filter='', **kwargs)[source]\uf0c1\nReturn typesense documents most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 10.\nA minimum of 10 results will be returned.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-124", "text": "A minimum of 10 results will be returned.\nfilter (Optional[str]) \u2013 typesense filter_by expression to filter documents on\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query and score for each\nReturn type\nList[langchain.schema.Document]\nclassmethod from_client_params(embedding, *, host='localhost', port='8108', protocol='http', typesense_api_key=None, connection_timeout_seconds=2, **kwargs)[source]\uf0c1\nInitialize Typesense directly from client parameters.\nExample\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Typesense\n# Pass in typesense_api_key as kwarg or set env var \"TYPESENSE_API_KEY\".\nvectorstore = Typesense.from_client_params(\n OpenAIEmbeddings(),\n host=\"localhost\",\n port=\"8108\",\n protocol=\"http\",\n typesense_collection_name=\"langchain-memory\",\n)\nParameters\nembedding (langchain.embeddings.base.Embeddings) \u2013 \nhost (str) \u2013 \nport (Union[str, int]) \u2013 \nprotocol (str) \u2013 \ntypesense_api_key (Optional[str]) \u2013 \nconnection_timeout_seconds (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.typesense.Typesense\nclassmethod from_texts(texts, embedding, metadatas=None, ids=None, typesense_client=None, typesense_client_params=None, typesense_collection_name=None, text_key='text', **kwargs)[source]\uf0c1\nConstruct Typesense wrapper from raw text.\nParameters\ntexts (List[str]) \u2013 \nembedding (Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nids (Optional[List[str]]) \u2013 \ntypesense_client (Optional[Client]) \u2013 \ntypesense_client_params (Optional[dict]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-125", "text": "typesense_client_params (Optional[dict]) \u2013 
\ntypesense_collection_name (Optional[str]) \u2013 \ntext_key (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTypesense\nclass langchain.vectorstores.Vectara(vectara_customer_id=None, vectara_corpus_id=None, vectara_api_key=None)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nImplementation of Vector Store using Vectara (https://vectara.com).\n.. rubric:: Example\nfrom langchain.vectorstores import Vectara\nvectorstore = Vectara(\n vectara_customer_id=vectara_customer_id,\n vectara_corpus_id=vectara_corpus_id,\n vectara_api_key=vectara_api_key\n)\nParameters\nvectara_customer_id (Optional[str]) \u2013 \nvectara_corpus_id (Optional[str]) \u2013 \nvectara_api_key (Optional[str]) \u2013 \nadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nkwargs (Any) \u2013 \nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\nsimilarity_search_with_score(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source]\uf0c1\nReturn Vectara documents most similar to query, along with scores.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 5.\nlambda_val (float) \u2013 lexical match parameter for hybrid search.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-126", "text": "lambda_val (float) \u2013 lexical match parameter for hybrid search.\nfilter (Optional[str]) \u2013 Dictionary of argument(s) to filter on metadata. 
For example, a\nfilter can be \u201cdoc.rating > 3.0 and part.lang = \u2018deu\u2019\u201d; see\nhttps://docs.vectara.com/docs/search-apis/sql/filter-overview\nfor more details.\nn_sentence_context (int) \u2013 number of sentences before/after the matching segment\nto add\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query and score for each.\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nsimilarity_search(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source]\uf0c1\nReturn Vectara documents most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 5.\nfilter (Optional[str]) \u2013 Dictionary of argument(s) to filter on metadata. For example, a\nfilter can be \u201cdoc.rating > 3.0 and part.lang = \u2018deu\u2019\u201d; see\nhttps://docs.vectara.com/docs/search-apis/sql/filter-overview for more\ndetails.\nn_sentence_context (int) \u2013 number of sentences before/after the matching segment\nto add\nlambda_val (float) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embedding=None, metadatas=None, **kwargs)[source]\uf0c1\nConstruct Vectara wrapper from raw documents.\nThis is intended to be a quick way to get started.\n.. rubric:: Example\nfrom langchain import Vectara", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-127", "text": ".. 
rubric:: Example\nfrom langchain import Vectara\nvectara = Vectara.from_texts(\n texts,\n vectara_customer_id=customer_id,\n vectara_corpus_id=corpus_id,\n vectara_api_key=api_key,\n)\nParameters\ntexts (List[str]) \u2013 \nembedding (Optional[langchain.embeddings.base.Embeddings]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.vectara.Vectara\nas_retriever(**kwargs)[source]\uf0c1\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.vectara.VectaraRetriever\nclass langchain.vectorstores.VectorStore[source]\uf0c1\nBases: abc.ABC\nInterface for vector stores.\nabstract add_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the vectorstore.\nmetadatas (Optional[List[dict]]) \u2013 Optional list of metadatas associated with the texts.\nkwargs (Any) \u2013 vectorstore specific parameters\nReturns\nList of ids from adding the texts into the vectorstore.\nReturn type\nList[str]\ndelete(ids)[source]\uf0c1\nDelete by vector ID.\nParameters\nids (List[str]) \u2013 List of ids to delete.\nReturns\nTrue if deletion is successful,\nFalse otherwise, None if not implemented.\nReturn type\nOptional[bool]\nasync aadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nRun more texts through the embeddings and add to the vectorstore.\nParameters\ntexts (Iterable[str]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-128", "text": "texts (Iterable[str]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nadd_documents(documents, **kwargs)[source]\uf0c1\nRun more documents through the embeddings and add to the vectorstore.\nParameters\n(List[Document] (documents) \u2013 Documents to add to the vectorstore.\ndocuments 
(List[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nasync aadd_documents(documents, **kwargs)[source]\uf0c1\nRun more documents through the embeddings and add to the vectorstore.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 Documents to add to the vectorstore.\nkwargs (Any) \u2013 \nReturns\nList of IDs of the added texts.\nReturn type\nList[str]\nsearch(query, search_type, **kwargs)[source]\uf0c1\nReturn docs most similar to query using specified search type.\nParameters\nquery (str) \u2013 \nsearch_type (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nasync asearch(query, search_type, **kwargs)[source]\uf0c1\nReturn docs most similar to query using specified search type.\nParameters\nquery (str) \u2013 \nsearch_type (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nabstract similarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-129", "text": "kwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_relevance_scores(query, k=4, **kwargs)[source]\uf0c1\nReturn docs and relevance scores in the range [0, 1].\n0 is dissimilar, 1 is most similar.\nParameters\nquery (str) \u2013 input text\nk (int) \u2013 Number of Documents to return. Defaults to 4.\n**kwargs \u2013 kwargs to be passed to similarity search. 
Should include:\nscore_threshold: Optional, a floating point value between 0 to 1 to\nfilter the resulting set of retrieved docs\nkwargs (Any) \u2013 \nReturns\nList of Tuples of (doc, similarity_score)\nReturn type\nList[Tuple[langchain.schema.Document, float]]\nasync asimilarity_search_with_relevance_scores(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nasync asimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nsimilarity_search_by_vector(embedding, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query vector.\nReturn type\nList[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-130", "text": "Return type\nList[langchain.schema.Document]\nasync asimilarity_search_by_vector(embedding, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to embedding vector.\nParameters\nembedding (List[float]) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nasync amax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nfetch_k (int) \u2013 \nlambda_mult (float) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-131", "text": "Return docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nasync amax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nParameters\nembedding (List[float]) \u2013 \nk (int) \u2013 \nfetch_k (int) \u2013 \nlambda_mult (float) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nclassmethod from_documents(documents, embedding, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from documents and embeddings.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.base.VST\nasync classmethod afrom_documents(documents, embedding, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from documents and embeddings.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-132", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.base.VST\nabstract classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.base.VST\nasync classmethod afrom_texts(texts, 
embedding, metadatas=None, **kwargs)[source]\uf0c1\nReturn VectorStore initialized from texts and embeddings.\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.base.VST\nas_retriever(**kwargs)[source]\uf0c1\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.base.VectorStoreRetriever\nclass langchain.vectorstores.Weaviate(client, index_name, text_key, embedding=None, attributes=None, relevance_score_fn=, by_text=True)[source]\uf0c1\nBases: langchain.vectorstores.base.VectorStore\nWrapper around Weaviate vector database.\nTo use, you should have the weaviate-client python package installed.\nExample\nimport weaviate\nfrom langchain.vectorstores import Weaviate\nclient = weaviate.Client(url=os.environ[\"WEAVIATE_URL\"], ...)\nweaviate = Weaviate(client, index_name, text_key)\nParameters\nclient (Any) \u2013 \nindex_name (str) \u2013 \ntext_key (str) \u2013 \nembedding (Optional[Embeddings]) \u2013 \nattributes (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-133", "text": "embedding (Optional[Embeddings]) \u2013 \nattributes (Optional[List[str]]) \u2013 \nrelevance_score_fn (Optional[Callable[[float], float]]) \u2013 \nby_text (bool) \u2013 \nadd_texts(texts, metadatas=None, **kwargs)[source]\uf0c1\nUpload texts with metadata (properties) to Weaviate.\nParameters\ntexts (Iterable[str]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nsimilarity_search(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_by_text(query, k=4, **kwargs)[source]\uf0c1\nReturn docs most similar to query.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nkwargs (Any) \u2013 \nReturns\nList of Documents most similar to the query.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_by_vector(embedding, k=4, **kwargs)[source]\uf0c1\nLook up similar documents by embedding vector in Weaviate.\nParameters\nembedding (List[float]) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-134", "text": "Return docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nquery (str) \u2013 Text to look up documents similar to.\nk (int) \u2013 Number of Documents to return. 
Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nmax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]\uf0c1\nReturn docs selected using the maximal marginal relevance.\nMaximal marginal relevance optimizes for similarity to query AND diversity\namong selected documents.\nParameters\nembedding (List[float]) \u2013 Embedding to look up documents similar to.\nk (int) \u2013 Number of Documents to return. Defaults to 4.\nfetch_k (int) \u2013 Number of Documents to fetch to pass to MMR algorithm.\nlambda_mult (float) \u2013 Number between 0 and 1 that determines the degree\nof diversity among the results with 0 corresponding\nto maximum diversity and 1 to minimum diversity.\nDefaults to 0.5.\nkwargs (Any) \u2013 \nReturns\nList of Documents selected by maximal marginal relevance.\nReturn type\nList[langchain.schema.Document]\nsimilarity_search_with_score(query, k=4, **kwargs)[source]\uf0c1\nReturn list of documents most similar to the query\ntext and cosine distance in float for each.\nLower score represents more similarity.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "c018b0edb28a-135", "text": "text and cosine distance in float for each.\nLower score represents more similarity.\nParameters\nquery (str) \u2013 \nk (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[Tuple[langchain.schema.Document, float]]\nclassmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]\uf0c1\nConstruct Weaviate wrapper from raw documents.\nThis is a user-friendly interface that:\nEmbeds 
documents.\nCreates a new index for the embeddings in the Weaviate instance.\nAdds the documents to the newly created Weaviate index.\nThis is intended to be a quick way to get started.\nExample\nfrom langchain.vectorstores.weaviate import Weaviate\nfrom langchain.embeddings import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\nweaviate = Weaviate.from_texts(\n texts,\n embeddings,\n weaviate_url=\"http://localhost:8080\"\n)\nParameters\ntexts (List[str]) \u2013 \nembedding (langchain.embeddings.base.Embeddings) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.vectorstores.weaviate.Weaviate\ndelete(ids)[source]\uf0c1\nDelete by vector IDs.\nParameters\nids (List[str]) \u2013 List of ids to delete.\nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/vectorstores.html"} +{"id": "ff6d0b7b0742-0", "text": "LLMs\uf0c1\nWrappers on top of large language models APIs.\nclass langchain.llms.AI21(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='j2-jumbo-instruct', temperature=0.7, maxTokens=256, minTokens=0, topP=1.0, presencePenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), countPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), frequencyPenalty=AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True), numResults=1, logitBias=None, ai21_api_key=None, stop=None, base_url=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around AI21 large language models.\nTo use, you should have the environment variable AI21_API_KEY\nset with your API key.\nExample\nfrom langchain.llms import AI21\nai21 = AI21(model=\"j2-jumbo-instruct\")\nParameters\ncache (Optional[bool]) \u2013 
\nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel (str) \u2013 \ntemperature (float) \u2013 \nmaxTokens (int) \u2013 \nminTokens (int) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-1", "text": "maxTokens (int) \u2013 \nminTokens (int) \u2013 \ntopP (float) \u2013 \npresencePenalty (langchain.llms.ai21.AI21PenaltyData) \u2013 \ncountPenalty (langchain.llms.ai21.AI21PenaltyData) \u2013 \nfrequencyPenalty (langchain.llms.ai21.AI21PenaltyData) \u2013 \nnumResults (int) \u2013 \nlogitBias (Optional[Dict[str, float]]) \u2013 \nai21_api_key (Optional[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \nbase_url (Optional[str]) \u2013 \nReturn type\nNone\nattribute base_url: Optional[str] = None\uf0c1\nBase url to use, if None decides based on model name.\nattribute countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)\uf0c1\nPenalizes repeated tokens according to count.\nattribute frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)\uf0c1\nPenalizes repeated tokens according to frequency.\nattribute logitBias: Optional[Dict[str, float]] = None\uf0c1\nAdjust the probability of specific tokens being generated.\nattribute maxTokens: int = 256\uf0c1\nThe maximum number of tokens to generate in the completion.\nattribute minTokens: int = 0\uf0c1\nThe minimum number of tokens to generate in the completion.\nattribute model: str = 'j2-jumbo-instruct'\uf0c1\nModel name to use.", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-2", "text": "Model name to use.\nattribute numResults: int = 1\uf0c1\nHow many completions to generate for each prompt.\nattribute presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)\uf0c1\nPenalizes repeated tokens.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use.\nattribute topP: float = 1.0\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-3", "text": "async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) 
\u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-4", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-5", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-6", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-7", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.AlephAlpha(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='luminous-base', maximum_tokens=64, temperature=0.0, top_k=0, top_p=0.0, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalties_include_prompt=False, use_multiplicative_presence_penalty=False, penalty_bias=None, penalty_exceptions=None, penalty_exceptions_include_stop_sequences=None, best_of=None, n=1, logit_bias=None, log_probs=None, tokens=False, disable_optimizations=False, minimum_tokens=0, echo=False, use_multiplicative_frequency_penalty=False, sequence_penalty=0.0, sequence_penalty_min_length=2, use_multiplicative_sequence_penalty=False, completion_bias_inclusion=None, completion_bias_inclusion_first_token_only=False, completion_bias_exclusion=None, completion_bias_exclusion_first_token_only=False, contextual_control_threshold=None, control_log_additive=True, repetition_penalties_include_completion=True, raw_completion=False, aleph_alpha_api_key=None, stop_sequences=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Aleph Alpha large language models.\nTo use, you should have the aleph_alpha_client python package installed, and the\nenvironment variable ALEPH_ALPHA_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nParameters are explained more in depth 
here:\nhttps://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10\nExample\nfrom langchain.llms import AlephAlpha", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-8", "text": "Example\nfrom langchain.llms import AlephAlpha\naleph_alpha = AlephAlpha(aleph_alpha_api_key=\"my-api-key\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (Optional[str]) \u2013 \nmaximum_tokens (int) \u2013 \ntemperature (float) \u2013 \ntop_k (int) \u2013 \ntop_p (float) \u2013 \npresence_penalty (float) \u2013 \nfrequency_penalty (float) \u2013 \nrepetition_penalties_include_prompt (Optional[bool]) \u2013 \nuse_multiplicative_presence_penalty (Optional[bool]) \u2013 \npenalty_bias (Optional[str]) \u2013 \npenalty_exceptions (Optional[List[str]]) \u2013 \npenalty_exceptions_include_stop_sequences (Optional[bool]) \u2013 \nbest_of (Optional[int]) \u2013 \nn (int) \u2013 \nlogit_bias (Optional[Dict[int, float]]) \u2013 \nlog_probs (Optional[int]) \u2013 \ntokens (Optional[bool]) \u2013 \ndisable_optimizations (Optional[bool]) \u2013 \nminimum_tokens (Optional[int]) \u2013 \necho (bool) \u2013 \nuse_multiplicative_frequency_penalty (bool) \u2013 \nsequence_penalty (float) \u2013 \nsequence_penalty_min_length (int) \u2013 \nuse_multiplicative_sequence_penalty (bool) \u2013 \ncompletion_bias_inclusion (Optional[Sequence[str]]) \u2013 \ncompletion_bias_inclusion_first_token_only (bool) \u2013 \ncompletion_bias_exclusion (Optional[Sequence[str]]) \u2013 \ncompletion_bias_exclusion_first_token_only (bool) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-9", "text": "completion_bias_exclusion_first_token_only (bool) \u2013 \ncontextual_control_threshold (Optional[float]) \u2013 \ncontrol_log_additive (Optional[bool]) \u2013 \nrepetition_penalties_include_completion (bool) \u2013 \nraw_completion (bool) \u2013 \naleph_alpha_api_key (Optional[str]) \u2013 \nstop_sequences (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute aleph_alpha_api_key: Optional[str] = None\uf0c1\nAPI key for Aleph Alpha API.\nattribute best_of: Optional[int] = None\uf0c1\nreturns the one with the \u201cbest of\u201d results\n(highest log probability per token)\nattribute completion_bias_exclusion_first_token_only: bool = False\uf0c1\nOnly consider the first token for the completion_bias_exclusion.\nattribute contextual_control_threshold: Optional[float] = None\uf0c1\nIf set to None, attention control parameters only apply to those tokens that have\nexplicitly been set in the request.\nIf set to a non-None value, control parameters are also applied to similar tokens.\nattribute control_log_additive: Optional[bool] = True\uf0c1\nTrue: apply control by adding the log(control_factor) to attention scores.\nFalse: (attention_scores - - attention_scores.min(-1)) * control_factor\nattribute echo: bool = False\uf0c1\nEcho the prompt in the completion.\nattribute frequency_penalty: float = 0.0\uf0c1\nPenalizes repeated tokens according to frequency.\nattribute log_probs: Optional[int] = None\uf0c1\nNumber of top log probabilities to be returned for each generated token.\nattribute logit_bias: Optional[Dict[int, float]] = None\uf0c1\nThe logit bias allows to influence the likelihood of generating tokens.\nattribute maximum_tokens: int = 64\uf0c1\nThe maximum number of tokens to be generated.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-10", "text": "The maximum number of tokens to be generated.\nattribute 
minimum_tokens: Optional[int] = 0\uf0c1\nGenerate at least this number of tokens.\nattribute model: Optional[str] = 'luminous-base'\uf0c1\nModel name to use.\nattribute n: int = 1\uf0c1\nHow many completions to generate for each prompt.\nattribute penalty_bias: Optional[str] = None\uf0c1\nPenalty bias for the completion.\nattribute penalty_exceptions: Optional[List[str]] = None\uf0c1\nList of strings that may be generated without penalty,\nregardless of other penalty settings\nattribute penalty_exceptions_include_stop_sequences: Optional[bool] = None\uf0c1\nShould stop_sequences be included in penalty_exceptions.\nattribute presence_penalty: float = 0.0\uf0c1\nPenalizes repeated tokens.\nattribute raw_completion: bool = False\uf0c1\nForce the raw completion of the model to be returned.\nattribute repetition_penalties_include_completion: bool = True\uf0c1\nFlag deciding whether presence penalty or frequency penalty\nare updated from the completion.\nattribute repetition_penalties_include_prompt: Optional[bool] = False\uf0c1\nFlag deciding whether presence penalty or frequency penalty are\nupdated from the prompt.\nattribute stop_sequences: Optional[List[str]] = None\uf0c1\nStop sequences to use.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.0\uf0c1\nA non-negative float that tunes the degree of randomness in generation.\nattribute tokens: Optional[bool] = False\uf0c1\nreturn tokens of completion.\nattribute top_k: int = 0\uf0c1\nNumber of most likely tokens to consider at each step.\nattribute top_p: float = 0.0\uf0c1\nTotal probability mass of tokens to consider at each step.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-11", "text": "Total probability mass of tokens to consider at each step.\nattribute use_multiplicative_presence_penalty: Optional[bool] = False\uf0c1\nFlag deciding whether presence penalty is applied\nmultiplicatively (True) or 
additively (False).\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-12", "text": "Predict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod 
construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-13", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) 
\u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-14", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-15", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.AmazonAPIGateway(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url, model_kwargs=None, content_handler=)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around custom Amazon API Gateway\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \napi_url (str) \u2013 \nmodel_kwargs (Optional[Dict]) \u2013 \ncontent_handler (langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway) \u2013 \nReturn type\nNone\nattribute api_url: str [Required]\uf0c1\nAPI Gateway URL\nattribute content_handler: langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway = \uf0c1\nThe content handler class that provides an input and\noutput transform functions to handle formats between LLM\nand the endpoint.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-16", "text": "output transform functions to handle formats between LLM\nand the endpoint.\nattribute model_kwargs: Optional[Dict] = None\uf0c1\nKey word arguments to pass to the model.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync 
agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-17", "text": "Predict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to 
include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-18", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-19", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-20", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Anthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]\uf0c1\nBases: langchain.llms.base.LLM, langchain.llms.anthropic._AnthropicCommon\nWrapper around Anthropic\u2019s large language models.\nTo use, you should have the anthropic python package installed, and the\nenvironment variable ANTHROPIC_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\nimport anthropic\nfrom langchain.llms import Anthropic\nmodel = Anthropic(model=\"\", anthropic_api_key=\"my-api-key\")\n# Simplest invocation, automatically wrapped with HUMAN_PROMPT\n# and AI_PROMPT.\nresponse = model(\"What are the biggest risks facing humanity?\")\n# Or if you want 
to use the chat mode, build a few-shot-prompt, or", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-21", "text": "# Or if you want to use the chat mode, build a few-shot-prompt, or\n# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:\nraw_prompt = \"What are the biggest risks facing humanity?\"\nprompt = f\"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}\"\nresponse = model(prompt)\nParameters\nclient (Any) \u2013 \nmodel (str) \u2013 \nmax_tokens_to_sample (int) \u2013 \ntemperature (Optional[float]) \u2013 \ntop_k (Optional[int]) \u2013 \ntop_p (Optional[float]) \u2013 \nstreaming (bool) \u2013 \ndefault_request_timeout (Optional[Union[float, Tuple[float, float]]]) \u2013 \nanthropic_api_url (Optional[str]) \u2013 \nanthropic_api_key (Optional[str]) \u2013 \nHUMAN_PROMPT (Optional[str]) \u2013 \nAI_PROMPT (Optional[str]) \u2013 \ncount_tokens (Optional[Callable[[str], int]]) \u2013 \ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None\uf0c1\nTimeout for requests to Anthropic Completion API. 
Default is 600 seconds.\nattribute max_tokens_to_sample: int = 256\uf0c1\nDenotes the number of tokens to predict per generation.\nattribute model: str = 'claude-v1'\uf0c1\nModel name to use.\nattribute streaming: bool = False\uf0c1\nWhether to stream the results.\nattribute tags: Optional[List[str]] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-22", "text": "Whether to stream the results.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: Optional[float] = None\uf0c1\nA non-negative float that tunes the degree of randomness in generation.\nattribute top_k: Optional[int] = None\uf0c1\nNumber of most likely tokens to consider at each step.\nattribute top_p: Optional[float] = None\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 
\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-23", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-24", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)[source]\uf0c1\nCalculate number of tokens.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-25", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nstream(prompt, stop=None)[source]\uf0c1\nCall Anthropic completion_stream and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt (str) \u2013 The prompt to pass into the model.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-26", "text": "Parameters\nprompt (str) \u2013 The prompt to pass into the model.\nstop (Optional[List[str]]) \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from Anthropic.\nReturn type\nGenerator\nExample\nprompt = \"Write a poem about a stream.\"\nprompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\ngenerator = anthropic.stream(prompt)\nfor token in generator:\n yield token\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Anyscale(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_kwargs=None, anyscale_service_url=None, anyscale_service_route=None, anyscale_service_token=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Anyscale Services.\nTo use, you should have the environment variable ANYSCALE_SERVICE_URL,\nANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale\nService, or pass it as a named parameter to the constructor.\nExample", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-27", "text": "Service, or pass it as a named parameter to the constructor.\nExample\nfrom langchain.llms import Anyscale\nanyscale = Anyscale(anyscale_service_url=\"SERVICE_URL\",\n anyscale_service_route=\"SERVICE_ROUTE\",\n anyscale_service_token=\"SERVICE_TOKEN\")\n# Use Ray for distributed processing\nimport ray\nprompt_list=[]\n@ray.remote\ndef send_query(llm, prompt):\n resp = llm(prompt)\n return resp\nfutures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]\nresults = ray.get(futures)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \nanyscale_service_url (Optional[str]) \u2013 \nanyscale_service_route (Optional[str]) \u2013 \nanyscale_service_token (Optional[str]) \u2013 \nReturn type\nNone\nattribute model_kwargs: Optional[dict] = None\uf0c1\nKey word arguments to pass to the model. 
Reserved for future use\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-28", "text": "kwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-29", "text": "Default values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-30", "text": "Take in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a 
JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-31", "text": "dumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-32", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Aviary(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model='amazon/LightGPT', aviary_url=None, aviary_token=None, use_prompt_format=True, version=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nAllows you to use an Aviary.\nAviary is a backend for hosted models. You can\nfind out more about aviary at\nhttp://github.com/ray-project/aviary\nTo get a list of the models supported on an\naviary, follow the instructions on the web site to\ninstall the aviary CLI and then use:\naviary models\nAVIARY_URL and AVIARY_TOKEN environment variables must be set.\nExample\nfrom langchain.llms import Aviary\nos.environ[\"AVIARY_URL\"] = \"\"\nos.environ[\"AVIARY_TOKEN\"] = \"\"\nlight = Aviary(model='amazon/LightGPT')\noutput = light('How do you make fried rice?')\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel (str) \u2013 \naviary_url (Optional[str]) \u2013 \naviary_token (Optional[str]) \u2013 \nuse_prompt_format (bool) \u2013 \nversion (Optional[str]) \u2013 \nReturn type\nNone\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-33", "text": "Tags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-34", "text": "Predict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 
\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-35", "text": "Parameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a 
list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-36", "text": "include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, 
stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-37", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.AzureMLOnlineEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', endpoint_api_key='', deployment_name='', http_client=None, content_formatter=None, model_kwargs=None)[source]\uf0c1\nBases: langchain.llms.base.LLM, pydantic.main.BaseModel\nWrapper around Azure ML Hosted models using Managed Online Endpoints.\nExample\nazure_llm = AzureMLOnlineEndpoint(\n endpoint_url=\"https://..inference.ml.azure.com/score\",\n endpoint_api_key=\"my-api-key\",\n deployment_name=\"my-deployment-name\",\n content_formatter=content_formatter,\n)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nendpoint_url (str) \u2013 \nendpoint_api_key (str) \u2013 \ndeployment_name (str) \u2013 \nhttp_client (Any) \u2013 \ncontent_formatter (Any) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \nReturn type\nNone\nattribute content_formatter: Any = None\uf0c1\nThe content formatter that provides an input and output", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-38", "text": "attribute content_formatter: Any = None\uf0c1\nThe content formatter that provides an input and output\ntransform function to handle formats between the LLM and\nthe endpoint\nattribute deployment_name: str = ''\uf0c1\nDeployment Name for Endpoint. Should be passed to constructor or specified as\nenv var AZUREML_DEPLOYMENT_NAME.\nattribute endpoint_api_key: str = ''\uf0c1\nAuthentication Key for Endpoint. 
Should be passed to constructor or specified as\nenv var AZUREML_ENDPOINT_API_KEY.\nattribute endpoint_url: str = ''\uf0c1\nURL of pre-existing Endpoint. Should be passed to constructor or specified as\nenv var AZUREML_ENDPOINT_URL.\nattribute model_kwargs: Optional[dict] = None\uf0c1\nKeyword arguments to pass to the model.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-39", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext 
(str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-40", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-41", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-42", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-43", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.AzureOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source]\uf0c1\nBases: langchain.llms.openai.BaseOpenAI\nWrapper around Azure-specific OpenAI large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import AzureOpenAI\nopenai = AzureOpenAI(model_name=\"text-davinci-003\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (str) \u2013 \ntemperature (float) \u2013 \nmax_tokens (int) \u2013 
\ntop_p (float) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-44", "text": "max_tokens (int) \u2013 \ntop_p (float) \u2013 \nfrequency_penalty (float) \u2013 \npresence_penalty (float) \u2013 \nn (int) \u2013 \nbest_of (int) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (Optional[str]) \u2013 \nopenai_api_base (Optional[str]) \u2013 \nopenai_organization (Optional[str]) \u2013 \nopenai_proxy (Optional[str]) \u2013 \nbatch_size (int) \u2013 \nrequest_timeout (Optional[Union[float, Tuple[float, float]]]) \u2013 \nlogit_bias (Optional[Dict[str, float]]) \u2013 \nmax_retries (int) \u2013 \nstreaming (bool) \u2013 \nallowed_special (Union[Literal['all'], typing.AbstractSet[str]]) \u2013 \ndisallowed_special (Union[Literal['all'], typing.Collection[str]]) \u2013 \ntiktoken_model_name (Optional[str]) \u2013 \ndeployment_name (str) \u2013 \nopenai_api_type (str) \u2013 \nopenai_api_version (str) \u2013 \nReturn type\nNone\nattribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}\uf0c1\nSet of special tokens that are allowed.\nattribute batch_size: int = 20\uf0c1\nBatch size to use when passing multiple documents to generate.\nattribute best_of: int = 1\uf0c1\nGenerates best_of completions server-side and returns the \u201cbest\u201d.\nattribute deployment_name: str = ''\uf0c1\nDeployment name to use.\nattribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'\uf0c1\nSet of special tokens that are not allowed.\nattribute frequency_penalty: float = 0\uf0c1\nPenalizes repeated tokens according to frequency.\nattribute logit_bias: Optional[Dict[str, float]] [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-45", "text": "attribute logit_bias: Optional[Dict[str, float]] [Optional]\uf0c1\nAdjust the probability of specific tokens being generated.\nattribute max_retries: int = 6\uf0c1\nMaximum number 
of retries to make when generating.\nattribute max_tokens: int = 256\uf0c1\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe model's maximal context size.\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not explicitly specified.\nattribute model_name: str = 'text-davinci-003' (alias 'model')\uf0c1\nModel name to use.\nattribute n: int = 1\uf0c1\nHow many completions to generate for each prompt.\nattribute presence_penalty: float = 0\uf0c1\nPenalizes repeated tokens.\nattribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None\uf0c1\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nattribute streaming: bool = False\uf0c1\nWhether to stream the results or not.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use.\nattribute tiktoken_model_name: Optional[str] = None\uf0c1\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit. By default, when set to None, this will\nbe the same as the embedding model name. However, there are some cases\nwhere you may want to use this class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-46", "text": "supported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. 
In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nattribute top_p: float = 1\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-47", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-48", "text": "self (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ncreate_llm_result(choices, prompts, token_usage)\uf0c1\nCreate the LLMResult from the choices and prompts.\nParameters\nchoices (Any) \u2013 \nprompts (List[str]) \u2013 \ntoken_usage (Dict[str, int]) \u2013 \nReturn type\nlangchain.schema.LLMResult\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} 
+{"id": "ff6d0b7b0742-49", "text": "Parameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_sub_prompts(params, prompts, stop=None)\uf0c1\nGet the sub prompts for llm call.\nParameters\nparams (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \nReturn type\nList[List[str]]\nget_token_ids(text)\uf0c1\nGet the token IDs using the tiktoken package.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nmax_tokens_for_prompt(prompt)\uf0c1\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt (str) \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nReturn type\nint\nExample\nmax_tokens = openai.max_tokens_for_prompt(\"Tell me a joke.\")", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-50", "text": "int\nExample\nmax_tokens = openai.max_tokens_for_prompt(\"Tell me a joke.\")\nstatic modelname_to_contextsize(modelname)\uf0c1\nCalculate the maximum number of tokens possible to generate for a model.\nParameters\nmodelname (str) \u2013 
The modelname we want to know the context size for.\nReturns\nThe maximum context size\nReturn type\nint\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nprep_streaming_params(stop=None)\uf0c1\nPrepare the params for streaming.\nParameters\nstop (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nstream(prompt, stop=None)\uf0c1\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt (str) \u2013 The prompts to pass into the model.\nstop (Optional[List[str]]) \u2013 Optional list of stop words to use when generating.\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-51", "text": "stop (Optional[List[str]]) \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nReturn type\nGenerator\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n yield token\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: 
Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty max_context_size: int\uf0c1\nGet max context size for this model.\nclass langchain.llms.Banana(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_key='', model_kwargs=None, banana_api_key=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Banana large language models.\nTo use, you should have the banana-dev python package installed,\nand the environment variable BANANA_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import Banana\nbanana = Banana(model_key=\"\")\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-52", "text": "Example\nfrom langchain.llms import Banana\nbanana = Banana(model_key=\"\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel_key (str) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nbanana_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute model_key: str = ''\uf0c1\nmodel endpoint to use\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model 
parameters valid for create call not\nexplicitly specified.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-53", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop 
(Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-54", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-55", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-56", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Baseten(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nUse your Baseten models in LangChain.\nTo use, you should have the baseten python package installed,\nand run baseten.login() with your Baseten API key.\nThe required model param can be either a model id or model\nversion id. Using a model version ID will result in\nslightly faster invocation.\nAny other model parameters can also\nbe passed in with the format input={model_param: value, \u2026}
input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-58", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if 
Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-59", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-60", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-61", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Beam(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_name='', name='', cpu='', memory='', gpu='', python_version='', python_packages=[], max_length='', url='', model_kwargs=None, beam_client_id='', beam_client_secret='', app_id=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Beam API for gpt2 large language model.\nTo use, you should have the beam-sdk python package installed,\nand the environment variable BEAM_CLIENT_ID set with your client id\nand BEAM_CLIENT_SECRET set with your client secret. 
Information on how\nto get these is available here: https://docs.beam.cloud/account/api-keys.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-62", "text": "to get these is available here: https://docs.beam.cloud/account/api-keys.\nThe wrapper can then be called as follows, where the name, cpu, memory, gpu,\npython version, and python packages can be updated accordingly. Once deployed,\nthe instance can be called.\nExample\nllm = Beam(model_name=\"gpt2\",\n name=\"langchain-gpt2\",\n cpu=8,\n memory=\"32Gi\",\n gpu=\"A10G\",\n python_version=\"python3.8\",\n python_packages=[\n \"diffusers[torch]>=0.10\",\n \"transformers\",\n \"torch\",\n \"pillow\",\n \"accelerate\",\n \"safetensors\",\n \"xformers\",],\n max_length=50)\nllm._deploy()\ncall_result = llm._call(input)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel_name (str) \u2013 \nname (str) \u2013 \ncpu (str) \u2013 \nmemory (str) \u2013 \ngpu (str) \u2013 \npython_version (str) \u2013 \npython_packages (List[str]) \u2013 \nmax_length (str) \u2013 \nurl (str) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nbeam_client_id (str) \u2013 \nbeam_client_secret (str) \u2013 \napp_id (Optional[str]) \u2013 \nReturn type\nNone\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-63", "text": "Holds any model parameters valid for create call not\nexplicitly specified.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute url: str = ''\uf0c1\nmodel endpoint to use\nattribute verbose: bool 
[Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\napp_creation()[source]\uf0c1\nCreates a Python file which will contain your Beam app definition.\nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-64", "text": "Creates a Python file which will contain your Beam app definition.\nReturn type\nNone\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 
\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-65", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of 
prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-66", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nrun_creation()[source]\uf0c1\nCreates a Python file which will be deployed on beam.\nReturn type\nNone\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-67", "text": "Parameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Bedrock(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, region_name=None, credentials_profile_name=None, model_id, model_kwargs=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nLLM provider to invoke Bedrock models.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Bedrock service.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nregion_name (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-68", "text": "client (Any) \u2013 \nregion_name (Optional[str]) \u2013 \ncredentials_profile_name (Optional[str]) \u2013 \nmodel_id (str) \u2013 \nmodel_kwargs (Optional[Dict]) \u2013 \nReturn type\nNone\nattribute credentials_profile_name: Optional[str] = None\uf0c1\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nattribute model_id: str [Required]\uf0c1\nId of the model to call, 
e.g., amazon.titan-tg1-large, this is\nequivalent to the modelId property in the list-foundation-models API\nattribute model_kwargs: Optional[Dict] = None\uf0c1\nKeyword arguments to pass to the model.\nattribute region_name: Optional[str] = None\uf0c1\nThe AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable\nor region specified in ~/.aws/config in case it is not provided here.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-69", "text": "kwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, 
**kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-70", "text": "Default values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-71", "text": "Take in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a 
JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-72", "text": "dumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-73", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.CTransformers(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model, model_type=None, model_file=None, config=None, lib=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around the C Transformers LLM interface.\nTo use, you should have the ctransformers python package installed.\nSee https://github.com/marella/ctransformers\nExample\nfrom langchain.llms import CTransformers\nllm = CTransformers(model=\"/path/to/ggml-gpt-2.bin\", model_type=\"gpt2\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (str) \u2013 \nmodel_type (Optional[str]) \u2013 \nmodel_file (Optional[str]) \u2013 \nconfig (Optional[Dict[str, Any]]) \u2013 \nlib (Optional[str]) \u2013 \nReturn type\nNone\nattribute config: Optional[Dict[str, Any]] = None\uf0c1\nThe config parameters.\nSee https://github.com/marella/ctransformers#config\nattribute lib: Optional[str] = None\uf0c1\nThe path to a shared library or one of avx2, avx, basic.\nattribute model: str [Required]\uf0c1\nThe path to a model file or directory or the name of a Hugging Face Hub\nmodel repo.\nattribute model_file: Optional[str] = None\uf0c1", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-74", "text": "model repo.\nattribute model_file: Optional[str] = None\uf0c1\nThe name of the model file in repo or directory.\nattribute model_type: Optional[str] = None\uf0c1\nThe model type.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-75", "text": "async apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs 
(Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-76", "text": "Return a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-77", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-78", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.CerebriumAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', model_kwargs=None, cerebriumai_api_key=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around CerebriumAI large language models.\nTo use, you should have the cerebrium python package installed, and the\nenvironment variable CEREBRIUMAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import CerebriumAI\ncerebrium = CerebriumAI(endpoint_url=\"\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nendpoint_url (str) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \ncerebriumai_api_key (Optional[str]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-79", "text": "cerebriumai_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute endpoint_url: str = ''\uf0c1\nmodel endpoint to use\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not\nexplicitly specified.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute 
verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-80", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting 
__dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-81", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-82", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-83", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Clarifai(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, stub=None, metadata=None, userDataObject=None, model_id=None, model_version_id=None, app_id=None, user_id=None, clarifai_pat_key=None, api_base='https://api.clarifai.com', stop=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Clarifai\u2019s large language models.\nTo use, you should have an account on the Clarifai platform,\nthe clarifai python package installed, and the\nenvironment variable CLARIFAI_PAT_KEY set with your PAT key,\nor pass it as a named parameter to the constructor.\nExample\nfrom langchain.llms import Clarifai\nclarifai_llm = Clarifai(clarifai_pat_key=CLARIFAI_PAT_KEY, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-84", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nstub (Any) \u2013 \nmetadata (Any) \u2013 \nuserDataObject (Any) \u2013 \nmodel_id (Optional[str]) \u2013 \nmodel_version_id (Optional[str]) \u2013 \napp_id (Optional[str]) \u2013 \nuser_id (Optional[str]) \u2013 \nclarifai_pat_key (Optional[str]) \u2013 \napi_base (str) \u2013 \nstop (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute app_id: Optional[str] = None\uf0c1\nClarifai application id to use.\nattribute model_id: Optional[str] = None\uf0c1\nModel id to use.\nattribute model_version_id: Optional[str] = None\uf0c1\nModel version id to 
use.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute user_id: Optional[str] = None\uf0c1\nClarifai user id to use.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-85", "text": "Parameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-86", "text": "Model\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-87", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, 
include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-88", "text": "Predict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Cohere(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model=None, max_tokens=256, temperature=0.75, k=0, p=1, frequency_penalty=0.0, presence_penalty=0.0, truncate=None, max_retries=10, cohere_api_key=None, stop=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Cohere large language models.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-89", "text": "Bases: langchain.llms.base.LLM\nWrapper around Cohere large language models.\nTo use, you should have the cohere python package installed, and the\nenvironment variable COHERE_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.llms import Cohere\ncohere = Cohere(model=\"gptd-instruct-tft\", cohere_api_key=\"my-api-key\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (Optional[str]) \u2013 \nmax_tokens (int) \u2013 \ntemperature (float) \u2013 \nk (int) \u2013 \np (int) \u2013 \nfrequency_penalty (float) \u2013 \npresence_penalty (float) \u2013 \ntruncate (Optional[str]) \u2013 \nmax_retries (int) \u2013 \ncohere_api_key (Optional[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute frequency_penalty: float = 0.0\uf0c1\nPenalizes repeated tokens according to frequency. 
Between 0 and 1.\nattribute k: int = 0\uf0c1\nNumber of most likely tokens to consider at each step.\nattribute max_retries: int = 10\uf0c1\nMaximum number of retries to make when generating.\nattribute max_tokens: int = 256\uf0c1\nDenotes the number of tokens to predict per generation.\nattribute model: Optional[str] = None\uf0c1\nModel name to use.\nattribute p: int = 1\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-90", "text": "Model name to use.\nattribute p: int = 1\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute presence_penalty: float = 0.0\uf0c1\nPenalizes repeated tokens. Between 0 and 1.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.75\uf0c1\nA non-negative float that tunes the degree of randomness in generation.\nattribute truncate: Optional[str] = None\uf0c1\nSpecify how the client handles inputs longer than the maximum token\nlength: Truncate from START, END or NONE\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, 
**kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-91", "text": "Parameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-92", "text": "the new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-93", "text": "Parameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, 
exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-94", "text": ".. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Databricks(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, host=None, api_token=None, endpoint_name=None, cluster_id=None, cluster_driver_port=None, model_kwargs=None, transform_input_fn=None, transform_output_fn=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nLLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.\nIt supports two endpoint types:\nServing endpoint (recommended for both production and development).\nWe assume that an LLM was registered and deployed to a serving endpoint.\nTo wrap it as an LLM you must have \u201cCan Query\u201d permission to the endpoint.\nSet endpoint_name accordingly and do not set cluster_id and\ncluster_driver_port.\nThe expected model signature is:\ninputs:\n[{\"name\": \"prompt\", \"type\": \"string\"},", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-95", "text": "inputs:\n[{\"name\": \"prompt\", \"type\": \"string\"},\n {\"name\": \"stop\", \"type\": \"list[string]\"}]\noutputs: [{\"type\": \"string\"}]\nCluster driver proxy app (recommended for interactive development).\nOne can load an LLM on a Databricks interactive cluster and start a local HTTP\nserver on the driver node to serve the model at / using HTTP POST method\nwith JSON input/output.\nPlease use a port number between [3000, 8000] and let the server listen to\nthe driver IP address or simply 0.0.0.0 instead of localhost only.\nTo wrap it as an LLM you must have \u201cCan Attach 
To\u201d permission to the cluster.\nSet cluster_id and cluster_driver_port and do not set endpoint_name.\nThe expected server schema (using JSON schema) is:\ninputs:\n{\"type\": \"object\",\n \"properties\": {\n \"prompt\": {\"type\": \"string\"},\n \"stop\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}},\n \"required\": [\"prompt\"]}`\noutputs: {\"type\": \"string\"}\nIf the endpoint model signature is different or you want to set extra params,\nyou can use transform_input_fn and transform_output_fn to apply necessary\ntransformations before and after the query.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nhost (str) \u2013 \napi_token (str) \u2013 \nendpoint_name (Optional[str]) \u2013 \ncluster_id (Optional[str]) \u2013 \ncluster_driver_port (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-96", "text": "cluster_id (Optional[str]) \u2013 \ncluster_driver_port (Optional[str]) \u2013 \nmodel_kwargs (Optional[Dict[str, Any]]) \u2013 \ntransform_input_fn (Optional[Callable]) \u2013 \ntransform_output_fn (Optional[Callable[[...], str]]) \u2013 \nReturn type\nNone\nattribute api_token: str [Optional]\uf0c1\nDatabricks personal access token.\nIf not provided, the default value is determined by\nthe DATABRICKS_TOKEN environment variable if present, or\nan automatically generated temporary token if running inside a Databricks\nnotebook attached to an interactive cluster in \u201csingle user\u201d or\n\u201cno isolation shared\u201d mode.\nattribute cluster_driver_port: Optional[str] = None\uf0c1\nThe port number used by the HTTP server running on the cluster driver node.\nThe server should listen on the driver IP 
address or simply 0.0.0.0 so that clients can connect.\nWe recommend using a port number between [3000, 8000].\nattribute cluster_id: Optional[str] = None\uf0c1\nID of the cluster if connecting to a cluster driver proxy app.\nIf neither endpoint_name nor cluster_id is provided and the code runs\ninside a Databricks notebook attached to an interactive cluster in \u201csingle user\u201d\nor \u201cno isolation shared\u201d mode, the current cluster ID is used as default.\nYou must not set both endpoint_name and cluster_id.\nattribute endpoint_name: Optional[str] = None\uf0c1\nName of the model serving endpoint.\nYou must specify the endpoint name to connect to a model serving endpoint.\nYou must not set both endpoint_name and cluster_id.\nattribute host: str [Optional]\uf0c1\nDatabricks workspace hostname.\nIf not provided, the default value is determined by\nthe DATABRICKS_HOST environment variable if present, or", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-97", "text": "the DATABRICKS_HOST environment variable if present, or\nthe hostname of the current Databricks workspace if running inside\na Databricks notebook attached to an interactive cluster in \u201csingle user\u201d\nor \u201cno isolation shared\u201d mode.\nattribute model_kwargs: Optional[Dict[str, Any]] = None\uf0c1\nExtra parameters to pass to the endpoint.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute transform_input_fn: Optional[Callable] = None\uf0c1\nA function that transforms {prompt, stop, **kwargs} into a JSON-compatible\nrequest object that the endpoint accepts.\nFor example, you can apply a prompt template to the input prompt.\nattribute transform_output_fn: Optional[Callable[[...], str]] = None\uf0c1\nA function that transforms the output from the endpoint to the generated text.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, 
callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-98", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault 
values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-99", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) 
\u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-100", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-101", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.DeepInfra(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_id='google/flan-t5-xl', model_kwargs=None, deepinfra_api_token=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around DeepInfra deployed models.\nTo use, you should have the requests python package installed, and the\nenvironment variable DEEPINFRA_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nOnly supports text-generation and text2text-generation for now.\nExample\nfrom langchain.llms import DeepInfra\ndi = DeepInfra(model_id=\"google/flan-t5-xl\",", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-102", "text": "di = DeepInfra(model_id=\"google/flan-t5-xl\",\n deepinfra_api_token=\"my-api-key\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel_id (str) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \ndeepinfra_api_token (Optional[str]) \u2013 \nReturn type\nNone\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-103", "text": "Take in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, 
**kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-104", "text": "update (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-105", "text": "get_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, 
**dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-106", "text": "Return type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.FakeListLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, responses, i=0)[source]\uf0c1\nBases: langchain.llms.base.LLM\nFake LLM wrapper for testing purposes.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nresponses (List) \u2013 \ni (int) \u2013 \nReturn type\nNone\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-107", "text": "attribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) 
\u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-108", "text": "Predict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add 
in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-109", "text": "Parameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, 
**dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-110", "text": "include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-111", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.ForefrontAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', temperature=0.7, length=256, top_p=1.0, top_k=40, repetition_penalty=1, forefrontai_api_key=None, base_url=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around ForefrontAI large language models.\nTo use, you should have the environment variable FOREFRONTAI_API_KEY\nset with your API key.\nExample\nfrom langchain.llms import ForefrontAI\nforefrontai = ForefrontAI(endpoint_url=\"\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nendpoint_url (str) \u2013 \ntemperature (float) \u2013 \nlength (int) \u2013 \ntop_p (float) \u2013 \ntop_k (int) \u2013 \nrepetition_penalty (int) \u2013 \nforefrontai_api_key (Optional[str]) \u2013 \nbase_url (Optional[str]) \u2013 \nReturn type\nNone\nattribute base_url: Optional[str] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-112", "text": "Return type\nNone\nattribute base_url: Optional[str] = None\uf0c1\nBase 
url to use; if None, it is decided based on the model name.\nattribute endpoint_url: str = ''\uf0c1\nModel name to use.\nattribute length: int = 256\uf0c1\nThe maximum number of tokens to generate in the completion.\nattribute repetition_penalty: int = 1\uf0c1\nPenalizes repeated tokens according to frequency.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use.\nattribute top_k: int = 40\uf0c1\nThe number of highest probability vocabulary tokens to\nkeep for top-k-filtering.\nattribute top_p: float = 1.0\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-113", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-114", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-115", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-116", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.GPT4All(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, backend=None, n_ctx=512, n_parts=- 1, seed=0, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, embedding=False, n_threads=4, n_predict=256, temp=0.8, top_p=0.95, top_k=40, echo=False, stop=[], repeat_last_n=64, repeat_penalty=1.3, n_batch=1, streaming=False, context_erase=0.5, allow_download=False, client=None)[source]\uf0c1\nBases: langchain.llms.base.LLM", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-117", "text": "Bases: langchain.llms.base.LLM\nWrapper around GPT4All language models.\nTo use, you should have the gpt4all python package installed, the\npre-trained model file, and the model\u2019s config information.\nExample\nfrom langchain.llms import GPT4All\nmodel = GPT4All(model=\"./models/gpt4all-model.bin\", n_ctx=512, n_threads=8)\n# Simplest invocation\nresponse = model(\"Once upon a time, \")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel (str) \u2013 \nbackend (Optional[str]) \u2013 \nn_ctx (int) \u2013 \nn_parts (int) \u2013 \nseed (int) \u2013 \nf16_kv (bool) \u2013 \nlogits_all (bool) \u2013 \nvocab_only (bool) \u2013 \nuse_mlock (bool) \u2013 \nembedding (bool) \u2013 \nn_threads (Optional[int]) \u2013 \nn_predict (Optional[int]) \u2013 \ntemp (Optional[float]) \u2013 \ntop_p 
(Optional[float]) \u2013 \ntop_k (Optional[int]) \u2013 \necho (Optional[bool]) \u2013 \nstop (Optional[List[str]]) \u2013 \nrepeat_last_n (Optional[int]) \u2013 \nrepeat_penalty (Optional[float]) \u2013 \nn_batch (int) \u2013 \nstreaming (bool) \u2013 \ncontext_erase (float) \u2013 \nallow_download (bool) \u2013 \nclient (Any) \u2013 \nReturn type\nNone\nattribute allow_download: bool = False\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-118", "text": "Return type\nNone\nattribute allow_download: bool = False\uf0c1\nIf model does not exist in ~/.cache/gpt4all/, download it.\nattribute context_erase: float = 0.5\uf0c1\nLeave (n_ctx * context_erase) tokens\nstarting from beginning if the context has run out.\nattribute echo: Optional[bool] = False\uf0c1\nWhether to echo the prompt.\nattribute embedding: bool = False\uf0c1\nUse embedding mode only.\nattribute f16_kv: bool = False\uf0c1\nUse half-precision for key/value cache.\nattribute logits_all: bool = False\uf0c1\nReturn logits for all tokens, not just the last token.\nattribute model: str [Required]\uf0c1\nPath to the pre-trained GPT4All model file.\nattribute n_batch: int = 1\uf0c1\nBatch size for prompt processing.\nattribute n_ctx: int = 512\uf0c1\nToken context window.\nattribute n_parts: int = -1\uf0c1\nNumber of parts to split the model into.\nIf -1, the number of parts is automatically determined.\nattribute n_predict: Optional[int] = 256\uf0c1\nThe maximum number of tokens to generate.\nattribute n_threads: Optional[int] = 4\uf0c1\nNumber of threads to use.\nattribute repeat_last_n: Optional[int] = 64\uf0c1\nLast n tokens to penalize\nattribute repeat_penalty: Optional[float] = 1.3\uf0c1\nThe penalty to apply to repeated tokens.\nattribute seed: int = 0\uf0c1\nSeed. 
If -1, a random seed is used.\nattribute stop: Optional[List[str]] = []\uf0c1\nA list of strings to stop generation when encountered.\nattribute streaming: bool = False\uf0c1\nWhether to stream the results or not.\nattribute tags: Optional[List[str]] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-119", "text": "attribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temp: Optional[float] = 0.8\uf0c1\nThe temperature to use for sampling.\nattribute top_k: Optional[int] = 40\uf0c1\nThe top-k value to use for sampling.\nattribute top_p: Optional[float] = 0.95\uf0c1\nThe top-p value to use for sampling.\nattribute use_mlock: bool = False\uf0c1\nForce system to keep model in RAM.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\nattribute vocab_only: bool = False\uf0c1\nOnly load the vocabulary, no weights.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-120", "text": "Parameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-121", "text": "the new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-122", "text": "Parameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, 
exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-123", "text": ".. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.GooglePalm(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, google_api_key=None, model_name='models/text-bison-001', temperature=0.7, top_p=None, top_k=None, max_output_tokens=None, n=1)[source]\uf0c1\nBases: langchain.llms.base.BaseLLM, pydantic.main.BaseModel\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \ngoogle_api_key (Optional[str]) \u2013 \nmodel_name (str) \u2013 \ntemperature (float) \u2013 \ntop_p (Optional[float]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-124", "text": "temperature (float) \u2013 \ntop_p (Optional[float]) \u2013 \ntop_k (Optional[int]) \u2013 \nmax_output_tokens (Optional[int]) \u2013 \nn (int) \u2013 \nReturn type\nNone\nattribute max_output_tokens: Optional[int] = None\uf0c1\nMaximum number of tokens to include in a candidate. Must be greater than zero.\nIf unset, will default to 64.\nattribute model_name: str = 'models/text-bison-001'\uf0c1\nModel name to use.\nattribute n: int = 1\uf0c1\nNumber of chat completions to generate for each prompt. 
Note that the API may\nnot return the full n completions if duplicates are generated.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nRun inference with this temperature. Must be in the closed interval\n[0.0, 1.0].\nattribute top_k: Optional[int] = None\uf0c1\nDecode using top-k sampling: consider the set of top_k most probable tokens.\nMust be positive.\nattribute top_p: Optional[float] = None\uf0c1\nDecode using nucleus sampling: consider the smallest set of tokens whose\nprobability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-125", "text": "kwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], 
langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-126", "text": "Default values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-127", "text": "Take in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a 
JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-128", "text": "dumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-129", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.GooseAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-neo-20b', temperature=0.7, max_tokens=256, top_p=1, min_tokens=1, frequency_penalty=0, presence_penalty=0, n=1, model_kwargs=None, logit_bias=None, gooseai_api_key=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around OpenAI large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable GOOSEAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import GooseAI\ngooseai = GooseAI(model_name=\"gpt-neo-20b\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel_name (str) \u2013 \ntemperature (float) \u2013 \nmax_tokens (int) \u2013 \ntop_p (float) \u2013 \nmin_tokens (int) \u2013 \nfrequency_penalty (float) \u2013 \npresence_penalty (float) \u2013 \nn (int) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nlogit_bias (Optional[Dict[str, float]]) \u2013 \ngooseai_api_key (Optional[str]) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-130", "text": "gooseai_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute frequency_penalty: float = 0\uf0c1\nPenalizes repeated tokens according to frequency.\nattribute logit_bias: Optional[Dict[str, float]] [Optional]\uf0c1\nAdjust the probability of specific tokens being generated.\nattribute max_tokens: int = 256\uf0c1\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and\nthe models maximal context size.\nattribute min_tokens: int = 1\uf0c1\nThe minimum number of tokens to generate in the completion.\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not explicitly specified.\nattribute model_name: str = 'gpt-neo-20b'\uf0c1\nModel name to use\nattribute n: int = 1\uf0c1\nHow many completions to generate for each prompt.\nattribute presence_penalty: float = 0\uf0c1\nPenalizes repeated tokens.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use\nattribute top_p: float = 1\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-131", "text": "kwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and 
input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-132", "text": "Default values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, 
MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-133", "text": "Take in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the 
token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-134", "text": "dumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-135", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.HuggingFaceEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', task=None, model_kwargs=None, huggingfacehub_api_token=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around HuggingFaceHub Inference Endpoints.\nTo use, you should have the huggingface_hub python package installed, and the\nenvironment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nOnly supports text-generation and text2text-generation for now.\nExample\nfrom langchain.llms import HuggingFaceEndpoint\nendpoint_url = (\n \"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud\"\n)\nhf = HuggingFaceEndpoint(\n endpoint_url=endpoint_url,\n huggingfacehub_api_token=\"my-api-key\"\n)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], 
langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nendpoint_url (str) \u2013 \ntask (Optional[str]) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \nhuggingfacehub_api_token (Optional[str]) \u2013 \nReturn type\nNone\nattribute endpoint_url: str = ''\uf0c1\nEndpoint URL to use.\nattribute model_kwargs: Optional[dict] = None\uf0c1\nKey word arguments to pass to the model.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-136", "text": "Tags to add to the run trace.\nattribute task: Optional[str] = None\uf0c1\nTask to call the model with.\nShould be a task that returns generated_text or summary_text.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-137", "text": "Parameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-138", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-139", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-140", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.HuggingFaceHub(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, repo_id='gpt2', task=None, model_kwargs=None, huggingfacehub_api_token=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around HuggingFaceHub models.\nTo use, you should have the huggingface_hub python package installed, and the\nenvironment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nOnly supports text-generation, text2text-generation and summarization for now.\nExample\nfrom langchain.llms import HuggingFaceHub\nhf = HuggingFaceHub(repo_id=\"gpt2\", huggingfacehub_api_token=\"my-api-key\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nrepo_id (str) \u2013 \ntask (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-141", "text": "repo_id (str) \u2013 \ntask (Optional[str]) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \nhuggingfacehub_api_token (Optional[str]) \u2013 \nReturn type\nNone\nattribute model_kwargs: Optional[dict] = None\uf0c1\nKey word arguments to pass to the model.\nattribute repo_id: str = 'gpt2'\uf0c1\nModel name to 
use.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute task: Optional[str] = None\uf0c1\nTask to call the model with.\nShould be a task that returns generated_text or summary_text.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-142", "text": "Take in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from 
messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-143", "text": "update (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-144", "text": "get_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, 
**dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-145", "text": "Return type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.HuggingFacePipeline(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline=None, model_id='gpt2', model_kwargs=None, pipeline_kwargs=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around HuggingFace Pipeline API.\nTo use, you should have the transformers python package installed.\nOnly supports text-generation, text2text-generation and summarization for now.\nExample using from_model_id:from langchain.llms import HuggingFacePipeline\nhf = HuggingFacePipeline.from_model_id(\n model_id=\"gpt2\",\n task=\"text-generation\",\n pipeline_kwargs={\"max_new_tokens\": 10},\n)\nExample passing pipeline in directly:from langchain.llms import HuggingFacePipeline\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-146", "text": "from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nmodel_id = \"gpt2\"\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nmodel = AutoModelForCausalLM.from_pretrained(model_id)\npipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10\n)\nhf = HuggingFacePipeline(pipeline=pipe)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \npipeline (Any) \u2013 \nmodel_id (str) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \npipeline_kwargs 
(Optional[dict]) \u2013 \nReturn type\nNone\nattribute model_id: str = 'gpt2'\uf0c1\nModel name to use.\nattribute model_kwargs: Optional[dict] = None\uf0c1\nKey word arguments passed to the model.\nattribute pipeline_kwargs: Optional[dict] = None\uf0c1\nKey word arguments passed to the pipeline.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-147", "text": "kwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 
\nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-148", "text": "Default values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_model_id(model_id, task, device=- 1, model_kwargs=None, pipeline_kwargs=None, **kwargs)[source]\uf0c1\nConstruct the pipeline object from model_id and task.\nParameters\nmodel_id (str) \u2013 \ntask (str) \u2013 \ndevice (int) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \npipeline_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.llms.base.LLM\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-149", "text": "Parameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-150", "text": "include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-151", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.HuggingFaceTextGenInference(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, max_new_tokens=512, top_k=None, top_p=0.95, typical_p=0.95, temperature=0.8, repetition_penalty=None, stop_sequences=None, seed=None, inference_server_url='', timeout=120, server_kwargs=None, stream=False, client=None, async_client=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nHuggingFace text generation inference API.\nThis class is a wrapper around the HuggingFace text generation inference API.\nIt is used to generate text from a given prompt.\nAttributes:\n- max_new_tokens: The maximum number of tokens to generate.\n- top_k: The number of top-k tokens to consider when generating text.\n- top_p: The cumulative probability threshold for generating text.\n- typical_p: The typical probability threshold for generating text.\n- temperature: The temperature to use when generating text.\n- repetition_penalty: The repetition 
penalty to use when generating text.\n- stop_sequences: A list of stop sequences to use when generating text.\n- seed: The seed to use when generating text.\n- inference_server_url: The URL of the inference server to use.\n- timeout: The timeout value in seconds to use while connecting to inference server.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-152", "text": "- timeout: The timeout value in seconds to use while connecting to inference server.\n- server_kwargs: The keyword arguments to pass to the inference server.\n- client: The client object used to communicate with the inference server.\n- async_client: The async client object used to communicate with the server.\nMethods:\n- _call: Generates text based on a given prompt and stop sequences.\n- _acall: Async generates text based on a given prompt and stop sequences.\n- _llm_type: Returns the type of LLM.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmax_new_tokens (int) \u2013 \ntop_k (Optional[int]) \u2013 \ntop_p (Optional[float]) \u2013 \ntypical_p (Optional[float]) \u2013 \ntemperature (float) \u2013 \nrepetition_penalty (Optional[float]) \u2013 \nstop_sequences (List[str]) \u2013 \nseed (Optional[int]) \u2013 \ninference_server_url (str) \u2013 \ntimeout (int) \u2013 \nserver_kwargs (Dict[str, Any]) \u2013 \nstream (bool) \u2013 \nclient (Any) \u2013 \nasync_client (Any) \u2013 \nReturn type\nNone\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and 
input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-153", "text": "Parameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-154", "text": "langchain.schema.BaseMessage\nclassmethod 
construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-155", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts 
(List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-156", "text": "exclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from 
messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-157", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.HumanInputLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, input_func=None, prompt_func=None, separator='\\n', input_kwargs={}, prompt_kwargs={})[source]\uf0c1\nBases: langchain.llms.base.LLM\nA LLM wrapper which returns user input as the response.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \ninput_func (Callable) \u2013 \nprompt_func (Callable[[str], None]) \u2013 \nseparator (str) \u2013 \ninput_kwargs (Mapping[str, Any]) \u2013 \nprompt_kwargs (Mapping[str, Any]) \u2013 \nReturn type\nNone\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-158", "text": "Parameters\nprompts (List[str]) \u2013 \nstop 
(Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-159", "text": "Model\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in 
new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-160", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn 
type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-161", "text": "Predict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-162", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.LlamaCpp(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_path, lora_base=None, lora_path=None, n_ctx=512, n_parts=-1, seed=-1, f16_kv=True, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None, suffix=None, max_tokens=256, temperature=0.8, top_p=0.95, logprobs=None, echo=False, stop=[], repeat_penalty=1.1, top_k=40, last_n_tokens_size=64, use_mmap=True, streaming=True)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around the llama.cpp model.\nTo use, you should have the llama-cpp-python library installed, and provide the\npath to the Llama model as a named parameter to the constructor.\nCheck out: https://github.com/abetlen/llama-cpp-python\nExample\nfrom langchain.llms import LlamaCpp\nllm = LlamaCpp(model_path=\"/path/to/llama/model\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel_path (str) \u2013 \nlora_base (Optional[str]) \u2013 \nlora_path (Optional[str]) \u2013
\nn_ctx (int) \u2013 \nn_parts (int) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-163", "text": "n_ctx (int) \u2013 \nn_parts (int) \u2013 \nseed (int) \u2013 \nf16_kv (bool) \u2013 \nlogits_all (bool) \u2013 \nvocab_only (bool) \u2013 \nuse_mlock (bool) \u2013 \nn_threads (Optional[int]) \u2013 \nn_batch (Optional[int]) \u2013 \nn_gpu_layers (Optional[int]) \u2013 \nsuffix (Optional[str]) \u2013 \nmax_tokens (Optional[int]) \u2013 \ntemperature (Optional[float]) \u2013 \ntop_p (Optional[float]) \u2013 \nlogprobs (Optional[int]) \u2013 \necho (Optional[bool]) \u2013 \nstop (Optional[List[str]]) \u2013 \nrepeat_penalty (Optional[float]) \u2013 \ntop_k (Optional[int]) \u2013 \nlast_n_tokens_size (Optional[int]) \u2013 \nuse_mmap (Optional[bool]) \u2013 \nstreaming (bool) \u2013 \nReturn type\nNone\nattribute echo: Optional[bool] = False\uf0c1\nWhether to echo the prompt.\nattribute f16_kv: bool = True\uf0c1\nUse half-precision for key/value cache.\nattribute last_n_tokens_size: Optional[int] = 64\uf0c1\nThe number of tokens to look back when applying the repeat_penalty.\nattribute logits_all: bool = False\uf0c1\nReturn logits for all tokens, not just the last token.\nattribute logprobs: Optional[int] = None\uf0c1\nThe number of logprobs to return. If None, no logprobs are returned.\nattribute lora_base: Optional[str] = None\uf0c1\nThe path to the Llama LoRA base model.\nattribute lora_path: Optional[str] = None\uf0c1\nThe path to the Llama LoRA. 
If None, no LoRA is loaded.\nattribute max_tokens: Optional[int] = 256\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-164", "text": "attribute max_tokens: Optional[int] = 256\uf0c1\nThe maximum number of tokens to generate.\nattribute model_path: str [Required]\uf0c1\nThe path to the Llama model file.\nattribute n_batch: Optional[int] = 8\uf0c1\nNumber of tokens to process in parallel.\nShould be a number between 1 and n_ctx.\nattribute n_ctx: int = 512\uf0c1\nToken context window.\nattribute n_gpu_layers: Optional[int] = None\uf0c1\nNumber of layers to be loaded into GPU memory. Default None.\nattribute n_parts: int = -1\uf0c1\nNumber of parts to split the model into.\nIf -1, the number of parts is automatically determined.\nattribute n_threads: Optional[int] = None\uf0c1\nNumber of threads to use.\nIf None, the number of threads is automatically determined.\nattribute repeat_penalty: Optional[float] = 1.1\uf0c1\nThe penalty to apply to repeated tokens.\nattribute seed: int = -1\uf0c1\nSeed. If -1, a random seed is used.\nattribute stop: Optional[List[str]] = []\uf0c1\nA list of strings to stop generation when encountered.\nattribute streaming: bool = True\uf0c1\nWhether to stream the results, token by token.\nattribute suffix: Optional[str] = None\uf0c1\nA suffix to append to the generated text.
If None, no suffix is appended.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: Optional[float] = 0.8\uf0c1\nThe temperature to use for sampling.\nattribute top_k: Optional[int] = 40\uf0c1\nThe top-k value to use for sampling.\nattribute top_p: Optional[float] = 0.95\uf0c1\nThe top-p value to use for sampling.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-165", "text": "The top-p value to use for sampling.\nattribute use_mlock: bool = False\uf0c1\nForce system to keep model in RAM.\nattribute use_mmap: Optional[bool] = True\uf0c1\nWhether to keep the model loaded in RAM\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\nattribute vocab_only: bool = False\uf0c1\nOnly load the vocabulary, no weights.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], 
langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-166", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-167", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)[source]\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1", 
"source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-168", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nstream(prompt, stop=None, run_manager=None)[source]\uf0c1\nYields result objects as they are generated in real time.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-169", "text": "Once that happens, this interface could change.\nIt also calls the callback manager\u2019s on_llm_new_token event with\nsimilar parameters to the OpenAI LLM class method of the same name.\nArgs:\nprompt: The prompt to pass into the model.\nstop: Optional list of stop words to use when generating.\nReturns: A generator representing the stream of tokens being generated.\nYields: Dictionary-like objects containing a string token and metadata.\nSee llama-cpp-python docs and below for more.\nExample:\nfrom langchain.llms import LlamaCpp\nllm = LlamaCpp(\n    model_path=\"/path/to/local/model.bin\",\n    temperature=0.5,\n)\nfor chunk in llm.stream(\"Ask 'Hi, how are you?' like a pirate:'\",\n        stop=[\"'\", \"\\n\"]):\n    result = chunk[\"choices\"][0]\n    print(result[\"text\"], end=\"\", flush=True)\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.CallbackManagerForLLMRun]) \u2013 \nReturn type\nGenerator[Dict, None, None]\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg.
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-170", "text": "Return a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.TextGen(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_url, max_new_tokens=250, do_sample=True, temperature=1.3, top_p=0.1, typical_p=1, epsilon_cutoff=0, eta_cutoff=0, repetition_penalty=1.18, top_k=40, min_length=0, no_repeat_ngram_size=0, num_beams=1, penalty_alpha=0, length_penalty=1, early_stopping=False, seed=-1, add_bos_token=True, truncation_length=2048, ban_eos_token=False, skip_special_tokens=True, stopping_strings=[], streaming=False)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around the text-generation-webui model.\nTo use, you should have text-generation-webui installed, a model loaded,\nand --api added as a command-line option.\nSuggested installation: use the one-click installer for your OS:\nhttps://github.com/oobabooga/text-generation-webui#one-click-installers\nParameters below are taken from the text-generation-webui api example:\nhttps://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py\nExample\nfrom langchain.llms import TextGen\nllm = TextGen(model_url=\"http://localhost:8500\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-171", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013
\ntags (Optional[List[str]]) \u2013 \nmodel_url (str) \u2013 \nmax_new_tokens (Optional[int]) \u2013 \ndo_sample (bool) \u2013 \ntemperature (Optional[float]) \u2013 \ntop_p (Optional[float]) \u2013 \ntypical_p (Optional[float]) \u2013 \nepsilon_cutoff (Optional[float]) \u2013 \neta_cutoff (Optional[float]) \u2013 \nrepetition_penalty (Optional[float]) \u2013 \ntop_k (Optional[float]) \u2013 \nmin_length (Optional[int]) \u2013 \nno_repeat_ngram_size (Optional[int]) \u2013 \nnum_beams (Optional[int]) \u2013 \npenalty_alpha (Optional[float]) \u2013 \nlength_penalty (Optional[float]) \u2013 \nearly_stopping (bool) \u2013 \nseed (int) \u2013 \nadd_bos_token (bool) \u2013 \ntruncation_length (Optional[int]) \u2013 \nban_eos_token (bool) \u2013 \nskip_special_tokens (bool) \u2013 \nstopping_strings (Optional[List[str]]) \u2013 \nstreaming (bool) \u2013 \nReturn type\nNone\nattribute add_bos_token: bool = True\uf0c1\nAdd the bos_token to the beginning of prompts.\nDisabling this can make the replies more creative.\nattribute ban_eos_token: bool = False\uf0c1\nBan the eos_token. 
Forces the model to never end the generation prematurely.\nattribute do_sample: bool = True\uf0c1\nDo sample\nattribute early_stopping: bool = False\uf0c1\nEarly stopping\nattribute epsilon_cutoff: Optional[float] = 0\uf0c1\nEpsilon cutoff\nattribute eta_cutoff: Optional[float] = 0\uf0c1\nETA cutoff\nattribute length_penalty: Optional[float] = 1\uf0c1\nLength Penalty\nattribute max_new_tokens: Optional[int] = 250\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-172", "text": "Length Penalty\nattribute max_new_tokens: Optional[int] = 250\uf0c1\nThe maximum number of tokens to generate.\nattribute min_length: Optional[int] = 0\uf0c1\nMinimum generation length in tokens.\nattribute model_url: str [Required]\uf0c1\nThe full URL to the textgen webui including http[s]://host:port\nattribute no_repeat_ngram_size: Optional[int] = 0\uf0c1\nIf not set to 0, specifies the length of token sets that are completely blocked\nfrom repeating at all. Higher values = blocks larger phrases,\nlower values = blocks words or letters from repeating.\nOnly 0 or high values are a good idea in most cases.\nattribute num_beams: Optional[int] = 1\uf0c1\nNumber of beams\nattribute penalty_alpha: Optional[float] = 0\uf0c1\nPenalty Alpha\nattribute repetition_penalty: Optional[float] = 1.18\uf0c1\nExponential penalty factor for repeating prior tokens. 1 means no penalty,\nhigher value = less repetition, lower value = more repetition.\nattribute seed: int = -1\uf0c1\nSeed (-1 for random)\nattribute skip_special_tokens: bool = True\uf0c1\nSkip special tokens. 
Some specific models need this unset.\nattribute stopping_strings: Optional[List[str]] = []\uf0c1\nA list of strings to stop generation when encountered.\nattribute streaming: bool = False\uf0c1\nWhether to stream the results, token by token (currently unimplemented).\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: Optional[float] = 1.3\uf0c1\nPrimary factor to control randomness of outputs. 0 = deterministic\n(only the most likely token is used). Higher value = more randomness.\nattribute top_k: Optional[float] = 40\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-173", "text": "attribute top_k: Optional[float] = 40\uf0c1\nSimilar to top_p, but select instead only the top_k most likely tokens.\nHigher value = higher range of possible random results.\nattribute top_p: Optional[float] = 0.1\uf0c1\nIf not set to 1, select tokens with probabilities adding up to less than this\nnumber. Higher value = higher range of possible random results.\nattribute truncation_length: Optional[int] = 2048\uf0c1\nTruncate the prompt up to this length. The leftmost tokens are removed if\nthe prompt exceeds this length. 
Most models require this to be at most 2048.\nattribute typical_p: Optional[float] = 1\uf0c1\nIf not set to 1, select only tokens that are at least this much more likely to\nappear than random tokens, given the prior text.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-174", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from 
messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-175", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-176", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-177", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.ManifestWrapper(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, llm_kwargs=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around HazyResearch\u2019s Manifest library.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nllm_kwargs (Optional[Dict]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-178", "text": "llm_kwargs (Optional[Dict]) \u2013 \nReturn type\nNone\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags 
(Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-179", "text": "stop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate 
(Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-180", "text": "generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, 
exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-181", "text": "Parameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-182", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Modal(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', model_kwargs=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Modal large language models.\nTo use, you should have the modal-client python package installed.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import Modal\nmodal = Modal(endpoint_url=\"\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nendpoint_url (str) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nReturn type\nNone\nattribute endpoint_url: str = ''\uf0c1\nmodel endpoint to use\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not\nexplicitly specified.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} 
+{"id": "ff6d0b7b0742-183", "text": "attribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-184", "text": "Predict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod 
construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-185", "text": "Parameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts 
(List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-186", "text": "include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from 
messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-187", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.MosaicML(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict', inject_instruction_format=False, model_kwargs=None, retry_sleep=1.0, mosaicml_api_token=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around MosaicML\u2019s LLM inference service.\nTo use, you should have the\nenvironment variable MOSAICML_API_TOKEN set with your API token, or pass\nit as a named parameter to the constructor.\nExample\nfrom langchain.llms import MosaicML\nendpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n)\nmosaic_llm = MosaicML(\n endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nendpoint_url (str) \u2013 \ninject_instruction_format (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-188", "text": "endpoint_url (str) \u2013 \ninject_instruction_format (bool) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \nretry_sleep (float) \u2013 \nmosaicml_api_token (Optional[str]) \u2013 \nReturn type\nNone\nattribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'\uf0c1\nEndpoint URL to use.\nattribute inject_instruction_format: bool = False\uf0c1\nWhether to inject the instruction format into the prompt.\nattribute model_kwargs: Optional[dict] = None\uf0c1\nKey word arguments to pass to the model.\nattribute retry_sleep: float = 
1.0\uf0c1\nHow long to try sleeping for if a rate limit is encountered\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-189", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 
\nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-190", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-191", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-192", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.NLPCloud(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='finetuned-gpt-neox-20b', temperature=0.7, min_length=1, max_length=256, length_no_input=True, remove_input=True, remove_end_sequence=True, bad_words=[], top_p=1, top_k=50, repetition_penalty=1.0, length_penalty=1.0, do_sample=True, num_beams=1, early_stopping=False, num_return_sequences=1, nlpcloud_api_key=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around NLPCloud large language models.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-193", "text": "Wrapper around NLPCloud large language models.\nTo use, you should have the nlpcloud python package installed, and the\nenvironment variable NLPCLOUD_API_KEY set with your API key.\nExample\nfrom langchain.llms import NLPCloud\nnlpcloud = NLPCloud(model=\"gpt-neox-20b\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel_name (str) \u2013 \ntemperature (float) \u2013 \nmin_length (int) \u2013 \nmax_length (int) \u2013 \nlength_no_input (bool) \u2013 \nremove_input (bool) \u2013 \nremove_end_sequence (bool) \u2013 \nbad_words (List[str]) \u2013 \ntop_p (int) \u2013 \ntop_k (int) \u2013 \nrepetition_penalty (float) \u2013 \nlength_penalty (float) \u2013 \ndo_sample (bool) \u2013 \nnum_beams (int) \u2013 \nearly_stopping 
(bool) \u2013 \nnum_return_sequences (int) \u2013 \nnlpcloud_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute bad_words: List[str] = []\uf0c1\nList of tokens not allowed to be generated.\nattribute do_sample: bool = True\uf0c1\nWhether to use sampling (True) or greedy decoding.\nattribute early_stopping: bool = False\uf0c1\nWhether to stop beam search at num_beams sentences.\nattribute length_no_input: bool = True\uf0c1\nWhether min_length and max_length should include the length of the input.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-194", "text": "Whether min_length and max_length should include the length of the input.\nattribute length_penalty: float = 1.0\uf0c1\nExponential penalty to the length.\nattribute max_length: int = 256\uf0c1\nThe maximum number of tokens to generate in the completion.\nattribute min_length: int = 1\uf0c1\nThe minimum number of tokens to generate in the completion.\nattribute model_name: str = 'finetuned-gpt-neox-20b'\uf0c1\nModel name to use.\nattribute num_beams: int = 1\uf0c1\nNumber of beams for beam search.\nattribute num_return_sequences: int = 1\uf0c1\nHow many completions to generate for each prompt.\nattribute remove_end_sequence: bool = True\uf0c1\nWhether or not to remove the end sequence token.\nattribute remove_input: bool = True\uf0c1\nRemove input text from API response\nattribute repetition_penalty: float = 1.0\uf0c1\nPenalizes repeated tokens. 
1.0 means no penalty.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use.\nattribute top_k: int = 50\uf0c1\nThe number of highest probability tokens to keep for top-k filtering.\nattribute top_p: int = 1\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-195", "text": "Parameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop 
(Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-196", "text": "langchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-197", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-198", "text": "exclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-199", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.OpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None)[source]\uf0c1\nBases: langchain.llms.openai.BaseOpenAI\nWrapper around OpenAI large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import OpenAI\nopenai = OpenAI(model_name=\"text-davinci-003\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (str) \u2013 \ntemperature (float) \u2013 \nmax_tokens (int) \u2013 \ntop_p (float) \u2013 \nfrequency_penalty (float) \u2013 \npresence_penalty (float) \u2013 \nn (int) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-200", "text": "presence_penalty (float) \u2013 \nn (int) \u2013 \nbest_of (int) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (Optional[str]) \u2013 \nopenai_api_base (Optional[str]) \u2013 \nopenai_organization (Optional[str]) \u2013 \nopenai_proxy (Optional[str]) \u2013 \nbatch_size (int) \u2013 \nrequest_timeout (Optional[Union[float, Tuple[float, float]]]) \u2013 \nlogit_bias (Optional[Dict[str, float]]) \u2013 \nmax_retries (int) \u2013 \nstreaming (bool) \u2013 \nallowed_special (Union[Literal['all'], typing.AbstractSet[str]]) \u2013 \ndisallowed_special (Union[Literal['all'], typing.Collection[str]]) \u2013 \ntiktoken_model_name (Optional[str]) \u2013 \nReturn type\nNone\nattribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}\uf0c1\nSet of special tokens that are allowed\u3002\nattribute batch_size: int = 20\uf0c1\nBatch size to use when passing multiple documents to generate.\nattribute best_of: int = 1\uf0c1\nGenerates best_of completions server-side and returns the \u201cbest\u201d.\nattribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'\uf0c1\nSet of special tokens that are not allowed\u3002\nattribute frequency_penalty: float = 0\uf0c1\nPenalizes repeated tokens according to frequency.\nattribute logit_bias: Optional[Dict[str, float]] [Optional]\uf0c1\nAdjust the probability of specific tokens being generated.\nattribute max_retries: int = 6\uf0c1\nMaximum number of retries to make when generating.\nattribute max_tokens: int = 256\uf0c1\nThe maximum number of tokens to generate in the completion.\n-1 returns as many tokens as possible given the prompt and", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-201", "text": "-1 returns as many tokens as possible given the prompt and\nthe models maximal context size.\nattribute model_kwargs: Dict[str, Any] 
[Optional]\uf0c1\nHolds any model parameters valid for create call not explicitly specified.\nattribute model_name: str = 'text-davinci-003' (alias 'model')\uf0c1\nModel name to use.\nattribute n: int = 1\uf0c1\nHow many completions to generate for each prompt.\nattribute presence_penalty: float = 0\uf0c1\nPenalizes repeated tokens.\nattribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None\uf0c1\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nattribute streaming: bool = False\uf0c1\nWhether to stream the results or not.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use.\nattribute tiktoken_model_name: Optional[str] = None\uf0c1\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit. By default, when set to None, this will\nbe the same as the embedding model name. However, there are some cases\nwhere you may want to use this Embedding class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. 
In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nattribute top_p: float = 1\uf0c1\nTotal probability mass of tokens to consider at each step.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-202", "text": "Total probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-203", "text": "str\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ncreate_llm_result(choices, prompts, token_usage)\uf0c1\nCreate the LLMResult from the choices and prompts.\nParameters\nchoices (Any) \u2013 \nprompts (List[str]) \u2013 \ntoken_usage (Dict[str, int]) \u2013 \nReturn type\nlangchain.schema.LLMResult\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-204", "text": "Return type\nlangchain.schema.LLMResult\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_sub_prompts(params, prompts, stop=None)\uf0c1\nGet the sub prompts for llm 
call.\nParameters\nparams (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \nReturn type\nList[List[str]]\nget_token_ids(text)\uf0c1\nGet the token IDs using the tiktoken package.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-205", "text": "get_token_ids(text)\uf0c1\nGet the token IDs using the tiktoken package.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nmax_tokens_for_prompt(prompt)\uf0c1\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt (str) \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nReturn type\nint\nExample\nmax_tokens = openai.max_tokens_for_prompt(\"Tell me a joke.\")\nstatic modelname_to_contextsize(modelname)\uf0c1\nCalculate the maximum context size for a model.\nParameters\nmodelname (str) \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size\nReturn type\nint\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-206", "text": "max_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nprep_streaming_params(stop=None)\uf0c1\nPrepare the params for streaming.\nParameters\nstop (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nstream(prompt, stop=None)\uf0c1\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt (str) \u2013 The prompts to pass into the model.\nstop (Optional[List[str]]) \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nReturn type\nGenerator\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n yield token\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-207", "text": "Parameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included 
in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty max_context_size: int\uf0c1\nGet max context size for this model.\nclass langchain.llms.OpenAIChat(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-3.5-turbo', model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_proxy=None, max_retries=6, prefix_messages=None, streaming=False, allowed_special={}, disallowed_special='all')[source]\uf0c1\nBases: langchain.llms.base.BaseLLM\nWrapper around OpenAI Chat large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import OpenAIChat\nopenaichat = OpenAIChat(model_name=\"gpt-3.5-turbo\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-208", "text": "Parameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel_name (str) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (Optional[str]) \u2013 
\nopenai_api_base (Optional[str]) \u2013 \nopenai_proxy (Optional[str]) \u2013 \nmax_retries (int) \u2013 \nprefix_messages (List) \u2013 \nstreaming (bool) \u2013 \nallowed_special (Union[Literal['all'], typing.AbstractSet[str]]) \u2013 \ndisallowed_special (Union[Literal['all'], typing.Collection[str]]) \u2013 \nReturn type\nNone\nattribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}\uf0c1\nSet of special tokens that are allowed.\nattribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'\uf0c1\nSet of special tokens that are not allowed.\nattribute max_retries: int = 6\uf0c1\nMaximum number of retries to make when generating.\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not explicitly specified.\nattribute model_name: str = 'gpt-3.5-turbo'\uf0c1\nModel name to use.\nattribute prefix_messages: List [Optional]\uf0c1\nSeries of messages for Chat input.\nattribute streaming: bool = False\uf0c1\nWhether to stream the results or not.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-209", "text": "Tags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-210", "text": "Predict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, 
MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-211", "text": "Parameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)[source]\uf0c1\nGet the token IDs using the tiktoken package.\nParameters\ntext (str) \u2013 \nReturn 
type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-212", "text": "include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-213", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.OpenLLM(model_name=None, *, model_id=None, server_url=None, server_type='http', embedded=True, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, llm_kwargs)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper for accessing OpenLLM, supporting both in-process model\ninstance and remote OpenLLM servers.\nTo use, you should have the openllm library installed:\npip install openllm\nLearn more at: https://github.com/bentoml/openllm\nExample running an LLM model locally managed by OpenLLM:\nfrom langchain.llms import OpenLLM\nllm = OpenLLM(\n model_name='flan-t5',\n model_id='google/flan-t5-large',\n)\nllm(\"What is the difference between a duck and a goose?\")\nFor all available supported models, you can run \u2018openllm models\u2019.\nIf you have an OpenLLM server running, you can also use it remotely:\nfrom langchain.llms import OpenLLM\nllm = OpenLLM(server_url='http://localhost:3000')\nllm(\"What is the 
difference between a duck and a goose?\")\nParameters\nmodel_name (Optional[str]) \u2013 \nmodel_id (Optional[str]) \u2013 \nserver_url (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-214", "text": "model_id (Optional[str]) \u2013 \nserver_url (Optional[str]) \u2013 \nserver_type (Literal['grpc', 'http']) \u2013 \nembedded (bool) \u2013 \ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_kwargs (Dict[str, Any]) \u2013 \nReturn type\nNone\nattribute embedded: bool = True\uf0c1\nInitialize this LLM instance in current process by default. Should\nonly be set to False when using in conjunction with BentoML Service.\nattribute llm_kwargs: Dict[str, Any] [Required]\uf0c1\nKeyword arguments to be passed to openllm.LLM\nattribute model_id: Optional[str] = None\uf0c1\nModel Id to use. If not provided, will use the default model for the model name.\nSee \u2018openllm models\u2019 for all available model variants.\nattribute model_name: Optional[str] = None\uf0c1\nModel name to use. See \u2018openllm models\u2019 for all available models.\nattribute server_type: ServerType = 'http'\uf0c1\nOptional server type. 
Either \u2018http\u2019 or \u2018grpc\u2019.\nattribute server_url: Optional[str] = None\uf0c1\nOptional server URL that currently runs a LLMServer with \u2018openllm start\u2019.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-215", "text": "Parameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from 
messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-216", "text": "langchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-217", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-218", "text": "exclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-219", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty runner: openllm.LLMRunner\uf0c1\nGet the underlying openllm.LLMRunner instance for integration with BentoML.\nExample:\n.. code-block:: python\nllm = OpenLLM(model_name=\u2019flan-t5\u2019,\nmodel_id=\u2019google/flan-t5-large\u2019,\nembedded=False,\n)\ntools = load_tools([\u201cserpapi\u201d, \u201cllm-math\u201d], llm=llm)\nagent = initialize_agent(\ntools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION\n)\nsvc = bentoml.Service(\u201clangchain-openllm\u201d, runners=[llm.runner])\n@svc.api(input=Text(), output=Text())\ndef chat(input_text: str):\nreturn agent.run(input_text)\nclass langchain.llms.OpenLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None)[source]\uf0c1\nBases: langchain.llms.openai.BaseOpenAI\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-220", "text": "callback_manager 
(Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (str) \u2013 \ntemperature (float) \u2013 \nmax_tokens (int) \u2013 \ntop_p (float) \u2013 \nfrequency_penalty (float) \u2013 \npresence_penalty (float) \u2013 \nn (int) \u2013 \nbest_of (int) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (Optional[str]) \u2013 \nopenai_api_base (Optional[str]) \u2013 \nopenai_organization (Optional[str]) \u2013 \nopenai_proxy (Optional[str]) \u2013 \nbatch_size (int) \u2013 \nrequest_timeout (Optional[Union[float, Tuple[float, float]]]) \u2013 \nlogit_bias (Optional[Dict[str, float]]) \u2013 \nmax_retries (int) \u2013 \nstreaming (bool) \u2013 \nallowed_special (Union[Literal['all'], typing.AbstractSet[str]]) \u2013 \ndisallowed_special (Union[Literal['all'], typing.Collection[str]]) \u2013 \ntiktoken_model_name (Optional[str]) \u2013 \nReturn type\nNone\nattribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}\uf0c1\nSet of special tokens that are allowed.\nattribute batch_size: int = 20\uf0c1\nBatch size to use when passing multiple documents to generate.\nattribute best_of: int = 1\uf0c1\nGenerates best_of completions server-side and returns the \u201cbest\u201d.\nattribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'\uf0c1\nSet of special tokens that are not allowed.\nattribute frequency_penalty: float = 0\uf0c1\nPenalizes repeated tokens according to frequency.\nattribute logit_bias: Optional[Dict[str, float]] [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-221", "text": "attribute logit_bias: Optional[Dict[str, float]] [Optional]\uf0c1\nAdjust the probability of specific tokens being generated.\nattribute max_retries: int = 6\uf0c1\nMaximum number of retries to make when generating.\nattribute max_tokens: int = 256\uf0c1\nThe maximum number of tokens to generate in the 
completion.\n-1 returns as many tokens as possible given the prompt and\nthe models maximal context size.\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not explicitly specified.\nattribute model_name: str = 'text-davinci-003' (alias 'model')\uf0c1\nModel name to use.\nattribute n: int = 1\uf0c1\nHow many completions to generate for each prompt.\nattribute presence_penalty: float = 0\uf0c1\nPenalizes repeated tokens.\nattribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None\uf0c1\nTimeout for requests to OpenAI completion API. Default is 600 seconds.\nattribute streaming: bool = False\uf0c1\nWhether to stream the results or not.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use.\nattribute tiktoken_model_name: Optional[str] = None\uf0c1\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit. By default, when set to None, this will\nbe the same as the embedding model name. However, there are some cases\nwhere you may want to use this Embedding class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-222", "text": "supported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. 
In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\nattribute top_p: float = 1\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-223", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-224", "text": "self (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ncreate_llm_result(choices, prompts, token_usage)\uf0c1\nCreate the LLMResult from the choices and prompts.\nParameters\nchoices (Any) \u2013 \nprompts (List[str]) \u2013 \ntoken_usage (Dict[str, int]) \u2013 \nReturn type\nlangchain.schema.LLMResult\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-225", "text": "Parameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_sub_prompts(params, prompts, stop=None)\uf0c1\nGet the sub prompts for llm call.\nParameters\nparams (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \nReturn type\nList[List[str]]\nget_token_ids(text)\uf0c1\nGet the token IDs using the tiktoken package.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nmax_tokens_for_prompt(prompt)\uf0c1\nCalculate the maximum number of tokens possible to generate for a prompt.\nParameters\nprompt (str) \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nReturn type\nint\nExample\nmax_tokens = openai.max_token_for_prompt(\"Tell me a joke.\")", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-226", "text": "int\nExample\nmax_tokens = openai.max_token_for_prompt(\"Tell me a joke.\")\nstatic modelname_to_contextsize(modelname)\uf0c1\nCalculate the maximum number of tokens 
possible to generate for a model.\nParameters\nmodelname (str) \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size\nReturn type\nint\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nprep_streaming_params(stop=None)\uf0c1\nPrepare the params for streaming.\nParameters\nstop (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nstream(prompt, stop=None)\uf0c1\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt (str) \u2013 The prompts to pass into the model.\nstop (Optional[List[str]]) \u2013 Optional list of stop words to use when generating.\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-227", "text": "stop (Optional[List[str]]) \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nReturn type\nGenerator\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n yield token\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty max_context_size: int\uf0c1\nGet max context size for this model.\nclass langchain.llms.Petals(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, tokenizer=None, model_name='bigscience/bloom-petals', temperature=0.7, max_new_tokens=256, top_p=0.9, top_k=None, do_sample=True, max_length=None, model_kwargs=None, huggingface_api_key=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Petals Bloom models.\nTo use, you should have the petals python package installed, and the\nenvironment variable HUGGINGFACE_API_KEY set with your API key.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-228", "text": "environment variable HUGGINGFACE_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.llms import petals\npetals = Petals()\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \ntokenizer (Any) \u2013 \nmodel_name (str) \u2013 \ntemperature (float) \u2013 \nmax_new_tokens (int) \u2013 \ntop_p (float) \u2013 \ntop_k (Optional[int]) \u2013 \ndo_sample (bool) \u2013 \nmax_length (Optional[int]) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nhuggingface_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute client: Any = None\uf0c1\nThe client to use for the API calls.\nattribute do_sample: bool = True\uf0c1\nWhether or not to use sampling; use greedy decoding otherwise.\nattribute max_length: Optional[int] 
= None\uf0c1\nThe maximum length of the sequence to be generated.\nattribute max_new_tokens: int = 256\uf0c1\nThe maximum number of new tokens to generate in the completion.\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call\nnot explicitly specified.\nattribute model_name: str = 'bigscience/bloom-petals'\uf0c1\nThe model to use.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-229", "text": "Tags to add to the run trace.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use\nattribute tokenizer: Any = None\uf0c1\nThe tokenizer to use for the API calls.\nattribute top_k: Optional[int] = None\uf0c1\nThe number of highest probability vocabulary tokens\nto keep for top-k-filtering.\nattribute top_p: float = 0.9\uf0c1\nThe cumulative probability for top-p sampling.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an 
LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-230", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-231", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-232", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-233", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.PipelineAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_key='', pipeline_kwargs=None, pipeline_api_key=None)[source]\uf0c1\nBases: langchain.llms.base.LLM, pydantic.main.BaseModel\nWrapper around PipelineAI large language models.\nTo use, you should have the pipeline-ai python package installed,\nand the environment variable PIPELINE_API_KEY set with your API key.\nAny parameters that are valid to be passed to the call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain import PipelineAI\npipeline = PipelineAI(pipeline_key=\"\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \npipeline_key (str) \u2013 \npipeline_kwargs (Dict[str, Any]) \u2013 \npipeline_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute pipeline_key: str = ''\uf0c1\nThe id or tag of the target pipeline", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-234", "text": "attribute pipeline_key: str = ''\uf0c1\nThe id or tag of the target pipeline\nattribute pipeline_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any pipeline parameters valid for create call not\nexplicitly specified.\nattribute tags: Optional[List[str]] = 
None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-235", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-236", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an 
LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-237", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-238", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.PredictionGuard(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='MPT-7B-Instruct', output=None, max_tokens=256, temperature=0.75, token=None, stop=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Prediction Guard large language models.\nTo use, you should have the predictionguard python package installed, and the\nenvironment variable PREDICTIONGUARD_TOKEN set with your access token, or pass\nit as a named parameter to the constructor. 
To use Prediction Guard\u2019s API along\nwith OpenAI models, set the environment variable OPENAI_API_KEY with your\nOpenAI API key as well.\nExample\npgllm = PredictionGuard(model=\"MPT-7B-Instruct\",\n token=\"my-access-token\",\n output={\n \"type\": \"boolean\"\n })\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-239", "text": "tags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (Optional[str]) \u2013 \noutput (Optional[Dict[str, Any]]) \u2013 \nmax_tokens (int) \u2013 \ntemperature (float) \u2013 \ntoken (Optional[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute max_tokens: int = 256\uf0c1\nDenotes the number of tokens to predict per generation.\nattribute model: Optional[str] = 'MPT-7B-Instruct'\uf0c1\nModel name to use.\nattribute output: Optional[Dict[str, Any]] = None\uf0c1\nThe output type or structure for controlling the LLM output.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.75\uf0c1\nA non-negative float that tunes the degree of randomness in generation.\nattribute token: Optional[str] = None\uf0c1\nYour Prediction Guard access token.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-240", "text": "tags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and 
change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-241", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-242", "text": 
"Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-243", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.PromptLayerOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None, pl_tags=None, return_pl_id=False)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-244", "text": "Bases: langchain.llms.openai.OpenAI\nWrapper around OpenAI large language models.\nTo use, you should have the openai and promptlayer python\npackage installed, and the environment variable OPENAI_API_KEY\nand PROMPTLAYER_API_KEY set with your openAI API key and\npromptlayer key respectively.\nAll parameters that can be passed to the OpenAI LLM can also\nbe passed here. 
The PromptLayerOpenAI LLM adds two optional\nParameters\npl_tags (Optional[List[str]]) \u2013 List of strings to tag the request with.\nreturn_pl_id (Optional[bool]) \u2013 If True, the PromptLayer request ID will be\nreturned in the generation_info field of the\nGeneration object.\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (str) \u2013 \ntemperature (float) \u2013 \nmax_tokens (int) \u2013 \ntop_p (float) \u2013 \nfrequency_penalty (float) \u2013 \npresence_penalty (float) \u2013 \nn (int) \u2013 \nbest_of (int) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (Optional[str]) \u2013 \nopenai_api_base (Optional[str]) \u2013 \nopenai_organization (Optional[str]) \u2013 \nopenai_proxy (Optional[str]) \u2013 \nbatch_size (int) \u2013 \nrequest_timeout (Optional[Union[float, Tuple[float, float]]]) \u2013 \nlogit_bias (Optional[Dict[str, float]]) \u2013 \nmax_retries (int) \u2013 \nstreaming (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-245", "text": "max_retries (int) \u2013 \nstreaming (bool) \u2013 \nallowed_special (Union[Literal['all'], typing.AbstractSet[str]]) \u2013 \ndisallowed_special (Union[Literal['all'], typing.Collection[str]]) \u2013 \ntiktoken_model_name (Optional[str]) \u2013 \nReturn type\nNone\nExample\nfrom langchain.llms import PromptLayerOpenAI\nopenai = PromptLayerOpenAI(model_name=\"text-davinci-003\")\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], 
langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-246", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn 
type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-247", "text": "self (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ncreate_llm_result(choices, prompts, token_usage)\uf0c1\nCreate the LLMResult from the choices and prompts.\nParameters\nchoices (Any) \u2013 \nprompts (List[str]) \u2013 \ntoken_usage (Dict[str, int]) \u2013 \nReturn type\nlangchain.schema.LLMResult\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-248", "text": "Parameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_sub_prompts(params, prompts, stop=None)\uf0c1\nGet the sub prompts for llm call.\nParameters\nparams (Dict[str, Any]) \u2013 \nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \nReturn type\nList[List[str]]\nget_token_ids(text)\uf0c1\nGet the token IDs using the tiktoken package.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nmax_tokens_for_prompt(prompt)\uf0c1\nCalculate the maximum number of tokens possible to generate for a 
prompt.\nParameters\nprompt (str) \u2013 The prompt to pass into the model.\nReturns\nThe maximum number of tokens to generate for a prompt.\nReturn type\nint\nExample\nmax_tokens = openai.max_tokens_for_prompt(\"Tell me a joke.\")", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-249", "text": "int\nExample\nmax_tokens = openai.max_tokens_for_prompt(\"Tell me a joke.\")\nstatic modelname_to_contextsize(modelname)\uf0c1\nCalculate the maximum number of tokens possible to generate for a model.\nParameters\nmodelname (str) \u2013 The modelname we want to know the context size for.\nReturns\nThe maximum context size\nReturn type\nint\nExample\nmax_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nprep_streaming_params(stop=None)\uf0c1\nPrepare the params for streaming.\nParameters\nstop (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nstream(prompt, stop=None)\uf0c1\nCall OpenAI with streaming flag and return the resulting generator.\nBETA: this is a beta feature while we figure out the right abstraction.\nOnce that happens, this interface could change.\nParameters\nprompt (str) \u2013 The prompt to pass into the model.\nstop (Optional[List[str]]) \u2013 Optional list of stop words to use when generating.\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-250", "text": "stop (Optional[List[str]]) \u2013 Optional list of stop words to use when generating.\nReturns\nA generator representing the stream of tokens from OpenAI.\nReturn type\nGenerator\nExample\ngenerator = openai.stream(\"Tell me a joke.\")\nfor token in generator:\n yield token\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty max_context_size: int\uf0c1\nGet max context size for this model.\nclass langchain.llms.PromptLayerOpenAIChat(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-3.5-turbo', model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_proxy=None, max_retries=6, prefix_messages=None, streaming=False, allowed_special={}, disallowed_special='all', pl_tags=None, return_pl_id=False)[source]\uf0c1\nBases: langchain.llms.openai.OpenAIChat\nWrapper around OpenAI large language models.\nTo use, you should have the openai and promptlayer python", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-251", "text": "To use, you should have the openai and promptlayer python\npackage installed, and the environment variable OPENAI_API_KEY\nand PROMPTLAYER_API_KEY set with your openAI API key and\npromptlayer key respectively.\nAll parameters that can be passed to the OpenAIChat LLM can also\nbe passed here. 
The PromptLayerOpenAIChat adds two optional parameters.\nParameters\npl_tags (Optional[List[str]]) \u2013 List of strings to tag the request with.\nreturn_pl_id (Optional[bool]) \u2013 If True, the PromptLayer request ID will be\nreturned in the generation_info field of the\nGeneration object.\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel_name (str) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (Optional[str]) \u2013 \nopenai_api_base (Optional[str]) \u2013 \nopenai_proxy (Optional[str]) \u2013 \nmax_retries (int) \u2013 \nprefix_messages (List) \u2013 \nstreaming (bool) \u2013 \nallowed_special (Union[Literal['all'], typing.AbstractSet[str]]) \u2013 \ndisallowed_special (Union[Literal['all'], typing.Collection[str]]) \u2013 \nReturn type\nNone\nExample\nfrom langchain.llms import PromptLayerOpenAIChat\nopenaichat = PromptLayerOpenAIChat(model_name=\"gpt-3.5-turbo\")\nattribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}\uf0c1\nSet of special tokens that are allowed.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-252", "text": "Set of special tokens that are allowed.\nattribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'\uf0c1\nSet of special tokens that are not allowed.\nattribute max_retries: int = 6\uf0c1\nMaximum number of retries to make when generating.\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not explicitly specified.\nattribute model_name: str = 'gpt-3.5-turbo'\uf0c1\nModel name to use.\nattribute prefix_messages: List [Optional]\uf0c1\nSeries of messages for Chat input.\nattribute streaming: 
bool = False\uf0c1\nWhether to stream the results or not.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-253", "text": "Parameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a 
new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-254", "text": "the new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs using the tiktoken package.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-255", "text": "Parameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 
\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-256", "text": ".. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.RWKV(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, tokens_path, strategy='cpu fp32', rwkv_verbose=True, temperature=1.0, top_p=0.5, penalty_alpha_frequency=0.4, penalty_alpha_presence=0.4, CHUNK_LEN=256, max_tokens_per_generation=256, client=None, tokenizer=None, pipeline=None, model_tokens=None, model_state=None)[source]\uf0c1\nBases: langchain.llms.base.LLM, pydantic.main.BaseModel\nWrapper around RWKV language models.\nTo use, you should have the rwkv python package installed, the\npre-trained model file, and the model\u2019s config information.\nExample\nfrom langchain.llms import RWKV\nmodel = RWKV(model=\"./models/rwkv-3b-fp16.bin\", strategy=\"cpu fp32\")\n# Simplest invocation", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-257", "text": "# Simplest invocation\nresponse = model(\"Once upon a time, \")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel (str) \u2013 \ntokens_path (str) \u2013 \nstrategy (str) \u2013 \nrwkv_verbose (bool) \u2013 \ntemperature (float) \u2013 \ntop_p (float) \u2013 \npenalty_alpha_frequency (float) \u2013 \npenalty_alpha_presence (float) \u2013 \nCHUNK_LEN (int) \u2013 \nmax_tokens_per_generation (int) \u2013 \nclient (Any) \u2013 \ntokenizer (Any) \u2013 \npipeline (Any) \u2013 \nmodel_tokens (Any) \u2013 \nmodel_state (Any) \u2013 \nReturn type\nNone\nattribute CHUNK_LEN: int = 256\uf0c1\nBatch size for prompt processing.\nattribute max_tokens_per_generation: int = 256\uf0c1\nMaximum number of tokens 
to generate.\nattribute model: str [Required]\uf0c1\nPath to the pre-trained RWKV model file.\nattribute penalty_alpha_frequency: float = 0.4\uf0c1\nPositive values penalize new tokens based on their existing frequency\nin the text so far, decreasing the model\u2019s likelihood to repeat the same\nline verbatim.\nattribute penalty_alpha_presence: float = 0.4\uf0c1\nPositive values penalize new tokens based on whether they appear\nin the text so far, increasing the model\u2019s likelihood to talk about\nnew topics.\nattribute rwkv_verbose: bool = True\uf0c1\nPrint debug information.\nattribute strategy: str = 'cpu fp32'\uf0c1\nThe strategy used to load and run the model, e.g. 'cpu fp32'.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-258", "text": "attribute strategy: str = 'cpu fp32'\uf0c1\nThe strategy used to load and run the model, e.g. 'cpu fp32'.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 1.0\uf0c1\nThe temperature to use for sampling.\nattribute tokens_path: str [Required]\uf0c1\nPath to the RWKV tokens file.\nattribute top_p: float = 0.5\uf0c1\nThe top-p value to use for sampling.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-259", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-260", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-261", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-262", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Replicate(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None, replicate_api_token=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Replicate models.\nTo use, you should have the replicate python package installed,\nand the environment variable REPLICATE_API_TOKEN set with your API token.\nYou can find your token here: https://replicate.com/account\nThe model param is required, but any other model parameters can also\nbe passed in with the format input={model_param: value, \u2026}\nExample\nfrom langchain.llms import Replicate\nreplicate = Replicate(model=\"stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478\",\n input={\"image_dimensions\": \"512x512\"})\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-263", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nmodel (str) \u2013 \ninput (Dict[str, Any]) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nreplicate_api_token (Optional[str]) \u2013 \nReturn type\nNone\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to 
print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-264", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or 
pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-265", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-266", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 
type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-267", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.SagemakerEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around custom Sagemaker Inference Endpoints.\nTo use, you must supply the endpoint name from your deployed\nSagemaker model & the region where it is deployed.\nTo authenticate, the AWS client uses the following methods to\nautomatically load credentials:\nhttps://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nIf a specific credential profile should be used, you must pass\nthe name of the profile from the ~/.aws/credentials file that is to be used.\nMake sure the credentials / roles used have the required policies to\naccess the Sagemaker endpoint.\nSee: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-268", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nendpoint_name (str) \u2013 \nregion_name (str) \u2013 \ncredentials_profile_name (Optional[str]) \u2013 \ncontent_handler (langchain.llms.sagemaker_endpoint.LLMContentHandler) \u2013 \nmodel_kwargs (Optional[Dict]) \u2013 \nendpoint_kwargs (Optional[Dict]) \u2013 \nReturn type\nNone\nattribute content_handler: 
langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]\uf0c1\nThe content handler class that provides an input and\noutput transform functions to handle formats between LLM\nand the endpoint.\nattribute credentials_profile_name: Optional[str] = None\uf0c1\nThe name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\nhas either access keys or role information specified.\nIf not specified, the default credential profile or, if on an EC2 instance,\ncredentials from IMDS will be used.\nSee: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\nattribute endpoint_kwargs: Optional[Dict] = None\uf0c1\nOptional attributes passed to the invoke_endpoint\nfunction. See `boto3`_. docs for more info.\n.. _boto3: \nattribute endpoint_name: str = ''\uf0c1\nThe name of the endpoint from the deployed Sagemaker model.\nMust be unique within an AWS Region.\nattribute model_kwargs: Optional[Dict] = None\uf0c1\nKey word arguments to pass to the model.\nattribute region_name: str = ''\uf0c1\nThe aws region where the Sagemaker model is deployed, eg. 
us-west-2.\nattribute tags: Optional[List[str]] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-269", "text": "attribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-270", "text": "kwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from 
messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-271", "text": "Run the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-272", "text": "include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-273", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.SelfHostedHuggingFaceLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=, hardware=None, model_load_fn=, load_fn_kwargs=None, model_reqs=['./', 'transformers', 'torch'], model_id='gpt2', task='text-generation', device=0, model_kwargs=None)[source]\uf0c1\nBases: langchain.llms.self_hosted.SelfHostedPipeline\nWrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another cloud\nlike Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nOnly supports text-generation, text2text-generation and summarization for now.\nExample using from_model_id:from langchain.llms import SelfHostedHuggingFaceLLM\nimport runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nhf = SelfHostedHuggingFaceLLM(\n model_id=\"google/flan-t5-large\", task=\"text2text-generation\",", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-274", "text": "model_id=\"google/flan-t5-large\", task=\"text2text-generation\",\n hardware=gpu\n)\nExample passing fn that 
generates a pipeline (bc the pipeline is not serializable):from langchain.llms import SelfHostedHuggingFaceLLM\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ndef get_pipeline():\n model_id = \"gpt2\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n pipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer\n )\n return pipe\nhf = SelfHostedHuggingFaceLLM(\n model_load_fn=get_pipeline, model_id=\"gpt2\", hardware=gpu)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \npipeline_ref (Any) \u2013 \nclient (Any) \u2013 \ninference_fn (Callable) \u2013 \nhardware (Any) \u2013 \nmodel_load_fn (Callable) \u2013 \nload_fn_kwargs (Optional[dict]) \u2013 \nmodel_reqs (List[str]) \u2013 \nmodel_id (str) \u2013 \ntask (str) \u2013 \ndevice (int) \u2013 \nmodel_kwargs (Optional[dict]) \u2013 \nReturn type\nNone\nattribute device: int = 0\uf0c1\nDevice to use for inference. 
-1 for CPU, 0 for GPU, 1 for second GPU, etc.\nattribute hardware: Any = None\uf0c1\nRemote hardware to send the inference function to.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-275", "text": "attribute hardware: Any = None\uf0c1\nRemote hardware to send the inference function to.\nattribute inference_fn: Callable = \uf0c1\nInference function to send to the remote hardware.\nattribute load_fn_kwargs: Optional[dict] = None\uf0c1\nKey word arguments to pass to the model load function.\nattribute model_id: str = 'gpt2'\uf0c1\nHugging Face model_id to load the model.\nattribute model_kwargs: Optional[dict] = None\uf0c1\nKey word arguments to pass to the model.\nattribute model_load_fn: Callable = \uf0c1\nFunction to load the model remotely on the server.\nattribute model_reqs: List[str] = ['./', 'transformers', 'torch']\uf0c1\nRequirements to install on hardware to inference the model.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute task: str = 'text-generation'\uf0c1\nHugging Face task (\u201ctext-generation\u201d, \u201ctext2text-generation\u201d or\n\u201csummarization\u201d).\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-276", "text": "Parameters\nprompts (List[str]) \u2013 \nstop 
(Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-277", "text": "Model\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in 
new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)\uf0c1\nInit the SelfHostedPipeline from a pipeline object or string.\nParameters\npipeline (Any) \u2013 \nhardware (Any) \u2013 \nmodel_reqs (Optional[List[str]]) \u2013 \ndevice (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.llms.base.LLM\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-278", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of 
tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-279", "text": "exclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-280", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.SelfHostedPipeline(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'])[source]\uf0c1\nBases: langchain.llms.base.LLM\nRun model inference on self-hosted remote hardware.\nSupported hardware includes auto-launched instances on AWS, GCP, Azure,\nand Lambda, as well as servers specified\nby IP address and SSH credentials (such as on-prem, or another\ncloud like Paperspace, Coreweave, etc.).\nTo use, you should have the runhouse python package installed.\nExample for custom pipeline and inference functions:from langchain.llms import SelfHostedPipeline\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\nimport runhouse as rh\ndef load_pipeline():\n tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\n model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n return pipeline(\n \"text-generation\", model=model, 
tokenizer=tokenizer,\n max_new_tokens=10\n )\ndef inference_fn(pipeline, prompt, stop = None):\n return pipeline(prompt)[0][\"generated_text\"]\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nllm = SelfHostedPipeline(\n model_load_fn=load_pipeline,\n hardware=gpu,\n model_reqs=model_reqs, inference_fn=inference_fn\n)\nExample for <2GB model (can be serialized and sent directly to the server):from langchain.llms import SelfHostedPipeline", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-281", "text": "import runhouse as rh\ngpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\nmy_model = ...\nllm = SelfHostedPipeline.from_pipeline(\n pipeline=my_model,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nExample passing model path for larger models:from langchain.llms import SelfHostedPipeline\nimport runhouse as rh\nimport pickle\nfrom transformers import pipeline\ngenerator = pipeline(model=\"gpt2\")\nrh.blob(pickle.dumps(generator), path=\"models/pipeline.pkl\"\n ).save().to(gpu, path=\"models\")\nllm = SelfHostedPipeline.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n)\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \npipeline_ref (Any) \u2013 \nclient (Any) \u2013 \ninference_fn (Callable) \u2013 \nhardware (Any) \u2013 \nmodel_load_fn (Callable) \u2013 \nload_fn_kwargs (Optional[dict]) \u2013 \nmodel_reqs (List[str]) \u2013 \nReturn type\nNone\nattribute hardware: Any = None\uf0c1\nRemote hardware to send the inference function to.\nattribute inference_fn: Callable = \uf0c1\nInference function to send to the remote 
hardware.\nattribute load_fn_kwargs: Optional[dict] = None\uf0c1\nKey word arguments to pass to the model load function.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-282", "text": "Key word arguments to pass to the model load function.\nattribute model_load_fn: Callable [Required]\uf0c1\nFunction to load the model remotely on the server.\nattribute model_reqs: List[str] = ['./', 'torch']\uf0c1\nRequirements to install on hardware to inference the model.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": 
"ff6d0b7b0742-283", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-284", "text": "Returns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)[source]\uf0c1\nInit the SelfHostedPipeline from a pipeline object or string.\nParameters\npipeline (Any) \u2013 \nhardware (Any) \u2013 \nmodel_reqs (Optional[List[str]]) \u2013 \ndevice (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.llms.base.LLM\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": 
"https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-285", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-286", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\u201dpath/llm.yaml\u201d)\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.StochasticAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url='', model_kwargs=None, stochasticai_api_key=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around StochasticAI large language models.\nTo use, you should have the environment variable STOCHASTICAI_API_KEY\nset with your API key.\nExample\nfrom langchain.llms import StochasticAI\nstochasticai = StochasticAI(api_url=\"\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-287", "text": "Parameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \napi_url (str) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nstochasticai_api_key (Optional[str]) \u2013 \nReturn type\nNone\nattribute api_url: str = ''\uf0c1\nModel name to use.\nattribute model_kwargs: 
Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not\nexplicitly specified.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-288", "text": "kwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages 
(List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-289", "text": "exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-290", "text": "Get the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-291", "text": "save(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.VertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='text-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None, tuned_model_name=None)[source]\uf0c1\nBases: langchain.llms.vertexai._VertexAICommon, langchain.llms.base.LLM\nWrapper around Google Vertex AI large language models.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-292", "text": "Parameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (_LanguageModel) \u2013 \nmodel_name (str) \u2013 \ntemperature (float) \u2013 \nmax_output_tokens (int) \u2013 \ntop_p (float) \u2013 \ntop_k (int) \u2013 \nstop (Optional[List[str]]) \u2013 \nproject (Optional[str]) \u2013 \nlocation (str) \u2013 \ncredentials (Any) \u2013 \ntuned_model_name (Optional[str]) \u2013 \nReturn type\nNone\nattribute credentials: Any = None\uf0c1\nThe default custom credentials (google.auth.credentials.Credentials) to use\nattribute location: str = 'us-central1'\uf0c1\nThe default location to use when making API calls.\nattribute max_output_tokens: int = 128\uf0c1\nToken limit determines the maximum amount of text output from one prompt.\nattribute model_name: str = 'text-bison'\uf0c1\nThe name 
of the Vertex AI large language model.\nattribute project: Optional[str] = None\uf0c1\nThe default GCP project to use when making Vertex API calls.\nattribute stop: Optional[List[str]] = None\uf0c1\nOptional list of stop words to use when generating.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: float = 0.0\uf0c1\nSampling temperature, it controls the degree of randomness in token selection.\nattribute top_k: int = 40\uf0c1\nHow the model selects tokens for output: the next token is selected from\nthe top_k most probable tokens.\nattribute top_p: float = 0.95\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-293", "text": "attribute top_p: float = 0.95\uf0c1\nTokens are selected from most probable to least until the sum of their\nprobabilities equals the top_p value.\nattribute tuned_model_name: Optional[str] = None\uf0c1\nThe name of a tuned model. If provided, model_name is ignored.\nattribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts 
(List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-294", "text": "Predict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-295", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON 
representation of the model, include and exclude arguments as per dict().", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-296", "text": "Generate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-297", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.llms.Writer(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, writer_org_id=None, model_id='palmyra-instruct', min_tokens=None, max_tokens=None, temperature=None, top_p=None, stop=None, presence_penalty=None, repetition_penalty=None, best_of=None, logprobs=False, n=None, writer_api_key=None, base_url=None)[source]\uf0c1\nBases: langchain.llms.base.LLM\nWrapper around Writer large language models.\nTo use, you should have the environment variables WRITER_API_KEY and\nWRITER_ORG_ID set with your API key and organization ID respectively.\nExample\nfrom langchain import Writer\nwriter = Writer(model_id=\"palmyra-base\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nwriter_org_id (Optional[str]) \u2013 \nmodel_id (str) \u2013 \nmin_tokens (Optional[int]) \u2013 \nmax_tokens (Optional[int]) \u2013 \ntemperature (Optional[float]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-298", "text": "max_tokens (Optional[int]) \u2013 \ntemperature (Optional[float]) \u2013 \ntop_p (Optional[float]) \u2013 \nstop (Optional[List[str]]) \u2013 \npresence_penalty (Optional[float]) \u2013 \nrepetition_penalty (Optional[float]) \u2013 \nbest_of (Optional[int]) 
\u2013 \nlogprobs (bool) \u2013 \nn (Optional[int]) \u2013 \nwriter_api_key (Optional[str]) \u2013 \nbase_url (Optional[str]) \u2013 \nReturn type\nNone\nattribute base_url: Optional[str] = None\uf0c1\nBase url to use, if None decides based on model name.\nattribute best_of: Optional[int] = None\uf0c1\nGenerates this many completions server-side and returns the \u201cbest\u201d.\nattribute logprobs: bool = False\uf0c1\nWhether to return log probabilities.\nattribute max_tokens: Optional[int] = None\uf0c1\nMaximum number of tokens to generate.\nattribute min_tokens: Optional[int] = None\uf0c1\nMinimum number of tokens to generate.\nattribute model_id: str = 'palmyra-instruct'\uf0c1\nModel name to use.\nattribute n: Optional[int] = None\uf0c1\nHow many completions to generate.\nattribute presence_penalty: Optional[float] = None\uf0c1\nPenalizes repeated tokens regardless of frequency.\nattribute repetition_penalty: Optional[float] = None\uf0c1\nPenalizes repeated tokens according to frequency.\nattribute stop: Optional[List[str]] = None\uf0c1\nSequences when completion generation will stop.\nattribute tags: Optional[List[str]] = None\uf0c1\nTags to add to the run trace.\nattribute temperature: Optional[float] = None\uf0c1\nWhat sampling temperature to use.\nattribute top_p: Optional[float] = None\uf0c1\nTotal probability mass of tokens to consider at each step.\nattribute verbose: bool [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-299", "text": "attribute verbose: bool [Optional]\uf0c1\nWhether to print out response text.\nattribute writer_api_key: Optional[str] = None\uf0c1\nWriter API key.\nattribute writer_org_id: Optional[str] = None\uf0c1\nWriter organization ID.\n__call__(prompt, stop=None, callbacks=None, **kwargs)\uf0c1\nCheck Cache and run the LLM on the given prompt and input.\nParameters\nprompt (str) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nasync apredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-300", "text": "stop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nasync apredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set 
(Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn a dictionary of the LLM.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\ngenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"} +{"id": "ff6d0b7b0742-301", "text": "generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)\uf0c1\nRun the LLM on the given prompt and input.\nParameters\nprompts (List[str]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\ngenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)\uf0c1\nTake in a list of prompt values and return an LLMResult.\nParameters\nprompts (List[langchain.schema.PromptValue]) \u2013 \nstop (Optional[List[str]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.LLMResult\nget_num_tokens(text)\uf0c1\nGet the number of 
tokens present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nint\nget_num_tokens_from_messages(messages)\uf0c1\nGet the number of tokens in the message.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)\uf0c1\nGet the token IDs present in the text.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-302", "text": "Parameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\npredict(text, *, stop=None, **kwargs)\uf0c1\nPredict text from text.\nParameters\ntext (str) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\npredict_messages(messages, *, stop=None, **kwargs)\uf0c1\nPredict message from messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nstop (Optional[Sequence[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nsave(file_path)\uf0c1\nSave the LLM.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the LLM to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nllm.save(file_path=\"path/llm.yaml\")\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "ff6d0b7b0742-303", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/llms.html"}
+{"id": "acf90d640cc0-0", "text": "Base classes\uf0c1\nCommon schema objects.\nlangchain.schema.get_buffer_string(messages, human_prefix='Human', ai_prefix='AI')[source]\uf0c1\nGet buffer string of messages.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nhuman_prefix (str) \u2013 \nai_prefix (str) \u2013 \nReturn type\nstr\nclass langchain.schema.AgentAction(tool, tool_input, log)[source]\uf0c1\nBases: object\nAgent\u2019s action to take.\nParameters\ntool (str) \u2013 \ntool_input (Union[str, dict]) \u2013 \nlog (str) \u2013 \nReturn type\nNone\nclass langchain.schema.AgentFinish(return_values, log)[source]\uf0c1\nBases: NamedTuple\nAgent\u2019s return value.\nParameters\nreturn_values (dict) \u2013 \nlog (str) \u2013 \nreturn_values: dict\uf0c1\nAlias for field number 0\nlog: str\uf0c1\nAlias for field number 1\ncount(value, /)\uf0c1\nReturn number of 
occurrences of value.\nindex(value, start=0, stop=9223372036854775807, /)\uf0c1\nReturn first index of value.\nRaises ValueError if the value is not present.\nclass langchain.schema.Generation(*, text, generation_info=None)[source]\uf0c1\nBases: langchain.load.serializable.Serializable\nOutput of a single generation.\nParameters\ntext (str) \u2013 \ngeneration_info (Optional[Dict[str, Any]]) \u2013 \nReturn type\nNone\nattribute generation_info: Optional[Dict[str, Any]] = None\uf0c1\nRaw generation info response from the provider\nattribute text: str [Required]\uf0c1\nGenerated text output.\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-1", "text": "Default values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-2", "text": "exclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 
\nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nThis class is LangChain serializable.", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-3", "text": "property lc_serializable: bool\uf0c1\nThis class is LangChain serializable.\nclass langchain.schema.BaseMessage(*, content, additional_kwargs=None)[source]\uf0c1\nBases: langchain.load.serializable.Serializable\nMessage object.\nParameters\ncontent (str) \u2013 \nadditional_kwargs (dict) \u2013 \nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-4", "text": "Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this 
Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-5", "text": "constructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nThis class is LangChain serializable.\nabstract property type: str\uf0c1\nType of the message, used for serialization.\nclass langchain.schema.HumanMessage(*, content, additional_kwargs=None, example=False)[source]\uf0c1\nBases: langchain.schema.BaseMessage\nType of message that is spoken by the human.\nParameters\ncontent (str) \u2013 \nadditional_kwargs (dict) \u2013 \nexample (bool) \u2013 \nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this 
takes precedence over include", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-6", "text": "update (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-7", "text": "models_as_dict (bool) \u2013 \ndumps_kwargs (Any) 
\u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nThis class is LangChain serializable.\nproperty type: str\uf0c1\nType of the message, used for serialization.\nclass langchain.schema.AIMessage(*, content, additional_kwargs=None, example=False)[source]\uf0c1\nBases: langchain.schema.BaseMessage\nType of message that is spoken by the AI.\nParameters\ncontent (str) \u2013 \nadditional_kwargs (dict) \u2013 \nexample (bool) \u2013 \nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-8", "text": "Model\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in 
new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-9", "text": "Parameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) 
\u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nThis class is LangChain serializable.\nproperty type: str\uf0c1\nType of the message, used for serialization.\nclass langchain.schema.SystemMessage(*, content, additional_kwargs=None)[source]\uf0c1\nBases: langchain.schema.BaseMessage\nType of message that is a system message.\nParameters\ncontent (str) \u2013 \nadditional_kwargs (dict) \u2013 \nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-10", "text": "Return type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude 
(Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-11", "text": "exclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 
\nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-12", "text": "property lc_serializable: bool\uf0c1\nThis class is LangChain serializable.\nproperty type: str\uf0c1\nType of the message, used for serialization.\nclass langchain.schema.FunctionMessage(*, content, additional_kwargs=None, name)[source]\uf0c1\nBases: langchain.schema.BaseMessage\nParameters\ncontent (str) \u2013 \nadditional_kwargs (dict) \u2013 \nname (str) \u2013 \nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude 
(Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-13", "text": "Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn 
type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-14", "text": "constructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nThis class is LangChain serializable.\nproperty type: str\uf0c1\nType of the message, used for serialization.\nclass langchain.schema.ChatMessage(*, content, additional_kwargs=None, role)[source]\uf0c1\nBases: langchain.schema.BaseMessage\nType of message with arbitrary speaker.\nParameters\ncontent (str) \u2013 \nadditional_kwargs (dict) \u2013 \nrole (str) \u2013 \nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude 
(Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-15", "text": "the new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn 
type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-16", "text": "Return type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nThis class is LangChain serializable.\nproperty type: str\uf0c1\nType of the message, used for serialization.\nlangchain.schema.messages_to_dict(messages)[source]\uf0c1\nConvert messages to dict.\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 List of messages to convert.\nReturns\nList of dicts.\nReturn type\nList[dict]\nlangchain.schema.messages_from_dict(messages)[source]\uf0c1\nConvert messages from dict.\nParameters\nmessages (List[dict]) \u2013 List of messages (dicts) to convert.\nReturns\nList of messages (BaseMessages).\nReturn type\nList[langchain.schema.BaseMessage]\nclass langchain.schema.ChatGeneration(*, text='', generation_info=None, message)[source]\uf0c1\nBases: langchain.schema.Generation\nOutput of a single generation.\nParameters\ntext (str) \u2013 \ngeneration_info (Optional[Dict[str, Any]]) \u2013 \nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-17", "text": "message (langchain.schema.BaseMessage) 
\u2013 \nReturn type\nNone\nattribute generation_info: Optional[Dict[str, Any]] = None\uf0c1\nRaw generation info response from the provider\nattribute text: str = ''\uf0c1\nGenerated text output.\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-18", "text": "include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns 
(Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-19", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nThis class is LangChain serializable.\nclass langchain.schema.RunInfo(*, run_id)[source]\uf0c1\nBases: pydantic.main.BaseModel\nClass that contains all relevant metadata for a Run.\nParameters\nrun_id (uuid.UUID) \u2013 \nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-20", "text": "self (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-21", "text": "Parameters\nlocalns (Any) \u2013 \nReturn type\nNone\nclass langchain.schema.ChatResult(*, generations, llm_output=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nClass that contains all relevant information for a Chat Result.\nParameters\ngenerations (List[langchain.schema.ChatGeneration]) \u2013 \nllm_output (Optional[dict]) \u2013 \nReturn type\nNone\nattribute generations: List[langchain.schema.ChatGeneration] [Required]\uf0c1\nList of the things generated.\nattribute llm_output: Optional[dict] = None\uf0c1\nFor arbitrary LLM provider specific output.\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-22", "text": "self (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns 
(Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-23", "text": "Parameters\nlocalns (Any) \u2013 \nReturn type\nNone\nclass langchain.schema.LLMResult(*, generations, llm_output=None, run=None)[source]\uf0c1\nBases: pydantic.main.BaseModel\nClass that contains all relevant information for an LLM Result.\nParameters\ngenerations (List[List[langchain.schema.Generation]]) \u2013 \nllm_output (Optional[dict]) \u2013 \nrun (Optional[List[langchain.schema.RunInfo]]) \u2013 \nReturn type\nNone\nattribute generations: List[List[langchain.schema.Generation]] [Required]\uf0c1\nList of the things generated. This is List[List[]] because\neach input could have multiple generations.\nattribute llm_output: Optional[dict] = None\uf0c1\nFor arbitrary LLM provider specific output.\nattribute run: Optional[List[langchain.schema.RunInfo]] = None\uf0c1\nRun metadata.\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-24", "text": "update (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\nflatten()[source]\uf0c1\nFlatten generations into a single list.\nReturn type\nList[langchain.schema.LLMResult]\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-25", "text": "exclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nclassmethod 
update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nclass langchain.schema.PromptValue[source]\uf0c1\nBases: langchain.load.serializable.Serializable, abc.ABC\nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-26", "text": "self (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nabstract to_messages()[source]\uf0c1\nReturn prompt as messages.\nReturn type\nList[langchain.schema.BaseMessage]\nabstract 
to_string()[source]\uf0c1\nReturn prompt as string.", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-27", "text": "abstract to_string()[source]\uf0c1\nReturn prompt as string.\nReturn type\nstr\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.schema.BaseMemory[source]\uf0c1\nBases: langchain.load.serializable.Serializable, abc.ABC\nBase interface for memory in chains.\nReturn type\nNone\nabstract clear()[source]\uf0c1\nClear memory contents.\nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-28", "text": "Duplicate a model, optionally choose which fields to 
include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-29", "text": "include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 
\nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nabstract load_memory_variables(inputs)[source]\uf0c1\nReturn key-value pairs given the text input to the chain.\nIf None, return all memories\nParameters\ninputs (Dict[str, Any]) \u2013 \nReturn type\nDict[str, Any]\nabstract save_context(inputs, outputs)[source]\uf0c1\nSave the context of this model run to memory.\nParameters\ninputs (Dict[str, Any]) \u2013 \noutputs (Dict[str, str]) \u2013 \nReturn type\nNone\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-30", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nabstract property memory_variables: List[str]\uf0c1\nInput keys this memory class will load dynamically.\nclass langchain.schema.BaseChatMessageHistory[source]\uf0c1\nBases: abc.ABC\nBase interface for chat message history\nSee ChatMessageHistory for default implementation.\nadd_user_message(message)[source]\uf0c1\nAdd a user message to the store\nParameters\nmessage (str) \u2013 \nReturn type\nNone\nadd_ai_message(message)[source]\uf0c1\nAdd an AI message to the store\nParameters\nmessage (str) \u2013 \nReturn type\nNone\nadd_message(message)[source]\uf0c1\nAdd a self-created message to the store\nParameters\nmessage (langchain.schema.BaseMessage) \u2013 \nReturn type\nNone\nabstract clear()[source]\uf0c1\nRemove all messages from the store\nReturn type\nNone\nclass langchain.schema.Document(*, page_content, metadata=None)[source]\uf0c1\nBases: langchain.load.serializable.Serializable\nInterface for interacting with a document.\nParameters\npage_content (str) \u2013 \nmetadata (dict) \u2013 \nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters", "source": 
"https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-31", "text": "Duplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1\nGenerate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-32", "text": "include (Optional[Union[AbstractSetIntStr, 
MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.schema.BaseRetriever[source]\uf0c1\nBases: abc.ABC\nBase interface for retrievers.\nabstract get_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nabstract async aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-33", "text": "Get documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nlangchain.schema.Memory\uf0c1\nalias of langchain.schema.BaseMemory\nclass langchain.schema.BaseLLMOutputParser[source]\uf0c1\nBases: langchain.load.serializable.Serializable, abc.ABC, Generic[langchain.schema.T]\nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 
values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-34", "text": "Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nReturn type\nDictStrAny\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nabstract parse_result(result)[source]\uf0c1\nParse LLM Result.\nParameters\nresult (List[langchain.schema.Generation]) \u2013 \nReturn type\nlangchain.schema.T\nclassmethod 
update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-35", "text": "Return type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.schema.BaseOutputParser[source]\uf0c1\nBases: langchain.schema.BaseLLMOutputParser, abc.ABC, Generic[langchain.schema.T]\nClass to parse the output of an LLM call.\nOutput parsers help structure language model responses.\nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over 
include", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-36", "text": "update (Optional[DictStrAny]) \u2013 values to change/add in the new model. Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)[source]\uf0c1\nReturn dictionary representation of output parser.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nget_format_instructions()[source]\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nabstract parse(text)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext (str) \u2013 output of language model\nReturns\nstructured output\nReturn type\nlangchain.schema.T", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-37", "text": "Returns\nstructured output\nReturn 
type\nlangchain.schema.T\nparse_result(result)[source]\uf0c1\nParse LLM Result.\nParameters\nresult (List[langchain.schema.Generation]) \u2013 \nReturn type\nlangchain.schema.T\nparse_with_prompt(completion, prompt)[source]\uf0c1\nOptional method to parse the output of an LLM call with a prompt.\nThe prompt is largely provided in the event the OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion (str) \u2013 output of language model\nprompt (langchain.schema.PromptValue) \u2013 prompt value\nReturns\nstructured output\nReturn type\nAny\nclassmethod update_forward_refs(**localns)\uf0c1\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.schema.NoOpOutputParser[source]\uf0c1\nBases: langchain.schema.BaseOutputParser[str]\nOutput parser that just returns the text as is.\nReturn type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-38", "text": "Return type\nNone\nclassmethod construct(_fields_set=None, **values)\uf0c1\nCreates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\nDefault values are respected, but no other validation is performed.\nBehaves as if Config.extra = \u2018allow\u2019 was set since it adds all passed values\nParameters\n_fields_set (Optional[SetStr]) \u2013 \nvalues (Any) \u2013 \nReturn type\nModel\ncopy(*, include=None, exclude=None, update=None, deep=False)\uf0c1\nDuplicate a model, optionally choose which fields to include, exclude and change.\nParameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to include in new model\nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 fields to exclude from new model, as with values this takes precedence over include\nupdate (Optional[DictStrAny]) \u2013 values to change/add in the new model. 
Note: the data is not validated before creating\nthe new model: you should trust this data\ndeep (bool) \u2013 set to True to make a deep copy of the model\nself (Model) \u2013 \nReturns\nnew model instance\nReturn type\nModel\ndict(**kwargs)\uf0c1\nReturn dictionary representation of output parser.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nget_format_instructions()\uf0c1\nInstructions on how the LLM output should be formatted.\nReturn type\nstr\njson(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)\uf0c1\nGenerate a JSON representation of the model, include and exclude arguments as per dict().\nencoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-39", "text": "Parameters\ninclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nexclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) \u2013 \nby_alias (bool) \u2013 \nskip_defaults (Optional[bool]) \u2013 \nexclude_unset (bool) \u2013 \nexclude_defaults (bool) \u2013 \nexclude_none (bool) \u2013 \nencoder (Optional[Callable[[Any], Any]]) \u2013 \nmodels_as_dict (bool) \u2013 \ndumps_kwargs (Any) \u2013 \nReturn type\nunicode\nparse(text)[source]\uf0c1\nParse the output of an LLM call.\nA method which takes in a string (assumed output of a language model )\nand parses it into some structure.\nParameters\ntext (str) \u2013 output of language model\nReturns\nstructured output\nReturn type\nstr\nparse_result(result)\uf0c1\nParse LLM Result.\nParameters\nresult (List[langchain.schema.Generation]) \u2013 \nReturn type\nlangchain.schema.T\nparse_with_prompt(completion, prompt)\uf0c1\nOptional method to parse the output of an LLM call with a prompt.\nThe prompt is largely provided in the event the 
OutputParser wants\nto retry or fix the output in some way, and needs information from\nthe prompt to do so.\nParameters\ncompletion (str) \u2013 output of language model\nprompt (langchain.schema.PromptValue) \u2013 prompt value\nReturns\nstructured output\nReturn type\nAny\nclassmethod update_forward_refs(**localns)\nTry to update ForwardRefs on fields based on this Model, globalns and localns.\nParameters\nlocalns (Any) \u2013 \nReturn type\nNone\nproperty lc_attributes: Dict\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-40", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\nReturn whether or not the class is serializable.\nexception langchain.schema.OutputParserException(error, observation=None, llm_output=None, send_to_llm=False)[source]\nBases: ValueError\nException that output parsers should raise to signify a parsing error.\nThis exists to differentiate parsing errors from other code or execution errors\nthat also may arise inside the output parser. 
OutputParserExceptions will be\navailable to catch and handle in ways to fix the parsing error, while other\nerrors will be raised.\nParameters\nerror (Any) \u2013 \nobservation (str | None) \u2013 \nllm_output (str | None) \u2013 \nsend_to_llm (bool) \u2013 \nadd_note()\nException.add_note(note) \u2013\nadd a note to the exception\nwith_traceback()\nException.with_traceback(tb) \u2013\nset self.__traceback__ to tb and return self.\nclass langchain.schema.BaseDocumentTransformer[source]\nBases: abc.ABC\nBase interface for transforming documents.\nabstract transform_documents(documents, **kwargs)[source]\nTransform a list of documents.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nSequence[langchain.schema.Document]\nabstract async atransform_documents(documents, **kwargs)[source]\nAsynchronously transform a list of documents.", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "acf90d640cc0-41", "text": "Asynchronously transform a list of documents.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nSequence[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/base_classes.html"} +{"id": "09aa860bdfb8-0", "text": "Chains\nChains are easily reusable components which can be linked together.\nclass langchain.chains.APIChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, api_request_chain, api_answer_chain, requests_wrapper, api_docs, question_key='question', output_key='output')[source]\nBases: langchain.chains.base.Chain\nChain that makes API calls and summarizes the responses to answer a question.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 
\ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \napi_request_chain (langchain.chains.llm.LLMChain) \u2013 \napi_answer_chain (langchain.chains.llm.LLMChain) \u2013 \nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \napi_docs (str) \u2013 \nquestion_key (str) \u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute api_answer_chain: LLMChain [Required]\nattribute api_docs: str [Required]\nattribute api_request_chain: LLMChain [Required]\nattribute callback_manager: Optional[BaseCallbackManager] = None\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute memory: Optional[BaseMemory] = None", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-1", "text": "for full details.\nattribute memory: Optional[BaseMemory] = None\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute requests_wrapper: TextRequestsWrapper [Required]\nattribute tags: Optional[List[str]] = None\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-2", "text": "use the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-3", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm_and_api_docs(llm, api_docs, headers=None, api_url_prompt=PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\\n{api_docs}\\nUsing this documentation, generate the full API url to call for answering the user question.\\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. 
Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\\n\\nQuestion:{question}\\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt=PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\\n{api_docs}\\nUsing this documentation, generate the full API url to call for answering the user question.\\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\\n\\nQuestion:{question}\\nAPI url: {api_url}\\n\\nHere is the response from the API:\\n\\n{api_response}\\n\\nSummarize this response to answer the original question.\\n\\nSummary:', template_format='f-string', validate_template=True), **kwargs)[source]\nLoad chain from just an LLM and the api docs.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \napi_docs (str) \u2013 \nheaders (Optional[dict]) \u2013 \napi_url_prompt (langchain.prompts.base.BasePromptTemplate) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-4", "text": "api_url_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \napi_response_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.api.base.APIChain\nprep_inputs(inputs)\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\nRun the chain 
as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-5", "text": "constructor.\nproperty lc_namespace: List[str]\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\nReturn whether or not the class is serializable.\nclass langchain.chains.AnalyzeDocumentChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_document', text_splitter=None, combine_docs_chain)[source]\nBases: langchain.chains.base.Chain\nChain that splits documents, then analyzes it in pieces.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ninput_key (str) \u2013 \ntext_splitter (langchain.text_splitter.TextSplitter) \u2013 \ncombine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-6", "text": "Each custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]\nattribute memory: Optional[BaseMemory] = None\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. 
At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute tags: Optional[List[str]] = None\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute text_splitter: langchain.text_splitter.TextSplitter [Optional]\nattribute verbose: bool [Optional]\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-7", "text": "chain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-8", "text": "return_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-9", "text": "property lc_serializable: bool\nReturn whether or not the class is serializable.\nclass langchain.chains.ChatVectorDBChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_docs_chain, question_generator, output_key='answer', return_source_documents=False, return_generated_question=False, get_chat_history=None, vectorstore, top_k_docs_for_context=4, search_kwargs=None)[source]\nBases: langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain\nChain for chatting with a vector database.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ncombine_docs_chain 
(langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \nquestion_generator (langchain.chains.llm.LLMChain) \u2013 \noutput_key (str) \u2013 \nreturn_source_documents (bool) \u2013 \nreturn_generated_question (bool) \u2013 \nget_chat_history (Optional[Callable[[Union[Tuple[str, str], langchain.schema.BaseMessage]], str]]) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \ntop_k_docs_for_context (int) \u2013 \nsearch_kwargs (dict) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-10", "text": "Callback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_docs_chain: BaseCombineDocumentsChain [Required]\nattribute get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None\nReturn the source documents.\nattribute memory: Optional[BaseMemory] = None\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. 
At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute output_key: str = 'answer'\nattribute question_generator: LLMChain [Required]\nattribute return_generated_question: bool = False\nattribute return_source_documents: bool = False\nattribute search_kwargs: dict [Optional]\nattribute tags: Optional[List[str]] = None\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute top_k_docs_for_context: int = 4\nattribute vectorstore: VectorStore [Required]\nattribute verbose: bool [Optional]\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-11", "text": "Run the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-12", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, vectorstore, condense_question_prompt=PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\\n\\nChat History:\\n{chat_history}\\nFollow Up Input: {question}\\nStandalone question:', template_format='f-string', validate_template=True), chain_type='stuff', combine_docs_chain_kwargs=None, callbacks=None, **kwargs)[source]\nLoad chain from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \ncondense_question_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nchain_type (str) \u2013 \ncombine_docs_chain_kwargs (Optional[Dict]) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain\nprep_inputs(inputs)\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\nRun the chain as text in, text out or multiple variables, text out.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-13", "text": "Run the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty input_keys: List[str]\nInput keys.\nproperty lc_attributes: Dict\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\nReturn whether or not the class is serializable.\nclass langchain.chains.ConstitutionalChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, chain, constitutional_principles, critique_chain, revision_chain, return_intermediate_steps=False)[source]\nBases: langchain.chains.base.Chain", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-14", "text": "Bases: langchain.chains.base.Chain\nChain for applying constitutional principles.\nExample\nfrom langchain.llms import OpenAI\nfrom langchain.chains import LLMChain, ConstitutionalChain\nfrom langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\nllm = OpenAI()\nqa_prompt = PromptTemplate(\n template=\"Q: {question} A:\",\n input_variables=[\"question\"],\n)\nqa_chain = LLMChain(llm=llm, prompt=qa_prompt)\nconstitutional_chain = ConstitutionalChain.from_llm(\n llm=llm,\n chain=qa_chain,\n constitutional_principles=[\n ConstitutionalPrinciple(\n critique_request=\"Tell if this answer is good.\",\n revision_request=\"Give a better answer.\",\n )\n ],\n)\nconstitutional_chain.run(question=\"What is the meaning of life?\")\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nchain (langchain.chains.llm.LLMChain) \u2013 \nconstitutional_principles (List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]) \u2013 \ncritique_chain (langchain.chains.llm.LLMChain) 
\u2013 \nrevision_chain (langchain.chains.llm.LLMChain) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\nOptional list of callback handlers (or callback manager). Defaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-15", "text": "Optional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute chain: langchain.chains.llm.LLMChain [Required]\nattribute constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]\nattribute critique_chain: langchain.chains.llm.LLMChain [Required]\nattribute memory: Optional[BaseMemory] = None\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute return_intermediate_steps: bool = False\nattribute revision_chain: langchain.chains.llm.LLMChain [Required]\nattribute tags: Optional[List[str]] = None\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\nWhether or not run in verbose mode. 
In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\nRun the logic of this chain and add to output if desired.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-16", "text": "Run the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-17", "text": "classmethod from_llm(llm, chain, critique_prompt=FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model\u2019s preceding response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down; instead, it should have told the human that arson is illegal and dangerous. 
Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn\u2019t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model\u2019s preceding response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-18", "text": "are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model\u2019s response could be interpreted as saying that it isn\u2019t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. 
Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600\u2019s. It\u2019s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I\u2019m not sure that the precession measurement was actually made in the 1600\u2019s, but", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-19", "text": "I\u2019m not sure that the precession measurement was actually made in the 1600\u2019s, but was probably made at least 100 years ago. I\u2019m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I\u2019m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I\u2019m pretty sure it decays more quickly than Newton\u2019s law, and the Chern-Simons theorem is probably just wrong. 
Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that is smaller and decays more quickly than Newton\u2019s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': \"Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'\", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': \"Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.\", 'critique': \"The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-20", "text": "Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. 
No critique needed.\", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique:', example_separator='\\n === \\n', prefix=\"Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is material critique of the model output, append to the end of the Critique: 'Critique needed.'\", template_format='f-string', validate_template=True), revision_prompt=FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model\u2019s preceding response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-21", "text": "toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. 
The model should not have asked a follow-up question about which house to burn down; instead, it should have told the human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn\u2019t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model\u2019s preceding response is harmful. Also point out potential harmfulness in the human\u2019s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model\u2019s response could be interpreted as saying that it isn\u2019t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. 
Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-22", "text": "harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It\u2019s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don\u2019t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600\u2019s. It\u2019s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I\u2019m not sure that the precession measurement was actually made in the 1600\u2019s, but was probably made at least 100 years ago. 
I\u2019m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I\u2019m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I\u2019m pretty sure it decays more quickly", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-23", "text": "I should rewrite that to make it less misleading, although I\u2019m pretty sure it decays more quickly than Newton\u2019s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you\u2019re unable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun\u2019s gravitational field that is smaller and decays more quickly than Newton\u2019s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': \"Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. 
If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'\", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': \"Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.\", 'critique': \"The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-24", "text": "Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.\", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\\n\\nModel: {output_from_model}\\n\\nCritique Request: {critique_request}\\n\\nCritique: {critique}\\n\\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. 
Instead, return \"No revisions needed\".\\n\\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\\n\\nRevision Request: {revision_request}\\n\\nRevision:', example_separator='\\n === \\n', prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-25", "text": "Create a chain from an LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nchain (langchain.chains.llm.LLMChain) \u2013 \ncritique_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nrevision_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.constitutional_ai.base.ConstitutionalChain\nclassmethod get_principles(names=None)[source]\uf0c1\nParameters\nnames (Optional[List[str]]) \u2013 \nReturn type\nList[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-26", "text": "chain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty input_keys: List[str]\uf0c1\nDefines the input keys.\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\uf0c1\nDefines the output keys.\nclass langchain.chains.ConversationChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\\n\\nCurrent conversation:\\n{history}\\nHuman: {input}\\nAI:', template_format='f-string', validate_template=True), llm, output_key='response', output_parser=None, return_final_only=True, llm_kwargs=None, input_key='input')[source]\uf0c1\nBases: langchain.chains.llm.LLMChain", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-27", "text": "Bases: langchain.chains.llm.LLMChain\nChain to have a conversation and load context from memory.\nExample\nfrom langchain import ConversationChain, OpenAI\nconversation = ConversationChain(llm=OpenAI())\nParameters\nmemory (langchain.schema.BaseMemory) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \noutput_key (str) \u2013 \noutput_parser (langchain.schema.BaseLLMOutputParser) \u2013 \nreturn_final_only (bool) \u2013 \nllm_kwargs (dict) \u2013 \ninput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute llm: BaseLanguageModel [Required]\uf0c1\nLanguage model to call.\nattribute llm_kwargs: dict [Optional]\uf0c1\nattribute memory: langchain.schema.BaseMemory [Optional]\uf0c1\nDefault memory store.\nattribute output_parser: BaseLLMOutputParser [Optional]\uf0c1\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-28", "text": "Defaults to one that takes the most likely string but does not change it\notherwise.\nattribute prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\\n\\nCurrent conversation:\\n{history}\\nHuman: {input}\\nAI:', template_format='f-string', validate_template=True)\uf0c1\nDefault conversation prompt to use.\nattribute return_final_only: bool = True\uf0c1\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync aapply(input_list, callbacks=None)\uf0c1\nUtilize the LLM generate method for speed gains.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync aapply_and_parse(input_list, callbacks=None)\uf0c1\nCall apply and then parse the results.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-29", "text": "Parameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nSequence[Union[str, List[str], Dict[str, str]]]\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. 
If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nasync agenerate(input_list, run_manager=None)\uf0c1\nGenerate LLM result from inputs.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun]) \u2013 \nReturn type\nlangchain.schema.LLMResult\napply(input_list, callbacks=None)\uf0c1\nUtilize the LLM generate method for speed gains.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-30", "text": "Parameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\napply_and_parse(input_list, callbacks=None)\uf0c1\nCall apply and then parse the results.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nSequence[Union[str, List[str], Dict[str, str]]]\nasync apredict(callbacks=None, **kwargs)\uf0c1\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nkwargs (Any) \u2013 \nReturns\nCompletion from LLM.\nReturn type\nstr\nExample\ncompletion = llm.predict(adjective=\"funny\")\nasync apredict_and_parse(callbacks=None, **kwargs)\uf0c1\nCall apredict and then parse the results.\nParameters\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], 
langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nUnion[str, List[str], Dict[str, str]]\nasync aprep_prompts(input_list, run_manager=None)\uf0c1\nPrepare prompts from inputs.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun]) \u2013 \nReturn type\nTuple[List[langchain.schema.PromptValue], Optional[List[str]]]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-31", "text": "Return type\nTuple[List[langchain.schema.PromptValue], Optional[List[str]]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ncreate_outputs(llm_result)\uf0c1\nCreate outputs from response.\nParameters\nllm_result (langchain.schema.LLMResult) \u2013 \nReturn type\nList[Dict[str, Any]]\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_string(llm, template)\uf0c1\nCreate LLMChain from LLM and template.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntemplate (str) \u2013 \nReturn type\nlangchain.chains.llm.LLMChain\ngenerate(input_list, run_manager=None)\uf0c1\nGenerate LLM result from inputs.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.CallbackManagerForChainRun]) \u2013 \nReturn type\nlangchain.schema.LLMResult\npredict(callbacks=None, **kwargs)\uf0c1\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) 
\u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nkwargs (Any) \u2013 \nReturns\nCompletion from LLM.\nReturn type\nstr\nExample", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-32", "text": "Returns\nCompletion from LLM.\nReturn type\nstr\nExample\ncompletion = llm.predict(adjective=\"funny\")\npredict_and_parse(callbacks=None, **kwargs)\uf0c1\nCall predict and then parse the results.\nParameters\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nUnion[str, List[str], Dict[str, Any]]\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nprep_prompts(input_list, run_manager=None)\uf0c1\nPrepare prompts from inputs.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.CallbackManagerForChainRun]) \u2013 \nReturn type\nTuple[List[langchain.schema.PromptValue], Optional[List[str]]]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": 
"09aa860bdfb8-33", "text": "Return type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty input_keys: List[str]\uf0c1\nUse this since some prompt vars come from history.\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.ConversationalRetrievalChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_docs_chain, question_generator, output_key='answer', return_source_documents=False, return_generated_question=False, get_chat_history=None, retriever, max_tokens_limit=None)[source]\uf0c1\nBases: langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain\nChain for chatting with an index.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-34", "text": "verbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 
\ncombine_docs_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \nquestion_generator (langchain.chains.llm.LLMChain) \u2013 \noutput_key (str) \u2013 \nreturn_source_documents (bool) \u2013 \nreturn_generated_question (bool) \u2013 \nget_chat_history (Optional[Callable[[Union[Tuple[str, str], langchain.schema.BaseMessage]], str]]) \u2013 \nretriever (langchain.schema.BaseRetriever) \u2013 \nmax_tokens_limit (Optional[int]) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_docs_chain: BaseCombineDocumentsChain [Required]\uf0c1\nattribute get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None\uf0c1\nReturn the source documents.\nattribute max_tokens_limit: Optional[int] = None\uf0c1\nIf set, restricts the docs to return from store based on tokens, enforced only\nfor StuffDocumentChain\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. 
At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-35", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nattribute output_key: str = 'answer'\uf0c1\nattribute question_generator: LLMChain [Required]\uf0c1\nattribute retriever: BaseRetriever [Required]\uf0c1\nIndex to connect to.\nattribute return_generated_question: bool = False\uf0c1\nattribute return_source_documents: bool = False\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. 
If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-36", "text": "to False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, retriever, condense_question_prompt=PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\\n\\nChat History:\\n{chat_history}\\nFollow Up Input: {question}\\nStandalone question:', template_format='f-string', validate_template=True), chain_type='stuff', verbose=False, condense_question_llm=None, combine_docs_chain_kwargs=None, callbacks=None, **kwargs)[source]\uf0c1\nLoad chain from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nretriever (langchain.schema.BaseRetriever) \u2013 \ncondense_question_prompt (langchain.prompts.base.BasePromptTemplate) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-37", "text": "condense_question_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nchain_type (str) \u2013 \nverbose (bool) \u2013 \ncondense_question_llm (Optional[langchain.base_language.BaseLanguageModel]) \u2013 \ncombine_docs_chain_kwargs (Optional[Dict]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-38", "text": "to_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty input_keys: List[str]\uf0c1\nInput keys.\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.FlareChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, question_generator_chain, response_chain=None, output_parser=None, retriever, min_prob=0.2, min_token_gap=5, num_pad_tokens=2, max_iter=10, start_with_retrieval=True)[source]\uf0c1\nBases: langchain.chains.base.Chain\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nquestion_generator_chain (langchain.chains.flare.base.QuestionGeneratorChain) \u2013 \nresponse_chain (langchain.chains.flare.base._ResponseChain) \u2013 \noutput_parser (langchain.chains.flare.prompts.FinishedOutputParser) \u2013 \nretriever 
(langchain.schema.BaseRetriever) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-39", "text": "retriever (langchain.schema.BaseRetriever) \u2013 \nmin_prob (float) \u2013 \nmin_token_gap (int) \u2013 \nnum_pad_tokens (int) \u2013 \nmax_iter (int) \u2013 \nstart_with_retrieval (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute max_iter: int = 10\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute min_prob: float = 0.2\uf0c1\nattribute min_token_gap: int = 5\uf0c1\nattribute num_pad_tokens: int = 2\uf0c1\nattribute output_parser: FinishedOutputParser [Optional]\uf0c1\nattribute question_generator_chain: QuestionGeneratorChain [Required]\uf0c1\nattribute response_chain: _ResponseChain [Optional]\uf0c1\nattribute retriever: BaseRetriever [Required]\uf0c1\nattribute start_with_retrieval: bool = True\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-40", "text": "Optional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-41", "text": "Return type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, max_generation_len=32, **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nmax_generation_len (int) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.flare.base.FlareChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs 
(Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-42", "text": "Return type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty input_keys: List[str]\uf0c1\nInput keys this chain expects.\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\uf0c1\nOutput keys this chain expects.\nclass langchain.chains.GraphCypherQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, graph, cypher_generation_chain, qa_chain, input_key='query', output_key='result', top_k=10, return_intermediate_steps=False, return_direct=False)[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain for question-answering against a graph by generating Cypher statements.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-43", "text": "Parameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ngraph (langchain.graphs.neo4j_graph.Neo4jGraph) \u2013 \ncypher_generation_chain (langchain.chains.llm.LLMChain) \u2013 \nqa_chain (langchain.chains.llm.LLMChain) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \ntop_k (int) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nreturn_direct (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute cypher_generation_chain: LLMChain [Required]\uf0c1\nattribute graph: Neo4jGraph [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute qa_chain: LLMChain [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-44", "text": "for the full catalog.\nattribute qa_chain: LLMChain [Required]\uf0c1\nattribute return_direct: bool = False\uf0c1\nWhether or not to return the result of querying the graph directly.\nattribute return_intermediate_steps: bool = False\uf0c1\nWhether or not to return the intermediate steps along with the final answer.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute top_k: int = 10\uf0c1\nNumber of results to return from the query\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-45", "text": "to False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} 
+{"id": "09aa860bdfb8-46", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, *, qa_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"You are an assistant that helps to form nice and human understandable answers.\\nThe information part contains the provided information that you must use to construct an answer.\\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\\nIf the provided information is empty, say that you don't know the answer.\\nInformation:\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), cypher_prompt=PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate Cypher statement to query a graph database.\\nInstructions:\\nUse only the provided relationship types and properties in the schema.\\nDo not use any other relationship types or properties that are not provided.\\nSchema:\\n{schema}\\nNote: Do not include any explanations or apologies in your responses.\\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\\nDo not include any text except the generated Cypher statement.\\n\\nThe question is:\\n{question}', template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1\nInitialize from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nqa_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ncypher_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.graph_qa.cypher.GraphCypherQAChain", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": 
"09aa860bdfb8-47", "text": "Return type\nlangchain.chains.graph_qa.cypher.GraphCypherQAChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-48", "text": "eg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.GraphQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, graph, entity_extraction_chain, qa_chain, input_key='query', output_key='result')[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain for question-answering against a graph.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ngraph (langchain.graphs.networkx_graph.NetworkxEntityGraph) \u2013 \nentity_extraction_chain (langchain.chains.llm.LLMChain) \u2013 \nqa_chain (langchain.chains.llm.LLMChain) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-49", "text": "Each custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute entity_extraction_chain: LLMChain [Required]\uf0c1\nattribute graph: NetworkxEntityGraph [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. 
Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute qa_chain: LLMChain [Required]\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-50", "text": "chain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-51", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, qa_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\\n\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), entity_prompt=PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template=\"Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\\n\\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return.\\n\\nEXAMPLE\\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... 
a lot of stuff.\\nOutput: Langchain\\nEND OF EXAMPLE\\n\\nEXAMPLE\\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\\nOutput: Langchain, Sam\\nEND OF EXAMPLE\\n\\nBegin!\\n\\n{input}\\nOutput:\", template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1\nInitialize from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nqa_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nentity_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.graph_qa.base.GraphQAChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-52", "text": "prep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-53", "text": "Return a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.HypotheticalDocumentEmbedder(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, base_embeddings, llm_chain)[source]\uf0c1\nBases: langchain.chains.base.Chain, langchain.embeddings.base.Embeddings\nGenerate hypothetical document for query, and then embed that.\nBased on https://arxiv.org/abs/2212.10496\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nbase_embeddings (langchain.embeddings.base.Embeddings) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nReturn type\nNone\nattribute base_embeddings: Embeddings [Required]\uf0c1\nattribute callback_manager: 
Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute llm_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-54", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. 
If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-55", "text": "tags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ncombine_embeddings(embeddings)[source]\uf0c1\nCombine embeddings into final embeddings.\nParameters\nembeddings (List[List[float]]) \u2013 \nReturn type\nList[float]\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nembed_documents(texts)[source]\uf0c1\nCall the base embeddings.\nParameters\ntexts (List[str]) \u2013 \nReturn type\nList[List[float]]\nembed_query(text)[source]\uf0c1\nGenerate a hypothetical document and embed it.\nParameters\ntext (str) \u2013 \nReturn
type\nList[float]\nclassmethod from_llm(llm, base_embeddings, prompt_key, **kwargs)[source]\uf0c1\nLoad and use LLMChain for a specific prompt key.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nbase_embeddings (langchain.embeddings.base.Embeddings) \u2013 \nprompt_key (str) \u2013 \nkwargs (Any) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-56", "text": "prompt_key (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.hyde.base.HypotheticalDocumentEmbedder\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty input_keys: List[str]\uf0c1\nInput keys for Hyde\u2019s LLM chain.\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-57", "text": "constructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\uf0c1\nOutput keys for Hyde\u2019s LLM chain.\nclass langchain.chains.KuzuQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, graph, cypher_generation_chain, qa_chain, input_key='query', output_key='result')[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain for question-answering against a graph by generating Cypher statements for\nK\u00f9zu.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ngraph (langchain.graphs.kuzu_graph.KuzuGraph) \u2013 \ncypher_generation_chain (langchain.chains.llm.LLMChain) \u2013 \nqa_chain (langchain.chains.llm.LLMChain) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-58", "text": "Optional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute cypher_generation_chain: LLMChain [Required]\uf0c1\nattribute graph: KuzuGraph [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute qa_chain: LLMChain [Required]\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-59", "text": "return_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-60", "text": "classmethod from_llm(llm, *, qa_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"You are an assistant that helps to form nice and human understandable answers.\\nThe information part contains the provided information that you must use to construct an answer.\\nThe provided information is authoritative, you must never doubt it or try to use your internal knowledge to correct it.\\nMake the answer sound as a response to the question. 
Do not mention that you based the result on the given information.\\nIf the provided information is empty, say that you don't know the answer.\\nInformation:\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), cypher_prompt=PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate K\u00f9zu Cypher statement to query a graph database.\\n\\nInstructions:\\n\\nGenerate statement with K\u00f9zu Cypher dialect (rather than standard):\\n1. do not use `WHERE EXISTS` clause to check the existence of a property because K\u00f9zu database has a fixed schema.\\n2. do not omit relationship pattern. Always use `()-[]->()` instead of `()->()`.\\n3. do not include any notes or comments even if the statement does not produce the expected result.\\n```\\n\\nUse only the provided relationship types and properties in the schema.\\nDo not use any other relationship types or properties that are not provided.\\nSchema:\\n{schema}\\nNote: Do not include any explanations or apologies in your responses.\\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\\nDo not include any text except the generated Cypher statement.\\n\\nThe question is:\\n{question}', template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-61", "text": "Initialize from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nqa_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ncypher_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.graph_qa.kuzu.KuzuQAChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, 
return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-62", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.LLMBashChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, llm=None, input_key='question', output_key='answer', prompt=PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\\n\\nQuestion: \"copy the files in the directory named \\'target\\' into a new directory at the same level as target called \\'myNewDirectory\\'\"\\n\\nI need to take the following actions:\\n- List all files in the directory\\n- Create a new directory\\n- Copy the files from the first directory into the second directory\\n```bash\\nls\\nmkdir myNewDirectory\\ncp -r target/* myNewDirectory\\n```\\n\\nThat is the format. 
Begin!\\n\\nQuestion: {question}', template_format='f-string', validate_template=True), bash_process=None)[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain that interprets a prompt and executes bash code to perform bash operations.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-63", "text": "Chain that interprets a prompt and executes bash code to perform bash operations.\nExample\nfrom langchain import LLMBashChain, OpenAI\nllm_bash = LLMBashChain.from_llm(OpenAI())\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nllm (Optional[langchain.base_language.BaseLanguageModel]) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nbash_process (langchain.utilities.bash.BashProcess) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute llm: Optional[BaseLanguageModel] = None\uf0c1\n[Deprecated] LLM wrapper to use.\nattribute llm_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. 
At the start, memory loads variables and passes", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-64", "text": "and at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\\n\\nQuestion: \"copy the files in the directory named \\'target\\' into a new directory at the same level as target called \\'myNewDirectory\\'\"\\n\\nI need to take the following actions:\\n- List all files in the directory\\n- Create a new directory\\n- Copy the files from the first directory into the second directory\\n```bash\\nls\\nmkdir myNewDirectory\\ncp -r target/* myNewDirectory\\n```\\n\\nThat is the format. Begin!\\n\\nQuestion: {question}', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated]\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-65", "text": "Run the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-66", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, prompt=PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put \"#!/bin/bash\" in your answer. Make sure to reason step by step, using this format:\\n\\nQuestion: \"copy the files in the directory named \\'target\\' into a new directory at the same level as target called \\'myNewDirectory\\'\"\\n\\nI need to take the following actions:\\n- List all files in the directory\\n- Create a new directory\\n- Copy the files from the first directory into the second directory\\n```bash\\nls\\nmkdir myNewDirectory\\ncp -r target/* myNewDirectory\\n```\\n\\nThat is the format. 
Begin!\\n\\nQuestion: {question}', template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.llm_bash.base.LLMBashChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-67", "text": "run(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.LLMChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, prompt, llm, output_key='text', output_parser=None, return_final_only=True, llm_kwargs=None)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-68", "text": "Bases: langchain.chains.base.Chain\nChain to run queries against LLMs.\nExample\nfrom langchain import LLMChain, OpenAI, PromptTemplate\nprompt_template = \"Tell me a {adjective} joke\"\nprompt = PromptTemplate(\n input_variables=[\"adjective\"], template=prompt_template\n)\nllm = LLMChain(llm=OpenAI(), prompt=prompt)\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \noutput_key (str) \u2013 \noutput_parser (langchain.schema.BaseLLMOutputParser) \u2013 \nreturn_final_only (bool) \u2013 \nllm_kwargs (dict) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute llm: BaseLanguageModel [Required]\uf0c1\nLanguage model to call.\nattribute llm_kwargs: dict [Optional]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-69", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute output_parser: BaseLLMOutputParser [Optional]\uf0c1\nOutput parser to use.\nDefaults to one that takes the most likely string but does not change it\notherwise.\nattribute prompt: BasePromptTemplate [Required]\uf0c1\nPrompt object to use.\nattribute return_final_only: bool = True\uf0c1\nWhether to return only the final parsed result. Defaults to True.\nIf false, will return a bunch of extra information about the generation.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync aapply(input_list, callbacks=None)[source]\uf0c1\nUtilize the LLM generate method for speed gains.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync aapply_and_parse(input_list, callbacks=None)[source]\uf0c1\nCall apply and then parse the results.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-70", "text": "Parameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nSequence[Union[str, List[str], Dict[str, str]]]\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nasync agenerate(input_list, run_manager=None)[source]\uf0c1\nGenerate LLM result from inputs.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun]) \u2013 \nReturn type\nlangchain.schema.LLMResult\napply(input_list, callbacks=None)[source]\uf0c1\nUtilize the LLM generate method for speed gains.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-71", "text": "Parameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\napply_and_parse(input_list, callbacks=None)[source]\uf0c1\nCall apply and then parse the results.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nSequence[Union[str, List[str], Dict[str, str]]]\nasync apredict(callbacks=None, **kwargs)[source]\uf0c1\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt template.\nkwargs (Any) \u2013 \nReturns\nCompletion from LLM.\nReturn type\nstr\nExample\ncompletion = llm.predict(adjective=\"funny\")\nasync apredict_and_parse(callbacks=None, **kwargs)[source]\uf0c1\nCall apredict and then parse the results.\nParameters\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nUnion[str, List[str], Dict[str, 
str]]\nasync aprep_prompts(input_list, run_manager=None)[source]\uf0c1\nPrepare prompts from inputs.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun]) \u2013 \nReturn type\nTuple[List[langchain.schema.PromptValue], Optional[List[str]]]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-72", "text": "Return type\nTuple[List[langchain.schema.PromptValue], Optional[List[str]]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ncreate_outputs(llm_result)[source]\uf0c1\nCreate outputs from response.\nParameters\nllm_result (langchain.schema.LLMResult) \u2013 \nReturn type\nList[Dict[str, Any]]\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_string(llm, template)[source]\uf0c1\nCreate LLMChain from LLM and template.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntemplate (str) \u2013 \nReturn type\nlangchain.chains.llm.LLMChain\ngenerate(input_list, run_manager=None)[source]\uf0c1\nGenerate LLM result from inputs.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.CallbackManagerForChainRun]) \u2013 \nReturn type\nlangchain.schema.LLMResult\npredict(callbacks=None, **kwargs)[source]\uf0c1\nFormat prompt with kwargs and pass to LLM.\nParameters\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to pass to LLMChain\n**kwargs \u2013 Keys to pass to prompt 
template.\nkwargs (Any) \u2013 \nReturns\nCompletion from LLM.\nReturn type\nstr\nExample", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-73", "text": "Returns\nCompletion from LLM.\nReturn type\nstr\nExample\ncompletion = llm.predict(adjective=\"funny\")\npredict_and_parse(callbacks=None, **kwargs)[source]\uf0c1\nCall predict and then parse the results.\nParameters\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nUnion[str, List[str], Dict[str, Any]]\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nprep_prompts(input_list, run_manager=None)[source]\uf0c1\nPrepare prompts from inputs.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \nrun_manager (Optional[langchain.callbacks.manager.CallbackManagerForChainRun]) \u2013 \nReturn type\nTuple[List[langchain.schema.PromptValue], Optional[List[str]]]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-74", "text": "Return type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-75", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.LLMCheckerChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, question_to_checked_assertions_chain, llm=None, create_draft_answer_prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\\n\\n', template_format='f-string', validate_template=True), list_assertions_prompt=PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\\n{statement}\\nMake a bullet point list of the assumptions you made when producing the above statement.\\n\\n', template_format='f-string', validate_template=True), check_assertions_prompt=PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\\n{assertions}\\nFor each assertion, determine whether it is true 
or false. If it is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_answer_prompt=PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template=\"{checked_assertions}\\n\\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\\n\\nAnswer:\", template_format='f-string', validate_template=True), input_key='query', output_key='result')[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain for question-answering with self-verification.\nExample\nfrom langchain import OpenAI, LLMCheckerChain\nllm = OpenAI(temperature=0.7)\nchecker_chain = LLMCheckerChain.from_llm(llm)\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-76", "text": "Parameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nquestion_to_checked_assertions_chain (langchain.chains.sequential.SequentialChain) \u2013 \nllm (Optional[langchain.base_language.BaseLanguageModel]) \u2013 \ncreate_draft_answer_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nlist_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \ncheck_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nrevised_answer_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\\n{assertions}\\nFor each assertion, determine whether it is true or false. If it is false, explain why.\\n\\n', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-77", "text": "[Deprecated]\nattribute create_draft_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\\n\\n', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated]\nattribute list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\\n{statement}\\nMake a bullet point list of the assumptions you made when producing the above statement.\\n\\n', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated]\nattribute llm: Optional[BaseLanguageModel] = None\uf0c1\n[Deprecated] LLM wrapper to use.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. 
At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute question_to_checked_assertions_chain: SequentialChain [Required]\uf0c1\nattribute revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template=\"{checked_assertions}\\n\\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\\n\\nAnswer:\", template_format='f-string', validate_template=True)\uf0c1\n[Deprecated] Prompt to use when questioning the documents.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-78", "text": "and passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. 
Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-79", "text": "Run the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, create_draft_answer_prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\\n\\n', template_format='f-string', validate_template=True), list_assertions_prompt=PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\\n{statement}\\nMake a bullet point list of the assumptions you made when producing the above statement.\\n\\n', template_format='f-string', validate_template=True), 
check_assertions_prompt=PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\\n{assertions}\\nFor each assertion, determine whether it is true or false. If it is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_answer_prompt=PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template=\"{checked_assertions}\\n\\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\\n\\nAnswer:\", template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ncreate_draft_answer_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nlist_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-80", "text": "list_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \ncheck_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nrevised_answer_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.llm_checker.base.LLMCheckerChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) 
\u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-81", "text": "Return a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-82", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.LLMMathChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, llm=None, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\\'s numexpr library. 
Use the output of running this code to answer the question.\\n\\nQuestion: ${{Question with math problem.}}\\n```text\\n${{single line mathematical expression that solves the problem}}\\n```\\n...numexpr.evaluate(text)...\\n```output\\n${{Output of running the code}}\\n```\\nAnswer: ${{Answer}}\\n\\nBegin.\\n\\nQuestion: What is 37593 * 67?\\n```text\\n37593 * 67\\n```\\n...numexpr.evaluate(\"37593 * 67\")...\\n```output\\n2518731\\n```\\nAnswer: 2518731\\n\\nQuestion: 37593^(1/5)\\n```text\\n37593**(1/5)\\n```\\n...numexpr.evaluate(\"37593**(1/5)\")...\\n```output\\n8.222831614237718\\n```\\nAnswer: 8.222831614237718\\n\\nQuestion: {question}\\n', template_format='f-string', validate_template=True), input_key='question', output_key='answer')[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain that interprets a prompt and executes python code to do math.\nExample\nfrom langchain import LLMMathChain, OpenAI\nllm_math = LLMMathChain.from_llm(OpenAI())\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-83", "text": "Parameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nllm (Optional[langchain.base_language.BaseLanguageModel]) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute llm: Optional[BaseLanguageModel] = None\uf0c1\n[Deprecated] LLM wrapper to use.\nattribute llm_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-84", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nattribute prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\\'s numexpr library. 
Use the output of running this code to answer the question.\\n\\nQuestion: ${{Question with math problem.}}\\n```text\\n${{single line mathematical expression that solves the problem}}\\n```\\n...numexpr.evaluate(text)...\\n```output\\n${{Output of running the code}}\\n```\\nAnswer: ${{Answer}}\\n\\nBegin.\\n\\nQuestion: What is 37593 * 67?\\n```text\\n37593 * 67\\n```\\n...numexpr.evaluate(\"37593 * 67\")...\\n```output\\n2518731\\n```\\nAnswer: 2518731\\n\\nQuestion: 37593^(1/5)\\n```text\\n37593**(1/5)\\n```\\n...numexpr.evaluate(\"37593**(1/5)\")...\\n```output\\n8.222831614237718\\n```\\nAnswer: 8.222831614237718\\n\\nQuestion: {question}\\n', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated] Prompt to use to translate to python if necessary.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-85", "text": "Whether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. 
If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-86", "text": "tags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\\'s numexpr library. 
Use the output of running this code to answer the question.\\n\\nQuestion: ${{Question with math problem.}}\\n```text\\n${{single line mathematical expression that solves the problem}}\\n```\\n...numexpr.evaluate(text)...\\n```output\\n${{Output of running the code}}\\n```\\nAnswer: ${{Answer}}\\n\\nBegin.\\n\\nQuestion: What is 37593 * 67?\\n```text\\n37593 * 67\\n```\\n...numexpr.evaluate(\"37593 * 67\")...\\n```output\\n2518731\\n```\\nAnswer: 2518731\\n\\nQuestion: 37593^(1/5)\\n```text\\n37593**(1/5)\\n```\\n...numexpr.evaluate(\"37593**(1/5)\")...\\n```output\\n8.222831614237718\\n```\\nAnswer: 8.222831614237718\\n\\nQuestion: {question}\\n', template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.llm_math.base.LLMMathChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-87", "text": "prep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain 
to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-88", "text": "Return a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.LLMRequestsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, requests_wrapper=None, text_length=8000, requests_key='requests_result', input_key='url', output_key='output')[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain that hits a URL and then uses an LLM to parse results.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \ntext_length (int) \u2013 \nrequests_key (str) \u2013 \ninput_key (str) 
\u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute llm_chain: LLMChain [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-89", "text": "for full details.\nattribute llm_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute requests_wrapper: TextRequestsWrapper [Optional]\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute text_length: int = 8000\uf0c1\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-90", "text": "chain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-91", "text": "return_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.LLMRouterChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-92", "text": "Bases: langchain.chains.router.base.RouterChain\nA router chain that uses an LLM chain to perform routing.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute llm_chain: LLMChain [Required]\uf0c1\nLLM chain used to perform routing\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-93", "text": "You can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. 
Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync aroute(inputs, callbacks=None)\uf0c1\nParameters\ninputs (Dict[str, Any]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-94", "text": "Return type\nlangchain.chains.router.base.Route\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, prompt, **kwargs)[source]\uf0c1\nConvenience constructor.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.router.llm_router.LLMRouterChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs 
(Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nroute(inputs, callbacks=None)\uf0c1\nParameters\ninputs (Dict[str, Any]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nlangchain.chains.router.base.Route\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-95", "text": "Run the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\uf0c1\nOutput keys this chain expects.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-96", "text": "class langchain.chains.LLMSummarizationCheckerChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, sequential_chain, llm=None, create_assertions_prompt=PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\\n\\nFormat your output as a bulleted list.\\n\\nText:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt=PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\\n\\nHere is a bullet point list of facts:\\n\"\"\"\\n{assertions}\\n\"\"\"\\n\\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output \"Undetermined\".\\nIf the fact is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_summary_prompt=PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. 
If the answer is false, a suggestion is given for a correction.\\n\\nChecked Assertions:\\n\"\"\"\\n{checked_assertions}\\n\"\"\"\\n\\nOriginal Summary:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nUsing these checked assertions, rewrite the original summary to be completely true.\\n\\nThe output should have the same structure and formatting as the original summary.\\n\\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt=PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-97", "text": "output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\\n\\nIf all of the assertions are true, return \"True\". If any of the assertions are false, return \"False\".\\n\\nHere are some examples:\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is red: False\\n- Water is made of lava: False\\n- The sun is a star: True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue: True\\n- Water is wet: True\\n- The sun is a star: True\\n\"\"\"\\nResult: True\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue - True\\n- Water is made of lava- False\\n- The sun is a star - True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions:\"\"\"\\n{checked_assertions}\\n\"\"\"\\nResult:', template_format='f-string', validate_template=True), input_key='query', output_key='result', max_checks=2)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-98", "text": "Bases: langchain.chains.base.Chain\nChain for question-answering with self-verification.\nExample\nfrom langchain import OpenAI, LLMSummarizationCheckerChain\nllm = OpenAI(temperature=0.0)\nchecker_chain = LLMSummarizationCheckerChain.from_llm(llm)\nParameters\nmemory 
(Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nsequential_chain (langchain.chains.sequential.SequentialChain) \u2013 \nllm (Optional[langchain.base_language.BaseLanguageModel]) \u2013 \ncreate_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \ncheck_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nrevised_summary_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nare_all_true_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nmax_checks (int) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-99", "text": "max_checks (int) \u2013 \nReturn type\nNone\nattribute are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\\n\\nIf all of the assertions are true, return \"True\". 
If any of the assertions are false, return \"False\".\\n\\nHere are some examples:\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is red: False\\n- Water is made of lava: False\\n- The sun is a star: True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue: True\\n- Water is wet: True\\n- The sun is a star: True\\n\"\"\"\\nResult: True\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue - True\\n- Water is made of lava- False\\n- The sun is a star - True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions:\"\"\"\\n{checked_assertions}\\n\"\"\"\\nResult:', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated]\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-100", "text": "Each custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\\n\\nHere is a bullet point list of facts:\\n\"\"\"\\n{assertions}\\n\"\"\"\\n\\nFor each fact, determine whether it is true or false about the subject. 
If you are unable to determine whether the fact is true or false, output \"Undetermined\".\\nIf the fact is false, explain why.\\n\\n', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated]\nattribute create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\\n\\nFormat your output as a bulleted list.\\n\\nText:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nFacts:', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated]\nattribute llm: Optional[BaseLanguageModel] = None\uf0c1\n[Deprecated] LLM wrapper to use.\nattribute max_checks: int = 2\uf0c1\nMaximum number of times to check the assertions. Default to double-checking.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-101", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nattribute revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. 
If the answer is false, a suggestion is given for a correction.\\n\\nChecked Assertions:\\n\"\"\"\\n{checked_assertions}\\n\"\"\"\\n\\nOriginal Summary:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nUsing these checked assertions, rewrite the original summary to be completely true.\\n\\nThe output should have the same structure and formatting as the original summary.\\n\\nSummary:', template_format='f-string', validate_template=True)\uf0c1\n[Deprecated]\nattribute sequential_chain: SequentialChain [Required]\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-102", "text": "response. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. 
If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-103", "text": "classmethod from_llm(llm, create_assertions_prompt=PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\\n\\nFormat your output as a bulleted list.\\n\\nText:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt=PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\\n\\nHere is a bullet point list of facts:\\n\"\"\"\\n{assertions}\\n\"\"\"\\n\\nFor each fact, determine whether it is true or false about the subject. 
If you are unable to determine whether the fact is true or false, output \"Undetermined\".\\nIf the fact is false, explain why.\\n\\n', template_format='f-string', validate_template=True), revised_summary_prompt=PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\\n\\nChecked Assertions:\\n\"\"\"\\n{checked_assertions}\\n\"\"\"\\n\\nOriginal Summary:\\n\"\"\"\\n{summary}\\n\"\"\"\\n\\nUsing these checked assertions, rewrite the original summary to be completely true.\\n\\nThe output should have the same structure and formatting as the original summary.\\n\\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt=PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\\n\\nIf all of the assertions are true, return \"True\". If any", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-104", "text": "true or false.\\n\\nIf all of the assertions are true, return \"True\". 
If any of the assertions are false, return \"False\".\\n\\nHere are some examples:\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is red: False\\n- Water is made of lava: False\\n- The sun is a star: True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue: True\\n- Water is wet: True\\n- The sun is a star: True\\n\"\"\"\\nResult: True\\n\\n===\\n\\nChecked Assertions: \"\"\"\\n- The sky is blue - True\\n- Water is made of lava- False\\n- The sun is a star - True\\n\"\"\"\\nResult: False\\n\\n===\\n\\nChecked Assertions:\"\"\"\\n{checked_assertions}\\n\"\"\"\\nResult:', template_format='f-string', validate_template=True), verbose=False, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-105", "text": "Parameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ncreate_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \ncheck_assertions_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nrevised_summary_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nare_all_true_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nverbose (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags 
(Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-106", "text": "to_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.MapReduceChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, text_splitter, input_key='input_text', output_key='output_text')[source]\uf0c1\nBases: langchain.chains.base.Chain\nMap-reduce chain.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ncombine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \ntext_splitter (langchain.text_splitter.TextSplitter) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-107", "text": "Optional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_documents_chain: BaseCombineDocumentsChain [Required]\uf0c1\nChain to use to combine documents.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. 
At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute text_splitter: TextSplitter [Required]\uf0c1\nText splitter to use.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-108", "text": "return_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_params(llm, prompt, text_splitter, callbacks=None, combine_chain_kwargs=None, reduce_chain_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a map-reduce chain that uses the chain for map and reduce.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-109", "text": "Construct a map-reduce chain that uses the chain for map and reduce.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ntext_splitter (langchain.text_splitter.TextSplitter) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncombine_chain_kwargs (Optional[Mapping[str, Any]]) \u2013 \nreduce_chain_kwargs (Optional[Mapping[str, Any]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.mapreduce.MapReduceChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, 
return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-110", "text": "chain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.MultiPromptChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, router_chain, destination_chains, default_chain, silent_errors=False)[source]\uf0c1\nBases: langchain.chains.router.base.MultiRouteChain\nA multi-route chain that uses an LLM router chain to choose amongst prompts.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nrouter_chain (langchain.chains.router.base.RouterChain) \u2013 \ndestination_chains (Mapping[str, langchain.chains.llm.LLMChain]) \u2013 \ndefault_chain (langchain.chains.llm.LLMChain) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-111", "text": "default_chain (langchain.chains.llm.LLMChain) \u2013 \nsilent_errors (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute default_chain: LLMChain [Required]\uf0c1\nDefault chain to use when router doesn\u2019t map input to one of the destinations.\nattribute destination_chains: Mapping[str, LLMChain] [Required]\uf0c1\nMap of name to candidate chains that inputs can be routed to.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute router_chain: RouterChain [Required]\uf0c1\nChain for deciding a destination chain and the input to it.\nattribute silent_errors: bool = False\uf0c1\nIf True, use default_chain when an invalid destination name is provided.\nDefaults to False.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-112", "text": "You can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-113", "text": "Parameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_prompts(llm, prompt_infos, 
default_chain=None, **kwargs)[source]\uf0c1\nConvenience constructor for instantiating from destination prompts.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt_infos (List[Dict[str, str]]) \u2013 \ndefault_chain (Optional[langchain.chains.llm.LLMChain]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.router.multi_prompt.MultiPromptChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-114", "text": "Return type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.MultiRetrievalQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, router_chain, destination_chains, default_chain, silent_errors=False)[source]\uf0c1\nBases: langchain.chains.router.base.MultiRouteChain\nA multi-route chain that uses an LLM router chain to choose amongst retrieval\nqa chains.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-115", "text": "verbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nrouter_chain (langchain.chains.router.llm_router.LLMRouterChain) \u2013 \ndestination_chains (Mapping[str, langchain.chains.retrieval_qa.base.BaseRetrievalQA]) \u2013 \ndefault_chain (langchain.chains.base.Chain) \u2013 \nsilent_errors (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute default_chain: Chain [Required]\uf0c1\nDefault chain to use when router doesn\u2019t map input to one of the destinations.\nattribute destination_chains: Mapping[str, BaseRetrievalQA] [Required]\uf0c1\nMap of name to candidate chains that inputs can be routed to.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute router_chain: LLMRouterChain [Required]\uf0c1\nChain for deciding a destination chain and the input to it.\nattribute silent_errors: bool = False\uf0c1\nIf True, use default_chain when an invalid destination name is provided.\nDefaults to False.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-116", "text": "If True, use default_chain when an invalid destination name is provided.\nDefaults to False.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-117", "text": "Parameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_retrievers(llm, 
retriever_infos, default_retriever=None, default_prompt=None, default_chain=None, **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nretriever_infos (List[Dict[str, Any]]) \u2013 \ndefault_retriever (Optional[langchain.schema.BaseRetriever]) \u2013 \ndefault_prompt (Optional[langchain.prompts.prompt.PromptTemplate]) \u2013 \ndefault_chain (Optional[langchain.chains.base.Chain]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.router.multi_retrieval_qa.MultiRetrievalQAChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-118", "text": "inputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-119", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.MultiRouteChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, router_chain, destination_chains, default_chain, silent_errors=False)[source]\uf0c1\nBases: langchain.chains.base.Chain\nUse a single chain to route an input to one of multiple candidate chains.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nrouter_chain (langchain.chains.router.base.RouterChain) \u2013 \ndestination_chains (Mapping[str, langchain.chains.base.Chain]) \u2013 \ndefault_chain (langchain.chains.base.Chain) \u2013 \nsilent_errors (bool) \u2013 \nReturn 
type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute default_chain: Chain [Required]\uf0c1\nDefault chain to use when none of the destination chains are suitable.\nattribute destination_chains: Mapping[str, Chain] [Required]\uf0c1\nChains that return final answer to inputs.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-120", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute router_chain: RouterChain [Required]\uf0c1\nChain that routes inputs to destination chains.\nattribute silent_errors: bool = False\uf0c1\nIf True, use default_chain when an invalid destination name is provided.\nDefaults to False.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-121", "text": "use the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-122", "text": "Parameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.NatBotChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, objective, llm=None, input_url_key='url', input_browser_content_key='browser_content', previous_command='', output_key='command')[source]\uf0c1\nBases: langchain.chains.base.Chain\nImplement an LLM driven browser.\nExample\nfrom langchain import NatBotChain", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-123", "text": "Implement an LLM driven browser.\nExample\nfrom langchain import NatBotChain\nnatbot = NatBotChain.from_default(\"Buy me a new hat.\")\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nobjective (str) \u2013 \nllm 
(Optional[langchain.base_language.BaseLanguageModel]) \u2013 \ninput_url_key (str) \u2013 \ninput_browser_content_key (str) \u2013 \nprevious_command (str) \u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute llm: Optional[BaseLanguageModel] = None\uf0c1\n[Deprecated] LLM wrapper to use.\nattribute llm_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-124", "text": "them along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute objective: str [Required]\uf0c1\nObjective that NatBot is tasked with completing.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-125", "text": "Return type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nexecute(url, 
browser_content)[source]\uf0c1\nFigure out next browser command to run.\nParameters\nurl (str) \u2013 URL of the site currently on.\nbrowser_content (str) \u2013 Content of the page as currently displayed by the browser.\nReturns\nNext browser command to run.\nReturn type\nstr\nExample\nbrowser_content = \"....\"\nllm_command = natbot.run(\"www.google.com\", browser_content)\nclassmethod from_default(objective, **kwargs)[source]\uf0c1\nLoad with default LLMChain.\nParameters\nobjective (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.natbot.base.NatBotChain\nclassmethod from_llm(llm, objective, **kwargs)[source]\uf0c1\nLoad from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nobjective (str) \u2013 \nkwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-126", "text": "objective (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.natbot.base.NatBotChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-127", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.NebulaGraphQAChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, graph, ngql_generation_chain, qa_chain, input_key='query', output_key='result')[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain for question-answering against a graph by generating nGQL statements.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ngraph (langchain.graphs.nebula_graph.NebulaGraph) \u2013 \nngql_generation_chain (langchain.chains.llm.LLMChain) \u2013 \nqa_chain (langchain.chains.llm.LLMChain) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 
\nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-128", "text": "Callback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute graph: NebulaGraph [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute ngql_generation_chain: LLMChain [Required]\uf0c1\nattribute qa_chain: LLMChain [Required]\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
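The NebulaGraphQAChain components documented here (graph, ngql_generation_chain, qa_chain) can be assembled end to end. The sketch below is illustrative rather than taken from the upstream reference: the graph space name, credentials, and address are placeholders, and actually running it requires the langchain and nebula3 packages, a reachable NebulaGraph instance, and an OpenAI API key. Imports are deferred into the function so the snippet parses without those dependencies installed.

```python
# Hedged sketch: building the NebulaGraphQAChain documented above via
# its from_llm classmethod. All connection values are placeholders.
def build_nebula_qa_chain():
    from langchain.chat_models import ChatOpenAI
    from langchain.graphs import NebulaGraph
    from langchain.chains import NebulaGraphQAChain

    graph = NebulaGraph(
        space="basketballplayer",  # placeholder graph space
        username="root",           # placeholder credentials
        password="nebula",
        address="127.0.0.1",       # placeholder NebulaGraph endpoint
        port=9669,
    )
    # graph is forwarded through **kwargs to the chain constructor; the
    # ngql_generation_chain and qa_chain are built from the default prompts.
    return NebulaGraphQAChain.from_llm(
        ChatOpenAI(temperature=0), graph=graph, verbose=True
    )
```

Once built, calling run() with a natural-language question generates an nGQL statement against the graph schema and passes the query results to the qa_chain for the final answer.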
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-129", "text": "response. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-130", "text": "classmethod from_llm(llm, *, qa_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template=\"You are an assistant that helps to form nice and human understandable answers.\\nThe information part contains the provided information that you must use to construct an answer.\\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\\nMake the answer sound as a response to the question. 
Do not mention that you based the result on the given information.\\nIf the provided information is empty, say that you don't know the answer.\\nInformation:\\n{context}\\n\\nQuestion: {question}\\nHelpful Answer:\", template_format='f-string', validate_template=True), ngql_prompt=PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template=\"Task:Generate NebulaGraph Cypher statement to query a graph database.\\n\\nInstructions:\\n\\nFirst, generate cypher then convert it to NebulaGraph Cypher dialect(rather than standard):\\n1. it requires explicit label specification when referring to node properties: v.`Foo`.name\\n2. it uses double equals sign for comparison: `==` rather than `=`\\nFor instance:\\n```diff\\n< MATCH (p:person)-[:directed]->(m:movie) WHERE m.name = 'The Godfather II'\\n< RETURN p.name;\\n---\\n> MATCH (p:`person`)-[:directed]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'\\n> RETURN p.`person`.`name`;\\n```\\n\\nUse only the provided relationship types and properties in the schema.\\nDo not use any other relationship types or properties that are not provided.\\nSchema:\\n{schema}\\nNote: Do not include any explanations or apologies in your responses.\\nDo not respond to", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-131", "text": "Do not include any explanations or apologies in your responses.\\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\\nDo not include any text except the generated Cypher statement.\\n\\nThe question is:\\n{question}\", template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-132", "text": "Initialize from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nqa_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 
\nngql_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-133", "text": "langchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. 
[\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.OpenAIModerationChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, client=None, model_name=None, error=False, input_key='input', output_key='output', openai_api_key=None, openai_organization=None)[source]\uf0c1\nBases: langchain.chains.base.Chain\nPass input through a moderation endpoint.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.chains import OpenAIModerationChain\nmoderation = OpenAIModerationChain()\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel_name (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-134", "text": "client (Any) \u2013 \nmodel_name (Optional[str]) \u2013 \nerror (bool) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nopenai_api_key (Optional[str]) \u2013 \nopenai_organization (Optional[str]) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
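The OpenAIModerationChain Example above can be expanded into a fuller sketch. The key string below is a placeholder, running it requires the openai package and a valid API key, and the import is deferred so the snippet parses without langchain installed.

```python
# Hedged sketch of OpenAIModerationChain usage; "my-api-key" is a placeholder.
def moderate(text):
    from langchain.chains import OpenAIModerationChain

    moderation = OpenAIModerationChain(openai_api_key="my-api-key")
    # Text in, text out: with error=False (the default), flagged input is
    # replaced by a warning string; with error=True an exception is raised
    # instead, which is useful for hard-failing pipelines.
    return moderation.run(text)
```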
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute error: bool = False\uf0c1\nWhether or not to error if bad content was found.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute model_name: Optional[str] = None\uf0c1\nModeration model name to use.\nattribute openai_api_key: Optional[str] = None\uf0c1\nattribute openai_organization: Optional[str] = None\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-135", "text": "attribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-136", "text": "tags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, 
Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-137", "text": "constructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.OpenAPIEndpointChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, api_request_chain, api_response_chain=None, api_operation, requests=None, param_mapping, return_intermediate_steps=False, instructions_key='instructions', output_key='output', max_text_length=None)[source]\uf0c1\nBases: langchain.chains.base.Chain, pydantic.main.BaseModel\nChain interacts with an OpenAPI endpoint using natural language.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \napi_request_chain (langchain.chains.llm.LLMChain) \u2013 \napi_response_chain (Optional[langchain.chains.llm.LLMChain]) \u2013 \napi_operation (langchain.tools.openapi.utils.api_models.APIOperation) \u2013 \nrequests (langchain.requests.Requests) \u2013 \nparam_mapping (langchain.chains.api.openapi.chain._ParamMapping) \u2013 \nreturn_intermediate_steps (bool) \u2013 \ninstructions_key (str) \u2013 \noutput_key (str) \u2013 \nmax_text_length (Optional[int]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-138", "text": "max_text_length (Optional[int]) \u2013 \nReturn type\nNone\nattribute api_operation: APIOperation [Required]\uf0c1\nattribute api_request_chain: LLMChain [Required]\uf0c1\nattribute api_response_chain: Optional[LLMChain] = None\uf0c1\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback 
manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute param_mapping: _ParamMapping [Required]\uf0c1\nattribute requests: Requests [Optional]\uf0c1\nattribute return_intermediate_steps: bool = False\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-139", "text": "will be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. 
If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-140", "text": "kwargs (Any) \u2013 \nReturn type\nstr\ndeserialize_json_input(serialized_args)[source]\uf0c1\nUse the serialized typescript dictionary.\nResolve the path, query params dict, and optional requestBody dict.\nParameters\nserialized_args (str) \u2013 \nReturn type\ndict\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_api_operation(operation, llm, requests=None, verbose=False, return_intermediate_steps=False, raw_response=False, callbacks=None, **kwargs)[source]\uf0c1\nCreate an OpenAPIEndpointChain from an operation and a spec.\nParameters\noperation (langchain.tools.openapi.utils.api_models.APIOperation) 
\u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nrequests (Optional[langchain.requests.Requests]) \u2013 \nverbose (bool) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nraw_response (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.api.openapi.chain.OpenAPIEndpointChain\nclassmethod from_url_and_method(spec_url, path, method, llm, requests=None, return_intermediate_steps=False, **kwargs)[source]\uf0c1\nCreate an OpenAPIEndpoint from a spec at the specified url.\nParameters\nspec_url (str) \u2013 \npath (str) \u2013 \nmethod (str) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nrequests (Optional[langchain.requests.Requests]) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.api.openapi.chain.OpenAPIEndpointChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-141", "text": "prep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file 
to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-142", "text": "Return a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-143", "text": "class langchain.chains.PALChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, llm=None, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Olivia has $23. She bought five bagels for $3 each. 
How much money does she have left?\"\"\"\\n\u00a0\u00a0\u00a0 money_initial = 23\\n\u00a0\u00a0\u00a0 bagels = 5\\n\u00a0\u00a0\u00a0 bagel_cost = 3\\n\u00a0\u00a0\u00a0 money_spent = bagels * bagel_cost\\n\u00a0\u00a0\u00a0 money_left = money_initial - money_spent\\n\u00a0\u00a0\u00a0 result = money_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\"\"\"\\n\u00a0\u00a0\u00a0 golf_balls_initial = 58\\n\u00a0\u00a0\u00a0 golf_balls_lost_tuesday = 23\\n\u00a0\u00a0\u00a0 golf_balls_lost_wednesday = 2\\n\u00a0\u00a0\u00a0 golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\\n\u00a0\u00a0\u00a0 result = golf_balls_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-144", "text": "computers were installed each day, from monday to thursday. How many computers are now in the server room?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. 
How many computers are now in the server room?\"\"\"\\n\u00a0\u00a0\u00a0 computers_initial = 9\\n\u00a0\u00a0\u00a0 computers_per_day = 5\\n\u00a0\u00a0\u00a0 num_days = 4\u00a0 # 4 days between monday and thursday\\n\u00a0\u00a0\u00a0 computers_added = computers_per_day * num_days\\n\u00a0\u00a0\u00a0 computers_total = computers_initial + computers_added\\n\u00a0\u00a0\u00a0 result = computers_total\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\"\"\"\\n\u00a0\u00a0\u00a0 toys_initial = 5\\n\u00a0\u00a0\u00a0 mom_toys = 2\\n\u00a0\u00a0\u00a0 dad_toys = 2\\n\u00a0\u00a0\u00a0 total_received = mom_toys + dad_toys\\n\u00a0\u00a0\u00a0 total_toys = toys_initial + total_received\\n\u00a0\u00a0\u00a0 result = total_toys\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\"\"\"\\n\u00a0\u00a0\u00a0 jason_lollipops_initial =", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-145", "text": "did Jason give to Denny?\"\"\"\\n\u00a0\u00a0\u00a0 jason_lollipops_initial = 20\\n\u00a0\u00a0\u00a0 jason_lollipops_after = 12\\n\u00a0\u00a0\u00a0 denny_lollipops = jason_lollipops_initial - jason_lollipops_after\\n\u00a0\u00a0\u00a0 result = denny_lollipops\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Leah had 32 chocolates and her sister had 42. 
If they ate 35, how many pieces do they have left in total?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\"\"\"\\n\u00a0\u00a0\u00a0 leah_chocolates = 32\\n\u00a0\u00a0\u00a0 sister_chocolates = 42\\n\u00a0\u00a0\u00a0 total_chocolates = leah_chocolates + sister_chocolates\\n\u00a0\u00a0\u00a0 chocolates_eaten = 35\\n\u00a0\u00a0\u00a0 chocolates_left = total_chocolates - chocolates_eaten\\n\u00a0\u00a0\u00a0 result = chocolates_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\"\"\"\\n\u00a0\u00a0\u00a0 cars_initial = 3\\n\u00a0\u00a0\u00a0 cars_arrived = 2\\n\u00a0\u00a0\u00a0 total_cars = cars_initial + cars_arrived\\n\u00a0\u00a0\u00a0 result = total_cars\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\\n\\n# solution in", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-146", "text": "21 trees. How many trees did the grove workers plant today?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. 
How many trees did the grove workers plant today?\"\"\"\\n\u00a0\u00a0\u00a0 trees_initial = 15\\n\u00a0\u00a0\u00a0 trees_after = 21\\n\u00a0\u00a0\u00a0 trees_added = trees_after - trees_initial\\n\u00a0\u00a0\u00a0 result = trees_added\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: {question}\\n\\n# solution in Python:\\n\\n\\n', template_format='f-string', validate_template=True), stop='\\n\\n', get_answer_expr='print(solution())', python_globals=None, python_locals=None, output_key='result', return_intermediate_steps=False)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-147", "text": "Bases: langchain.chains.base.Chain\nImplements Program-Aided Language Models.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nllm (Optional[langchain.base_language.BaseLanguageModel]) \u2013 \nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nstop (str) \u2013 \nget_answer_expr (str) \u2013 \npython_globals (Optional[Dict[str, Any]]) \u2013 \npython_locals (Optional[Dict[str, Any]]) \u2013 \noutput_key (str) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
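A minimal construction sketch for the PALChain documented here. It assumes the from_math_prompt helper available in this langchain version, which wraps the default math prompt shown above; an OpenAI API key is needed to actually run the chain, and imports are deferred so the snippet parses without langchain installed.

```python
# Hedged sketch: PALChain with the default math prompt. from_math_prompt is
# assumed from this langchain version; all runtime values are placeholders.
def build_pal_chain():
    from langchain import OpenAI
    from langchain.chains import PALChain

    llm = OpenAI(temperature=0, max_tokens=512)
    return PALChain.from_math_prompt(llm, verbose=True)
```

Calling run() with a word problem prompts the LLM to emit a solution() program, which the chain then executes via get_answer_expr ('print(solution())') to produce the numeric answer.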
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute get_answer_expr: str = 'print(solution())'\uf0c1\nattribute llm: Optional[BaseLanguageModel] = None\uf0c1\n[Deprecated]\nattribute llm_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-148", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-149", "text": "attribute prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\"\"\"\\n\u00a0\u00a0\u00a0 money_initial = 23\\n\u00a0\u00a0\u00a0 bagels = 5\\n\u00a0\u00a0\u00a0 bagel_cost = 3\\n\u00a0\u00a0\u00a0 money_spent = bagels * bagel_cost\\n\u00a0\u00a0\u00a0 money_left = money_initial - money_spent\\n\u00a0\u00a0\u00a0 result = money_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. 
How many golf balls did he have at the end of wednesday?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\"\"\"\\n\u00a0\u00a0\u00a0 golf_balls_initial = 58\\n\u00a0\u00a0\u00a0 golf_balls_lost_tuesday = 23\\n\u00a0\u00a0\u00a0 golf_balls_lost_wednesday = 2\\n\u00a0\u00a0\u00a0 golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\\n\u00a0\u00a0\u00a0 result = golf_balls_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"There were nine computers in the server room. Five more computers were installed", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-150", "text": "solution():\\n\u00a0\u00a0\u00a0 \"\"\"There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\"\"\"\\n\u00a0\u00a0\u00a0 computers_initial = 9\\n\u00a0\u00a0\u00a0 computers_per_day = 5\\n\u00a0\u00a0\u00a0 num_days = 4\u00a0 # 4 days between monday and thursday\\n\u00a0\u00a0\u00a0 computers_added = computers_per_day * num_days\\n\u00a0\u00a0\u00a0 computers_total = computers_initial + computers_added\\n\u00a0\u00a0\u00a0 result = computers_total\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Shawn has five toys. For Christmas, he got two toys each from his mom and dad. 
How many toys does he have now?\"\"\"\\n\u00a0\u00a0\u00a0 toys_initial = 5\\n\u00a0\u00a0\u00a0 mom_toys = 2\\n\u00a0\u00a0\u00a0 dad_toys = 2\\n\u00a0\u00a0\u00a0 total_received = mom_toys + dad_toys\\n\u00a0\u00a0\u00a0 total_toys = toys_initial + total_received\\n\u00a0\u00a0\u00a0 result = total_toys\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\"\"\"\\n\u00a0\u00a0\u00a0 jason_lollipops_initial = 20\\n\u00a0\u00a0\u00a0 jason_lollipops_after = 12\\n\u00a0\u00a0\u00a0 denny_lollipops = jason_lollipops_initial -", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-151", "text": "= 12\\n\u00a0\u00a0\u00a0 denny_lollipops = jason_lollipops_initial - jason_lollipops_after\\n\u00a0\u00a0\u00a0 result = denny_lollipops\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"Leah had 32 chocolates and her sister had 42. 
If they ate 35, how many pieces do they have left in total?\"\"\"\\n\u00a0\u00a0\u00a0 leah_chocolates = 32\\n\u00a0\u00a0\u00a0 sister_chocolates = 42\\n\u00a0\u00a0\u00a0 total_chocolates = leah_chocolates + sister_chocolates\\n\u00a0\u00a0\u00a0 chocolates_eaten = 35\\n\u00a0\u00a0\u00a0 chocolates_left = total_chocolates - chocolates_eaten\\n\u00a0\u00a0\u00a0 result = chocolates_left\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\"\"\"\\n\u00a0\u00a0\u00a0 cars_initial = 3\\n\u00a0\u00a0\u00a0 cars_arrived = 2\\n\u00a0\u00a0\u00a0 total_cars = cars_initial + cars_arrived\\n\u00a0\u00a0\u00a0 result = total_cars\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\\n\\n# solution in Python:\\n\\n\\ndef solution():\\n\u00a0\u00a0\u00a0 \"\"\"There are 15 trees in the grove. Grove workers will plant trees in the grove today. After", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-152", "text": "15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. 
How many trees did the grove workers plant today?\"\"\"\\n\u00a0\u00a0\u00a0 trees_initial = 15\\n\u00a0\u00a0\u00a0 trees_after = 21\\n\u00a0\u00a0\u00a0 trees_added = trees_after - trees_initial\\n\u00a0\u00a0\u00a0 result = trees_added\\n\u00a0\u00a0\u00a0 return result\\n\\n\\n\\n\\n\\nQ: {question}\\n\\n# solution in Python:\\n\\n\\n', template_format='f-string', validate_template=True)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-153", "text": "[Deprecated]\nattribute python_globals: Optional[Dict[str, Any]] = None\uf0c1\nattribute python_locals: Optional[Dict[str, Any]] = None\uf0c1\nattribute return_intermediate_steps: bool = False\uf0c1\nattribute stop: str = '\\n\\n'\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. 
If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-154", "text": "Return type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_colored_object_prompt(llm, **kwargs)[source]\uf0c1\nLoad PAL from colored object prompt.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.pal.base.PALChain\nclassmethod from_math_prompt(llm, **kwargs)[source]\uf0c1\nLoad PAL from math prompt.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.pal.base.PALChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-155", "text": "Validate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-156", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.QAGenerationChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, text_splitter=, input_key='text', output_key='questions', k=None)[source]\uf0c1\nBases: langchain.chains.base.Chain\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \ntext_splitter (langchain.text_splitter.TextSplitter) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nk (Optional[int]) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute input_key: str = 'text'\uf0c1\nattribute k: Optional[int] = None\uf0c1\nattribute llm_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. 
Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-157", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute output_key: str = 'questions'\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute text_splitter: TextSplitter = \uf0c1\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. 
If not provided, will\nuse the callbacks provided to the chain.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-158", "text": "use the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, prompt=None, **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (Optional[langchain.prompts.base.BasePromptTemplate]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.qa_generation.base.QAGenerationChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-159", "text": "inputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) 
\u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty input_keys: List[str]\uf0c1\nInput keys this chain expects.\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-160", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\uf0c1\nOutput keys this chain expects.\nclass langchain.chains.QAWithSourcesChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, question_key='question', input_docs_key='docs', answer_key='answer', sources_answer_key='sources', return_source_documents=False)[source]\uf0c1\nBases: langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain\nQuestion answering with sources over documents.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ncombine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \nquestion_key (str) \u2013 \ninput_docs_key (str) \u2013 \nanswer_key (str) \u2013 \nsources_answer_key (str) \u2013 \nreturn_source_documents (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_documents_chain: BaseCombineDocumentsChain [Required]\uf0c1\nChain to use to combine documents.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-161", "text": "Chain to use to combine documents.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute return_source_documents: bool = False\uf0c1\nReturn the source documents.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. 
If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-162", "text": "use the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)\uf0c1\nLoad chain from chain type.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nchain_type (str) \u2013 \nchain_type_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.qa_with_sources.base.BaseQAWithSourcesChain", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-163", "text": "classmethod from_llm(llm, 
document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\\nSource: {source}', template_format='f-string', validate_template=True), question_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. \\nReturn any relevant text verbatim.\\n{context}\\nQuestion: {question}\\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \\nIf you don\\'t know the answer, just say that you don\\'t know. Don\\'t try to make up an answer.\\nALWAYS return a \"SOURCES\" part in your answer.\\n\\nQUESTION: Which state/country\\'s law governs the interpretation of the contract?\\n=========\\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in\u00a0 relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an\u00a0 injunction or other relief to protect its Intellectual Property Rights.\\nSource: 28-pl\\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other)\u00a0 right or remedy.\\n\\n11.7 Severability. 
The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation\u00a0 in force of the remainder of the term (if", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-164", "text": "of this Agreement shall not affect the continuation\u00a0 in force of the remainder of the term (if any) and this Agreement.\\n\\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any\u00a0 kind between the parties.\\n\\n11.9 No Third-Party Beneficiaries.\\nSource: 30-pl\\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as\u00a0 defined in Clause 8.5) or that such a violation is reasonably likely to occur,\\nSource: 4-pl\\n=========\\nFINAL ANSWER: This Agreement is governed by English law.\\nSOURCES: 28-pl\\n\\nQUESTION: What did the president say about Michael Jackson?\\n=========\\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\u00a0 \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. 
\\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-165", "text": "their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\\nSource: 0-pl\\nContent: And we won\u2019t stop. \\n\\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \\n\\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease.\u00a0 \\n\\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans.\u00a0 \\n\\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \\n\\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \\n\\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \\n\\nOfficer Mora was 27 years old. \\n\\nOfficer Rivera was 22. \\n\\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \\n\\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\\nSource: 24-pl\\nContent: And a proud Ukrainian people, who have known 30 years\u00a0 of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.\u00a0 \\n\\nTo all Americans, I will be honest with you, as I\u2019ve always promised. 
A Russian dictator, invading a foreign country, has costs around", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-166", "text": "you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \\n\\nAnd I\u2019m taking robust action to make sure the pain of our sanctions\u00a0 is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \\n\\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.\u00a0 \\n\\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.\u00a0 \\n\\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \\n\\nBut I want you to know that we are going to be okay.\\nSource: 5-pl\\nContent: More support for patients and families. \\n\\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \\n\\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more.\u00a0 \\n\\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \\n\\nA unity agenda for the nation. \\n\\nWe can do this. \\n\\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \\n\\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \\n\\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \\n\\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \\n\\nNow is the hour. \\n\\nOur moment of responsibility. 
\\n\\nOur", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-167", "text": "\\n\\nNow is the hour. \\n\\nOur moment of responsibility. \\n\\nOur test of resolve and conscience, of history itself. \\n\\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \\n\\nWell I know this nation.\\nSource: 34-pl\\n=========\\nFINAL ANSWER: The president did not mention Michael Jackson.\\nSOURCES:\\n\\nQUESTION: {question}\\n=========\\n{summaries}\\n=========\\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-168", "text": "Construct the chain from an LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ndocument_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nquestion_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ncombine_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.qa_with_sources.base.BaseQAWithSourcesChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to 
file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"}
\ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-170", "text": "verbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ncombine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nreturn_source_documents (bool) \u2013 \nretriever (langchain.schema.BaseRetriever) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_documents_chain: BaseCombineDocumentsChain [Required]\uf0c1\nChain to use to combine the documents.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute retriever: BaseRetriever [Required]\uf0c1\nattribute return_source_documents: bool = False\uf0c1\nReturn the source documents.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-171", "text": "attribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-172", "text": "tags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)\uf0c1\nLoad chain from chain type.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nchain_type (str) \u2013 \nchain_type_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.retrieval_qa.base.BaseRetrievalQA\nclassmethod from_llm(llm, prompt=None, **kwargs)\uf0c1\nInitialize from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (Optional[langchain.prompts.prompt.PromptTemplate]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.retrieval_qa.base.BaseRetrievalQA\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 
\nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-173", "text": "tags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.RetrievalQAWithSourcesChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, question_key='question', input_docs_key='docs', answer_key='answer', sources_answer_key='sources', return_source_documents=False, retriever, reduce_k_below_max_tokens=False, max_tokens_limit=3375)[source]\uf0c1\nBases: langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain\nQuestion-answering with sources over an index.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-174", "text": "Parameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ncombine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \nquestion_key (str) \u2013 \ninput_docs_key (str) \u2013 \nanswer_key (str) \u2013 \nsources_answer_key (str) \u2013 \nreturn_source_documents (bool) \u2013 \nretriever (langchain.schema.BaseRetriever) \u2013 \nreduce_k_below_max_tokens (bool) \u2013 \nmax_tokens_limit (int) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_documents_chain: BaseCombineDocumentsChain [Required]\uf0c1\nChain to use to combine documents.\nattribute max_tokens_limit: int = 3375\uf0c1\nRestrict the docs to return from store based on tokens,\nenforced only for StuffDocumentChain and if reduce_k_below_max_tokens is set to true\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-175", "text": "and at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute reduce_k_below_max_tokens: bool = False\uf0c1\nReduce the number of results to return from store based on tokens limit\nattribute retriever: langchain.schema.BaseRetriever [Required]\uf0c1\nIndex to connect to.\nattribute return_source_documents: bool = False\uf0c1\nReturn the source documents.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-176", "text": "use the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)\uf0c1\nLoad chain from chain type.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nchain_type (str) \u2013 \nchain_type_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.qa_with_sources.base.BaseQAWithSourcesChain", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-177", "text": "classmethod from_llm(llm, document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\\nSource: {source}', template_format='f-string', validate_template=True), question_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. 
\\nReturn any relevant text verbatim.\\n{context}\\nQuestion: {question}\\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \\nIf you don\\'t know the answer, just say that you don\\'t know. Don\\'t try to make up an answer.\\nALWAYS return a \"SOURCES\" part in your answer.\\n\\nQUESTION: Which state/country\\'s law governs the interpretation of the contract?\\n=========\\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in\u00a0 relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an\u00a0 injunction or other relief to protect its Intellectual Property Rights.\\nSource: 28-pl\\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other)\u00a0 right or remedy.\\n\\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation\u00a0 in force of the remainder of the term (if", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-178", "text": "of this Agreement shall not affect the continuation\u00a0 in force of the remainder of the term (if any) and this Agreement.\\n\\n11.8 No Agency. 
Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any\u00a0 kind between the parties.\\n\\n11.9 No Third-Party Beneficiaries.\\nSource: 30-pl\\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as\u00a0 defined in Clause 8.5) or that such a violation is reasonably likely to occur,\\nSource: 4-pl\\n=========\\nFINAL ANSWER: This Agreement is governed by English law.\\nSOURCES: 28-pl\\n\\nQUESTION: What did the president say about Michael Jackson?\\n=========\\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\u00a0 \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-179", "text": "their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens blocking tanks with their bodies. 
Everyone from students to retirees teachers turned soldiers defending their homeland.\\nSource: 0-pl\\nContent: And we won\u2019t stop. \\n\\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \\n\\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease.\u00a0 \\n\\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans.\u00a0 \\n\\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \\n\\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \\n\\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \\n\\nOfficer Mora was 27 years old. \\n\\nOfficer Rivera was 22. \\n\\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \\n\\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\\nSource: 24-pl\\nContent: And a proud Ukrainian people, who have known 30 years\u00a0 of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.\u00a0 \\n\\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-180", "text": "you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \\n\\nAnd I\u2019m taking robust action to make sure the pain of our sanctions\u00a0 is targeted at Russia\u2019s economy. 
And I will use every tool at our disposal to protect American businesses and consumers. \\n\\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.\u00a0 \\n\\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.\u00a0 \\n\\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \\n\\nBut I want you to know that we are going to be okay.\\nSource: 5-pl\\nContent: More support for patients and families. \\n\\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \\n\\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more.\u00a0 \\n\\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \\n\\nA unity agenda for the nation. \\n\\nWe can do this. \\n\\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \\n\\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \\n\\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \\n\\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \\n\\nNow is the hour. \\n\\nOur moment of responsibility. \\n\\nOur", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-181", "text": "\\n\\nNow is the hour. \\n\\nOur moment of responsibility. \\n\\nOur test of resolve and conscience, of history itself. \\n\\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\\n\\nWell I know this nation.\\nSource: 34-pl\\n=========\\nFINAL ANSWER: The president did not mention Michael Jackson.\\nSOURCES:\\n\\nQUESTION: {question}\\n=========\\n{summaries}\\n=========\\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-182", "text": "Construct the chain from an LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ndocument_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nquestion_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ncombine_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.qa_with_sources.base.BaseQAWithSourcesChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"}
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute memory: Optional[BaseMemory] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-184", "text": "for full details.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. 
Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-185", "text": "to False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync aroute(inputs, callbacks=None)[source]\uf0c1\nParameters\ninputs (Dict[str, Any]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nlangchain.chains.router.base.Route\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, 
str]\nroute(inputs, callbacks=None)[source]\uf0c1\nParameters\ninputs (Dict[str, Any]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-186", "text": "Parameters\ninputs (Dict[str, Any]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nlangchain.chains.router.base.Route\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nabstract property input_keys: List[str]\uf0c1\nInput keys this chain expects.\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-187", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nproperty output_keys: List[str]\uf0c1\nOutput keys this chain expects.\nclass langchain.chains.SQLDatabaseChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, llm_chain, llm=None, database, prompt=None, top_k=5, input_key='query', output_key='result', return_intermediate_steps=False, return_direct=False, use_query_checker=False, query_checker_prompt=None)[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain for interacting with SQL Database.\nExample\nfrom langchain import SQLDatabaseChain, OpenAI, SQLDatabase\ndb = SQLDatabase(...)\ndb_chain = SQLDatabaseChain.from_llm(OpenAI(), db)\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nllm (Optional[langchain.base_language.BaseLanguageModel]) \u2013 \ndatabase (langchain.sql_database.SQLDatabase) \u2013 \nprompt (Optional[langchain.prompts.base.BasePromptTemplate]) \u2013 \ntop_k (int) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nreturn_direct (bool) \u2013 \nuse_query_checker (bool) \u2013 \nquery_checker_prompt (Optional[langchain.prompts.base.BasePromptTemplate]) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-188", "text": "Return 
type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute database: SQLDatabase [Required]\uf0c1\nSQL Database to connect to.\nattribute llm: Optional[BaseLanguageModel] = None\uf0c1\n[Deprecated] LLM wrapper to use.\nattribute llm_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute prompt: Optional[BasePromptTemplate] = None\uf0c1\n[Deprecated] Prompt to use to translate natural language to SQL.\nattribute query_checker_prompt: Optional[BasePromptTemplate] = None\uf0c1\nThe prompt template that should be used by the query checker\nattribute return_direct: bool = False\uf0c1\nWhether or not to return the result of querying the SQL table directly.\nattribute return_intermediate_steps: bool = False\uf0c1\nWhether or not to return the intermediate steps along with the final answer.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. 
Defaults to None.\nThese tags will be associated with each call to this chain,", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-189", "text": "These tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute top_k: int = 5\uf0c1\nNumber of results to return from the query\nattribute use_query_checker: bool = False\uf0c1\nWhether or not the query checker tool should be used to attempt\nto fix the initial SQL from the LLM.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-190", "text": "Parameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, db, prompt=None, **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ndb (langchain.sql_database.SQLDatabase) \u2013 \nprompt (Optional[langchain.prompts.base.BasePromptTemplate]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.sql_database.base.SQLDatabaseChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013", "source": 
"https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-191", "text": "Parameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.SQLDatabaseSequentialChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, decider_chain, sql_chain, input_key='query', output_key='result', return_intermediate_steps=False)[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain for querying SQL database that is a sequential chain.\nThe chain is as follows:", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-192", "text": "Chain for querying SQL database that is a sequential chain.\nThe chain is as follows:\n1. 
Based on the query, determine which tables to use.\n2. Based on those tables, call the normal SQL database chain.\nThis is useful in cases where the number of tables in the database is large.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ndecider_chain (langchain.chains.llm.LLMChain) \u2013 \nsql_chain (langchain.chains.sql_database.base.SQLDatabaseChain) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute decider_chain: LLMChain [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. 
At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-193", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nattribute return_intermediate_steps: bool = False\uf0c1\nattribute sql_chain: SQLDatabaseChain [Required]\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-194", "text": "Call the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-195", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm(llm, database, query_prompt=PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. 
You can order the results by a relevant column to return the most interesting examples in the database.\\n\\nNever query for all the columns from a specific table, only ask for the few relevant columns given the question.\\n\\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\\n\\nUse the following format:\\n\\nQuestion: Question here\\nSQLQuery: SQL Query to run\\nSQLResult: Result of the SQLQuery\\nAnswer: Final answer here\\n\\nOnly use the following tables:\\n{table_info}\\n\\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt=PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\\n\\nQuestion: {query}\\n\\nTable Names: {table_names}\\n\\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs)[source]\uf0c1\nLoad the necessary chains.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-196", "text": "Parameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ndatabase (langchain.sql_database.SQLDatabase) \u2013 \nquery_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ndecider_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.sql_database.base.SQLDatabaseSequentialChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs 
(Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-197", "text": "langchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.SequentialChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, chains, input_variables, output_variables, return_all=False)[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain where the outputs of one chain feed directly into the next.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nchains (List[langchain.chains.base.Chain]) \u2013 \ninput_variables (List[str]) \u2013 \noutput_variables (List[str]) \u2013 \nreturn_all (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-198", "text": "Callback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute chains: List[langchain.chains.base.Chain] [Required]\uf0c1\nattribute input_variables: List[str] [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. 
At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute return_all: bool = False\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-199", "text": "response. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-200", "text": "inputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-201", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.SimpleSequentialChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, chains, strip_outputs=False, input_key='input', output_key='output')[source]\uf0c1\nBases: langchain.chains.base.Chain\nSimple chain where the outputs of one step feed directly into the next.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nchains (List[langchain.chains.base.Chain]) \u2013 \nstrip_outputs (bool) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use 
callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute chains: List[langchain.chains.base.Chain] [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-202", "text": "them along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute strip_outputs: bool = False\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None.\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not to run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. 
If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-203", "text": "Call the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text 
out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-204", "text": "Return type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.TransformChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_variables, output_variables, transform)[source]\uf0c1\nBases: langchain.chains.base.Chain\nChain that transforms the chain output.\nExample\nfrom langchain import TransformChain\ntransform_chain = TransformChain(input_variables=[\"text\"],\n output_variables=[\"entities\"], transform=func)\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-205", "text": "verbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ninput_variables (List[str]) \u2013 \noutput_variables (List[str]) \u2013 \ntransform (Callable[[Dict[str, str]], Dict[str, str]]) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute input_variables: List[str] [Required]\uf0c1\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. 
At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute output_variables: List[str] [Required]\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute transform: Callable[[Dict[str, str]], Dict[str, str]] [Required]\uf0c1\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-206", "text": "will be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-207", "text": "kwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-208", "text": "property lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.VectorDBQA(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, input_key='query', output_key='result', return_source_documents=False, vectorstore, k=4, search_type='similarity', search_kwargs=None)[source]\uf0c1\nBases: langchain.chains.retrieval_qa.base.BaseRetrievalQA\nChain for question-answering against a vector database.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ncombine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 
\nreturn_source_documents (bool) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nk (int) \u2013 \nsearch_type (str) \u2013 \nsearch_kwargs (Dict[str, Any]) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-209", "text": "Deprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_documents_chain: BaseCombineDocumentsChain [Required]\uf0c1\nChain to use to combine the documents.\nattribute k: int = 4\uf0c1\nNumber of documents to query for.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute return_source_documents: bool = False\uf0c1\nReturn the source documents.\nattribute search_kwargs: Dict[str, Any] [Optional]\uf0c1\nExtra search args.\nattribute search_type: str = 'similarity'\uf0c1\nSearch type to use over vectorstore. similarity or mmr.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute vectorstore: VectorStore [Required]\uf0c1\nVector Database to connect to.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-210", "text": "Whether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-211", "text": "tags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)\uf0c1\nLoad chain from chain type.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nchain_type (str) \u2013 \nchain_type_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.retrieval_qa.base.BaseRetrievalQA\nclassmethod from_llm(llm, prompt=None, **kwargs)\uf0c1\nInitialize from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (Optional[langchain.prompts.prompt.PromptTemplate]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.retrieval_qa.base.BaseRetrievalQA\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 
\nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-212", "text": "tags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.VectorDBQAWithSourcesChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, combine_documents_chain, question_key='question', input_docs_key='docs', answer_key='answer', sources_answer_key='sources', return_source_documents=False, vectorstore, k=4, reduce_k_below_max_tokens=False, max_tokens_limit=3375, search_kwargs=None)[source]\uf0c1\nBases: langchain.chains.qa_with_sources.base.BaseQAWithSourcesChain\nQuestion-answering with sources over a vector database.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-213", "text": "Question-answering with sources over a vector database.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ncombine_documents_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \nquestion_key (str) \u2013 \ninput_docs_key (str) \u2013 \nanswer_key (str) \u2013 \nsources_answer_key (str) \u2013 \nreturn_source_documents (bool) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nk (int) \u2013 \nreduce_k_below_max_tokens (bool) \u2013 \nmax_tokens_limit (int) \u2013 \nsearch_kwargs (Dict[str, Any]) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute combine_documents_chain: BaseCombineDocumentsChain [Required]\uf0c1\nChain to use to combine documents.\nattribute k: int = 4\uf0c1\nNumber of results to return from store\nattribute max_tokens_limit: int = 3375\uf0c1\nRestrict the docs to return from store based on tokens,\nenforced only for StuffDocumentChain and if reduce_k_below_max_tokens is set to true", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-214", "text": "enforced only for StuffDocumentChain and if reduce_k_below_max_tokens is set to true\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute reduce_k_below_max_tokens: bool = False\uf0c1\nReduce the number of results to return from store based on tokens limit\nattribute return_source_documents: bool = False\uf0c1\nReturn the source documents.\nattribute search_kwargs: Dict[str, Any] [Optional]\uf0c1\nExtra search args.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute vectorstore: langchain.vectorstores.base.VectorStore [Required]\uf0c1\nVector Database to connect to.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-215", "text": "return_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_chain_type(llm, chain_type='stuff', chain_type_kwargs=None, **kwargs)\uf0c1\nLoad chain from chain type.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nchain_type (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-216", "text": "chain_type (str) \u2013 \nchain_type_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.qa_with_sources.base.BaseQAWithSourcesChain", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-217", "text": "classmethod from_llm(llm, document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\\nSource: {source}', template_format='f-string', validate_template=True), question_prompt=PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template='Use the following portion of a long document to see if any of the text is relevant to answer the question. 
\\nReturn any relevant text verbatim.\\n{context}\\nQuestion: {question}\\nRelevant text, if any:', template_format='f-string', validate_template=True), combine_prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \\nIf you don\\'t know the answer, just say that you don\\'t know. Don\\'t try to make up an answer.\\nALWAYS return a \"SOURCES\" part in your answer.\\n\\nQUESTION: Which state/country\\'s law governs the interpretation of the contract?\\n=========\\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in\u00a0 relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an\u00a0 injunction or other relief to protect its Intellectual Property Rights.\\nSource: 28-pl\\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other)\u00a0 right or remedy.\\n\\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation\u00a0 in force of the remainder of the term (if", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-218", "text": "of this Agreement shall not affect the continuation\u00a0 in force of the remainder of the term (if any) and this Agreement.\\n\\n11.8 No Agency. 
Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any\u00a0 kind between the parties.\\n\\n11.9 No Third-Party Beneficiaries.\\nSource: 30-pl\\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as\u00a0 defined in Clause 8.5) or that such a violation is reasonably likely to occur,\\nSource: 4-pl\\n=========\\nFINAL ANSWER: This Agreement is governed by English law.\\nSOURCES: 28-pl\\n\\nQUESTION: What did the president say about Michael Jackson?\\n=========\\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\u00a0 \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-219", "text": "their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens blocking tanks with their bodies. 
Everyone from students to retirees teachers turned soldiers defending their homeland.\\nSource: 0-pl\\nContent: And we won\u2019t stop. \\n\\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \\n\\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease.\u00a0 \\n\\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans.\u00a0 \\n\\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \\n\\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \\n\\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \\n\\nOfficer Mora was 27 years old. \\n\\nOfficer Rivera was 22. \\n\\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \\n\\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\\nSource: 24-pl\\nContent: And a proud Ukrainian people, who have known 30 years\u00a0 of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.\u00a0 \\n\\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-220", "text": "you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \\n\\nAnd I\u2019m taking robust action to make sure the pain of our sanctions\u00a0 is targeted at Russia\u2019s economy. 
And I will use every tool at our disposal to protect American businesses and consumers. \\n\\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.\u00a0 \\n\\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies.\u00a0 \\n\\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \\n\\nBut I want you to know that we are going to be okay.\\nSource: 5-pl\\nContent: More support for patients and families. \\n\\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \\n\\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more.\u00a0 \\n\\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \\n\\nA unity agenda for the nation. \\n\\nWe can do this. \\n\\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \\n\\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \\n\\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \\n\\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \\n\\nNow is the hour. \\n\\nOur moment of responsibility. \\n\\nOur", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-221", "text": "\\n\\nNow is the hour. \\n\\nOur moment of responsibility. \\n\\nOur test of resolve and conscience, of history itself. \\n\\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\\n\\nWell I know this nation.\\nSource: 34-pl\\n=========\\nFINAL ANSWER: The president did not mention Michael Jackson.\\nSOURCES:\\n\\nQUESTION: {question}\\n=========\\n{summaries}\\n=========\\nFINAL ANSWER:', template_format='f-string', validate_template=True), **kwargs)\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-222", "text": "Construct the chain from an LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ndocument_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nquestion_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ncombine_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.qa_with_sources.base.BaseQAWithSourcesChain\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-223", "text": "Return type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nlangchain.chains.create_extraction_chain(schema, llm)[source]\uf0c1\nCreates a chain that extracts information from a passage.\nParameters\nschema (dict) \u2013 The schema of the entities to extract.\nllm (langchain.base_language.BaseLanguageModel) \u2013 The language model to use.\nReturns\nChain that can be used to extract information from a passage.\nReturn type\nlangchain.chains.base.Chain\nlangchain.chains.create_extraction_chain_pydantic(pydantic_schema, llm)[source]\uf0c1\nCreates a chain that extracts information from a passage using pydantic schema.\nParameters\npydantic_schema (Any) \u2013 The pydantic schema of the entities to extract.\nllm (langchain.base_language.BaseLanguageModel) \u2013 The language model to use.\nReturns\nChain that can be used to extract information from a passage.\nReturn type\nlangchain.chains.base.Chain\nlangchain.chains.create_tagging_chain(schema, 
llm)[source]\uf0c1\nCreates a chain that extracts information from a passage.\nParameters\nschema (dict) \u2013 The schema of the entities to extract.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-224", "text": "Parameters\nschema (dict) \u2013 The schema of the entities to extract.\nllm (langchain.base_language.BaseLanguageModel) \u2013 The language model to use.\nReturns\nChain (LLMChain) that can be used to extract information from a passage.\nReturn type\nlangchain.chains.base.Chain\nlangchain.chains.create_tagging_chain_pydantic(pydantic_schema, llm)[source]\uf0c1\nCreates a chain that extracts information from a passage.\nParameters\npydantic_schema (Any) \u2013 The pydantic schema of the entities to extract.\nllm (langchain.base_language.BaseLanguageModel) \u2013 The language model to use.\nReturns\nChain (LLMChain) that can be used to extract information from a passage.\nReturn type\nlangchain.chains.base.Chain\nlangchain.chains.load_chain(path, **kwargs)[source]\uf0c1\nUnified method for loading a chain from LangChainHub or local fs.\nParameters\npath (Union[str, pathlib.Path]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.chains.base.Chain\nlangchain.chains.create_citation_fuzzy_match_chain(llm)[source]\uf0c1\nCreate a citation fuzzy match chain.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 Language model to use for the chain.\nReturns\nChain (LLMChain) that can be used to answer questions with citations.\nReturn type\nlangchain.chains.llm.LLMChain\nlangchain.chains.create_qa_with_structure_chain(llm, schema, output_parser='base', prompt=None)[source]\uf0c1\nCreate a question answering chain that returns an answer with sources.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 Language model to use for the chain.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-225", "text": "schema (Union[dict, 
Type[pydantic.main.BaseModel]]) \u2013 Pydantic schema to use for the output.\noutput_parser (str) \u2013 Output parser to use. Should be one of pydantic or base.\nDefault to base.\nprompt (Optional[Union[langchain.prompts.prompt.PromptTemplate, langchain.prompts.chat.ChatPromptTemplate]]) \u2013 Optional prompt to use for the chain.\nReturn type\nlangchain.chains.llm.LLMChain\nReturns:\nlangchain.chains.create_qa_with_sources_chain(llm, **kwargs)[source]\uf0c1\nCreate a question answering chain that returns an answer with sources.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 Language model to use for the chain.\n**kwargs \u2013 Keyword arguments to pass to create_qa_with_structure_chain.\nkwargs (Any) \u2013 \nReturns\nChain (LLMChain) that can be used to answer questions with citations.\nReturn type\nlangchain.chains.llm.LLMChain\nclass langchain.chains.StuffDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, document_prompt=None, document_variable_name, document_separator='\\n\\n')[source]\uf0c1\nBases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain\nChain that combines documents by stuffing into context.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-226", "text": "input_key (str) \u2013 \noutput_key (str) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \ndocument_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \ndocument_variable_name (str) \u2013 
\ndocument_separator (str) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute document_prompt: langchain.prompts.base.BasePromptTemplate [Optional]\uf0c1\nPrompt to use to format each document.\nattribute document_separator: str = '\\n\\n'\uf0c1\nThe string with which to join the formatted documents\nattribute document_variable_name: str [Required]\uf0c1\nThe variable name in the llm_chain to put the documents in.\nIf only one variable in the llm_chain, this need not be provided.\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nLLM wrapper to use after formatting documents.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute tags: Optional[List[str]] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-227", "text": "for the full catalog.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nasync acombine_docs(docs, callbacks=None, **kwargs)[source]\uf0c1\nStuff all documents into one prompt and pass to LLM.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-228", "text": "kwargs (Any) \u2013 \nReturn type\nTuple[str, dict]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ncombine_docs(docs, callbacks=None, **kwargs)[source]\uf0c1\nStuff all documents into one prompt and pass to LLM.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTuple[str, dict]\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) 
\u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-229", "text": "return_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nprompt_length(docs, **kwargs)[source]\uf0c1\nGet the prompt length by formatting the prompt.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nOptional[int]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-230", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.MapRerankDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, document_variable_name, rank_key, answer_key, metadata_keys=None, return_intermediate_steps=False)[source]\uf0c1\nBases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain\nCombining documents by mapping a chain over them, then reranking results.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \ndocument_variable_name (str) \u2013 \nrank_key (str) \u2013 \nanswer_key (str) \u2013 \nmetadata_keys (Optional[List[str]]) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nReturn type\nNone\nattribute answer_key: str [Required]\uf0c1\nKey in output of llm_chain to return as answer.\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-231", "text": "Each custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute document_variable_name: str [Required]\uf0c1\nThe variable name in the llm_chain to put the documents in.\nIf only one variable in the llm_chain, this need not be provided.\nattribute llm_chain: LLMChain [Required]\uf0c1\nChain to apply to each document individually.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute metadata_keys: Optional[List[str]] = None\uf0c1\nattribute rank_key: str [Required]\uf0c1\nKey in output of llm_chain to rank on.\nattribute return_intermediate_steps: bool = False\uf0c1\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. 
Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-232", "text": "only one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nasync acombine_docs(docs, callbacks=None, **kwargs)[source]\uf0c1\nCombine documents in a map rerank manner.\nCombine by mapping first chain over all documents, then reranking the results.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTuple[str, dict]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-233", "text": "tags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ncombine_docs(docs, callbacks=None, **kwargs)[source]\uf0c1\nCombine documents in a map rerank manner.\nCombine by mapping first chain over all documents, then reranking the results.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTuple[str, dict]\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn 
type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nprompt_length(docs, **kwargs)\uf0c1\nReturn the prompt length given the documents passed in.\nReturns None if the method does not depend on the prompt length.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nOptional[int]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-234", "text": "kwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.MapReduceDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', llm_chain, combine_document_chain, collapse_document_chain=None, document_variable_name, return_intermediate_steps=False)[source]\uf0c1\nBases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain\nCombining documents by mapping a chain over them, then combining results.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-235", "text": "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \ncombine_document_chain (langchain.chains.combine_documents.base.BaseCombineDocumentsChain) \u2013 \ncollapse_document_chain (Optional[langchain.chains.combine_documents.base.BaseCombineDocumentsChain]) \u2013 \ndocument_variable_name (str) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of callback handlers (or callback manager). 
Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute collapse_document_chain: Optional[BaseCombineDocumentsChain] = None\uf0c1\nChain to use to collapse intermediary results if needed.\nIf None, will use the combine_document_chain.\nattribute combine_document_chain: BaseCombineDocumentsChain [Required]\uf0c1\nChain to use to combine results of applying llm_chain to documents.\nattribute document_variable_name: str [Required]\uf0c1\nThe variable name in the llm_chain to put the documents in.\nIf only one variable in the llm_chain, this need not be provided.\nattribute llm_chain: LLMChain [Required]\uf0c1\nChain to apply to each document individually.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-236", "text": "Optional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs\nfor the full catalog.\nattribute return_intermediate_steps: bool = False\uf0c1\nReturn the results of the map steps in the output.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. 
In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. Defaults", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-237", "text": "include_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nasync acombine_docs(docs, callbacks=None, **kwargs)[source]\uf0c1\nCombine documents in a map reduce manner.\nCombine by mapping first chain over all documents, then reducing the results.\nThis reducing can be done recursively if needed (if there are many documents).\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTuple[str, dict]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ncombine_docs(docs, token_max=3000, callbacks=None, **kwargs)[source]\uf0c1\nCombine documents in a map reduce manner.\nCombine by mapping first chain over all documents, then reducing the results.\nThis reducing can be done recursively if needed (if there are many documents).\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \ntoken_max (int) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-238", "text": "docs (List[langchain.schema.Document]) \u2013 \ntoken_max (int) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nTuple[str, dict]\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nprompt_length(docs, **kwargs)\uf0c1\nReturn the prompt length given the documents passed in.\nReturns None if the method does not depend on the prompt length.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nOptional[int]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-239", "text": "Example:\n.. code-block:: python\nchain.save(file_path=\u201dpath/chain.yaml\u201d)\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. 
These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\neg. [\u201clangchain\u201d, \u201cllms\u201d, \u201copenai\u201d]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. {\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chains.RefineDocumentsChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, input_key='input_documents', output_key='output_text', initial_llm_chain, refine_llm_chain, document_variable_name, initial_response_name, document_prompt=None, return_intermediate_steps=False)[source]\uf0c1\nBases: langchain.chains.combine_documents.base.BaseCombineDocumentsChain\nCombine documents by doing a first pass and then refining on more documents.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \ninput_key (str) \u2013 \noutput_key (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-240", "text": "input_key (str) \u2013 \noutput_key (str) \u2013 \ninitial_llm_chain (langchain.chains.llm.LLMChain) \u2013 \nrefine_llm_chain (langchain.chains.llm.LLMChain) \u2013 \ndocument_variable_name (str) \u2013 \ninitial_response_name (str) \u2013 \ndocument_prompt (langchain.prompts.base.BasePromptTemplate) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nReturn type\nNone\nattribute callback_manager: Optional[BaseCallbackManager] = None\uf0c1\nDeprecated, use callbacks instead.\nattribute callbacks: Callbacks = None\uf0c1\nOptional list of 
callback handlers (or callback manager). Defaults to None.\nCallback handlers are called throughout the lifecycle of a call to a chain,\nstarting with on_chain_start, ending with on_chain_end or on_chain_error.\nEach custom chain can optionally call additional callback methods, see Callback docs\nfor full details.\nattribute document_prompt: BasePromptTemplate [Optional]\uf0c1\nPrompt to use to format each document.\nattribute document_variable_name: str [Required]\uf0c1\nThe variable name in the initial_llm_chain to put the documents in.\nIf only one variable in the initial_llm_chain, this need not be provided.\nattribute initial_llm_chain: LLMChain [Required]\uf0c1\nLLM chain to use on initial document.\nattribute initial_response_name: str [Required]\uf0c1\nThe variable name to format the initial response in when refining.\nattribute memory: Optional[BaseMemory] = None\uf0c1\nOptional memory object. Defaults to None.\nMemory is a class that gets called at the start\nand at the end of every chain. At the start, memory loads variables and passes\nthem along in the chain. At the end, it saves any returned variables.\nThere are many different types of memory - please see memory docs", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-241", "text": "There are many different types of memory - please see memory docs\nfor the full catalog.\nattribute refine_llm_chain: LLMChain [Required]\uf0c1\nLLM chain to use when refining.\nattribute return_intermediate_steps: bool = False\uf0c1\nReturn the results of the refine steps in the output.\nattribute tags: Optional[List[str]] = None\uf0c1\nOptional list of tags associated with the chain. 
Defaults to None\nThese tags will be associated with each call to this chain,\nand passed as arguments to the handlers defined in callbacks.\nYou can use these to eg identify a specific instance of a chain with its use case.\nattribute verbose: bool [Optional]\uf0c1\nWhether or not run in verbose mode. In verbose mode, some intermediate logs\nwill be printed to the console. Defaults to langchain.verbose value.\nasync acall(inputs, return_only_outputs=False, callbacks=None, *, tags=None, include_run_info=False)\uf0c1\nRun the logic of this chain and add to output if desired.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 Dictionary of inputs, or single input if chain expects\nonly one param.\nreturn_only_outputs (bool) \u2013 boolean for whether to return only outputs in the\nresponse. If True, only new keys generated by this chain will be\nreturned. If False, both input keys and new keys generated by this\nchain will be returned. Defaults to False.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to use for this chain run. If not provided, will\nuse the callbacks provided to the chain.\ninclude_run_info (bool) \u2013 Whether to include run info in the response. 
Defaults\nto False.\ntags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-242", "text": "tags (Optional[List[str]]) \u2013 \nReturn type\nDict[str, Any]\nasync acombine_docs(docs, callbacks=None, **kwargs)[source]\uf0c1\nCombine by mapping first chain over all, then stuffing into final chain.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTuple[str, dict]\napply(input_list, callbacks=None)\uf0c1\nCall the chain on all inputs in the list.\nParameters\ninput_list (List[Dict[str, Any]]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturn type\nList[Dict[str, str]]\nasync arun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\ncombine_docs(docs, callbacks=None, **kwargs)[source]\uf0c1\nCombine by mapping first chain over all, then stuffing into final chain.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nTuple[str, dict]\ndict(**kwargs)\uf0c1\nReturn dictionary representation of chain.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": 
"09aa860bdfb8-243", "text": "Return type\nDict\nprep_inputs(inputs)\uf0c1\nValidate and prep inputs.\nParameters\ninputs (Union[Dict[str, Any], Any]) \u2013 \nReturn type\nDict[str, str]\nprep_outputs(inputs, outputs, return_only_outputs=False)\uf0c1\nValidate and prep outputs.\nParameters\ninputs (Dict[str, str]) \u2013 \noutputs (Dict[str, str]) \u2013 \nreturn_only_outputs (bool) \u2013 \nReturn type\nDict[str, str]\nprompt_length(docs, **kwargs)\uf0c1\nReturn the prompt length given the documents passed in.\nReturns None if the method does not depend on the prompt length.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nOptional[int]\nrun(*args, callbacks=None, tags=None, **kwargs)\uf0c1\nRun the chain as text in, text out or multiple variables, text out.\nParameters\nargs (Any) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ntags (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nstr\nsave(file_path)\uf0c1\nSave the chain.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the chain to.\nReturn type\nNone\nExample:\n.. code-block:: python\nchain.save(file_path=\"path/chain.yaml\")\nto_json()\uf0c1\nReturn type\nUnion[langchain.load.serializable.SerializedConstructor, langchain.load.serializable.SerializedNotImplemented]\nto_json_not_implemented()\uf0c1\nReturn type\nlangchain.load.serializable.SerializedNotImplemented\nproperty lc_attributes: Dict\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "09aa860bdfb8-244", "text": "serialized kwargs. These attributes must be accepted by the\nconstructor.\nproperty lc_namespace: List[str]\uf0c1\nReturn the namespace of the langchain object.\ne.g. 
[\"langchain\", \"llms\", \"openai\"]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\ne.g. {\"openai_api_key\": \"OPENAI_API_KEY\"}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chains.html"} +{"id": "cf663cd4f0dd-0", "text": "Agent Toolkits\uf0c1\nAgent toolkits.", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-1", "text": "langchain.agents.agent_toolkits.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\\nYour goal is to return a final answer by interacting with the JSON.\\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nDo not make up any information that is not contained in the JSON.\\nYour input to the tools should be in the form of `data[\"key\"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \\nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \\nIf you have not seen a key in one of those responses, you cannot use it.\\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\\nIf you encounter a \"KeyError\", go back to the previous key, look at the available keys, and try again.\\n\\nIf the question does not seem to be related to the JSON, just return \"I don\\'t know\" as the answer.\\nAlways begin your interaction with the `json_spec_list_keys` tool with input \"data\" to see what keys exist in the JSON.\\n\\nNote that sometimes the value at a given path is large. 
In this case, you will get an error \"Value is a large dictionary, should explore its keys directly\".\\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-2", "text": "a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\\n', suffix='Begin!\"\\n\\nQuestion: {input}\\nThought: I should look at the keys that exist in data to see what I have access to\\n{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-3", "text": "Construct a json agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-4", "text": "langchain.agents.agent_toolkits.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. 
If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix=None, format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False,", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-5", "text": "max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-6", "text": "Construct a sql agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) \u2013 \nagent_type (langchain.agents.agent_types.AgentType) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (Optional[str]) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \ntop_k (int) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 
\nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-7", "text": "langchain.agents.agent_toolkits.create_openapi_agent(llm, toolkit, callback_manager=None, prefix=\"You are an agent designed to answer questions by making web requests to an API given the openapi spec.\\n\\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\\nOnly use information provided by the tools to construct your response.\\n\\nFirst, find the base URL needed to make the request.\\n\\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\\n\\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\\n\\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\\n\\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\\n\", suffix='Begin!\\n\\nQuestion: {input}\\nThought: I should explore the spec to find the base url for the API.\\n{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-8", "text": "Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-9", "text": "Construct an OpenAPI agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nverbose (bool) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-10", "text": "langchain.agents.agent_toolkits.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\\n\\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. 
The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix='Begin!\\n\\nQuestion: {input}\\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\\n{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', examples=None,", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-11", "text": "know the final answer\\nFinal Answer: the final answer to the original input question', examples=None, input_variables=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-12", "text": "Construct a pbi agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) \u2013 \npowerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \nexamples (Optional[str]) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \ntop_k (int) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-13", "text": "Return type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.agent_toolkits.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\\n\\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. 
The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix=\"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\\n\", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a pbi agent from a chat LLM and tools.", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-14", "text": "Construct a pbi agent from a chat LLM and tools.\nIf you supply only a toolkit and no powerbi dataset, the same LLM is used for both.\nParameters\nllm (langchain.chat_models.base.BaseChatModel) \u2013 \ntoolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) \u2013 \npowerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \noutput_parser 
(Optional[langchain.agents.agent.AgentOutputParser]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nexamples (Optional[str]) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nmemory (Optional[langchain.memory.chat_memory.BaseChatMemory]) \u2013 \ntop_k (int) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.agent_toolkits.create_python_agent(llm, tool, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, verbose=False, prefix='You are an agent designed to write and execute python code to answer questions.\\nYou have access to a python REPL, which you can use to execute python code.\\nIf you get an error, debug your code and try again.\\nOnly use the output of your code to answer the question. \\nYou might know the answer without running any code, but you should still run the code to get the answer.\\nIf it does not seem like you can write code to answer the question, just return \"I don\\'t know\" as the answer.\\n', agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-15", "text": "Construct a python agent from an LLM and tool.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntool (langchain.tools.python.tool.PythonREPLTool) \u2013 \nagent_type (langchain.agents.agent_types.AgentType) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \nprefix (str) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.agent_toolkits.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\\nYou have access to tools for 
interacting with the documents, and the inputs to the tools are questions.\\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\\nIf the question does not seem relevant to any of the tools provided, just return \"I don\\'t know\" as the answer.\\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a vectorstore agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nclass langchain.agents.agent_toolkits.JsonToolkit(*, spec)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-16", "text": "class langchain.agents.agent_toolkits.JsonToolkit(*, spec)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for interacting with a JSON spec.\nParameters\nspec (langchain.tools.json.tool.JsonSpec) \u2013 \nReturn type\nNone\nattribute spec: langchain.tools.json.tool.JsonSpec [Required]\uf0c1\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.SQLDatabaseToolkit(*, db, llm)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for interacting with SQL databases.\nParameters\ndb (langchain.sql_database.SQLDatabase) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nReturn type\nNone\nattribute db: langchain.sql_database.SQLDatabase [Required]\uf0c1\nattribute llm: langchain.base_language.BaseLanguageModel 
[Required]\uf0c1\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nproperty dialect: str\uf0c1\nReturn string representation of dialect to use.\nclass langchain.agents.agent_toolkits.SparkSQLToolkit(*, db, llm)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for interacting with Spark SQL.\nParameters\ndb (langchain.utilities.spark_sql.SparkSQL) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nReturn type\nNone\nattribute db: langchain.utilities.spark_sql.SparkSQL [Required]\uf0c1\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-17", "text": "get_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.NLAToolkit(*, nla_tools)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nNatural Language API Toolkit Definition.\nParameters\nnla_tools (Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool]) \u2013 \nReturn type\nNone\nattribute nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]\uf0c1\nList of API Endpoint Tools.\nclassmethod from_llm_and_ai_plugin(llm, ai_plugin, requests=None, verbose=False, **kwargs)[source]\uf0c1\nInstantiate the toolkit from an OpenAPI Spec URL\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nai_plugin (langchain.tools.plugin.AIPlugin) \u2013 \nrequests (Optional[langchain.requests.Requests]) \u2013 \nverbose (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent_toolkits.nla.toolkit.NLAToolkit\nclassmethod from_llm_and_ai_plugin_url(llm, ai_plugin_url, requests=None, verbose=False, **kwargs)[source]\uf0c1\nInstantiate the toolkit from an OpenAPI Spec 
URL\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nai_plugin_url (str) \u2013 \nrequests (Optional[langchain.requests.Requests]) \u2013 \nverbose (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent_toolkits.nla.toolkit.NLAToolkit\nclassmethod from_llm_and_spec(llm, spec, requests=None, verbose=False, **kwargs)[source]\uf0c1\nInstantiate the toolkit by creating tools for each operation.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-18", "text": "Instantiate the toolkit by creating tools for each operation.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nspec (langchain.utilities.openapi.OpenAPISpec) \u2013 \nrequests (Optional[langchain.requests.Requests]) \u2013 \nverbose (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent_toolkits.nla.toolkit.NLAToolkit\nclassmethod from_llm_and_url(llm, open_api_url, requests=None, verbose=False, **kwargs)[source]\uf0c1\nInstantiate the toolkit from an OpenAPI Spec URL\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nopen_api_url (str) \u2013 \nrequests (Optional[langchain.requests.Requests]) \u2013 \nverbose (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent_toolkits.nla.toolkit.NLAToolkit\nget_tools()[source]\uf0c1\nGet the tools for all the API operations.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.PowerBIToolkit(*, powerbi, llm, examples=None, max_iterations=5, callback_manager=None)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for interacting with PowerBI dataset.\nParameters\npowerbi (langchain.utilities.powerbi.PowerBIDataset) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nexamples (Optional[str]) \u2013 \nmax_iterations (int) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 
\nReturn type\nNone\nattribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None\uf0c1\nattribute examples: Optional[str] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-19", "text": "attribute examples: Optional[str] = None\uf0c1\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nattribute max_iterations: int = 5\uf0c1\nattribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]\uf0c1\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.OpenAPIToolkit(*, json_agent, requests_wrapper)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for interacting with an OpenAPI API.\nParameters\njson_agent (langchain.agents.agent.AgentExecutor) \u2013 \nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nReturn type\nNone\nattribute json_agent: langchain.agents.agent.AgentExecutor [Required]\uf0c1\nattribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required]\uf0c1\nclassmethod from_llm(llm, json_spec, requests_wrapper, **kwargs)[source]\uf0c1\nCreate a JSON agent from the LLM, then initialize.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \njson_spec (langchain.tools.json.tool.JsonSpec) \u2013 \nrequests_wrapper (langchain.requests.TextRequestsWrapper) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.VectorStoreToolkit(*, vectorstore_info, llm=None)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for interacting with a vector store.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": 
"cf663cd4f0dd-20", "text": "Toolkit for interacting with a vector store.\nParameters\nvectorstore_info (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nReturn type\nNone\nattribute llm: langchain.base_language.BaseLanguageModel [Optional]\uf0c1\nattribute vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]\uf0c1\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nlangchain.agents.agent_toolkits.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\\nYour main task is to decide which of the tools is relevant for answering question at hand.\\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a vectorstore router agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nclass langchain.agents.agent_toolkits.VectorStoreInfo(*, vectorstore, name, description)[source]\uf0c1\nBases: pydantic.main.BaseModel\nInformation about a vectorstore.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-21", "text": "Bases: pydantic.main.BaseModel\nInformation about a vectorstore.\nParameters\nvectorstore 
(langchain.vectorstores.base.VectorStore) \u2013 \nname (str) \u2013 \ndescription (str) \u2013 \nReturn type\nNone\nattribute description: str [Required]\uf0c1\nattribute name: str [Required]\uf0c1\nattribute vectorstore: langchain.vectorstores.base.VectorStore [Required]\uf0c1\nclass langchain.agents.agent_toolkits.VectorStoreRouterToolkit(*, vectorstores, llm=None)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for routing between vector stores.\nParameters\nvectorstores (List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo]) \u2013 \nllm (langchain.base_language.BaseLanguageModel) \u2013 \nReturn type\nNone\nattribute llm: langchain.base_language.BaseLanguageModel [Optional]\uf0c1\nattribute vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]\uf0c1\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nlangchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source]\uf0c1\nConstruct a pandas agent from an LLM and dataframe.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ndf (Any) \u2013 \nagent_type (langchain.agents.agent_types.AgentType) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-22", "text": "agent_type (langchain.agents.agent_types.AgentType) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (Optional[str]) \u2013 \nsuffix (Optional[str]) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nverbose (bool) \u2013 \nreturn_intermediate_steps (bool) 
\u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \ninclude_df_in_prompt (Optional[bool]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.agent_toolkits.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\\nYou should use the tools below to answer the question posed of you:', suffix='\\nThis is the result of `print(df.first())`:\\n{df}\\n\\nBegin!\\nQuestion: {input}\\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a spark agent from an LLM and dataframe.\nParameters\nllm (langchain.llms.base.BaseLLM) \u2013 \ndf (Any) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nverbose (bool) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-23", "text": "max_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-24", "text": "langchain.agents.agent_toolkits.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with 
Spark SQL.\\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix='Begin!\\n\\nQuestion: {input}\\nThought: I should look at the tables in the database to see what I can query.\\n{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10,", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-25", "text": "Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-26", "text": "Construct a sql agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \ntop_k (int) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.agent_toolkits.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source]\uf0c1\nCreate csv agent by loading to a dataframe and using pandas agent.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \npath (Union[str, List[str]]) \u2013 \npandas_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nclass langchain.agents.agent_toolkits.ZapierToolkit(*, tools=[])[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nZapier Toolkit.\nParameters\ntools 
(List[langchain.tools.base.BaseTool]) \u2013 \nReturn type\nNone\nattribute tools: List[langchain.tools.base.BaseTool] = []\uf0c1\nclassmethod from_zapier_nla_wrapper(zapier_nla_wrapper)[source]\uf0c1\nCreate a toolkit from a ZapierNLAWrapper.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-27", "text": "Create a toolkit from a ZapierNLAWrapper.\nParameters\nzapier_nla_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) \u2013 \nReturn type\nlangchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.GmailToolkit(*, api_resource=None)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for interacting with Gmail.\nParameters\napi_resource (Resource) \u2013 \nReturn type\nNone\nattribute api_resource: Resource [Optional]\uf0c1\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.JiraToolkit(*, tools=[])[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nJira Toolkit.\nParameters\ntools (List[langchain.tools.base.BaseTool]) \u2013 \nReturn type\nNone\nattribute tools: List[langchain.tools.base.BaseTool] = []\uf0c1\nclassmethod from_jira_api_wrapper(jira_api_wrapper)[source]\uf0c1\nParameters\njira_api_wrapper (langchain.utilities.jira.JiraAPIWrapper) \u2013 \nReturn type\nlangchain.agents.agent_toolkits.jira.toolkit.JiraToolkit\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.FileManagementToolkit(*, root_dir=None, selected_tools=None)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for interacting with a Local Files.\nParameters", "source": 
"https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "cf663cd4f0dd-28", "text": "Toolkit for interacting with a Local Files.\nParameters\nroot_dir (Optional[str]) \u2013 \nselected_tools (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute root_dir: Optional[str] = None\uf0c1\nIf specified, all file operations are made relative to root_dir.\nattribute selected_tools: Optional[List[str]] = None\uf0c1\nIf provided, only provide the selected tools. Defaults to all.\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.PlayWrightBrowserToolkit(*, sync_browser=None, async_browser=None)[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for web browser tools.\nParameters\nsync_browser (Optional['SyncBrowser']) \u2013 \nasync_browser (Optional['AsyncBrowser']) \u2013 \nReturn type\nNone\nattribute async_browser: Optional['AsyncBrowser'] = None\uf0c1\nattribute sync_browser: Optional['SyncBrowser'] = None\uf0c1\nclassmethod from_browser(sync_browser=None, async_browser=None)[source]\uf0c1\nInstantiate the toolkit.\nParameters\nsync_browser (Optional[SyncBrowser]) \u2013 \nasync_browser (Optional[AsyncBrowser]) \u2013 \nReturn type\nPlayWrightBrowserToolkit\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]\nclass langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source]\uf0c1\nBases: langchain.agents.agent_toolkits.base.BaseToolkit\nToolkit for Azure Cognitive Services.\nReturn type\nNone\nget_tools()[source]\uf0c1\nGet the tools in the toolkit.\nReturn type\nList[langchain.tools.base.BaseTool]", "source": "https://api.python.langchain.com/en/latest/modules/agent_toolkits.html"} +{"id": "32c189513c97-0", "text": "Retrievers\uf0c1\nclass langchain.retrievers.AmazonKendraRetriever(index_id, region_name=None, credentials_profile_name=None, top_k=3, 
attribute_filter=None, client=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nRetriever class to query documents from Amazon Kendra Index.\nParameters\nindex_id (str) \u2013 Kendra index id\nregion_name (Optional[str]) \u2013 The AWS region, e.g., us-west-2.\nFalls back to the AWS_DEFAULT_REGION env variable\nor the region specified in ~/.aws/config.\ncredentials_profile_name (Optional[str]) \u2013 The name of the profile in the ~/.aws/credentials\nor ~/.aws/config files, which has either access keys or role information\nspecified. If not specified, the default credential profile or, if on an\nEC2 instance, credentials from IMDS will be used.\ntop_k (int) \u2013 Number of results to return\nattribute_filter (Optional[Dict]) \u2013 Additional filtering of results based on metadata\nSee: https://docs.aws.amazon.com/kendra/latest/APIReference\nclient (Optional[Any]) \u2013 boto3 client for Kendra\nExample\nretriever = AmazonKendraRetriever(\n index_id=\"c0806df7-e76b-4bce-9b5c-d5582f6b1a03\"\n)\nget_relevant_documents(query)[source]\uf0c1\nRun a search on the Kendra index and get the top k documents\nExample:\n.. 
code-block:: python\ndocs = retriever.get_relevant_documents('This is my query')\nParameters\nquery (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"}
{"id": "32c189513c97-1", "text": "Parameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.ArxivRetriever(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, langchain.utilities.arxiv.ArxivAPIWrapper\nIt is effectively a wrapper for ArxivAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all ArxivAPIWrapper arguments without any change.\nParameters\narxiv_search (Any) \u2013 \narxiv_exceptions (Any) \u2013 \ntop_k_results (int) \u2013 \nload_max_docs (int) \u2013 \nload_all_available_meta (bool) \u2013 \ndoc_content_chars_max (Optional[int]) \u2013 \nARXIV_MAX_QUERY_LENGTH (int) \u2013 \nReturn type\nNone\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.AzureCognitiveSearchRetriever(*, service_name='', index_name='', api_key='', api_version='2020-06-30', aiosession=None, 
content_key='content')[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-2", "text": "Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nWrapper around Azure Cognitive Search.\nParameters\nservice_name (str) \u2013 \nindex_name (str) \u2013 \napi_key (str) \u2013 \napi_version (str) \u2013 \naiosession (Optional[aiohttp.client.ClientSession]) \u2013 \ncontent_key (str) \u2013 \nReturn type\nNone\nattribute aiosession: Optional[aiohttp.client.ClientSession] = None\uf0c1\nClientSession, in case we want to reuse connection for better performance.\nattribute api_key: str = ''\uf0c1\nAPI Key. Both Admin and Query keys work, but for reading data it\u2019s\nrecommended to use a Query key.\nattribute api_version: str = '2020-06-30'\uf0c1\nAPI version\nattribute content_key: str = 'content'\uf0c1\nKey in a retrieved result to set as the Document page_content.\nattribute index_name: str = ''\uf0c1\nName of Index inside Azure Cognitive Search service\nattribute service_name: str = ''\uf0c1\nName of Azure Cognitive Search service\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.ChatGPTPluginRetriever(*, url, bearer_token, top_k=3, filter=None, aiosession=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-3", "text": "Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nParameters\nurl (str) 
\u2013 \nbearer_token (str) \u2013 \ntop_k (int) \u2013 \nfilter (Optional[dict]) \u2013 \naiosession (Optional[aiohttp.client.ClientSession]) \u2013 \nReturn type\nNone\nattribute aiosession: Optional[aiohttp.client.ClientSession] = None\uf0c1\nattribute bearer_token: str [Required]\uf0c1\nattribute filter: Optional[dict] = None\uf0c1\nattribute top_k: int = 3\uf0c1\nattribute url: str [Required]\uf0c1\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.ContextualCompressionRetriever(*, base_compressor, base_retriever)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nRetriever that wraps a base retriever and compresses the results.\nParameters\nbase_compressor (langchain.retrievers.document_compressors.base.BaseDocumentCompressor) \u2013 \nbase_retriever (langchain.schema.BaseRetriever) \u2013 \nReturn type\nNone\nattribute base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]\uf0c1\nCompressor for compressing retrieved documents.\nattribute base_retriever: langchain.schema.BaseRetriever [Required]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-4", "text": "attribute base_retriever: langchain.schema.BaseRetriever [Required]\uf0c1\nBase Retriever to use for getting relevant documents.\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn 
type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nSequence of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.DataberryRetriever(datastore_url, top_k=None, api_key=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nRetriever that uses the Databerry API.\nParameters\ndatastore_url (str) \u2013 \ntop_k (Optional[int]) \u2013 \napi_key (Optional[str]) \u2013 \ndatastore_url: str\uf0c1\napi_key: Optional[str]\uf0c1\ntop_k: Optional[int]\uf0c1\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.ElasticSearchBM25Retriever(client, index_name)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nWrapper around Elasticsearch using BM25 as a retrieval method.", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-5", "text": "Wrapper around Elasticsearch using BM25 as a retrieval method.\nTo connect to an Elasticsearch instance that requires login credentials,\nincluding Elastic Cloud, use the Elasticsearch URL format\nhttps://username:password@es_host:9243. 
For example, to connect to Elastic\nCloud, create the Elasticsearch URL with the required authentication details and\npass it to the ElasticVectorSearch constructor as the named parameter\nelasticsearch_url.\nYou can obtain your Elastic Cloud URL and login credentials by logging in to the\nElastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\nnavigating to the \u201cDeployments\u201d page.\nTo obtain your Elastic Cloud password for the default \u201celastic\u201d user:\nLog in to the Elastic Cloud console at https://cloud.elastic.co\nGo to \u201cSecurity\u201d > \u201cUsers\u201d\nLocate the \u201celastic\u201d user and click \u201cEdit\u201d\nClick \u201cReset password\u201d\nFollow the prompts to reset the password\nThe format for Elastic Cloud URLs is\nhttps://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\nParameters\nclient (Any) \u2013 \nindex_name (str) \u2013 \nclassmethod create(elasticsearch_url, index_name, k1=2.0, b=0.75)[source]\uf0c1\nParameters\nelasticsearch_url (str) \u2013 \nindex_name (str) \u2013 \nk1 (float) \u2013 \nb (float) \u2013 \nReturn type\nlangchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever\nadd_texts(texts, refresh_indices=True)[source]\uf0c1\nRun more texts through the embeddings and add to the retriever.\nParameters\ntexts (Iterable[str]) \u2013 Iterable of strings to add to the retriever.\nrefresh_indices (bool) \u2013 bool to refresh ElasticSearch indices\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-6", "text": "refresh_indices (bool) \u2013 bool to refresh ElasticSearch indices\nReturns\nList of ids from adding the texts into the retriever.\nReturn type\nList[str]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nasync 
aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.KNNRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nKNN Retriever.\nParameters\nembeddings (langchain.embeddings.base.Embeddings) \u2013 \nindex (Any) \u2013 \ntexts (List[str]) \u2013 \nk (int) \u2013 \nrelevancy_threshold (Optional[float]) \u2013 \nReturn type\nNone\nattribute embeddings: langchain.embeddings.base.Embeddings [Required]\uf0c1\nattribute index: Any = None\uf0c1\nattribute k: int = 4\uf0c1\nattribute relevancy_threshold: Optional[float] = None\uf0c1\nattribute texts: List[str] [Required]\uf0c1\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embeddings, **kwargs)[source]\uf0c1\nParameters\ntexts (List[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-7", "text": "Parameters\ntexts (List[str]) \u2013 \nembeddings (langchain.embeddings.base.Embeddings) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.retrievers.knn.KNNRetriever\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.LlamaIndexGraphRetriever(*, graph=None, query_configs=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nQuestion-answering with sources over an LlamaIndex graph data 
structure.\nParameters\ngraph (Any) \u2013 \nquery_configs (List[Dict]) \u2013 \nReturn type\nNone\nattribute graph: Any = None\uf0c1\nattribute query_configs: List[Dict] [Optional]\uf0c1\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.LlamaIndexRetriever(*, index=None, query_kwargs=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nQuestion-answering with sources over an LlamaIndex data structure.\nParameters\nindex (Any) \u2013 \nquery_kwargs (Dict) \u2013 \nReturn type\nNone\nattribute index: Any = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-8", "text": "Return type\nNone\nattribute index: Any = None\uf0c1\nattribute query_kwargs: Dict [Optional]\uf0c1\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.MergerRetriever(retrievers)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nThis class merges the results of multiple retrievers.\nParameters\nretrievers (List[langchain.schema.BaseRetriever]) \u2013 A list of retrievers to merge.\nget_relevant_documents(query)[source]\uf0c1\nGet the relevant documents for a given query.\nParameters\nquery (str) \u2013 The query to search for.\nReturns\nA list of relevant 
documents.\nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query)[source]\uf0c1\nAsynchronously get the relevant documents for a given query.\nParameters\nquery (str) \u2013 The query to search for.\nReturns\nA list of relevant documents.\nReturn type\nList[langchain.schema.Document]\nmerge_documents(query)[source]\uf0c1\nMerge the results of the retrievers.\nParameters\nquery (str) \u2013 The query to search for.\nReturns\nA list of merged documents.\nReturn type\nList[langchain.schema.Document]\nasync amerge_documents(query)[source]\uf0c1\nAsynchronously merge the results of the retrievers.\nParameters\nquery (str) \u2013 The query to search for.\nReturns\nA list of merged documents.\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-9", "text": "Returns\nA list of merged documents.\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.MetalRetriever(client, params=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nRetriever that uses the Metal API.\nParameters\nclient (Any) \u2013 \nparams (Optional[dict]) \u2013 \nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.MilvusRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nRetriever that uses the Milvus API.\nParameters\nembedding_function (langchain.embeddings.base.Embeddings) \u2013 \ncollection_name (str) 
\u2013 \nconnection_args (Optional[Dict[str, Any]]) \u2013 \nconsistency_level (str) \u2013 \nsearch_params (Optional[dict]) \u2013 \nadd_texts(texts, metadatas=None)[source]\uf0c1\nAdd text to the Milvus store\nParameters\ntexts (List[str]) \u2013 The text\nmetadatas (List[dict]) \u2013 Metadata dicts, must line up with existing store\nReturn type\nNone\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-10", "text": "Parameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.PineconeHybridSearchRetriever(*, embeddings, sparse_encoder=None, index=None, top_k=4, alpha=0.5)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nParameters\nembeddings (langchain.embeddings.base.Embeddings) \u2013 \nsparse_encoder (Any) \u2013 \nindex (Any) \u2013 \ntop_k (int) \u2013 \nalpha (float) \u2013 \nReturn type\nNone\nattribute alpha: float = 0.5\uf0c1\nattribute embeddings: langchain.embeddings.base.Embeddings [Required]\uf0c1\nattribute index: Any = None\uf0c1\nattribute sparse_encoder: Any = None\uf0c1\nattribute top_k: int = 4\uf0c1\nadd_texts(texts, ids=None, metadatas=None)[source]\uf0c1\nParameters\ntexts (List[str]) \u2013 \nids (Optional[List[str]]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nReturn type\nNone\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents 
for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-11", "text": "Parameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.PubMedRetriever(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, langchain.utilities.pupmed.PubMedAPIWrapper\nIt is effectively a wrapper for PubMedAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all PubMedAPIWrapper arguments without any change.\nParameters\ntop_k_results (int) \u2013 \nload_max_docs (int) \u2013 \ndoc_content_chars_max (int) \u2013 \nload_all_available_meta (bool) \u2013 \nemail (str) \u2013 \nbase_url_esearch (str) \u2013 \nbase_url_efetch (str) \u2013 \nmax_retry (int) \u2013 \nsleep_time (float) \u2013 \nARXIV_MAX_QUERY_LENGTH (int) \u2013 \nReturn type\nNone\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": 
"32c189513c97-12", "text": "Parameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.RemoteLangChainRetriever(*, url, headers=None, input_key='message', response_key='response', page_content_key='page_content', metadata_key='metadata')[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nParameters\nurl (str) \u2013 \nheaders (Optional[dict]) \u2013 \ninput_key (str) \u2013 \nresponse_key (str) \u2013 \npage_content_key (str) \u2013 \nmetadata_key (str) \u2013 \nReturn type\nNone\nattribute headers: Optional[dict] = None\uf0c1\nattribute input_key: str = 'message'\uf0c1\nattribute metadata_key: str = 'metadata'\uf0c1\nattribute page_content_key: str = 'page_content'\uf0c1\nattribute response_key: str = 'response'\uf0c1\nattribute url: str [Required]\uf0c1\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.SVMRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nSVM Retriever.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-13", "text": "SVM Retriever.\nParameters\nembeddings (langchain.embeddings.base.Embeddings) \u2013 \nindex (Any) \u2013 \ntexts (List[str]) \u2013 \nk (int) \u2013 \nrelevancy_threshold (Optional[float]) \u2013 \nReturn type\nNone\nattribute embeddings: langchain.embeddings.base.Embeddings 
[Required]\uf0c1\nattribute index: Any = None\uf0c1\nattribute k: int = 4\uf0c1\nattribute relevancy_threshold: Optional[float] = None\uf0c1\nattribute texts: List[str] [Required]\uf0c1\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclassmethod from_texts(texts, embeddings, **kwargs)[source]\uf0c1\nParameters\ntexts (List[str]) \u2013 \nembeddings (langchain.embeddings.base.Embeddings) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.retrievers.svm.SVMRetriever\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.SelfQueryRetriever(*, vectorstore, llm_chain, search_type='similarity', search_kwargs=None, structured_query_translator, verbose=False, use_original_query=False)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nRetriever that wraps around a vector store and uses an LLM to generate\nthe vector store queries.\nParameters\nvectorstore (langchain.vectorstores.base.VectorStore) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-14", "text": "Parameters\nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nsearch_type (str) \u2013 \nsearch_kwargs (dict) \u2013 \nstructured_query_translator (langchain.chains.query_constructor.ir.Visitor) \u2013 \nverbose (bool) \u2013 \nuse_original_query (bool) \u2013 \nReturn type\nNone\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nThe LLMChain for generating the vector store queries.\nattribute search_kwargs: dict [Optional]\uf0c1\nKeyword arguments to 
pass in to the vector store search.\nattribute search_type: str = 'similarity'\uf0c1\nThe search type to perform on the vector store.\nattribute structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]\uf0c1\nTranslator for turning internal query language into vectorstore search params.\nattribute use_original_query: bool = False\uf0c1\nattribute vectorstore: langchain.vectorstores.base.VectorStore [Required]\uf0c1\nThe underlying vector store from which documents will be retrieved.\nattribute verbose: bool = False\uf0c1\nUse original query instead of the revised new query from LLM\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclassmethod from_llm(llm, vectorstore, document_contents, metadata_field_info, structured_query_translator=None, chain_kwargs=None, enable_limit=False, use_original_query=False, **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \ndocument_contents (str) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-15", "text": "document_contents (str) \u2013 \nmetadata_field_info (List[langchain.chains.query_constructor.schema.AttributeInfo]) \u2013 \nstructured_query_translator (Optional[langchain.chains.query_constructor.ir.Visitor]) \u2013 \nchain_kwargs (Optional[Dict]) \u2013 \nenable_limit (bool) \u2013 \nuse_original_query (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.retrievers.self_query.base.SelfQueryRetriever\nget_relevant_documents(query, callbacks=None)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.TFIDFRetriever(*, vectorizer=None, docs, tfidf_array=None, k=4)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nParameters\nvectorizer (Any) \u2013 \ndocs (List[langchain.schema.Document]) \u2013 \ntfidf_array (Any) \u2013 \nk (int) \u2013 \nReturn type\nNone\nattribute docs: List[langchain.schema.Document] [Required]\uf0c1\nattribute k: int = 4\uf0c1\nattribute tfidf_array: Any = None\uf0c1\nattribute vectorizer: Any = None\uf0c1\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclassmethod from_documents(documents, *, tfidf_params=None, **kwargs)[source]\uf0c1\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-16", "text": "Parameters\ndocuments (Iterable[langchain.schema.Document]) \u2013 \ntfidf_params (Optional[Dict[str, Any]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.retrievers.tfidf.TFIDFRetriever\nclassmethod from_texts(texts, metadatas=None, tfidf_params=None, **kwargs)[source]\uf0c1\nParameters\ntexts (Iterable[str]) \u2013 \nmetadatas (Optional[Iterable[dict]]) \u2013 \ntfidf_params (Optional[Dict[str, Any]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.retrievers.tfidf.TFIDFRetriever\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.TimeWeightedVectorStoreRetriever(*, vectorstore, search_kwargs=None, 
memory_stream=None, decay_rate=0.01, k=4, other_score_keys=[], default_salience=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nRetriever combining embedding similarity with recency.\nParameters\nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nsearch_kwargs (dict) \u2013 \nmemory_stream (List[langchain.schema.Document]) \u2013 \ndecay_rate (float) \u2013 \nk (int) \u2013 \nother_score_keys (List[str]) \u2013 \ndefault_salience (Optional[float]) \u2013 \nReturn type\nNone\nattribute decay_rate: float = 0.01\uf0c1\nThe exponential decay factor used as (1.0-decay_rate)**(hrs_passed).\nattribute default_salience: Optional[float] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-17", "text": "attribute default_salience: Optional[float] = None\uf0c1\nThe salience to assign memories not retrieved from the vector store.\nNone assigns no salience to documents not fetched from the vector store.\nattribute k: int = 4\uf0c1\nThe maximum number of documents to retrieve in a given call.\nattribute memory_stream: List[langchain.schema.Document] [Optional]\uf0c1\nThe memory_stream of documents to search through.\nattribute other_score_keys: List[str] = []\uf0c1\nOther keys in the metadata to factor into the score, e.g. 
\u2018importance\u2019.\nattribute search_kwargs: dict [Optional]\uf0c1\nKeyword arguments to pass to the vectorstore similarity search.\nattribute vectorstore: langchain.vectorstores.base.VectorStore [Required]\uf0c1\nThe vectorstore to store documents and determine salience.\nasync aadd_documents(documents, **kwargs)[source]\uf0c1\nAdd documents to vectorstore.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nadd_documents(documents, **kwargs)[source]\uf0c1\nAdd documents to vectorstore.\nParameters\ndocuments (List[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nasync aget_relevant_documents(query)[source]\uf0c1\nReturn documents that are relevant to the query.\nParameters\nquery (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nReturn documents that are relevant to the query.\nParameters\nquery (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nget_salient_docs(query)[source]\uf0c1\nReturn documents that are salient to the query.\nParameters\nquery (str) \u2013 \nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-18", "text": "Parameters\nquery (str) \u2013 \nReturn type\nDict[int, Tuple[langchain.schema.Document, float]]\nclass langchain.retrievers.VespaRetriever(app, body, content_field, metadata_fields=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nRetriever that uses the Vespa.\nParameters\napp (Vespa) \u2013 \nbody (Dict) \u2013 \ncontent_field (str) \u2013 \nmetadata_fields (Optional[Sequence[str]]) \u2013 \nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a 
query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents_with_filter(query, *, _filter=None)[source]\uf0c1\nParameters\nquery (str) \u2013 \n_filter (Optional[str]) \u2013 \nReturn type\nList[langchain.schema.Document]\nclassmethod from_params(url, content_field, *, k=None, metadata_fields=(), sources=None, _filter=None, yql=None, **kwargs)[source]\uf0c1\nInstantiate retriever from params.\nParameters\nurl (str) \u2013 Vespa app URL.\ncontent_field (str) \u2013 Field in results to return as Document page_content.\nk (Optional[int]) \u2013 Number of Documents to return. Defaults to None.\nmetadata_fields (Sequence[str] or \"*\") \u2013 Fields in results to include in\ndocument metadata. Defaults to empty tuple ().\nsources (Sequence[str] or \"*\" or None) \u2013 Sources to retrieve\nfrom. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-19", "text": "from. Defaults to None.\n_filter (Optional[str]) \u2013 Document filter condition expressed in YQL.\nDefaults to None.\nyql (Optional[str]) \u2013 Full YQL query to be used. Should not be specified\nif _filter or sources are specified. 
Defaults to None.\nkwargs (Any) \u2013 Keyword arguments added to query body.\nReturn type\nlangchain.retrievers.vespa_retriever.VespaRetriever\nclass langchain.retrievers.WeaviateHybridSearchRetriever(client, index_name, text_key, alpha=0.5, k=4, attributes=None, create_schema_if_missing=True)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nParameters\nclient (Any) \u2013 \nindex_name (str) \u2013 \ntext_key (str) \u2013 \nalpha (float) \u2013 \nk (int) \u2013 \nattributes (Optional[List[str]]) \u2013 \ncreate_schema_if_missing (bool) \u2013 \nclass Config[source]\uf0c1\nBases: object\nConfiguration for this pydantic object.\nextra = 'forbid'\uf0c1\narbitrary_types_allowed = True\uf0c1\nadd_documents(docs, **kwargs)[source]\uf0c1\nUpload documents to Weaviate.\nParameters\ndocs (List[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nList[str]\nget_relevant_documents(query, where_filter=None)[source]\uf0c1\nLook up similar documents in Weaviate.\nParameters\nquery (str) \u2013 \nwhere_filter (Optional[Dict[str, object]]) \u2013 \nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query, where_filter=None)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-20", "text": "Parameters\nquery (str) \u2013 string to find relevant documents for\nwhere_filter (Optional[Dict[str, object]]) \u2013 \nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.WikipediaRetriever(*, wiki_client=None, top_k_results=3, lang='en', load_all_available_meta=False, doc_content_chars_max=4000)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, langchain.utilities.wikipedia.WikipediaAPIWrapper\nIt is effectively a wrapper for WikipediaAPIWrapper.\nIt wraps load() to get_relevant_documents().\nIt uses all 
WikipediaAPIWrapper arguments without any change.\nParameters\nwiki_client (Any) \u2013 \ntop_k_results (int) \u2013 \nlang (str) \u2013 \nload_all_available_meta (bool) \u2013 \ndoc_content_chars_max (int) \u2013 \nReturn type\nNone\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.ZepRetriever(session_id, url, top_k=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nA Retriever implementation for the Zep long-term memory store. Search your\nuser\u2019s long-term chat history with Zep.\nNote: You will need to provide the user\u2019s session_id to use this retriever.\nMore on Zep:", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-21", "text": "More on Zep:\nZep provides long-term conversation storage for LLM apps. 
The server stores,\nsummarizes, embeds, indexes, and enriches conversational AI chat\nhistories, and exposes them via simple, low-latency APIs.\nFor server installation instructions, see:\nhttps://getzep.github.io/deployment/quickstart/\nParameters\nsession_id (str) \u2013 \nurl (str) \u2013 \ntop_k (Optional[int]) \u2013 \nget_relevant_documents(query, metadata=None)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nmetadata (Optional[Dict]) \u2013 \nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query, metadata=None)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nmetadata (Optional[Dict]) \u2013 \nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.ZillizRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever\nRetriever that uses the Zilliz API.\nParameters\nembedding_function (langchain.embeddings.base.Embeddings) \u2013 \ncollection_name (str) \u2013 \nconnection_args (Optional[Dict[str, Any]]) \u2013 \nconsistency_level (str) \u2013 \nsearch_params (Optional[dict]) \u2013 \nadd_texts(texts, metadatas=None)[source]\uf0c1\nAdd text to the Zilliz store\nParameters\ntexts (List[str]) \u2013 The text", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-22", "text": "Add text to the Zilliz store\nParameters\ntexts (List[str]) \u2013 The text\nmetadatas (List[dict]) \u2013 Metadata dicts, must line up with existing store\nReturn type\nNone\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant 
documents\nReturn type\nList[langchain.schema.Document]\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nclass langchain.retrievers.DocArrayRetriever(*, index=None, embeddings, search_field, content_field, search_type=SearchType.similarity, top_k=1, filters=None)[source]\uf0c1\nBases: langchain.schema.BaseRetriever, pydantic.main.BaseModel\nRetriever class for DocArray Document Indices.\nCurrently, supports 5 backends:\nInMemoryExactNNIndex, HnswDocumentIndex, QdrantDocumentIndex,\nElasticDocIndex, and WeaviateDocumentIndex.\nParameters\nindex (Any) \u2013 \nembeddings (langchain.embeddings.base.Embeddings) \u2013 \nsearch_field (str) \u2013 \ncontent_field (str) \u2013 \nsearch_type (langchain.retrievers.docarray.SearchType) \u2013 \ntop_k (int) \u2013 \nfilters (Optional[Any]) \u2013 \nReturn type\nNone\nindex\uf0c1\nOne of the above-mentioned index instances\nembeddings\uf0c1\nEmbedding model to represent text as vectors\nsearch_field\uf0c1\nField to consider for searching in the documents.\nShould be an embedding/vector/tensor.\ncontent_field\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-23", "text": "Should be an embedding/vector/tensor.\ncontent_field\uf0c1\nField that represents the main content in your document schema.\nWill be used as a page_content. 
Everything else will go into metadata.\nsearch_type\uf0c1\nType of search to perform (similarity / mmr)\nfilters\uf0c1\nFilters applied for document retrieval.\ntop_k\uf0c1\nNumber of documents to return\nattribute content_field: str [Required]\uf0c1\nattribute embeddings: langchain.embeddings.base.Embeddings [Required]\uf0c1\nattribute filters: Optional[Any] = None\uf0c1\nattribute index: Any = None\uf0c1\nattribute search_field: str [Required]\uf0c1\nattribute search_type: langchain.retrievers.docarray.SearchType = SearchType.similarity\uf0c1\nattribute top_k: int = 1\uf0c1\nasync aget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nget_relevant_documents(query)[source]\uf0c1\nGet documents relevant for a query.\nParameters\nquery (str) \u2013 string to find relevant documents for\nReturns\nList of relevant documents\nReturn type\nList[langchain.schema.Document]\nDocument compressors\uf0c1\nclass langchain.retrievers.document_compressors.DocumentCompressorPipeline(*, transformers)[source]\uf0c1\nBases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor\nDocument compressor that uses a pipeline of transformers.\nParameters\ntransformers (List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-24", "text": "Return type\nNone\nattribute transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]\uf0c1\nList of document filters that are chained together and run in sequence.\nasync acompress_documents(documents, query)[source]\uf0c1\nCompress retrieved documents given the query 
context.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\ncompress_documents(documents, query)[source]\uf0c1\nTransform a list of documents.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\nclass langchain.retrievers.document_compressors.EmbeddingsFilter(*, embeddings, similarity_fn=, k=20, similarity_threshold=None)[source]\uf0c1\nBases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor\nParameters\nembeddings (langchain.embeddings.base.Embeddings) \u2013 \nsimilarity_fn (Callable) \u2013 \nk (Optional[int]) \u2013 \nsimilarity_threshold (Optional[float]) \u2013 \nReturn type\nNone\nattribute embeddings: langchain.embeddings.base.Embeddings [Required]\uf0c1\nEmbeddings to use for embedding document contents and queries.\nattribute k: Optional[int] = 20\uf0c1\nThe number of relevant documents to return. Can be set to None, in which case\nsimilarity_threshold must be specified. Defaults to 20.\nattribute similarity_fn: Callable = \uf0c1\nSimilarity function for comparing documents. Function expected to take as input\ntwo matrices (List[List[float]]) and return a matrix of scores where higher values\nindicate greater similarity.\nattribute similarity_threshold: Optional[float] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-25", "text": "indicate greater similarity.\nattribute similarity_threshold: Optional[float] = None\uf0c1\nThreshold for determining when two documents are similar enough\nto be considered redundant. 
Defaults to None, must be specified if k is set\nto None.\nasync acompress_documents(documents, query)[source]\uf0c1\nFilter down documents.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\ncompress_documents(documents, query)[source]\uf0c1\nFilter documents based on similarity of their embeddings to the query.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\nclass langchain.retrievers.document_compressors.LLMChainExtractor(*, llm_chain, get_input=)[source]\uf0c1\nBases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nget_input (Callable[[str, langchain.schema.Document], dict]) \u2013 \nReturn type\nNone\nattribute get_input: Callable[[str, langchain.schema.Document], dict] = \uf0c1\nCallable for constructing the chain input from the query and a Document.\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nLLM wrapper to use for compressing documents.\nasync acompress_documents(documents, query)[source]\uf0c1\nCompress page content of raw documents asynchronously.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\ncompress_documents(documents, query)[source]\uf0c1\nCompress page content of raw documents.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-26", "text": "Compress page content of raw documents.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\nclassmethod from_llm(llm, prompt=None, get_input=None, llm_chain_kwargs=None)[source]\uf0c1\nInitialize from LLM.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 
\nprompt (Optional[langchain.prompts.prompt.PromptTemplate]) \u2013 \nget_input (Optional[Callable[[str, langchain.schema.Document], str]]) \u2013 \nllm_chain_kwargs (Optional[dict]) \u2013 \nReturn type\nlangchain.retrievers.document_compressors.chain_extract.LLMChainExtractor\nclass langchain.retrievers.document_compressors.LLMChainFilter(*, llm_chain, get_input=)[source]\uf0c1\nBases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor\nFilter that drops documents that aren\u2019t relevant to the query.\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \nget_input (Callable[[str, langchain.schema.Document], dict]) \u2013 \nReturn type\nNone\nattribute get_input: Callable[[str, langchain.schema.Document], dict] = \uf0c1\nCallable for constructing the chain input from the query and a Document.\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nLLM wrapper to use for filtering documents.\nThe chain prompt is expected to have a BooleanOutputParser.\nasync acompress_documents(documents, query)[source]\uf0c1\nFilter down documents.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "32c189513c97-27", "text": "query (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\ncompress_documents(documents, query)[source]\uf0c1\nFilter down documents based on their relevance to the query.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\nclassmethod from_llm(llm, prompt=None, **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nprompt (Optional[langchain.prompts.base.BasePromptTemplate]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.retrievers.document_compressors.chain_filter.LLMChainFilter\nclass 
langchain.retrievers.document_compressors.CohereRerank(*, client, top_n=3, model='rerank-english-v2.0')[source]\uf0c1\nBases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor\nParameters\nclient (Client) \u2013 \ntop_n (int) \u2013 \nmodel (str) \u2013 \nReturn type\nNone\nattribute client: Client [Required]\uf0c1\nattribute model: str = 'rerank-english-v2.0'\uf0c1\nattribute top_n: int = 3\uf0c1\nasync acompress_documents(documents, query)[source]\uf0c1\nCompress retrieved documents given the query context.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]\ncompress_documents(documents, query)[source]\uf0c1\nCompress retrieved documents given the query context.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nquery (str) \u2013 \nReturn type\nSequence[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/retrievers.html"} +{"id": "8684e36b1302-0", "text": "Chat Models\uf0c1\nclass langchain.chat_models.ChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None)[source]\uf0c1\nBases: langchain.chat_models.base.BaseChatModel\nWrapper around OpenAI Chat large language models.\nTo use, you should have the openai python package installed, and the\nenvironment variable OPENAI_API_KEY set with your API key.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nExample\nfrom langchain.chat_models import ChatOpenAI\nopenai = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (str) \u2013 \ntemperature (float) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (Optional[str]) \u2013 \nopenai_api_base (Optional[str]) \u2013 \nopenai_organization (Optional[str]) \u2013 \nopenai_proxy (Optional[str]) \u2013 \nrequest_timeout (Optional[Union[float, Tuple[float, float]]]) \u2013 \nmax_retries (int) \u2013 \nstreaming (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-1", "text": "max_retries (int) \u2013 \nstreaming (bool) \u2013 \nn (int) \u2013 \nmax_tokens (Optional[int]) \u2013 \ntiktoken_model_name (Optional[str]) \u2013 \nReturn type\nNone\nattribute max_retries: int = 6\uf0c1\nMaximum number of retries to make when generating.\nattribute max_tokens: Optional[int] = None\uf0c1\nMaximum number of tokens to generate.\nattribute model_kwargs: Dict[str, Any] [Optional]\uf0c1\nHolds any model parameters valid for create call not explicitly specified.\nattribute model_name: str = 'gpt-3.5-turbo' (alias 'model')\uf0c1\nModel name to use.\nattribute n: int = 1\uf0c1\nNumber of chat completions to generate for each prompt.\nattribute openai_api_base: Optional[str] = None\uf0c1\nattribute openai_api_key: Optional[str] = None\uf0c1\nBase URL path for API requests,\nleave blank if not using a proxy or service emulator.\nattribute openai_organization: Optional[str] = None\uf0c1\nattribute openai_proxy: Optional[str] = None\uf0c1\nattribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None\uf0c1\nTimeout for requests to OpenAI completion API. 
Default is 600 seconds.\nattribute streaming: bool = False\uf0c1\nWhether to stream the results or not.\nattribute temperature: float = 0.7\uf0c1\nWhat sampling temperature to use.\nattribute tiktoken_model_name: Optional[str] = None\uf0c1\nThe model name to pass to tiktoken when using this class.\nTiktoken is used to count the number of tokens in documents to constrain\nthem to be under a certain limit. By default, when set to None, this will\nbe the same as the embedding model name. However, there are some cases", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-2", "text": "be the same as the embedding model name. However, there are some cases\nwhere you may want to use this Embedding class with a model name not\nsupported by tiktoken. This can include when using Azure embeddings or\nwhen using one of the many model providers that expose an OpenAI-like\nAPI but with different models. In those cases, in order to avoid erroring\nwhen tiktoken is called, you can specify a model name to use here.\ncompletion_with_retry(**kwargs)[source]\uf0c1\nUse tenacity to retry the completion call.\nParameters\nkwargs (Any) \u2013 \nReturn type\nAny\nget_num_tokens_from_messages(messages)[source]\uf0c1\nCalculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.\nOfficial documentation: https://github.com/openai/openai-cookbook/blob/\nmain/examples/How_to_format_inputs_to_ChatGPT_models.ipynb\nParameters\nmessages (List[langchain.schema.BaseMessage]) \u2013 \nReturn type\nint\nget_token_ids(text)[source]\uf0c1\nGet the tokens present in the text with tiktoken package.\nParameters\ntext (str) \u2013 \nReturn type\nList[int]\nproperty lc_secrets: Dict[str, str]\uf0c1\nReturn a map of constructor argument names to secret ids.\neg. 
{\u201copenai_api_key\u201d: \u201cOPENAI_API_KEY\u201d}\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-3", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chat_models.AzureChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key='', openai_api_base='', openai_organization='', openai_proxy='', request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, deployment_name='', openai_api_type='azure', openai_api_version='')[source]\uf0c1\nBases: langchain.chat_models.openai.ChatOpenAI\nWrapper around Azure OpenAI Chat Completion API. To use this class you\nmust have a deployed model on Azure OpenAI. Use deployment_name in the\nconstructor to refer to the \u201cModel deployment name\u201d in the Azure portal.\nIn addition, you should have the openai python package installed, and the\nfollowing environment variables set or passed in constructor in lower case:\n- OPENAI_API_TYPE (default: azure)\n- OPENAI_API_KEY\n- OPENAI_API_BASE\n- OPENAI_API_VERSION\n- OPENAI_PROXY\nFor example, if you have gpt-35-turbo deployed, with the deployment name\n35-turbo-dev, the constructor should look like:\nAzureChatOpenAI(\n    deployment_name=\"35-turbo-dev\",\n    openai_api_version=\"2023-03-15-preview\",\n)\nBe aware the API version may change.\nAny parameters that are valid to be passed to the openai.create call can be passed\nin, even if not explicitly saved on this class.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-4", "text": "Parameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 
\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (str) \u2013 \ntemperature (float) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (str) \u2013 \nopenai_api_base (str) \u2013 \nopenai_organization (str) \u2013 \nopenai_proxy (str) \u2013 \nrequest_timeout (Optional[Union[float, Tuple[float, float]]]) \u2013 \nmax_retries (int) \u2013 \nstreaming (bool) \u2013 \nn (int) \u2013 \nmax_tokens (Optional[int]) \u2013 \ntiktoken_model_name (Optional[str]) \u2013 \ndeployment_name (str) \u2013 \nopenai_api_type (str) \u2013 \nopenai_api_version (str) \u2013 \nReturn type\nNone\nattribute deployment_name: str = ''\uf0c1\nattribute openai_api_base: str = ''\uf0c1\nattribute openai_api_key: str = ''\uf0c1\nBase URL path for API requests,\nleave blank if not using a proxy or service emulator.\nattribute openai_api_type: str = 'azure'\uf0c1\nattribute openai_api_version: str = ''\uf0c1\nattribute openai_organization: str = ''\uf0c1\nattribute openai_proxy: str = ''\uf0c1\nclass langchain.chat_models.FakeListChatModel(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, responses, i=0)[source]\uf0c1\nBases: langchain.chat_models.base.SimpleChatModel\nFake ChatModel for testing purposes.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-5", "text": "Fake ChatModel for testing purposes.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nresponses (List) \u2013 \ni (int) \u2013 
\nReturn type\nNone\nattribute i: int = 0\uf0c1\nattribute responses: List [Required]\uf0c1\nclass langchain.chat_models.PromptLayerChatOpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='gpt-3.5-turbo', temperature=0.7, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=None, tiktoken_model_name=None, pl_tags=None, return_pl_id=False)[source]\uf0c1\nBases: langchain.chat_models.openai.ChatOpenAI\nWrapper around OpenAI Chat large language models and PromptLayer.\nTo use, you should have the openai and promptlayer python\npackage installed, and the environment variable OPENAI_API_KEY\nand PROMPTLAYER_API_KEY set with your OpenAI API key and\npromptlayer key respectively.\nAll parameters that can be passed to the OpenAI LLM can also\nbe passed here. The PromptLayerChatOpenAI adds two optional\nParameters\npl_tags (Optional[List[str]]) \u2013 List of strings to tag the request with.\nreturn_pl_id (Optional[bool]) \u2013 If True, the PromptLayer request ID will be\nreturned in the generation_info field of the\nGeneration object.\ncache (Optional[bool]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-6", "text": "Generation object.\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel (str) \u2013 \ntemperature (float) \u2013 \nmodel_kwargs (Dict[str, Any]) \u2013 \nopenai_api_key (Optional[str]) \u2013 \nopenai_api_base (Optional[str]) \u2013 \nopenai_organization (Optional[str]) \u2013 \nopenai_proxy (Optional[str]) \u2013 \nrequest_timeout 
(Optional[Union[float, Tuple[float, float]]]) \u2013 \nmax_retries (int) \u2013 \nstreaming (bool) \u2013 \nn (int) \u2013 \nmax_tokens (Optional[int]) \u2013 \ntiktoken_model_name (Optional[str]) \u2013 \nReturn type\nNone\nExample\nfrom langchain.chat_models import PromptLayerChatOpenAI\nopenai = PromptLayerChatOpenAI(model_name=\"gpt-3.5-turbo\")\nattribute pl_tags: Optional[List[str]] = None\uf0c1\nattribute return_pl_id: Optional[bool] = False\uf0c1\nclass langchain.chat_models.ChatAnthropic(*, client=None, model='claude-v1', max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None, anthropic_api_url=None, anthropic_api_key=None, HUMAN_PROMPT=None, AI_PROMPT=None, count_tokens=None, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None)[source]\uf0c1\nBases: langchain.chat_models.base.BaseChatModel, langchain.llms.anthropic._AnthropicCommon", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-7", "text": "Wrapper around Anthropic\u2019s large language model.\nTo use, you should have the anthropic python package installed, and the\nenvironment variable ANTHROPIC_API_KEY set with your API key, or pass\nit as a named parameter to the constructor.\nExample\nimport anthropic\nfrom langchain.chat_models import ChatAnthropic\nmodel = ChatAnthropic(model=\"\", anthropic_api_key=\"my-api-key\")\nParameters\nclient (Any) \u2013 \nmodel (str) \u2013 \nmax_tokens_to_sample (int) \u2013 \ntemperature (Optional[float]) \u2013 \ntop_k (Optional[int]) \u2013 \ntop_p (Optional[float]) \u2013 \nstreaming (bool) \u2013 \ndefault_request_timeout (Optional[Union[float, Tuple[float, float]]]) \u2013 \nanthropic_api_url (Optional[str]) \u2013 \nanthropic_api_key (Optional[str]) \u2013 \nHUMAN_PROMPT (Optional[str]) \u2013 \nAI_PROMPT (Optional[str]) \u2013 \ncount_tokens (Optional[Callable[[str], int]]) \u2013 \ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 
\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nReturn type\nNone\nget_num_tokens(text)[source]\uf0c1\nCalculate number of tokens.\nParameters\ntext (str) \u2013 \nReturn type\nint\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-8", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.chat_models.ChatGooglePalm(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='models/chat-bison-001', google_api_key=None, temperature=None, top_p=None, top_k=None, n=1)[source]\uf0c1\nBases: langchain.chat_models.base.BaseChatModel, pydantic.main.BaseModel\nWrapper around Google\u2019s PaLM Chat API.\nTo use, you must have the google.generativeai Python package installed and\neither:\nThe GOOGLE_API_KEY environment variable set with your API key, or\nPass your API key using the google_api_key kwarg to the ChatGooglePalm\nconstructor.\nExample\nfrom langchain.chat_models import ChatGooglePalm\nchat = ChatGooglePalm()\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (Any) \u2013 \nmodel_name (str) \u2013 \ngoogle_api_key (Optional[str]) \u2013 \ntemperature (Optional[float]) \u2013 \ntop_p (Optional[float]) \u2013 \ntop_k (Optional[int]) \u2013 \nn (int) \u2013 \nReturn type\nNone\nattribute google_api_key: Optional[str] = None\uf0c1\nattribute model_name: 
str = 'models/chat-bison-001'\uf0c1\nModel name to use.\nattribute n: int = 1\uf0c1\nNumber of chat completions to generate for each prompt. Note that the API may\nnot return the full n completions if duplicates are generated.", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-9", "text": "not return the full n completions if duplicates are generated.\nattribute temperature: Optional[float] = None\uf0c1\nRun inference with this temperature. Must be in the closed\ninterval [0.0, 1.0].\nattribute top_k: Optional[int] = None\uf0c1\nDecode using top-k sampling: consider the set of top_k most probable tokens.\nMust be positive.\nattribute top_p: Optional[float] = None\uf0c1\nDecode using nucleus sampling: consider the smallest set of tokens whose\nprobability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\nclass langchain.chat_models.ChatVertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='chat-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None)[source]\uf0c1\nBases: langchain.llms.vertexai._VertexAICommon, langchain.chat_models.base.BaseChatModel\nWrapper around Vertex AI large language models.\nParameters\ncache (Optional[bool]) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \ntags (Optional[List[str]]) \u2013 \nclient (_LanguageModel) \u2013 \nmodel_name (str) \u2013 \ntemperature (float) \u2013 \nmax_output_tokens (int) \u2013 \ntop_p (float) \u2013 \ntop_k (int) \u2013 \nstop (Optional[List[str]]) \u2013 \nproject (Optional[str]) \u2013 \nlocation (str) \u2013 \ncredentials (Any) \u2013 \nReturn type\nNone", "source": 
"https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "8684e36b1302-10", "text": "location (str) \u2013 \ncredentials (Any) \u2013 \nReturn type\nNone\nattribute model_name: str = 'chat-bison'\uf0c1\nModel name to use.", "source": "https://api.python.langchain.com/en/latest/modules/chat_models.html"} +{"id": "3bf1c984f7f4-0", "text": "Prompt Templates\uf0c1\nPrompt template classes.\nclass langchain.prompts.AIMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]\uf0c1\nBases: langchain.prompts.chat.BaseStringMessagePromptTemplate\nParameters\nprompt (langchain.prompts.base.StringPromptTemplate) \u2013 \nadditional_kwargs (dict) \u2013 \nReturn type\nNone\nformat(**kwargs)[source]\uf0c1\nTo a BaseMessage.\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclass langchain.prompts.BaseChatPromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]\uf0c1\nBases: langchain.prompts.base.BasePromptTemplate, abc.ABC\nParameters\ninput_variables (List[str]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \npartial_variables (Mapping[str, Union[str, Callable[[], str]]]) \u2013 \nReturn type\nNone\nformat(**kwargs)[source]\uf0c1\nFormat the prompt with the inputs.\nParameters\nkwargs (Any) \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nReturn type\nstr\nExample:\nprompt.format(variable1=\"foo\")\nabstract format_messages(**kwargs)[source]\uf0c1\nFormat kwargs into a list of messages.\nParameters\nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.BaseMessage]\nformat_prompt(**kwargs)[source]\uf0c1\nCreate Chat Messages.\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.PromptValue\nclass langchain.prompts.BasePromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]\uf0c1\nBases: langchain.load.serializable.Serializable, abc.ABC\nBase class for all prompt templates, 
returning a prompt.\nParameters\ninput_variables (List[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-1", "text": "Parameters\ninput_variables (List[str]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \npartial_variables (Mapping[str, Union[str, Callable[[], str]]]) \u2013 \nReturn type\nNone\nattribute input_variables: List[str] [Required]\uf0c1\nA list of the names of the variables the prompt template expects.\nattribute output_parser: Optional[langchain.schema.BaseOutputParser] = None\uf0c1\nHow to parse the output of calling an LLM on this formatted prompt.\nattribute partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]\uf0c1\ndict(**kwargs)[source]\uf0c1\nReturn dictionary representation of prompt.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nabstract format(**kwargs)[source]\uf0c1\nFormat the prompt with the inputs.\nParameters\nkwargs (Any) \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nReturn type\nstr\nExample:\nprompt.format(variable1=\"foo\")\nabstract format_prompt(**kwargs)[source]\uf0c1\nCreate Chat Messages.\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.PromptValue\npartial(**kwargs)[source]\uf0c1\nReturn a partial of the prompt template.\nParameters\nkwargs (Union[str, Callable[[], str]]) \u2013 \nReturn type\nlangchain.prompts.base.BasePromptTemplate\nsave(file_path)[source]\uf0c1\nSave the prompt.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to directory to save prompt to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nprompt.save(file_path=\"path/prompt.yaml\")\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-2", "text": "property lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.prompts.ChatMessagePromptTemplate(*, prompt, additional_kwargs=None, role)[source]\uf0c1\nBases: langchain.prompts.chat.BaseStringMessagePromptTemplate\nParameters\nprompt (langchain.prompts.base.StringPromptTemplate) \u2013 \nadditional_kwargs (dict) \u2013 \nrole (str) \u2013 \nReturn type\nNone\nattribute role: str [Required]\uf0c1\nformat(**kwargs)[source]\uf0c1\nTo a BaseMessage.\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclass langchain.prompts.ChatPromptTemplate(*, input_variables, output_parser=None, partial_variables=None, messages)[source]\uf0c1\nBases: langchain.prompts.chat.BaseChatPromptTemplate, abc.ABC\nParameters\ninput_variables (List[str]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \npartial_variables (Mapping[str, Union[str, Callable[[], str]]]) \u2013 \nmessages (List[Union[langchain.prompts.chat.BaseMessagePromptTemplate, langchain.schema.BaseMessage]]) \u2013 \nReturn type\nNone\nattribute input_variables: List[str] [Required]\uf0c1\nA list of the names of the variables the prompt template expects.\nattribute messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required]\uf0c1\nformat(**kwargs)[source]\uf0c1\nFormat the prompt with the inputs.\nParameters\nkwargs (Any) \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nReturn type\nstr\nExample:\nprompt.format(variable1=\"foo\")\nformat_messages(**kwargs)[source]\uf0c1\nFormat kwargs into a list of messages.\nParameters\nkwargs (Any) \u2013 \nReturn type", "source": 
"https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-3", "text": "Parameters\nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.BaseMessage]\nclassmethod from_messages(messages)[source]\uf0c1\nParameters\nmessages (Sequence[Union[langchain.prompts.chat.BaseMessagePromptTemplate, langchain.schema.BaseMessage]]) \u2013 \nReturn type\nlangchain.prompts.chat.ChatPromptTemplate\nclassmethod from_role_strings(string_messages)[source]\uf0c1\nParameters\nstring_messages (List[Tuple[str, str]]) \u2013 \nReturn type\nlangchain.prompts.chat.ChatPromptTemplate\nclassmethod from_strings(string_messages)[source]\uf0c1\nParameters\nstring_messages (List[Tuple[Type[langchain.prompts.chat.BaseMessagePromptTemplate], str]]) \u2013 \nReturn type\nlangchain.prompts.chat.ChatPromptTemplate\nclassmethod from_template(template, **kwargs)[source]\uf0c1\nParameters\ntemplate (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.prompts.chat.ChatPromptTemplate\npartial(**kwargs)[source]\uf0c1\nReturn a partial of the prompt template.\nParameters\nkwargs (Union[str, Callable[[], str]]) \u2013 \nReturn type\nlangchain.prompts.base.BasePromptTemplate\nsave(file_path)[source]\uf0c1\nSave the prompt.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to directory to save prompt to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\nprompt.save(file_path=\"path/prompt.yaml\")\nclass langchain.prompts.FewShotPromptTemplate(*, input_variables, output_parser=None, partial_variables=None, examples=None, example_selector=None, example_prompt, suffix, example_separator='\\n\\n', prefix='', template_format='f-string', validate_template=True)[source]\uf0c1\nBases: langchain.prompts.base.StringPromptTemplate\nPrompt template that contains few shot examples.\nParameters\ninput_variables (List[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-4", "text": "Prompt template that contains few shot examples.\nParameters\ninput_variables (List[str]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \npartial_variables (Mapping[str, Union[str, Callable[[], str]]]) \u2013 \nexamples (Optional[List[dict]]) \u2013 \nexample_selector (Optional[langchain.prompts.example_selector.base.BaseExampleSelector]) \u2013 \nexample_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nsuffix (str) \u2013 \nexample_separator (str) \u2013 \nprefix (str) \u2013 \ntemplate_format (str) \u2013 \nvalidate_template (bool) \u2013 \nReturn type\nNone\nattribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\uf0c1\nPromptTemplate used to format an individual example.\nattribute example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None\uf0c1\nExampleSelector to choose the examples to format into the prompt.\nEither this or examples should be provided.\nattribute example_separator: str = '\\n\\n'\uf0c1\nString separator used to join the prefix, the examples, and suffix.\nattribute examples: Optional[List[dict]] = None\uf0c1\nExamples to format into the prompt.\nEither this or example_selector should be provided.\nattribute input_variables: List[str] [Required]\uf0c1\nA list of the names of the variables the prompt template expects.\nattribute prefix: str = 
''\uf0c1\nA prompt template string to put before the examples.\nattribute suffix: str [Required]\uf0c1\nA prompt template string to put after the examples.\nattribute template_format: str = 'f-string'\uf0c1\nThe format of the prompt template. Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nattribute validate_template: bool = True\uf0c1\nWhether or not to try validating the template.", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-5", "text": "attribute validate_template: bool = True\uf0c1\nWhether or not to try validating the template.\ndict(**kwargs)[source]\uf0c1\nReturn a dictionary of the prompt.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nformat(**kwargs)[source]\uf0c1\nFormat the prompt with the inputs.\nParameters\nkwargs (Any) \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nReturn type\nstr\nExample:\nprompt.format(variable1=\"foo\")\nproperty lc_serializable: bool\uf0c1\nReturn whether or not the class is serializable.\nclass langchain.prompts.FewShotPromptWithTemplates(*, input_variables, output_parser=None, partial_variables=None, examples=None, example_selector=None, example_prompt, suffix, example_separator='\\n\\n', prefix=None, template_format='f-string', validate_template=True)[source]\uf0c1\nBases: langchain.prompts.base.StringPromptTemplate\nPrompt template that contains few shot examples.\nParameters\ninput_variables (List[str]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \npartial_variables (Mapping[str, Union[str, Callable[[], str]]]) \u2013 \nexamples (Optional[List[dict]]) \u2013 \nexample_selector (Optional[langchain.prompts.example_selector.base.BaseExampleSelector]) \u2013 \nexample_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nsuffix (langchain.prompts.base.StringPromptTemplate) \u2013 \nexample_separator (str) \u2013 \nprefix (Optional[langchain.prompts.base.StringPromptTemplate]) 
\u2013 \ntemplate_format (str) \u2013 \nvalidate_template (bool) \u2013 \nReturn type\nNone\nattribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\uf0c1\nPromptTemplate used to format an individual example.", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-6", "text": "PromptTemplate used to format an individual example.\nattribute example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None\uf0c1\nExampleSelector to choose the examples to format into the prompt.\nEither this or examples should be provided.\nattribute example_separator: str = '\\n\\n'\uf0c1\nString separator used to join the prefix, the examples, and suffix.\nattribute examples: Optional[List[dict]] = None\uf0c1\nExamples to format into the prompt.\nEither this or example_selector should be provided.\nattribute input_variables: List[str] [Required]\uf0c1\nA list of the names of the variables the prompt template expects.\nattribute prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None\uf0c1\nA PromptTemplate to put before the examples.\nattribute suffix: langchain.prompts.base.StringPromptTemplate [Required]\uf0c1\nA PromptTemplate to put after the examples.\nattribute template_format: str = 'f-string'\uf0c1\nThe format of the prompt template. 
Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nattribute validate_template: bool = True\uf0c1\nWhether or not to try validating the template.\ndict(**kwargs)[source]\uf0c1\nReturn a dictionary of the prompt.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nformat(**kwargs)[source]\uf0c1\nFormat the prompt with the inputs.\nParameters\nkwargs (Any) \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nReturn type\nstr\nExample:\nprompt.format(variable1=\"foo\")\nclass langchain.prompts.HumanMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]\uf0c1\nBases: langchain.prompts.chat.BaseStringMessagePromptTemplate\nParameters\nprompt (langchain.prompts.base.StringPromptTemplate) \u2013 \nadditional_kwargs (dict) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-7", "text": "additional_kwargs (dict) \u2013 \nReturn type\nNone\nformat(**kwargs)[source]\uf0c1\nTo a BaseMessage.\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nclass langchain.prompts.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=, max_length=2048, example_text_lengths=[])[source]\uf0c1\nBases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel\nSelect examples based on length.\nParameters\nexamples (List[dict]) \u2013 \nexample_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nget_text_length (Callable[[str], int]) \u2013 \nmax_length (int) \u2013 \nexample_text_lengths (List[int]) \u2013 \nReturn type\nNone\nattribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\uf0c1\nPrompt template used to format the examples.\nattribute examples: List[dict] [Required]\uf0c1\nA list of the examples that the prompt template expects.\nattribute get_text_length: Callable[[str], int] = \uf0c1\nFunction to measure prompt length. 
Defaults to word count.\nattribute max_length: int = 2048\uf0c1\nMax length for the prompt, beyond which examples are cut.\nadd_example(example)[source]\uf0c1\nAdd new example to list.\nParameters\nexample (Dict[str, str]) \u2013 \nReturn type\nNone\nselect_examples(input_variables)[source]\uf0c1\nSelect which examples to use based on the input lengths.\nParameters\ninput_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-8", "text": "input_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]\nclass langchain.prompts.MaxMarginalRelevanceExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None, fetch_k=20)[source]\uf0c1\nBases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector\nExampleSelector that selects examples based on Max Marginal Relevance.\nThis was shown to improve performance in this paper:\nhttps://arxiv.org/pdf/2211.13892.pdf\nParameters\nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nk (int) \u2013 \nexample_keys (Optional[List[str]]) \u2013 \ninput_keys (Optional[List[str]]) \u2013 \nfetch_k (int) \u2013 \nReturn type\nNone\nattribute example_keys: Optional[List[str]] = None\uf0c1\nOptional keys to filter examples to.\nattribute fetch_k: int = 20\uf0c1\nNumber of examples to fetch to rerank.\nattribute input_keys: Optional[List[str]] = None\uf0c1\nOptional keys to filter input to. 
If provided, the search is based on\nthe input variables instead of all variables.\nattribute k: int = 4\uf0c1\nNumber of examples to select.\nattribute vectorstore: langchain.vectorstores.base.VectorStore [Required]\uf0c1\nVectorStore that contains information about examples.\nclassmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, fetch_k=20, **vectorstore_cls_kwargs)[source]\uf0c1\nCreate k-shot example selector using example list and embeddings.\nReshuffles examples dynamically based on query similarity.\nParameters\nexamples (List[dict]) \u2013 List of examples to use in the prompt.", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-9", "text": "Parameters\nexamples (List[dict]) \u2013 List of examples to use in the prompt.\nembeddings (langchain.embeddings.base.Embeddings) \u2013 An initialized embedding API interface, e.g. OpenAIEmbeddings().\nvectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) \u2013 A vector store DB interface class, e.g. 
FAISS.\nk (int) \u2013 Number of examples to select\ninput_keys (Optional[List[str]]) \u2013 If provided, the search is based on the input variables\ninstead of all variables.\nvectorstore_cls_kwargs (Any) \u2013 optional kwargs containing url for vector store\nfetch_k (int) \u2013 \nReturns\nThe ExampleSelector instantiated, backed by a vector store.\nReturn type\nlangchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector\nselect_examples(input_variables)[source]\uf0c1\nSelect which examples to use based on semantic similarity.\nParameters\ninput_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]\nclass langchain.prompts.MessagesPlaceholder(*, variable_name)[source]\uf0c1\nBases: langchain.prompts.chat.BaseMessagePromptTemplate\nPrompt template that assumes variable is already list of messages.\nParameters\nvariable_name (str) \u2013 \nReturn type\nNone\nattribute variable_name: str [Required]\uf0c1\nformat_messages(**kwargs)[source]\uf0c1\nTo a BaseMessage.\nParameters\nkwargs (Any) \u2013 \nReturn type\nList[langchain.schema.BaseMessage]\nproperty input_variables: List[str]\uf0c1\nInput variables for this prompt template.\nclass langchain.prompts.NGramOverlapExampleSelector(*, examples, example_prompt, threshold=- 1.0)[source]\uf0c1\nBases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-10", "text": "Select and order examples based on ngram overlap score (sentence_bleu score).\nhttps://www.nltk.org/_modules/nltk/translate/bleu_score.html\nhttps://aclanthology.org/P02-1040.pdf\nParameters\nexamples (List[dict]) \u2013 \nexample_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nthreshold (float) \u2013 \nReturn type\nNone\nattribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\uf0c1\nPrompt template used to format the examples.\nattribute examples: 
List[dict] [Required]\uf0c1\nA list of the examples that the prompt template expects.\nattribute threshold: float = -1.0\uf0c1\nThreshold at which algorithm stops. Set to -1.0 by default.\nFor negative threshold:\nselect_examples sorts examples by ngram_overlap_score, but excludes none.\nFor threshold greater than 1.0:\nselect_examples excludes all examples, and returns an empty list.\nFor threshold equal to 0.0:\nselect_examples sorts examples by ngram_overlap_score,\nand excludes examples with no ngram overlap with input.\nadd_example(example)[source]\uf0c1\nAdd new example to list.\nParameters\nexample (Dict[str, str]) \u2013 \nReturn type\nNone\nselect_examples(input_variables)[source]\uf0c1\nReturn list of examples sorted by ngram_overlap_score with input.\nDescending order.\nExcludes any examples with ngram_overlap_score less than or equal to threshold.\nParameters\ninput_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]\nclass langchain.prompts.PipelinePromptTemplate(*, input_variables, output_parser=None, partial_variables=None, final_prompt, pipeline_prompts)[source]\uf0c1\nBases: langchain.prompts.base.BasePromptTemplate\nA prompt template for composing multiple prompts together.", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-11", "text": "A prompt template for composing multiple prompts together.\nThis can be useful when you want to reuse parts of prompts.\nA PipelinePrompt consists of two main parts:\nfinal_prompt: This is the final prompt that is returned\npipeline_prompts: This is a list of tuples, consisting of a string (name) and a Prompt Template.\nEach PromptTemplate will be formatted and then passed\nto future prompt templates as a variable with\nthe same name as name\nParameters\ninput_variables (List[str]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \npartial_variables (Mapping[str, Union[str, Callable[[], str]]]) \u2013 \nfinal_prompt 
(langchain.prompts.base.BasePromptTemplate) \u2013 \npipeline_prompts (List[Tuple[str, langchain.prompts.base.BasePromptTemplate]]) \u2013 \nReturn type\nNone\nattribute final_prompt: langchain.prompts.base.BasePromptTemplate [Required]\uf0c1\nattribute pipeline_prompts: List[Tuple[str, langchain.prompts.base.BasePromptTemplate]] [Required]\uf0c1\nformat(**kwargs)[source]\uf0c1\nFormat the prompt with the inputs.\nParameters\nkwargs (Any) \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nReturn type\nstr\nExample:\nprompt.format(variable1=\"foo\")\nformat_prompt(**kwargs)[source]\uf0c1\nCreate Chat Messages.\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.PromptValue\nlangchain.prompts.Prompt\uf0c1\nalias of langchain.prompts.prompt.PromptTemplate\nclass langchain.prompts.PromptTemplate(*, input_variables, output_parser=None, partial_variables=None, template, template_format='f-string', validate_template=True)[source]\uf0c1\nBases: langchain.prompts.base.StringPromptTemplate\nSchema to represent a prompt for an LLM.\nExample", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-12", "text": "Schema to represent a prompt for an LLM.\nExample\nfrom langchain import PromptTemplate\nprompt = PromptTemplate(input_variables=[\"foo\"], template=\"Say {foo}\")\nParameters\ninput_variables (List[str]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \npartial_variables (Mapping[str, Union[str, Callable[[], str]]]) \u2013 \ntemplate (str) \u2013 \ntemplate_format (str) \u2013 \nvalidate_template (bool) \u2013 \nReturn type\nNone\nattribute input_variables: List[str] [Required]\uf0c1\nA list of the names of the variables the prompt template expects.\nattribute template: str [Required]\uf0c1\nThe prompt template.\nattribute template_format: str = 'f-string'\uf0c1\nThe format of the prompt template. 
Options are: \u2018f-string\u2019, \u2018jinja2\u2019.\nattribute validate_template: bool = True\uf0c1\nWhether or not to try validating the template.\nformat(**kwargs)[source]\uf0c1\nFormat the prompt with the inputs.\nParameters\nkwargs (Any) \u2013 Any arguments to be passed to the prompt template.\nReturns\nA formatted string.\nReturn type\nstr\nExample:\nprompt.format(variable1=\"foo\")\nclassmethod from_examples(examples, suffix, input_variables, example_separator='\\n\\n', prefix='', **kwargs)[source]\uf0c1\nTake examples in list format with prefix and suffix to create a prompt.\nIntended to be used as a way to dynamically create a prompt from examples.\nParameters\nexamples (List[str]) \u2013 List of examples to use in the prompt.\nsuffix (str) \u2013 String to go after the list of examples. Should generally\nset up the user\u2019s input.\ninput_variables (List[str]) \u2013 A list of variable names the final prompt template\nwill expect.\nexample_separator (str) \u2013 The separator to use in between examples. Defaults", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-13", "text": "will expect.\nexample_separator (str) \u2013 The separator to use in between examples. Defaults\nto two new line characters.\nprefix (str) \u2013 String that should go before any examples. Generally includes\nexamples. 
Defaults to an empty string.\nkwargs (Any) \u2013 \nReturns\nThe final prompt generated.\nReturn type\nlangchain.prompts.prompt.PromptTemplate\nclassmethod from_file(template_file, input_variables, **kwargs)[source]\uf0c1\nLoad a prompt from a file.\nParameters\ntemplate_file (Union[str, pathlib.Path]) \u2013 The path to the file containing the prompt template.\ninput_variables (List[str]) \u2013 A list of variable names the final prompt template\nwill expect.\nkwargs (Any) \u2013 \nReturns\nThe prompt loaded from the file.\nReturn type\nlangchain.prompts.prompt.PromptTemplate\nclassmethod from_template(template, **kwargs)[source]\uf0c1\nLoad a prompt template from a template.\nParameters\ntemplate (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.prompts.prompt.PromptTemplate\nproperty lc_attributes: Dict[str, Any]\uf0c1\nReturn a list of attribute names that should be included in the\nserialized kwargs. These attributes must be accepted by the\nconstructor.\nclass langchain.prompts.SemanticSimilarityExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None)[source]\uf0c1\nBases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel\nExample selector that selects examples based on SemanticSimilarity.\nParameters\nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nk (int) \u2013 \nexample_keys (Optional[List[str]]) \u2013 \ninput_keys (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute example_keys: Optional[List[str]] = None\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-14", "text": "Return type\nNone\nattribute example_keys: Optional[List[str]] = None\uf0c1\nOptional keys to filter examples to.\nattribute input_keys: Optional[List[str]] = None\uf0c1\nOptional keys to filter input to. 
If provided, the search is based on\nthe input variables instead of all variables.\nattribute k: int = 4\uf0c1\nNumber of examples to select.\nattribute vectorstore: langchain.vectorstores.base.VectorStore [Required]\uf0c1\nVectorStore that contains information about examples.\nadd_example(example)[source]\uf0c1\nAdd new example to vectorstore.\nParameters\nexample (Dict[str, str]) \u2013 \nReturn type\nstr\nclassmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, **vectorstore_cls_kwargs)[source]\uf0c1\nCreate k-shot example selector using example list and embeddings.\nReshuffles examples dynamically based on query similarity.\nParameters\nexamples (List[dict]) \u2013 List of examples to use in the prompt.\nembeddings (langchain.embeddings.base.Embeddings) \u2013 An initialized embedding API interface, e.g. OpenAIEmbeddings().\nvectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) \u2013 A vector store DB interface class, e.g. FAISS.\nk (int) \u2013 Number of examples to select\ninput_keys (Optional[List[str]]) \u2013 If provided, the search is based on the input variables\ninstead of all variables.\nvectorstore_cls_kwargs (Any) \u2013 optional kwargs containing url for vector store\nReturns\nThe ExampleSelector instantiated, backed by a vector store.\nReturn type\nlangchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector\nselect_examples(input_variables)[source]\uf0c1\nSelect which examples to use based on semantic similarity.\nParameters\ninput_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "3bf1c984f7f4-15", "text": "input_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]\nclass langchain.prompts.StringPromptTemplate(*, input_variables, output_parser=None, partial_variables=None)[source]\uf0c1\nBases: langchain.prompts.base.BasePromptTemplate, abc.ABC\nString prompt should expose the 
format method, returning a prompt.\nParameters\ninput_variables (List[str]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \npartial_variables (Mapping[str, Union[str, Callable[[], str]]]) \u2013 \nReturn type\nNone\nformat_prompt(**kwargs)[source]\uf0c1\nCreate Chat Messages.\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.PromptValue\nclass langchain.prompts.SystemMessagePromptTemplate(*, prompt, additional_kwargs=None)[source]\uf0c1\nBases: langchain.prompts.chat.BaseStringMessagePromptTemplate\nParameters\nprompt (langchain.prompts.base.StringPromptTemplate) \u2013 \nadditional_kwargs (dict) \u2013 \nReturn type\nNone\nformat(**kwargs)[source]\uf0c1\nTo a BaseMessage.\nParameters\nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.BaseMessage\nlangchain.prompts.load_prompt(path)[source]\uf0c1\nUnified method for loading a prompt from LangChainHub or local fs.\nParameters\npath (Union[str, pathlib.Path]) \u2013 \nReturn type\nlangchain.prompts.base.BasePromptTemplate", "source": "https://api.python.langchain.com/en/latest/modules/prompts.html"} +{"id": "8c85b4393806-0", "text": "Example Selector\uf0c1\nLogic for selecting examples to include in prompts.\nclass langchain.prompts.example_selector.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=, max_length=2048, example_text_lengths=[])[source]\uf0c1\nBases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel\nSelect examples based on length.\nParameters\nexamples (List[dict]) \u2013 \nexample_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nget_text_length (Callable[[str], int]) \u2013 \nmax_length (int) \u2013 \nexample_text_lengths (List[int]) \u2013 \nReturn type\nNone\nattribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\uf0c1\nPrompt template used to format the examples.\nattribute examples: List[dict] [Required]\uf0c1\nA list of the examples that the prompt template 
expects.\nattribute get_text_length: Callable[[str], int] = \uf0c1\nFunction to measure prompt length. Defaults to word count.\nattribute max_length: int = 2048\uf0c1\nMax length for the prompt, beyond which examples are cut.\nadd_example(example)[source]\uf0c1\nAdd new example to list.\nParameters\nexample (Dict[str, str]) \u2013 \nReturn type\nNone\nselect_examples(input_variables)[source]\uf0c1\nSelect which examples to use based on the input lengths.\nParameters\ninput_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]\nclass langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None, fetch_k=20)[source]\uf0c1\nBases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector", "source": "https://api.python.langchain.com/en/latest/modules/example_selector.html"} +{"id": "8c85b4393806-1", "text": "Bases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector\nExampleSelector that selects examples based on Max Marginal Relevance.\nThis was shown to improve performance in this paper:\nhttps://arxiv.org/pdf/2211.13892.pdf\nParameters\nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nk (int) \u2013 \nexample_keys (Optional[List[str]]) \u2013 \ninput_keys (Optional[List[str]]) \u2013 \nfetch_k (int) \u2013 \nReturn type\nNone\nattribute fetch_k: int = 20\uf0c1\nNumber of examples to fetch to rerank.\nclassmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, fetch_k=20, **vectorstore_cls_kwargs)[source]\uf0c1\nCreate k-shot example selector using example list and embeddings.\nReshuffles examples dynamically based on query similarity.\nParameters\nexamples (List[dict]) \u2013 List of examples to use in the prompt.\nembeddings (langchain.embeddings.base.Embeddings) \u2013 An initialized embedding API interface, e.g. 
OpenAIEmbeddings().\nvectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) \u2013 A vector store DB interface class, e.g. FAISS.\nk (int) \u2013 Number of examples to select\ninput_keys (Optional[List[str]]) \u2013 If provided, the search is based on the input variables\ninstead of all variables.\nvectorstore_cls_kwargs (Any) \u2013 optional kwargs containing url for vector store\nfetch_k (int) \u2013 \nReturns\nThe ExampleSelector instantiated, backed by a vector store.\nReturn type\nlangchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector\nselect_examples(input_variables)[source]\uf0c1\nSelect which examples to use based on semantic similarity.\nParameters\ninput_variables (Dict[str, str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/example_selector.html"} +{"id": "8c85b4393806-2", "text": "Parameters\ninput_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]\nclass langchain.prompts.example_selector.NGramOverlapExampleSelector(*, examples, example_prompt, threshold=- 1.0)[source]\uf0c1\nBases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel\nSelect and order examples based on ngram overlap score (sentence_bleu score).\nhttps://www.nltk.org/_modules/nltk/translate/bleu_score.html\nhttps://aclanthology.org/P02-1040.pdf\nParameters\nexamples (List[dict]) \u2013 \nexample_prompt (langchain.prompts.prompt.PromptTemplate) \u2013 \nthreshold (float) \u2013 \nReturn type\nNone\nattribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]\uf0c1\nPrompt template used to format the examples.\nattribute examples: List[dict] [Required]\uf0c1\nA list of the examples that the prompt template expects.\nattribute threshold: float = -1.0\uf0c1\nThreshold at which algorithm stops. 
Set to -1.0 by default.\nFor negative threshold:\nselect_examples sorts examples by ngram_overlap_score, but excludes none.\nFor threshold greater than 1.0:\nselect_examples excludes all examples, and returns an empty list.\nFor threshold equal to 0.0:\nselect_examples sorts examples by ngram_overlap_score,\nand excludes examples with no ngram overlap with input.\nadd_example(example)[source]\uf0c1\nAdd new example to list.\nParameters\nexample (Dict[str, str]) \u2013 \nReturn type\nNone\nselect_examples(input_variables)[source]\uf0c1\nReturn list of examples sorted by ngram_overlap_score with input.\nDescending order.\nExcludes any examples with ngram_overlap_score less than or equal to threshold.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/example_selector.html"} +{"id": "8c85b4393806-3", "text": "Excludes any examples with ngram_overlap_score less than or equal to threshold.\nParameters\ninput_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]\nclass langchain.prompts.example_selector.SemanticSimilarityExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None)[source]\uf0c1\nBases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel\nExample selector that selects examples based on SemanticSimilarity.\nParameters\nvectorstore (langchain.vectorstores.base.VectorStore) \u2013 \nk (int) \u2013 \nexample_keys (Optional[List[str]]) \u2013 \ninput_keys (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute example_keys: Optional[List[str]] = None\uf0c1\nOptional keys to filter examples to.\nattribute input_keys: Optional[List[str]] = None\uf0c1\nOptional keys to filter input to. 
If provided, the search is based on\nthe input variables instead of all variables.\nattribute k: int = 4\uf0c1\nNumber of examples to select.\nattribute vectorstore: langchain.vectorstores.base.VectorStore [Required]\uf0c1\nVectorStore that contains information about examples.\nadd_example(example)[source]\uf0c1\nAdd new example to vectorstore.\nParameters\nexample (Dict[str, str]) \u2013 \nReturn type\nstr\nclassmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, **vectorstore_cls_kwargs)[source]\uf0c1\nCreate k-shot example selector using example list and embeddings.\nReshuffles examples dynamically based on query similarity.\nParameters\nexamples (List[dict]) \u2013 List of examples to use in the prompt.\nembeddings (langchain.embeddings.base.Embeddings) \u2013 An initialized embedding API interface, e.g. OpenAIEmbeddings().", "source": "https://api.python.langchain.com/en/latest/modules/example_selector.html"} +{"id": "8c85b4393806-4", "text": "vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) \u2013 A vector store DB interface class, e.g. 
FAISS.\nk (int) \u2013 Number of examples to select\ninput_keys (Optional[List[str]]) \u2013 If provided, the search is based on the input variables\ninstead of all variables.\nvectorstore_cls_kwargs (Any) \u2013 optional kwargs containing url for vector store\nReturns\nThe ExampleSelector instantiated, backed by a vector store.\nReturn type\nlangchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector\nselect_examples(input_variables)[source]\uf0c1\nSelect which examples to use based on semantic similarity.\nParameters\ninput_variables (Dict[str, str]) \u2013 \nReturn type\nList[dict]", "source": "https://api.python.langchain.com/en/latest/modules/example_selector.html"} +{"id": "c8cc4e87bfb4-0", "text": "Document Transformers\uf0c1\nTransform documents\nlangchain.document_transformers.get_stateful_documents(documents)[source]\uf0c1\nConvert a list of documents to a list of documents with state.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 The documents to convert.\nReturns\nA list of documents with state.\nReturn type\nSequence[langchain.document_transformers._DocumentWithState]\nclass langchain.document_transformers.EmbeddingsRedundantFilter(*, embeddings, similarity_fn=, similarity_threshold=0.95)[source]\uf0c1\nBases: langchain.schema.BaseDocumentTransformer, pydantic.main.BaseModel\nFilter that drops redundant documents by comparing their embeddings.\nParameters\nembeddings (langchain.embeddings.base.Embeddings) \u2013 \nsimilarity_fn (Callable) \u2013 \nsimilarity_threshold (float) \u2013 \nReturn type\nNone\nattribute embeddings: langchain.embeddings.base.Embeddings [Required]\uf0c1\nEmbeddings to use for embedding document contents.\nattribute similarity_fn: Callable = \uf0c1\nSimilarity function for comparing documents. 
Function expected to take as input\ntwo matrices (List[List[float]]) and return a matrix of scores where higher values\nindicate greater similarity.\nattribute similarity_threshold: float = 0.95\uf0c1\nThreshold for determining when two documents are similar enough\nto be considered redundant.\nasync atransform_documents(documents, **kwargs)[source]\uf0c1\nAsynchronously transform a list of documents.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nSequence[langchain.schema.Document]\ntransform_documents(documents, **kwargs)[source]\uf0c1\nFilter down documents.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nSequence[langchain.schema.Document]", "source": "https://api.python.langchain.com/en/latest/modules/document_transformers.html"} +{"id": "c8cc4e87bfb4-1", "text": "kwargs (Any) \u2013 \nReturn type\nSequence[langchain.schema.Document]\nText Splitters\uf0c1\nFunctionality for splitting text.\nclass langchain.text_splitter.TextSplitter(chunk_size=4000, chunk_overlap=200, length_function=, keep_separator=False, add_start_index=False)[source]\uf0c1\nBases: langchain.schema.BaseDocumentTransformer, abc.ABC\nInterface for splitting text into chunks.\nParameters\nchunk_size (int) \u2013 \nchunk_overlap (int) \u2013 \nlength_function (Callable[[str], int]) \u2013 \nkeep_separator (bool) \u2013 \nadd_start_index (bool) \u2013 \nReturn type\nNone\nabstract split_text(text)[source]\uf0c1\nSplit text into multiple components.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]\ncreate_documents(texts, metadatas=None)[source]\uf0c1\nCreate documents from a list of texts.\nParameters\ntexts (List[str]) \u2013 \nmetadatas (Optional[List[dict]]) \u2013 \nReturn type\nList[langchain.schema.Document]\nsplit_documents(documents)[source]\uf0c1\nSplit documents.\nParameters\ndocuments (Iterable[langchain.schema.Document]) \u2013 \nReturn 
type\nList[langchain.schema.Document]\nclassmethod from_huggingface_tokenizer(tokenizer, **kwargs)[source]\uf0c1\nText splitter that uses HuggingFace tokenizer to count length.\nParameters\ntokenizer (Any) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.text_splitter.TextSplitter\nclassmethod from_tiktoken_encoder(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source]\uf0c1\nText splitter that uses tiktoken encoder to count length.\nParameters\nencoding_name (str) \u2013 \nmodel_name (Optional[str]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/document_transformers.html"} +{"id": "c8cc4e87bfb4-2", "text": "Parameters\nencoding_name (str) \u2013 \nmodel_name (Optional[str]) \u2013 \nallowed_special (Union[Literal['all'], typing.AbstractSet[str]]) \u2013 \ndisallowed_special (Union[Literal['all'], typing.Collection[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.text_splitter.TS\ntransform_documents(documents, **kwargs)[source]\uf0c1\nTransform sequence of documents by splitting them.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nSequence[langchain.schema.Document]\nasync atransform_documents(documents, **kwargs)[source]\uf0c1\nAsynchronously transform a sequence of documents by splitting them.\nParameters\ndocuments (Sequence[langchain.schema.Document]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nSequence[langchain.schema.Document]\nclass langchain.text_splitter.CharacterTextSplitter(separator='\\n\\n', **kwargs)[source]\uf0c1\nBases: langchain.text_splitter.TextSplitter\nImplementation of splitting text that looks at characters.\nParameters\nseparator (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nsplit_text(text)[source]\uf0c1\nSplit incoming text and return chunks.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]\nclass langchain.text_splitter.LineType[source]\uf0c1\nBases: TypedDict\nLine type as 
typed dict.\nmetadata: Dict[str, str]\uf0c1\ncontent: str\uf0c1\nclass langchain.text_splitter.HeaderType[source]\uf0c1\nBases: TypedDict\nHeader type as typed dict.\nlevel: int\uf0c1\nname: str\uf0c1\ndata: str\uf0c1\nclass langchain.text_splitter.MarkdownHeaderTextSplitter(headers_to_split_on, return_each_line=False)[source]\uf0c1\nBases: object", "source": "https://api.python.langchain.com/en/latest/modules/document_transformers.html"} +{"id": "c8cc4e87bfb4-3", "text": "Bases: object\nImplementation of splitting markdown files based on specified headers.\nParameters\nheaders_to_split_on (List[Tuple[str, str]]) \u2013 \nreturn_each_line (bool) \u2013 \naggregate_lines_to_chunks(lines)[source]\uf0c1\nCombine lines with common metadata into chunks\n:param lines: Line of text / associated header metadata\nParameters\nlines (List[langchain.text_splitter.LineType]) \u2013 \nReturn type\nList[langchain.schema.Document]\nsplit_text(text)[source]\uf0c1\nSplit markdown file\n:param text: Markdown file\nParameters\ntext (str) \u2013 \nReturn type\nList[langchain.schema.Document]\nclass langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source]\uf0c1\nBases: object\nParameters\nchunk_overlap (int) \u2013 \ntokens_per_chunk (int) \u2013 \ndecode (Callable[[list[int]], str]) \u2013 \nencode (Callable[[str], List[int]]) \u2013 \nReturn type\nNone\nchunk_overlap: int\uf0c1\ntokens_per_chunk: int\uf0c1\ndecode: Callable[[list[int]], str]\uf0c1\nencode: Callable[[str], List[int]]\uf0c1\nlangchain.text_splitter.split_text_on_tokens(*, text, tokenizer)[source]\uf0c1\nSplit incoming text and return chunks.\nParameters\ntext (str) \u2013 \ntokenizer (langchain.text_splitter.Tokenizer) \u2013 \nReturn type\nList[str]\nclass langchain.text_splitter.TokenTextSplitter(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source]\uf0c1\nBases: 
langchain.text_splitter.TextSplitter\nImplementation of splitting text that looks at tokens.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/document_transformers.html"} +{"id": "c8cc4e87bfb4-4", "text": "Implementation of splitting text that looks at tokens.\nParameters\nencoding_name (str) \u2013 \nmodel_name (Optional[str]) \u2013 \nallowed_special (Union[Literal['all'], AbstractSet[str]]) \u2013 \ndisallowed_special (Union[Literal['all'], Collection[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nsplit_text(text)[source]\uf0c1\nSplit text into multiple components.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]\nclass langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap=50, model_name='sentence-transformers/all-mpnet-base-v2', tokens_per_chunk=None, **kwargs)[source]\uf0c1\nBases: langchain.text_splitter.TextSplitter\nImplementation of splitting text that looks at tokens.\nParameters\nchunk_overlap (int) \u2013 \nmodel_name (str) \u2013 \ntokens_per_chunk (Optional[int]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nsplit_text(text)[source]\uf0c1\nSplit text into multiple components.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]\ncount_tokens(*, text)[source]\uf0c1\nParameters\ntext (str) \u2013 \nReturn type\nint\nclass langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\uf0c1\nBases: str, enum.Enum\nCPP = 'cpp'\uf0c1\nGO = 'go'\uf0c1\nJAVA = 'java'\uf0c1\nJS = 'js'\uf0c1\nPHP = 'php'\uf0c1\nPROTO = 'proto'\uf0c1\nPYTHON = 'python'\uf0c1\nRST = 'rst'\uf0c1\nRUBY = 'ruby'\uf0c1\nRUST = 'rust'\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/document_transformers.html"} +{"id": "c8cc4e87bfb4-5", "text": "RUBY = 'ruby'\uf0c1\nRUST = 'rust'\uf0c1\nSCALA = 'scala'\uf0c1\nSWIFT = 'swift'\uf0c1\nMARKDOWN = 'markdown'\uf0c1\nLATEX = 'latex'\uf0c1\nHTML = 'html'\uf0c1\nSOL = 'sol'\uf0c1\nclass 
langchain.text_splitter.RecursiveCharacterTextSplitter(separators=None, keep_separator=True, **kwargs)[source]\uf0c1\nBases: langchain.text_splitter.TextSplitter\nImplementation of splitting text that looks at characters.\nRecursively tries to split by different characters to find one\nthat works.\nParameters\nseparators (Optional[List[str]]) \u2013 \nkeep_separator (bool) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nsplit_text(text)[source]\uf0c1\nSplit text into multiple components.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]\nclassmethod from_language(language, **kwargs)[source]\uf0c1\nParameters\nlanguage (langchain.text_splitter.Language) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.text_splitter.RecursiveCharacterTextSplitter\nstatic get_separators_for_language(language)[source]\uf0c1\nParameters\nlanguage (langchain.text_splitter.Language) \u2013 \nReturn type\nList[str]\nclass langchain.text_splitter.NLTKTextSplitter(separator='\\n\\n', **kwargs)[source]\uf0c1\nBases: langchain.text_splitter.TextSplitter\nImplementation of splitting text that looks at sentences using NLTK.\nParameters\nseparator (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nsplit_text(text)[source]\uf0c1\nSplit incoming text and return chunks.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]", "source": "https://api.python.langchain.com/en/latest/modules/document_transformers.html"} +{"id": "c8cc4e87bfb4-6", "text": "Parameters\ntext (str) \u2013 \nReturn type\nList[str]\nclass langchain.text_splitter.SpacyTextSplitter(separator='\\n\\n', pipeline='en_core_web_sm', **kwargs)[source]\uf0c1\nBases: langchain.text_splitter.TextSplitter\nImplementation of splitting text that looks at sentences using Spacy.\nParameters\nseparator (str) \u2013 \npipeline (str) \u2013 \nkwargs (Any) \u2013 \nReturn type\nNone\nsplit_text(text)[source]\uf0c1\nSplit incoming text and return chunks.\nParameters\ntext (str) \u2013 \nReturn type\nList[str]\nclass 
langchain.text_splitter.PythonCodeTextSplitter(**kwargs)[source]\uf0c1\nBases: langchain.text_splitter.RecursiveCharacterTextSplitter\nAttempts to split the text along Python syntax.\nParameters\nkwargs (Any) \u2013 \nReturn type\nNone\nclass langchain.text_splitter.MarkdownTextSplitter(**kwargs)[source]\uf0c1\nBases: langchain.text_splitter.RecursiveCharacterTextSplitter\nAttempts to split the text along Markdown-formatted headings.\nParameters\nkwargs (Any) \u2013 \nReturn type\nNone\nclass langchain.text_splitter.LatexTextSplitter(**kwargs)[source]\uf0c1\nBases: langchain.text_splitter.RecursiveCharacterTextSplitter\nAttempts to split the text along Latex-formatted layout elements.\nParameters\nkwargs (Any) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/document_transformers.html"} +{"id": "7a9f411c5266-0", "text": "Agents\uf0c1\nInterface for agents.\nclass langchain.agents.Agent(*, llm_chain, output_parser, allowed_tools=None)[source]\uf0c1\nBases: langchain.agents.agent.BaseSingleActionAgent\nClass responsible for calling the language model and deciding the action.\nThis is driven by an LLMChain. 
The prompt in the LLMChain MUST include\na variable called \u201cagent_scratchpad\u201d where the agent can put its\nintermediary work.\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \noutput_parser (langchain.agents.agent.AgentOutputParser) \u2013 \nallowed_tools (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute allowed_tools: Optional[List[str]] = None\uf0c1\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Required]\uf0c1\nasync aplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nkwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\nabstract classmethod create_prompt(tools)[source]\uf0c1\nCreate a prompt for this class.\nParameters\ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \nReturn type\nlangchain.prompts.base.BasePromptTemplate\ndict(**kwargs)[source]\uf0c1\nReturn dictionary representation of agent.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-1", "text": "dict(**kwargs)[source]\uf0c1\nReturn dictionary representation of agent.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, **kwargs)[source]\uf0c1\nConstruct an agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \ncallback_manager 
(Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \noutput_parser (Optional[langchain.agents.agent.AgentOutputParser]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.Agent\nget_allowed_tools()[source]\uf0c1\nReturn type\nOptional[List[str]]\nget_full_inputs(intermediate_steps, **kwargs)[source]\uf0c1\nCreate the full inputs for the LLMChain from intermediate steps.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nDict[str, Any]\nplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nkwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\nreturn_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]\uf0c1\nReturn response when agent has been stopped due to max iterations.\nParameters", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-2", "text": "Return response when agent has been stopped due to max iterations.\nParameters\nearly_stopping_method (str) \u2013 \nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.AgentFinish\ntool_run_logging_kwargs()[source]\uf0c1\nReturn type\nDict\nabstract property llm_prefix: str\uf0c1\nPrefix to append the LLM call with.\nabstract property observation_prefix: str\uf0c1\nPrefix to append the observation with.\nproperty return_values: List[str]\uf0c1\nReturn values of the agent.\nclass langchain.agents.AgentExecutor(*, 
memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]\uf0c1\nBases: langchain.chains.base.Chain\nConsists of an agent using tools.\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nagent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nhandle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-3", "text": "Return type\nNone\nattribute agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]\uf0c1\nThe agent to run for creating a plan and determining actions\nto take at each step of the execution loop.\nattribute early_stopping_method: str = 'force'\uf0c1\nThe method to use for early stopping if the agent never\nreturns AgentFinish. 
Either \u2018force\u2019 or \u2018generate\u2019.\n\u201cforce\u201d returns a string saying that it stopped because it met a time or iteration limit.\n\u201cgenerate\u201d calls the agent\u2019s LLM Chain one final time to generate a final answer based on the previous steps.\nattribute handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False\uf0c1\nHow to handle errors raised by the agent\u2019s output parser. Defaults to False, which raises the error.\nIf true, the error will be sent back to the LLM as an observation.\nIf a string, the string itself will be sent to the LLM as an observation.\nIf a callable function, the function will be called with the exception\nas an argument, and the result of that function will be passed to the agent as an observation.\nattribute max_execution_time: Optional[float] = None\uf0c1\nThe maximum amount of wall clock time to spend in the execution\nloop.\nattribute max_iterations: Optional[int] = 15\uf0c1\nThe maximum number of steps to take before ending the execution\nloop.\nSetting to \u2018None\u2019 could lead to an infinite loop.\nattribute return_intermediate_steps: bool = False\uf0c1\nWhether to return the agent\u2019s trajectory of intermediate steps\nat the end in addition to the final output.\nattribute tools: Sequence[BaseTool] [Required]\uf0c1\nThe valid tools the agent can call.", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-4", "text": "The valid tools the agent can call.\nclassmethod from_agent_and_tools(agent, tools, callback_manager=None, **kwargs)[source]\uf0c1\nCreate from agent and tools.\nParameters\nagent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nkwargs (Any) \u2013 \nReturn 
type\nlangchain.agents.agent.AgentExecutor\nlookup_tool(name)[source]\uf0c1\nLookup tool by name.\nParameters\nname (str) \u2013 \nReturn type\nlangchain.tools.base.BaseTool\nsave(file_path)[source]\uf0c1\nRaise error - saving not supported for Agent Executors.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 \nReturn type\nNone\nsave_agent(file_path)[source]\uf0c1\nSave the underlying agent.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 \nReturn type\nNone\nclass langchain.agents.AgentOutputParser[source]\uf0c1\nBases: langchain.schema.BaseOutputParser\nReturn type\nNone\nabstract parse(text)[source]\uf0c1\nParse text into agent action/finish.\nParameters\ntext (str) \u2013 \nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\nclass langchain.agents.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]\uf0c1\nBases: str, enum.Enum\nEnumerator with the Agent types.\nZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'\uf0c1\nREACT_DOCSTORE = 'react-docstore'\uf0c1\nSELF_ASK_WITH_SEARCH = 'self-ask-with-search'\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-5", "text": "SELF_ASK_WITH_SEARCH = 'self-ask-with-search'\uf0c1\nCONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'\uf0c1\nCHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'\uf0c1\nCHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'\uf0c1\nSTRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'\uf0c1\nOPENAI_FUNCTIONS = 'openai-functions'\uf0c1\nOPENAI_MULTI_FUNCTIONS = 'openai-multi-functions'\uf0c1\nclass langchain.agents.BaseMultiActionAgent[source]\uf0c1\nBases: pydantic.main.BaseModel\nBase Agent class.\nReturn type\nNone\nabstract async aplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to 
do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nkwargs (Any) \u2013 \nReturns\nActions specifying what tool to use.\nReturn type\nUnion[List[langchain.schema.AgentAction], langchain.schema.AgentFinish]\ndict(**kwargs)[source]\uf0c1\nReturn dictionary representation of agent.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nget_allowed_tools()[source]\uf0c1\nReturn type\nOptional[List[str]]\nabstract plan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-6", "text": "along with observations\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nkwargs (Any) \u2013 \nReturns\nActions specifying what tool to use.\nReturn type\nUnion[List[langchain.schema.AgentAction], langchain.schema.AgentFinish]\nreturn_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]\uf0c1\nReturn response when agent has been stopped due to max iterations.\nParameters\nearly_stopping_method (str) \u2013 \nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.AgentFinish\nsave(file_path)[source]\uf0c1\nSave the agent.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the agent to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs()[source]\uf0c1\nReturn type\nDict\nproperty return_values: List[str]\uf0c1\nReturn values of the agent.\nclass langchain.agents.BaseSingleActionAgent[source]\uf0c1\nBases: pydantic.main.BaseModel\nBase Agent class.\nReturn type\nNone\nabstract async aplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nkwargs (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-7", "text": "**kwargs \u2013 User inputs.\nkwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\ndict(**kwargs)[source]\uf0c1\nReturn dictionary representation of agent.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nclassmethod from_llm_and_tools(llm, tools, callback_manager=None, **kwargs)[source]\uf0c1\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.BaseSingleActionAgent\nget_allowed_tools()[source]\uf0c1\nReturn type\nOptional[List[str]]\nabstract plan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks 
(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nkwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\nreturn_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]\uf0c1\nReturn response when agent has been stopped due to max iterations.\nParameters\nearly_stopping_method (str) \u2013 \nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.schema.AgentFinish\nsave(file_path)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-8", "text": "Return type\nlangchain.schema.AgentFinish\nsave(file_path)[source]\uf0c1\nSave the agent.\nParameters\nfile_path (Union[pathlib.Path, str]) \u2013 Path to file to save the agent to.\nReturn type\nNone\nExample:\n.. 
code-block:: python\n# If working with agent executor\nagent.agent.save(file_path=\"path/agent.yaml\")\ntool_run_logging_kwargs()[source]\uf0c1\nReturn type\nDict\nproperty return_values: List[str]\uf0c1\nReturn values of the agent.\nclass langchain.agents.ConversationalAgent(*, llm_chain, output_parser=None, allowed_tools=None, ai_prefix='AI')[source]\uf0c1\nBases: langchain.agents.agent.Agent\nAn agent designed to hold a conversation in addition to using tools.\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \noutput_parser (langchain.agents.agent.AgentOutputParser) \u2013 \nallowed_tools (Optional[List[str]]) \u2013 \nai_prefix (str) \u2013 \nReturn type\nNone\nattribute ai_prefix: str = 'AI'\uf0c1\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-9", "text": "classmethod create_prompt(tools, prefix='Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. 
Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\\n\\nTOOLS:\\n------\\n\\nAssistant has access to the following tools:', suffix='Begin!\\n\\nPrevious conversation history:\\n{chat_history}\\n\\nNew input: {input}\\n{agent_scratchpad}', format_instructions='To use a tool, please use the following format:\\n\\n```\\nThought: Do I need to use a tool? Yes\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n```\\n\\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\\n\\n```\\nThought: Do I need to use a tool?", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-10", "text": "MUST use the format:\\n\\n```\\nThought: Do I need to use a tool? 
No\\n{ai_prefix}: [your response here]\\n```', ai_prefix='AI', human_prefix='Human', input_variables=None)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-11", "text": "Create prompt in the style of the zero shot agent.\nParameters\ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 List of tools the agent will have access to, used to format the\nprompt.\nprefix (str) \u2013 String to put before the list of tools.\nsuffix (str) \u2013 String to put after the list of tools.\nai_prefix (str) \u2013 String to use before AI output.\nhuman_prefix (str) \u2013 String to use before human output.\ninput_variables (Optional[List[str]]) \u2013 List of input variables the final prompt will expect.\nformat_instructions (str) \u2013 \nReturns\nA PromptTemplate with the template assembled from the pieces here.\nReturn type\nlangchain.prompts.prompt.PromptTemplate", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-12", "text": "classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. 
Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\\n\\nTOOLS:\\n------\\n\\nAssistant has access to the following tools:', suffix='Begin!\\n\\nPrevious conversation history:\\n{chat_history}\\n\\nNew input: {input}\\n{agent_scratchpad}', format_instructions='To use a tool, please use the following format:\\n\\n```\\nThought: Do I need to use a tool? Yes\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n```\\n\\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-13", "text": "say to the Human, or if you do not need to use a tool, you MUST use the format:\\n\\n```\\nThought: Do I need to use a tool? 
No\\n{ai_prefix}: [your response here]\\n```', ai_prefix='AI', human_prefix='Human', input_variables=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-14", "text": "Construct an agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \noutput_parser (Optional[langchain.agents.agent.AgentOutputParser]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \nai_prefix (str) \u2013 \nhuman_prefix (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.Agent\nproperty llm_prefix: str\uf0c1\nPrefix to append the llm call with.\nproperty observation_prefix: str\uf0c1\nPrefix to append the observation with.\nclass langchain.agents.ConversationalChatAgent(*, llm_chain, output_parser=None, allowed_tools=None, template_tool_response=\"TOOL RESPONSE: \\n---------------------\\n{observation}\\n\\nUSER'S INPUT\\n--------------------\\n\\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! 
Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.\")[source]\uf0c1\nBases: langchain.agents.agent.Agent\nAn agent designed to hold a conversation in addition to using tools.\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \noutput_parser (langchain.agents.agent.AgentOutputParser) \u2013 \nallowed_tools (Optional[List[str]]) \u2013 \ntemplate_tool_response (str) \u2013 \nReturn type\nNone\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-15", "text": "None\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]\uf0c1\nattribute template_tool_response: str = \"TOOL RESPONSE: \\n---------------------\\n{observation}\\n\\nUSER'S INPUT\\n--------------------\\n\\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.\"\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-16", "text": "classmethod create_prompt(tools, system_message='Assistant is a large language model trained by OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. 
It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message=\"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\", input_variables=None, output_parser=None)[source]\uf0c1\nCreate a prompt for this class.\nParameters\ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \nsystem_message (str) \u2013 \nhuman_message (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-17", "text": "human_message (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \noutput_parser (Optional[langchain.schema.BaseOutputParser]) \u2013 \nReturn type\nlangchain.prompts.base.BasePromptTemplate", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-18", "text": "Return type\nlangchain.prompts.base.BasePromptTemplate\nclassmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, system_message='Assistant is a large language model trained by 
OpenAI.\\n\\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\\n\\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\\n\\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message=\"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. 
The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\", input_variables=None, **kwargs)[source]\uf0c1\nConstruct an agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-19", "text": "Parameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \noutput_parser (Optional[langchain.agents.agent.AgentOutputParser]) \u2013 \nsystem_message (str) \u2013 \nhuman_message (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.Agent\nproperty llm_prefix: str\uf0c1\nPrefix to append the llm call with.\nproperty observation_prefix: str\uf0c1\nPrefix to append the observation with.\nclass langchain.agents.LLMSingleActionAgent(*, llm_chain, output_parser, stop)[source]\uf0c1\nBases: langchain.agents.agent.BaseSingleActionAgent\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \noutput_parser (langchain.agents.agent.AgentOutputParser) \u2013 \nstop (List[str]) \u2013 \nReturn type\nNone\nattribute llm_chain: langchain.chains.llm.LLMChain [Required]\uf0c1\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Required]\uf0c1\nattribute stop: List[str] [Required]\uf0c1\nasync aplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], 
langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nkwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-20", "text": "kwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\ndict(**kwargs)[source]\uf0c1\nReturn dictionary representation of agent.\nParameters\nkwargs (Any) \u2013 \nReturn type\nDict\nplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Callbacks to run.\n**kwargs \u2013 User inputs.\nkwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\ntool_run_logging_kwargs()[source]\uf0c1\nReturn type\nDict\nclass langchain.agents.MRKLChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]\uf0c1\nBases: langchain.agents.agent.AgentExecutor\nChain that implements the MRKL system.\nExample\nfrom langchain import OpenAI, MRKLChain\nfrom langchain.chains.mrkl.base import ChainConfig\nllm = OpenAI(temperature=0)\nprompt = PromptTemplate(...)\nchains = [...]\nmrkl = MRKLChain.from_chains(llm=llm, prompt=prompt)\nParameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-21", "text": 
"Parameters\nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nagent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nhandle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) \u2013 \nReturn type\nNone\nclassmethod from_chains(llm, chains, **kwargs)[source]\uf0c1\nUser friendly way to initialize the MRKL chain.\nThis is intended to be an easy way to get up and running with the\nMRKL chain.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 The LLM to use as the agent LLM.\nchains (List[langchain.agents.mrkl.base.ChainConfig]) \u2013 The chains the MRKL system has access to.\n**kwargs \u2013 parameters to be passed to initialization.\nkwargs (Any) \u2013 \nReturns\nAn initialized MRKL chain.\nReturn type\nlangchain.agents.agent.AgentExecutor\nExample\nfrom langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain\nfrom langchain.chains.mrkl.base import ChainConfig\nllm = OpenAI(temperature=0)\nsearch = SerpAPIWrapper()\nllm_math_chain = LLMMathChain(llm=llm)\nchains = [", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-22", "text": "llm_math_chain = LLMMathChain(llm=llm)\nchains = [\n ChainConfig(\n action_name = \"Search\",\n action=search.search,\n action_description=\"useful for searching\"\n ),\n ChainConfig(\n action_name=\"Calculator\",\n action=llm_math_chain.run,\n 
action_description=\"useful for doing math\"\n    )\n]\nmrkl = MRKLChain.from_chains(llm, chains)\nclass langchain.agents.OpenAIFunctionsAgent(*, llm, tools, prompt)[source]\uf0c1\nBases: langchain.agents.agent.BaseSingleActionAgent\nAn Agent driven by OpenAI's function powered API.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 This should be an instance of ChatOpenAI, specifically a model\nthat supports using functions.\ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 The tools this agent has access to.\nprompt (langchain.prompts.base.BasePromptTemplate) \u2013 The prompt for this agent, should support agent_scratchpad as one\nof the variables. For an easy way to construct this prompt, use\nOpenAIFunctionsAgent.create_prompt(\u2026)\nReturn type\nNone\nattribute llm: langchain.base_language.BaseLanguageModel [Required]\uf0c1\nattribute prompt: langchain.prompts.base.BasePromptTemplate [Required]\uf0c1\nattribute tools: Sequence[langchain.tools.base.BaseTool] [Required]\uf0c1\nasync aplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date,\nalong with observations\n**kwargs \u2013 User inputs.", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-23", "text": "along with observations\n**kwargs \u2013 User inputs.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\nclassmethod create_prompt(system_message=SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages=None)[source]\uf0c1\nCreate prompt for this agent.\nParameters\nsystem_message 
(Optional[langchain.schema.SystemMessage]) \u2013 Message to use as the system message that will be the\nfirst in the prompt.\nextra_prompt_messages (Optional[List[langchain.prompts.chat.BaseMessagePromptTemplate]]) \u2013 Prompt messages that will be placed between the\nsystem message and the new human input.\nReturns\nA prompt template to pass into this agent.\nReturn type\nlangchain.prompts.base.BasePromptTemplate\nclassmethod from_llm_and_tools(llm, tools, callback_manager=None, extra_prompt_messages=None, system_message=SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs)[source]\uf0c1\nConstruct an agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nextra_prompt_messages (Optional[List[langchain.prompts.chat.BaseMessagePromptTemplate]]) \u2013 \nsystem_message (Optional[langchain.schema.SystemMessage]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.BaseSingleActionAgent\nget_allowed_tools()[source]\uf0c1\nGet allowed tools.\nReturn type\nList[str]\nplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-24", "text": "List[str]\nplan(intermediate_steps, callbacks=None, **kwargs)[source]\uf0c1\nGiven input, decided what to do.\nParameters\nintermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) \u2013 Steps the LLM has taken to date, along with observations\n**kwargs \u2013 User inputs.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \nkwargs (Any) \u2013 \nReturns\nAction specifying what tool to use.\nReturn type\nUnion[langchain.schema.AgentAction, langchain.schema.AgentFinish]\nproperty functions: 
List[dict]\uf0c1\nproperty input_keys: List[str]\uf0c1\nGet input keys. Input refers to user input here.\nclass langchain.agents.ReActChain(llm, docstore, *, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]\uf0c1\nBases: langchain.agents.agent.AgentExecutor\nChain that implements the ReAct paper.\nExample\nfrom langchain import ReActChain, OpenAI\nreact = ReAct(llm=OpenAI())\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ndocstore (langchain.docstore.base.Docstore) \u2013 \nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-25", "text": "verbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nagent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nhandle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) \u2013 \nReturn type\nNone\nclass langchain.agents.ReActTextWorldAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]\uf0c1\nBases: langchain.agents.react.base.ReActDocstoreAgent\nAgent for the ReAct TextWorld chain.\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \noutput_parser (langchain.agents.agent.AgentOutputParser) \u2013 
\nallowed_tools (Optional[List[str]]) \u2013 \nReturn type\nNone\nclassmethod create_prompt(tools)[source]\uf0c1\nReturn default prompt.\nParameters\ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \nReturn type\nlangchain.prompts.base.BasePromptTemplate\nclass langchain.agents.SelfAskWithSearchChain(llm, search_chain, *, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]\uf0c1\nBases: langchain.agents.agent.AgentExecutor\nChain that does self ask with search.\nExample\nfrom langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper\nsearch_chain = GoogleSerperAPIWrapper()", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-26", "text": "search_chain = GoogleSerperAPIWrapper()\nself_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \nsearch_chain (Union[langchain.utilities.google_serper.GoogleSerperAPIWrapper, langchain.utilities.serpapi.SerpAPIWrapper]) \u2013 \nmemory (Optional[langchain.schema.BaseMemory]) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nverbose (bool) \u2013 \ntags (Optional[List[str]]) \u2013 \nagent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nhandle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) \u2013 
\nReturn type\nNone\nclass langchain.agents.StructuredChatAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]\uf0c1\nBases: langchain.agents.agent.Agent\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \noutput_parser (langchain.agents.agent.AgentOutputParser) \u2013 \nallowed_tools (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-27", "text": "None\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]\uf0c1\nclassmethod create_prompt(tools, prefix='Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix='Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\\nThought:', human_message_template='{input}\\n\\n{agent_scratchpad}', format_instructions='Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\\n\\nValid \"action\" values: \"Final Answer\" or {tool_names}\\n\\nProvide only ONE action per $JSON_BLOB, as shown:\\n\\n```\\n{{{{\\n\u00a0 \"action\": $TOOL_NAME,\\n\u00a0 \"action_input\": $INPUT\\n}}}}\\n```\\n\\nFollow this format:\\n\\nQuestion: input question to answer\\nThought: consider previous and subsequent steps\\nAction:\\n```\\n$JSON_BLOB\\n```\\nObservation: action result\\n... 
(repeat Thought/Action/Observation N times)\\nThought: I know what to respond\\nAction:\\n```\\n{{{{\\n\u00a0 \"action\": \"Final Answer\",\\n\u00a0 \"action_input\": \"Final response to human\"\\n}}}}\\n```', input_variables=None, memory_prompts=None)[source]\uf0c1\nCreate a prompt for this class.\nParameters\ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nhuman_message_template (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-28", "text": "format_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nmemory_prompts (Optional[List[langchain.prompts.base.BasePromptTemplate]]) \u2013 \nReturn type\nlangchain.prompts.base.BasePromptTemplate\nclassmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix='Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\\nThought:', human_message_template='{input}\\n\\n{agent_scratchpad}', format_instructions='Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\\n\\nValid \"action\" values: \"Final Answer\" or {tool_names}\\n\\nProvide only ONE action per $JSON_BLOB, as shown:\\n\\n```\\n{{{{\\n\u00a0 \"action\": $TOOL_NAME,\\n\u00a0 \"action_input\": $INPUT\\n}}}}\\n```\\n\\nFollow this format:\\n\\nQuestion: input question to answer\\nThought: consider previous and subsequent steps\\nAction:\\n```\\n$JSON_BLOB\\n```\\nObservation: action result\\n... 
(repeat Thought/Action/Observation N times)\\nThought: I know what to respond\\nAction:\\n```\\n{{{{\\n\u00a0 \"action\": \"Final Answer\",\\n\u00a0 \"action_input\": \"Final response to human\"\\n}}}}\\n```', input_variables=None, memory_prompts=None, **kwargs)[source]\uf0c1\nConstruct an agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-29", "text": "Parameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \noutput_parser (Optional[langchain.agents.agent.AgentOutputParser]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nhuman_message_template (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nmemory_prompts (Optional[List[langchain.prompts.base.BasePromptTemplate]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.Agent\nproperty llm_prefix: str\uf0c1\nPrefix to append the llm call with.\nproperty observation_prefix: str\uf0c1\nPrefix to append the observation with.\nclass langchain.agents.Tool(name, func, description, *, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, coroutine=None)[source]\uf0c1\nBases: langchain.tools.base.BaseTool\nTool that takes in function or coroutine directly.\nParameters\nname (str) \u2013 \nfunc (Callable[[...], str]) \u2013 \ndescription (str) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nreturn_direct (bool) \u2013 \nverbose (bool) \u2013 \ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nhandle_tool_error 
(Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) \u2013 \ncoroutine (Optional[Callable[[...], Awaitable[str]]]) \u2013 \nReturn type\nNone", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-30", "text": "Return type\nNone\nattribute coroutine: Optional[Callable[[...], Awaitable[str]]] = None\uf0c1\nThe asynchronous version of the function.\nattribute description: str = ''\uf0c1\nUsed to tell the model how/when/why to use the tool.\nYou can provide few-shot examples as a part of the description.\nattribute func: Callable[[...], str] [Required]\uf0c1\nThe function to run when the tool is called.\nclassmethod from_function(func, name, description, return_direct=False, args_schema=None, **kwargs)[source]\uf0c1\nInitialize tool from a function.\nParameters\nfunc (Callable) \u2013 \nname (str) \u2013 \ndescription (str) \u2013 \nreturn_direct (bool) \u2013 \nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.tools.base.Tool\nproperty args: dict\uf0c1\nThe tool\u2019s input arguments.\nclass langchain.agents.ZeroShotAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]\uf0c1\nBases: langchain.agents.agent.Agent\nAgent for the MRKL chain.\nParameters\nllm_chain (langchain.chains.llm.LLMChain) \u2013 \noutput_parser (langchain.agents.agent.AgentOutputParser) \u2013 \nallowed_tools (Optional[List[str]]) \u2013 \nReturn type\nNone\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-31", "text": "None\nattribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]\uf0c1\nclassmethod create_prompt(tools, prefix='Answer the following questions as best you can. 
You have access to the following tools:', suffix='Begin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None)[source]\uf0c1\nCreate prompt in the style of the zero shot agent.\nParameters\ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 List of tools the agent will have access to, used to format the\nprompt.\nprefix (str) \u2013 String to put before the list of tools.\nsuffix (str) \u2013 String to put after the list of tools.\ninput_variables (Optional[List[str]]) \u2013 List of input variables the final prompt will expect.\nformat_instructions (str) \u2013 \nReturns\nA PromptTemplate with the template assembled from the pieces here.\nReturn type\nlangchain.prompts.prompt.PromptTemplate", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-32", "text": "Return type\nlangchain.prompts.prompt.PromptTemplate\nclassmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Answer the following questions as best you can. You have access to the following tools:', suffix='Begin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, **kwargs)[source]\uf0c1\nConstruct an agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \noutput_parser (Optional[langchain.agents.agent.AgentOutputParser]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.Agent\nproperty llm_prefix: str\uf0c1\nPrefix to append the llm call with.\nproperty observation_prefix: str\uf0c1\nPrefix to append the observation with.\nlangchain.agents.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source]\uf0c1\nCreate csv agent by loading to a dataframe and using pandas agent.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-33", "text": "Parameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \npath (Union[str, List[str]]) \u2013 \npandas_kwargs (Optional[dict]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-34", "text": "langchain.agents.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\\nYour goal is to return a final answer by interacting with the JSON.\\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\\nOnly use the below tools. 
Only use the information returned by the below tools to construct your final answer.\\nDo not make up any information that is not contained in the JSON.\\nYour input to the tools should be in the form of `data[\"key\"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \\nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \\nIf you have not seen a key in one of those responses, you cannot use it.\\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\\nIf you encounter a \"KeyError\", go back to the previous key, look at the available keys, and try again.\\n\\nIf the question does not seem to be related to the JSON, just return \"I don\\'t know\" as the answer.\\nAlways begin your interaction with the `json_spec_list_keys` tool with input \"data\" to see what keys exist in the JSON.\\n\\nNote that sometimes the value at a given path is large. In this case, you will get an error \"Value is a large dictionary, should explore its keys directly\".\\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-35", "text": "the JSON, as this is not a valid answer. 
Keep digging until you find the answer and explicitly return it.\\n', suffix='Begin!\"\\n\\nQuestion: {input}\\nThought: I should look at the keys that exist in data to see what I have access to\\n{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-36", "text": "Construct a json agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-37", "text": "langchain.agents.create_openapi_agent(llm, toolkit, callback_manager=None, prefix=\"You are an agent designed to answer questions by making web requests to an API given the openapi spec.\\n\\nIf the question does not seem related to the API, return I don't know. 
Do not make up an answer.\\nOnly use information provided by the tools to construct your response.\\n\\nFirst, find the base URL needed to make the request.\\n\\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\\n\\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\\n\\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\\n\\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\\n\", suffix='Begin!\\n\\nQuestion: {input}\\nThought: I should explore the spec to find the base url for the API.\\n{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-38", "text": "Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-39", "text": "Construct an OpenAPI agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nverbose (bool) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source]\uf0c1\nConstruct a pandas agent from an LLM and dataframe.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ndf (Any) \u2013 \nagent_type (langchain.agents.agent_types.AgentType) \u2013 
\ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (Optional[str]) \u2013 \nsuffix (Optional[str]) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nverbose (bool) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-40", "text": "max_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \ninclude_df_in_prompt (Optional[bool]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-41", "text": "langchain.agents.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\\n\\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. 
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix='Begin!\\n\\nQuestion: {input}\\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\\n{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', examples=None, input_variables=None,", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-42", "text": "Answer: the final answer to the original input question', examples=None, input_variables=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-43", "text": "Construct a pbi agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) \u2013 \npowerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \nexamples (Optional[str]) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \ntop_k (int) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", 
"source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-44", "text": "Return type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\\n\\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return \"This does not appear to be part of this dataset.\" as the answer.\\n\\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\n', suffix=\"TOOLS\\n------\\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. 
The tools the human can use are:\\n\\n{{tools}}\\n\\n{format_instructions}\\n\\nUSER'S INPUT\\n--------------------\\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\\n\\n{{{{input}}}}\\n\", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a pbi agent from a chat LLM and tools.", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-45", "text": "Construct a pbi agent from a chat LLM and tools.\nIf you supply only a toolkit and no powerbi dataset, the same LLM is used for both.\nParameters\nllm (langchain.chat_models.base.BaseChatModel) \u2013 \ntoolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) \u2013 \npowerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \noutput_parser (Optional[langchain.agents.agent.AgentOutputParser]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nexamples (Optional[str]) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nmemory (Optional[langchain.memory.chat_memory.BaseChatMemory]) \u2013 \ntop_k (int) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\\nYou are working with a spark dataframe in Python. 
The name of the dataframe is `df`.\\nYou should use the tools below to answer the question posed of you:', suffix='\\nThis is the result of `print(df.first())`:\\n{df}\\n\\nBegin!\\nQuestion: {input}\\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a spark agent from an LLM and dataframe.\nParameters\nllm (langchain.llms.base.BaseLLM) \u2013 \ndf (Any) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-46", "text": "df (Any) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \nverbose (bool) \u2013 \nreturn_intermediate_steps (bool) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-47", "text": "langchain.agents.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with Spark SQL.\\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use 
the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix='Begin!\\n\\nQuestion: {input}\\nThought: I should look at the tables in the database to see what I can query.\\n{agent_scratchpad}', format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10,", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-48", "text": "Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-49", "text": "Construct a sql agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (str) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \ntop_k (int) \u2013 \nmax_iterations (Optional[int]) 
\u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-50", "text": "langchain.agents.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\\'t know\" as the answer.\\n', suffix=None, format_instructions='Use the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [{tool_names}]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None,", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-51", "text": "max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-52", "text": "Construct a sql agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) \u2013 \nagent_type (langchain.agents.agent_types.AgentType) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nsuffix (Optional[str]) \u2013 \nformat_instructions (str) \u2013 \ninput_variables (Optional[List[str]]) \u2013 \ntop_k (int) \u2013 \nmax_iterations (Optional[int]) \u2013 \nmax_execution_time (Optional[float]) \u2013 \nearly_stopping_method (str) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\\nIf the question does not seem relevant to any of the tools provided, just return \"I don\\'t know\" as the answer.\\n', verbose=False, 
agent_executor_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a vectorstore agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nverbose (bool) \u2013", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-53", "text": "prefix (str) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\\nYour main task is to decide which of the tools is relevant for answering question at hand.\\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]\uf0c1\nConstruct a vectorstore router agent from an LLM and tools.\nParameters\nllm (langchain.base_language.BaseLanguageModel) \u2013 \ntoolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) \u2013 \ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 \nprefix (str) \u2013 \nverbose (bool) \u2013 \nagent_executor_kwargs (Optional[Dict[str, Any]]) \u2013 \nkwargs (Dict[str, Any]) \u2013 \nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.get_all_tool_names()[source]\uf0c1\nGet a list of all possible tool names.\nReturn type\nList[str]\nlangchain.agents.initialize_agent(tools, llm, agent=None, callback_manager=None, agent_path=None, agent_kwargs=None, *, tags=None, 
**kwargs)[source]\uf0c1\nLoad an agent executor given tools and LLM.\nParameters\ntools (Sequence[langchain.tools.base.BaseTool]) \u2013 List of tools this agent has access to.", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-54", "text": "llm (langchain.base_language.BaseLanguageModel) \u2013 Language model to use as the agent.\nagent (Optional[langchain.agents.agent_types.AgentType]) \u2013 Agent type to use. If None and agent_path is also None, will default to\nAgentType.ZERO_SHOT_REACT_DESCRIPTION.\ncallback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) \u2013 CallbackManager to use. Global callback manager is used if\nnot provided. Defaults to None.\nagent_path (Optional[str]) \u2013 Path to serialized agent to use.\nagent_kwargs (Optional[dict]) \u2013 Additional keyword arguments to pass to the underlying agent\ntags (Optional[Sequence[str]]) \u2013 Tags to apply to the traced runs.\n**kwargs \u2013 Additional keyword arguments passed to the agent executor\nkwargs (Any) \u2013 \nReturns\nAn agent executor\nReturn type\nlangchain.agents.agent.AgentExecutor\nlangchain.agents.load_agent(path, **kwargs)[source]\uf0c1\nUnified method for loading an agent from LangChainHub or local fs.\nParameters\npath (Union[str, pathlib.Path]) \u2013 \nkwargs (Any) \u2013 \nReturn type\nUnion[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]\nlangchain.agents.load_huggingface_tool(task_or_repo_id, model_repo_id=None, token=None, remote=False, **kwargs)[source]\uf0c1\nLoads a tool from the HuggingFace Hub.\nParameters\ntask_or_repo_id (str) \u2013 Task or model repo id.\nmodel_repo_id (Optional[str]) \u2013 Optional model repo id.\ntoken (Optional[str]) \u2013 Optional token.\nremote (bool) \u2013 Optional remote. 
Defaults to False.\n**kwargs \u2013 \nkwargs (Any) \u2013 \nReturns\nA tool.\nReturn type\nlangchain.tools.base.BaseTool", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "7a9f411c5266-55", "text": "Returns\nA tool.\nReturn type\nlangchain.tools.base.BaseTool\nlangchain.agents.load_tools(tool_names, llm=None, callbacks=None, **kwargs)[source]\uf0c1\nLoad tools based on their name.\nParameters\ntool_names (List[str]) \u2013 name of tools to load.\nllm (Optional[langchain.base_language.BaseLanguageModel]) \u2013 Optional language model, may be needed to initialize certain tools.\ncallbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) \u2013 Optional callback manager or list of callback handlers.\nIf not provided, default global callback manager will be used.\nkwargs (Any) \u2013 \nReturns\nList of tools.\nReturn type\nList[langchain.tools.base.BaseTool]\nlangchain.agents.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source]\uf0c1\nMake tools out of functions, can be used with or without arguments.\nParameters\n*args \u2013 The arguments to the tool.\nreturn_direct (bool) \u2013 Whether to return directly from the tool rather\nthan continuing the agent loop.\nargs_schema (Optional[Type[pydantic.main.BaseModel]]) \u2013 optional argument schema for user to specify\ninfer_schema (bool) \u2013 Whether to infer the schema of the arguments from\nthe function\u2019s signature. 
This also makes the resultant tool\naccept a dictionary input to its run() function.\nargs (Union[str, Callable]) \u2013 \nReturn type\nCallable\nRequires:\nFunction must be of type (str) -> str\nFunction must have a docstring\nExamples\n@tool\ndef search_api(query: str) -> str:\n # Searches the API for the query.\n return\n@tool(\"search\", return_direct=True)\ndef search_api(query: str) -> str:\n # Searches the API for the query.\n return", "source": "https://api.python.langchain.com/en/latest/modules/agents.html"} +{"id": "9636d6dc9867-0", "text": "Source code for langchain.requests\n\"\"\"Lightweight wrapper around requests library, with async support.\"\"\"\nfrom contextlib import asynccontextmanager\nfrom typing import Any, AsyncGenerator, Dict, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra\nclass Requests(BaseModel):\n \"\"\"Wrapper around requests to handle auth and async.\n The main purpose of this wrapper is to handle authentication (by saving\n headers) and enable easy async methods on the same base object.\n \"\"\"\n headers: Optional[Dict[str, str]] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n def get(self, url: str, **kwargs: Any) -> requests.Response:\n \"\"\"GET the URL and return the text.\"\"\"\n return requests.get(url, headers=self.headers, **kwargs)\n def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"POST to the URL and return the text.\"\"\"\n return requests.post(url, json=data, headers=self.headers, **kwargs)\n def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"PATCH the URL and return the text.\"\"\"\n return requests.patch(url, json=data, headers=self.headers, **kwargs)\n def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:\n \"\"\"PUT the URL and 
return the text.\"\"\"\n return requests.put(url, json=data, headers=self.headers, **kwargs)\n def delete(self, url: str, **kwargs: Any) -> requests.Response:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} +{"id": "9636d6dc9867-1", "text": "def delete(self, url: str, **kwargs: Any) -> requests.Response:\n \"\"\"DELETE the URL and return the text.\"\"\"\n return requests.delete(url, headers=self.headers, **kwargs)\n @asynccontextmanager\n async def _arequest(\n self, method: str, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"Make an async request.\"\"\"\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n method, url, headers=self.headers, **kwargs\n ) as response:\n yield response\n else:\n async with self.aiosession.request(\n method, url, headers=self.headers, **kwargs\n ) as response:\n yield response\n @asynccontextmanager\n async def aget(\n self, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"GET the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"GET\", url, **kwargs) as response:\n yield response\n @asynccontextmanager\n async def apost(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"POST to the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"POST\", url, **kwargs) as response:\n yield response\n @asynccontextmanager\n async def apatch(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"PATCH the URL and return the text asynchronously.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} +{"id": "9636d6dc9867-2", "text": "\"\"\"PATCH the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"PATCH\", url, **kwargs) as response:\n yield 
response\n @asynccontextmanager\n async def aput(\n self, url: str, data: Dict[str, Any], **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"PUT the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"PUT\", url, **kwargs) as response:\n yield response\n @asynccontextmanager\n async def adelete(\n self, url: str, **kwargs: Any\n ) -> AsyncGenerator[aiohttp.ClientResponse, None]:\n \"\"\"DELETE the URL and return the text asynchronously.\"\"\"\n async with self._arequest(\"DELETE\", url, **kwargs) as response:\n yield response\n[docs]class TextRequestsWrapper(BaseModel):\n \"\"\"Lightweight wrapper around requests library.\n The main purpose of this wrapper is to always return a text output.\n \"\"\"\n headers: Optional[Dict[str, str]] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def requests(self) -> Requests:\n return Requests(headers=self.headers, aiosession=self.aiosession)\n[docs] def get(self, url: str, **kwargs: Any) -> str:\n \"\"\"GET the URL and return the text.\"\"\"\n return self.requests.get(url, **kwargs).text\n[docs] def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} +{"id": "9636d6dc9867-3", "text": "\"\"\"POST to the URL and return the text.\"\"\"\n return self.requests.post(url, data, **kwargs).text\n[docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PATCH the URL and return the text.\"\"\"\n return self.requests.patch(url, data, **kwargs).text\n[docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PUT the URL and return the text.\"\"\"\n return self.requests.put(url, data, **kwargs).text\n[docs] def delete(self, url: str, **kwargs: Any) -> str:\n \"\"\"DELETE the URL and 
return the text.\"\"\"\n return self.requests.delete(url, **kwargs).text\n[docs] async def aget(self, url: str, **kwargs: Any) -> str:\n \"\"\"GET the URL and return the text asynchronously.\"\"\"\n async with self.requests.aget(url, **kwargs) as response:\n return await response.text()\n[docs] async def apost(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"POST to the URL and return the text asynchronously.\"\"\"\n async with self.requests.apost(url, **kwargs) as response:\n return await response.text()\n[docs] async def apatch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:\n \"\"\"PATCH the URL and return the text asynchronously.\"\"\"\n async with self.requests.apatch(url, **kwargs) as response:\n return await response.text()\n[docs] async def aput(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} +{"id": "9636d6dc9867-4", "text": "\"\"\"PUT the URL and return the text asynchronously.\"\"\"\n async with self.requests.aput(url, **kwargs) as response:\n return await response.text()\n[docs] async def adelete(self, url: str, **kwargs: Any) -> str:\n \"\"\"DELETE the URL and return the text asynchronously.\"\"\"\n async with self.requests.adelete(url, **kwargs) as response:\n return await response.text()\n# For backwards compatibility\nRequestsWrapper = TextRequestsWrapper", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/requests.html"} +{"id": "f1cd9c5ba011-0", "text": "Source code for langchain.text_splitter\n\"\"\"Functionality for splitting text.\"\"\"\nfrom __future__ import annotations\nimport copy\nimport logging\nimport re\nfrom abc import ABC, abstractmethod\nfrom dataclasses import dataclass\nfrom enum import Enum\nfrom typing import (\n AbstractSet,\n Any,\n Callable,\n Collection,\n Dict,\n Iterable,\n List,\n Literal,\n Optional,\n Sequence,\n Tuple,\n Type,\n TypedDict,\n TypeVar,\n Union,\n 
cast,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseDocumentTransformer\nlogger = logging.getLogger(__name__)\nTS = TypeVar(\"TS\", bound=\"TextSplitter\")\ndef _split_text_with_regex(\n text: str, separator: str, keep_separator: bool\n) -> List[str]:\n # Now that we have the separator, split the text\n if separator:\n if keep_separator:\n # The parentheses in the pattern keep the delimiters in the result.\n _splits = re.split(f\"({separator})\", text)\n splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]\n if len(_splits) % 2 == 0:\n splits += _splits[-1:]\n splits = [_splits[0]] + splits\n else:\n splits = text.split(separator)\n else:\n splits = list(text)\n return [s for s in splits if s != \"\"]\n[docs]class TextSplitter(BaseDocumentTransformer, ABC):\n \"\"\"Interface for splitting text into chunks.\"\"\"\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-1", "text": "\"\"\"Interface for splitting text into chunks.\"\"\"\n def __init__(\n self,\n chunk_size: int = 4000,\n chunk_overlap: int = 200,\n length_function: Callable[[str], int] = len,\n keep_separator: bool = False,\n add_start_index: bool = False,\n ) -> None:\n \"\"\"Create a new TextSplitter.\n Args:\n chunk_size: Maximum size of chunks to return\n chunk_overlap: Overlap in characters between chunks\n length_function: Function that measures the length of given chunks\n keep_separator: Whether or not to keep the separator in the chunks\n add_start_index: If `True`, includes chunk's start index in metadata\n \"\"\"\n if chunk_overlap > chunk_size:\n raise ValueError(\n f\"Got a larger chunk overlap ({chunk_overlap}) than chunk size \"\n f\"({chunk_size}), should be smaller.\"\n )\n self._chunk_size = chunk_size\n self._chunk_overlap = chunk_overlap\n self._length_function = length_function\n self._keep_separator = keep_separator\n 
self._add_start_index = add_start_index\n[docs] @abstractmethod\n def split_text(self, text: str) -> List[str]:\n \"\"\"Split text into multiple components.\"\"\"\n[docs] def create_documents(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> List[Document]:\n \"\"\"Create documents from a list of texts.\"\"\"\n _metadatas = metadatas or [{}] * len(texts)\n documents = []\n for i, text in enumerate(texts):\n index = -1\n for chunk in self.split_text(text):\n metadata = copy.deepcopy(_metadatas[i])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-2", "text": "metadata = copy.deepcopy(_metadatas[i])\n if self._add_start_index:\n index = text.find(chunk, index + 1)\n metadata[\"start_index\"] = index\n new_doc = Document(page_content=chunk, metadata=metadata)\n documents.append(new_doc)\n return documents\n[docs] def split_documents(self, documents: Iterable[Document]) -> List[Document]:\n \"\"\"Split documents.\"\"\"\n texts, metadatas = [], []\n for doc in documents:\n texts.append(doc.page_content)\n metadatas.append(doc.metadata)\n return self.create_documents(texts, metadatas=metadatas)\n def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:\n text = separator.join(docs)\n text = text.strip()\n if text == \"\":\n return None\n else:\n return text\n def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:\n # We now want to combine these smaller pieces into medium size\n # chunks to send to the LLM.\n separator_len = self._length_function(separator)\n docs = []\n current_doc: List[str] = []\n total = 0\n for d in splits:\n _len = self._length_function(d)\n if (\n total + _len + (separator_len if len(current_doc) > 0 else 0)\n > self._chunk_size\n ):\n if total > self._chunk_size:\n logger.warning(\n f\"Created a chunk of size {total}, \"\n f\"which is longer than the specified {self._chunk_size}\"\n )\n if len(current_doc) > 0:", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-3", "text": ")\n if len(current_doc) > 0:\n doc = self._join_docs(current_doc, separator)\n if doc is not None:\n docs.append(doc)\n # Keep on popping if:\n # - we have a larger chunk than in the chunk overlap\n # - or if we still have any chunks and the length is long\n while total > self._chunk_overlap or (\n total + _len + (separator_len if len(current_doc) > 0 else 0)\n > self._chunk_size\n and total > 0\n ):\n total -= self._length_function(current_doc[0]) + (\n separator_len if len(current_doc) > 1 else 0\n )\n current_doc = current_doc[1:]\n current_doc.append(d)\n total += _len + (separator_len if len(current_doc) > 1 else 0)\n doc = self._join_docs(current_doc, separator)\n if doc is not None:\n docs.append(doc)\n return docs\n[docs] @classmethod\n def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:\n \"\"\"Text splitter that uses HuggingFace tokenizer to count length.\"\"\"\n try:\n from transformers import PreTrainedTokenizerBase\n if not isinstance(tokenizer, PreTrainedTokenizerBase):\n raise ValueError(\n \"Tokenizer received was not an instance of PreTrainedTokenizerBase\"\n )\n def _huggingface_tokenizer_length(text: str) -> int:\n return len(tokenizer.encode(text))\n except ImportError:\n raise ValueError(\n \"Could not import transformers python package. 
\"\n \"Please install it with `pip install transformers`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-4", "text": "\"Please install it with `pip install transformers`.\"\n )\n return cls(length_function=_huggingface_tokenizer_length, **kwargs)\n[docs] @classmethod\n def from_tiktoken_encoder(\n cls: Type[TS],\n encoding_name: str = \"gpt2\",\n model_name: Optional[str] = None,\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set(),\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\",\n **kwargs: Any,\n ) -> TS:\n \"\"\"Text splitter that uses tiktoken encoder to count length.\"\"\"\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate max_tokens_for_prompt. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n if model_name is not None:\n enc = tiktoken.encoding_for_model(model_name)\n else:\n enc = tiktoken.get_encoding(encoding_name)\n def _tiktoken_encoder(text: str) -> int:\n return len(\n enc.encode(\n text,\n allowed_special=allowed_special,\n disallowed_special=disallowed_special,\n )\n )\n if issubclass(cls, TokenTextSplitter):\n extra_kwargs = {\n \"encoding_name\": encoding_name,\n \"model_name\": model_name,\n \"allowed_special\": allowed_special,\n \"disallowed_special\": disallowed_special,\n }\n kwargs = {**kwargs, **extra_kwargs}\n return cls(length_function=_tiktoken_encoder, **kwargs)\n[docs] def transform_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-5", "text": "[docs] def transform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Transform sequence of documents by splitting them.\"\"\"\n return self.split_documents(list(documents))\n[docs] async def atransform_documents(\n self, documents: 
Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Asynchronously transform a sequence of documents by splitting them.\"\"\"\n raise NotImplementedError\n[docs]class CharacterTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at characters.\"\"\"\n def __init__(self, separator: str = \"\\n\\n\", **kwargs: Any) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs)\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n # First we naively split the large input into a bunch of smaller ones.\n splits = _split_text_with_regex(text, self._separator, self._keep_separator)\n _separator = \"\" if self._keep_separator else self._separator\n return self._merge_splits(splits, _separator)\n[docs]class LineType(TypedDict):\n \"\"\"Line type as typed dict.\"\"\"\n metadata: Dict[str, str]\n content: str\n[docs]class HeaderType(TypedDict):\n \"\"\"Header type as typed dict.\"\"\"\n level: int\n name: str\n data: str\n[docs]class MarkdownHeaderTextSplitter:\n \"\"\"Implementation of splitting markdown files based on specified headers.\"\"\"\n def __init__(\n self, headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False\n ):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-6", "text": "):\n \"\"\"Create a new MarkdownHeaderTextSplitter.\n Args:\n headers_to_split_on: Headers we want to track\n return_each_line: Return each line w/ associated headers\n \"\"\"\n # Output line-by-line or aggregated into chunks w/ common headers\n self.return_each_line = return_each_line\n # Given the headers we want to split on,\n # (e.g., \"#, ##, etc\") order by length\n self.headers_to_split_on = sorted(\n headers_to_split_on, key=lambda split: len(split[0]), reverse=True\n )\n[docs] def aggregate_lines_to_chunks(self, lines: List[LineType]) -> 
List[Document]:\n \"\"\"Combine lines with common metadata into chunks\n Args:\n lines: Line of text / associated header metadata\n \"\"\"\n aggregated_chunks: List[LineType] = []\n for line in lines:\n if (\n aggregated_chunks\n and aggregated_chunks[-1][\"metadata\"] == line[\"metadata\"]\n ):\n # If the last line in the aggregated list\n # has the same metadata as the current line,\n # append the current content to the last line's content\n aggregated_chunks[-1][\"content\"] += \" \\n\" + line[\"content\"]\n else:\n # Otherwise, append the current line to the aggregated list\n aggregated_chunks.append(line)\n return [\n Document(page_content=chunk[\"content\"], metadata=chunk[\"metadata\"])\n for chunk in aggregated_chunks\n ]\n[docs] def split_text(self, text: str) -> List[Document]:\n \"\"\"Split markdown file\n Args:\n text: Markdown file\"\"\"\n # Split the input text by newline character (\"\\n\").\n lines = text.split(\"\\n\")\n # Final output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-7", "text": "lines = text.split(\"\\n\")\n # Final output\n lines_with_metadata: List[LineType] = []\n # Content and metadata of the chunk currently being processed\n current_content: List[str] = []\n current_metadata: Dict[str, str] = {}\n # Keep track of the nested header structure\n # header_stack: List[Dict[str, Union[int, str]]] = []\n header_stack: List[HeaderType] = []\n initial_metadata: Dict[str, str] = {}\n for line in lines:\n stripped_line = line.strip()\n # Check each line against each of the header types (e.g., #, ##)\n for sep, name in self.headers_to_split_on:\n # Check if line starts with a header that we intend to split on\n if stripped_line.startswith(sep) and (\n # Header with no text OR header is followed by space\n # Both are valid conditions that sep is being used as a header\n len(stripped_line) == len(sep)\n or stripped_line[len(sep)] == \" \"\n ):\n # Ensure we are tracking the 
header as metadata\n if name is not None:\n # Get the current header level\n current_header_level = sep.count(\"#\")\n # Pop out headers of lower or same level from the stack\n while (\n header_stack\n and header_stack[-1][\"level\"] >= current_header_level\n ):\n # We have encountered a new header\n # at the same or higher level\n popped_header = header_stack.pop()\n # Clear the metadata for the\n # popped header in initial_metadata\n if popped_header[\"name\"] in initial_metadata:\n initial_metadata.pop(popped_header[\"name\"])\n # Push the current header to the stack", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-8", "text": "# Push the current header to the stack\n header: HeaderType = {\n \"level\": current_header_level,\n \"name\": name,\n \"data\": stripped_line[len(sep) :].strip(),\n }\n header_stack.append(header)\n # Update initial_metadata with the current header\n initial_metadata[name] = header[\"data\"]\n # Add the previous line to the lines_with_metadata\n # only if current_content is not empty\n if current_content:\n lines_with_metadata.append(\n {\n \"content\": \"\\n\".join(current_content),\n \"metadata\": current_metadata.copy(),\n }\n )\n current_content.clear()\n break\n else:\n if stripped_line:\n current_content.append(stripped_line)\n elif current_content:\n lines_with_metadata.append(\n {\n \"content\": \"\\n\".join(current_content),\n \"metadata\": current_metadata.copy(),\n }\n )\n current_content.clear()\n current_metadata = initial_metadata.copy()\n if current_content:\n lines_with_metadata.append(\n {\"content\": \"\\n\".join(current_content), \"metadata\": current_metadata}\n )\n # lines_with_metadata has each line with associated header metadata\n # aggregate these into chunks based on common metadata\n if not self.return_each_line:\n return self.aggregate_lines_to_chunks(lines_with_metadata)\n else:\n return [\n Document(page_content=chunk[\"content\"], 
metadata=chunk[\"metadata\"])\n for chunk in lines_with_metadata\n ]\n# should be in newer Python versions (3.10+)\n# @dataclass(frozen=True, kw_only=True, slots=True)\n[docs]@dataclass(frozen=True)\nclass Tokenizer:\n chunk_overlap: int", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-9", "text": "class Tokenizer:\n chunk_overlap: int\n tokens_per_chunk: int\n decode: Callable[[list[int]], str]\n encode: Callable[[str], List[int]]\n[docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n splits: List[str] = []\n input_ids = tokenizer.encode(text)\n start_idx = 0\n cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))\n chunk_ids = input_ids[start_idx:cur_idx]\n while start_idx < len(input_ids):\n splits.append(tokenizer.decode(chunk_ids))\n start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap\n cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))\n chunk_ids = input_ids[start_idx:cur_idx]\n return splits\n[docs]class TokenTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at tokens.\"\"\"\n def __init__(\n self,\n encoding_name: str = \"gpt2\",\n model_name: Optional[str] = None,\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set(),\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to use TokenTextSplitter. 
\"\n \"Please install it with `pip install tiktoken`.\"\n )\n if model_name is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-10", "text": ")\n if model_name is not None:\n enc = tiktoken.encoding_for_model(model_name)\n else:\n enc = tiktoken.get_encoding(encoding_name)\n self._tokenizer = enc\n self._allowed_special = allowed_special\n self._disallowed_special = disallowed_special\n[docs] def split_text(self, text: str) -> List[str]:\n def _encode(_text: str) -> List[int]:\n return self._tokenizer.encode(\n _text,\n allowed_special=self._allowed_special,\n disallowed_special=self._disallowed_special,\n )\n tokenizer = Tokenizer(\n chunk_overlap=self._chunk_overlap,\n tokens_per_chunk=self._chunk_size,\n decode=self._tokenizer.decode,\n encode=_encode,\n )\n return split_text_on_tokens(text=text, tokenizer=tokenizer)\n[docs]class SentenceTransformersTokenTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at tokens.\"\"\"\n def __init__(\n self,\n chunk_overlap: int = 50,\n model_name: str = \"sentence-transformers/all-mpnet-base-v2\",\n tokens_per_chunk: Optional[int] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(**kwargs, chunk_overlap=chunk_overlap)\n try:\n from sentence_transformers import SentenceTransformer\n except ImportError:\n raise ImportError(\n \"Could not import sentence_transformers python package. \"\n \"This is needed in order to use SentenceTransformersTokenTextSplitter. 
\"\n \"Please install it with `pip install sentence-transformers`.\"\n )\n self.model_name = model_name", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-11", "text": ")\n self.model_name = model_name\n self._model = SentenceTransformer(self.model_name)\n self.tokenizer = self._model.tokenizer\n self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)\n def _initialize_chunk_configuration(\n self, *, tokens_per_chunk: Optional[int]\n ) -> None:\n self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)\n if tokens_per_chunk is None:\n self.tokens_per_chunk = self.maximum_tokens_per_chunk\n else:\n self.tokens_per_chunk = tokens_per_chunk\n if self.tokens_per_chunk > self.maximum_tokens_per_chunk:\n raise ValueError(\n f\"The token limit of the models '{self.model_name}'\"\n f\" is: {self.maximum_tokens_per_chunk}.\"\n f\" Argument tokens_per_chunk={self.tokens_per_chunk}\"\n f\" > maximum token limit.\"\n )\n[docs] def split_text(self, text: str) -> List[str]:\n def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:\n return self._encode(text)[1:-1]\n tokenizer = Tokenizer(\n chunk_overlap=self._chunk_overlap,\n tokens_per_chunk=self.tokens_per_chunk,\n decode=self.tokenizer.decode,\n encode=encode_strip_start_and_stop_token_ids,\n )\n return split_text_on_tokens(text=text, tokenizer=tokenizer)\n[docs] def count_tokens(self, *, text: str) -> int:\n return len(self._encode(text))\n _max_length_equal_32_bit_integer = 2**32\n def _encode(self, text: str) -> List[int]:\n token_ids_with_start_and_end_token_ids = self.tokenizer.encode(\n text,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-12", "text": "token_ids_with_start_and_end_token_ids = self.tokenizer.encode(\n text,\n max_length=self._max_length_equal_32_bit_integer,\n truncation=\"do_not_truncate\",\n )\n return 
token_ids_with_start_and_end_token_ids\n[docs]class Language(str, Enum):\n CPP = \"cpp\"\n GO = \"go\"\n JAVA = \"java\"\n JS = \"js\"\n PHP = \"php\"\n PROTO = \"proto\"\n PYTHON = \"python\"\n RST = \"rst\"\n RUBY = \"ruby\"\n RUST = \"rust\"\n SCALA = \"scala\"\n SWIFT = \"swift\"\n MARKDOWN = \"markdown\"\n LATEX = \"latex\"\n HTML = \"html\"\n SOL = \"sol\"\n[docs]class RecursiveCharacterTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at characters.\n Recursively tries to split by different characters to find one\n that works.\n \"\"\"\n def __init__(\n self,\n separators: Optional[List[str]] = None,\n keep_separator: bool = True,\n **kwargs: Any,\n ) -> None:\n \"\"\"Create a new TextSplitter.\"\"\"\n super().__init__(keep_separator=keep_separator, **kwargs)\n self._separators = separators or [\"\\n\\n\", \"\\n\", \" \", \"\"]\n def _split_text(self, text: str, separators: List[str]) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n final_chunks = []\n # Get appropriate separator to use\n separator = separators[-1]\n new_separators = []\n for i, _s in enumerate(separators):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-13", "text": "for i, _s in enumerate(separators):\n if _s == \"\":\n separator = _s\n break\n if re.search(_s, text):\n separator = _s\n new_separators = separators[i + 1 :]\n break\n splits = _split_text_with_regex(text, separator, self._keep_separator)\n # Now go merging things, recursively splitting longer texts.\n _good_splits = []\n _separator = \"\" if self._keep_separator else separator\n for s in splits:\n if self._length_function(s) < self._chunk_size:\n _good_splits.append(s)\n else:\n if _good_splits:\n merged_text = self._merge_splits(_good_splits, _separator)\n final_chunks.extend(merged_text)\n _good_splits = []\n if not new_separators:\n final_chunks.append(s)\n else:\n other_info = self._split_text(s, 
new_separators)\n final_chunks.extend(other_info)\n if _good_splits:\n merged_text = self._merge_splits(_good_splits, _separator)\n final_chunks.extend(merged_text)\n return final_chunks\n[docs] def split_text(self, text: str) -> List[str]:\n return self._split_text(text, self._separators)\n[docs] @classmethod\n def from_language(\n cls, language: Language, **kwargs: Any\n ) -> RecursiveCharacterTextSplitter:\n separators = cls.get_separators_for_language(language)\n return cls(separators=separators, **kwargs)\n[docs] @staticmethod\n def get_separators_for_language(language: Language) -> List[str]:\n if language == Language.CPP:\n return [", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-14", "text": "if language == Language.CPP:\n return [\n # Split along class definitions\n \"\\nclass \",\n # Split along function definitions\n \"\\nvoid \",\n \"\\nint \",\n \"\\nfloat \",\n \"\\ndouble \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.GO:\n return [\n # Split along function definitions\n \"\\nfunc \",\n \"\\nvar \",\n \"\\nconst \",\n \"\\ntype \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.JAVA:\n return [\n # Split along class definitions\n \"\\nclass \",\n # Split along method definitions\n \"\\npublic \",\n \"\\nprotected \",\n \"\\nprivate \",\n \"\\nstatic \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.JS:\n return [\n # Split along function 
definitions\n \"\\nfunction \",\n \"\\nconst \",\n \"\\nlet \",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-15", "text": "\"\\nfunction \",\n \"\\nconst \",\n \"\\nlet \",\n \"\\nvar \",\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nswitch \",\n \"\\ncase \",\n \"\\ndefault \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PHP:\n return [\n # Split along function definitions\n \"\\nfunction \",\n # Split along class definitions\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nforeach \",\n \"\\nwhile \",\n \"\\ndo \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PROTO:\n return [\n # Split along message definitions\n \"\\nmessage \",\n # Split along service definitions\n \"\\nservice \",\n # Split along enum definitions\n \"\\nenum \",\n # Split along option definitions\n \"\\noption \",\n # Split along import statements\n \"\\nimport \",\n # Split along syntax declarations\n \"\\nsyntax \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.PYTHON:\n return [\n # First, try to split along class definitions\n \"\\nclass \",\n \"\\ndef \",\n \"\\n\\tdef \",\n # Now split by the normal type of lines\n \"\\n\\n\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-16", "text": "# Now split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RST:\n return [\n # Split along section titles\n \"\\n=+\\n\",\n \"\\n-+\\n\",\n \"\\n\\*+\\n\",\n # Split along directive markers\n \"\\n\\n.. 
*\\n\\n\",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RUBY:\n return [\n # Split along method definitions\n \"\\ndef \",\n \"\\nclass \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nunless \",\n \"\\nwhile \",\n \"\\nfor \",\n \"\\ndo \",\n \"\\nbegin \",\n \"\\nrescue \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.RUST:\n return [\n # Split along function definitions\n \"\\nfn \",\n \"\\nconst \",\n \"\\nlet \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nwhile \",\n \"\\nfor \",\n \"\\nloop \",\n \"\\nmatch \",\n \"\\nconst \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.SCALA:\n return [\n # Split along class definitions\n \"\\nclass \",\n \"\\nobject \",\n # Split along method definitions\n \"\\ndef \",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-17", "text": "\"\\nobject \",\n # Split along method definitions\n \"\\ndef \",\n \"\\nval \",\n \"\\nvar \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\nmatch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.SWIFT:\n return [\n # Split along function definitions\n \"\\nfunc \",\n # Split along class definitions\n \"\\nclass \",\n \"\\nstruct \",\n \"\\nenum \",\n # Split along control flow statements\n \"\\nif \",\n \"\\nfor \",\n \"\\nwhile \",\n \"\\ndo \",\n \"\\nswitch \",\n \"\\ncase \",\n # Split by the normal type of lines\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.MARKDOWN:\n return [\n # First, try to split along Markdown headings (starting with level 2)\n \"\\n#{1,6} \",\n # Note the alternative syntax for headings (below) is not handled 
here\n # Heading level 2\n # ---------------\n # End of code block\n \"```\\n\",\n # Horizontal lines\n \"\\n\\*\\*\\*+\\n\",\n \"\\n---+\\n\",\n \"\\n___+\\n\",\n # Note that this splitter doesn't handle horizontal lines defined\n # by *three or more* of ***, ---, or ___, but this is not handled\n \"\\n\\n\",\n \"\\n\",\n \" \",\n \"\",\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-18", "text": "\"\\n\",\n \" \",\n \"\",\n ]\n elif language == Language.LATEX:\n return [\n # First, try to split along Latex sections\n \"\\n\\\\\\chapter{\",\n \"\\n\\\\\\section{\",\n \"\\n\\\\\\subsection{\",\n \"\\n\\\\\\subsubsection{\",\n # Now split by environments\n \"\\n\\\\\\begin{enumerate}\",\n \"\\n\\\\\\begin{itemize}\",\n \"\\n\\\\\\begin{description}\",\n \"\\n\\\\\\begin{list}\",\n \"\\n\\\\\\begin{quote}\",\n \"\\n\\\\\\begin{quotation}\",\n \"\\n\\\\\\begin{verse}\",\n \"\\n\\\\\\begin{verbatim}\",\n # Now split by math environments\n \"\\n\\\\\\begin{align}\",\n \"$$\",\n \"$\",\n # Now split by the normal type of lines\n \" \",\n \"\",\n ]\n elif language == Language.HTML:\n return [\n # First, try to split along HTML tags\n \" None:\n \"\"\"Initialize the NLTK splitter.\"\"\"\n super().__init__(**kwargs)\n try:\n from nltk.tokenize import sent_tokenize\n self._tokenizer = sent_tokenize\n except ImportError:\n raise ImportError(\n \"NLTK is not installed, please install it with `pip install nltk`.\"\n )\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n # First we naively split the large input into a bunch of smaller ones.\n splits = self._tokenizer(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-20", "text": "splits = self._tokenizer(text)\n return self._merge_splits(splits, self._separator)\n[docs]class 
SpacyTextSplitter(TextSplitter):\n \"\"\"Implementation of splitting text that looks at sentences using Spacy.\"\"\"\n def __init__(\n self, separator: str = \"\\n\\n\", pipeline: str = \"en_core_web_sm\", **kwargs: Any\n ) -> None:\n \"\"\"Initialize the spacy text splitter.\"\"\"\n super().__init__(**kwargs)\n try:\n import spacy\n except ImportError:\n raise ImportError(\n \"Spacy is not installed, please install it with `pip install spacy`.\"\n )\n self._tokenizer = spacy.load(pipeline)\n self._separator = separator\n[docs] def split_text(self, text: str) -> List[str]:\n \"\"\"Split incoming text and return chunks.\"\"\"\n splits = (str(s) for s in self._tokenizer(text).sents)\n return self._merge_splits(splits, self._separator)\n# For backwards compatibility\n[docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Python syntax.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a PythonCodeTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.PYTHON)\n super().__init__(separators=separators, **kwargs)\n[docs]class MarkdownTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Markdown-formatted headings.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a MarkdownTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.MARKDOWN)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f1cd9c5ba011-21", "text": "separators = self.get_separators_for_language(Language.MARKDOWN)\n super().__init__(separators=separators, **kwargs)\n[docs]class LatexTextSplitter(RecursiveCharacterTextSplitter):\n \"\"\"Attempts to split the text along Latex-formatted layout elements.\"\"\"\n def __init__(self, **kwargs: Any) -> None:\n \"\"\"Initialize a LatexTextSplitter.\"\"\"\n separators = self.get_separators_for_language(Language.LATEX)\n 
super().__init__(separators=separators, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html"} +{"id": "f89ccc5b6942-0", "text": "Source code for langchain.schema\n\"\"\"Common schema objects.\"\"\"\nfrom __future__ import annotations\nfrom abc import ABC, abstractmethod\nfrom dataclasses import dataclass\nfrom typing import (\n Any,\n Dict,\n Generic,\n List,\n NamedTuple,\n Optional,\n Sequence,\n TypeVar,\n Union,\n)\nfrom uuid import UUID\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.load.serializable import Serializable\nRUN_KEY = \"__run\"\n[docs]def get_buffer_string(\n messages: List[BaseMessage], human_prefix: str = \"Human\", ai_prefix: str = \"AI\"\n) -> str:\n \"\"\"Get buffer string of messages.\"\"\"\n string_messages = []\n for m in messages:\n if isinstance(m, HumanMessage):\n role = human_prefix\n elif isinstance(m, AIMessage):\n role = ai_prefix\n elif isinstance(m, SystemMessage):\n role = \"System\"\n elif isinstance(m, FunctionMessage):\n role = \"Function\"\n elif isinstance(m, ChatMessage):\n role = m.role\n else:\n raise ValueError(f\"Got unsupported message type: {m}\")\n message = f\"{role}: {m.content}\"\n if isinstance(m, AIMessage) and \"function_call\" in m.additional_kwargs:\n message += f\"{m.additional_kwargs['function_call']}\"\n string_messages.append(message)\n return \"\\n\".join(string_messages)\n[docs]@dataclass\nclass AgentAction:\n \"\"\"Agent's action to take.\"\"\"\n tool: str\n tool_input: Union[str, dict]\n log: str\n[docs]class AgentFinish(NamedTuple):\n \"\"\"Agent's return value.\"\"\"\n return_values: dict", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "f89ccc5b6942-1", "text": "\"\"\"Agent's return value.\"\"\"\n return_values: dict\n log: str\n[docs]class Generation(Serializable):\n \"\"\"Output of a single generation.\"\"\"\n text: str\n \"\"\"Generated text output.\"\"\"\n generation_info: 
Optional[Dict[str, Any]] = None\n \"\"\"Raw generation info response from the provider\"\"\"\n \"\"\"May include things like reason for finishing (e.g. in OpenAI)\"\"\"\n # TODO: add log probs\n @property\n def lc_serializable(self) -> bool:\n \"\"\"This class is LangChain serializable.\"\"\"\n return True\n[docs]class BaseMessage(Serializable):\n \"\"\"Message object.\"\"\"\n content: str\n additional_kwargs: dict = Field(default_factory=dict)\n @property\n @abstractmethod\n def type(self) -> str:\n \"\"\"Type of the message, used for serialization.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n \"\"\"This class is LangChain serializable.\"\"\"\n return True\n[docs]class HumanMessage(BaseMessage):\n \"\"\"Type of message that is spoken by the human.\"\"\"\n example: bool = False\n @property\n def type(self) -> str:\n \"\"\"Type of the message, used for serialization.\"\"\"\n return \"human\"\n[docs]class AIMessage(BaseMessage):\n \"\"\"Type of message that is spoken by the AI.\"\"\"\n example: bool = False\n @property\n def type(self) -> str:\n \"\"\"Type of the message, used for serialization.\"\"\"\n return \"ai\"\n[docs]class SystemMessage(BaseMessage):\n \"\"\"Type of message that is a system message.\"\"\"\n @property\n def type(self) -> str:\n \"\"\"Type of the message, used for serialization.\"\"\"\n return \"system\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "f89ccc5b6942-2", "text": "\"\"\"Type of the message, used for serialization.\"\"\"\n return \"system\"\n[docs]class FunctionMessage(BaseMessage):\n name: str\n @property\n def type(self) -> str:\n \"\"\"Type of the message, used for serialization.\"\"\"\n return \"function\"\n[docs]class ChatMessage(BaseMessage):\n \"\"\"Type of message with arbitrary speaker.\"\"\"\n role: str\n @property\n def type(self) -> str:\n \"\"\"Type of the message, used for serialization.\"\"\"\n return \"chat\"\ndef _message_to_dict(message: BaseMessage) -> 
dict:\n return {\"type\": message.type, \"data\": message.dict()}\n[docs]def messages_to_dict(messages: List[BaseMessage]) -> List[dict]:\n \"\"\"Convert messages to dict.\n Args:\n messages: List of messages to convert.\n Returns:\n List of dicts.\n \"\"\"\n return [_message_to_dict(m) for m in messages]\ndef _message_from_dict(message: dict) -> BaseMessage:\n _type = message[\"type\"]\n if _type == \"human\":\n return HumanMessage(**message[\"data\"])\n elif _type == \"ai\":\n return AIMessage(**message[\"data\"])\n elif _type == \"system\":\n return SystemMessage(**message[\"data\"])\n elif _type == \"chat\":\n return ChatMessage(**message[\"data\"])\n else:\n raise ValueError(f\"Got unexpected type: {_type}\")\n[docs]def messages_from_dict(messages: List[dict]) -> List[BaseMessage]:\n \"\"\"Convert messages from dict.\n Args:\n messages: List of messages (dicts) to convert.\n Returns:\n List of messages (BaseMessages).\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "f89ccc5b6942-3", "text": "Returns:\n List of messages (BaseMessages).\n \"\"\"\n return [_message_from_dict(m) for m in messages]\n[docs]class ChatGeneration(Generation):\n \"\"\"Output of a single generation.\"\"\"\n text = \"\"\n message: BaseMessage\n @root_validator\n def set_text(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n values[\"text\"] = values[\"message\"].content\n return values\n[docs]class RunInfo(BaseModel):\n \"\"\"Class that contains all relevant metadata for a Run.\"\"\"\n run_id: UUID\n[docs]class ChatResult(BaseModel):\n \"\"\"Class that contains all relevant information for a Chat Result.\"\"\"\n generations: List[ChatGeneration]\n \"\"\"List of the things generated.\"\"\"\n llm_output: Optional[dict] = None\n \"\"\"For arbitrary LLM provider specific output.\"\"\"\n[docs]class LLMResult(BaseModel):\n \"\"\"Class that contains all relevant information for an LLM Result.\"\"\"\n generations: List[List[Generation]]\n 
\"\"\"List of the things generated. This is List[List[]] because\n each input could have multiple generations.\"\"\"\n llm_output: Optional[dict] = None\n \"\"\"For arbitrary LLM provider specific output.\"\"\"\n run: Optional[List[RunInfo]] = None\n \"\"\"Run metadata.\"\"\"\n[docs] def flatten(self) -> List[LLMResult]:\n \"\"\"Flatten generations into a single list.\"\"\"\n llm_results = []\n for i, gen_list in enumerate(self.generations):\n # Avoid double counting tokens in OpenAICallback\n if i == 0:\n llm_results.append(\n LLMResult(\n generations=[gen_list],\n llm_output=self.llm_output,\n )\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "f89ccc5b6942-4", "text": "llm_output=self.llm_output,\n )\n )\n else:\n if self.llm_output is not None:\n llm_output = self.llm_output.copy()\n llm_output[\"token_usage\"] = dict()\n else:\n llm_output = None\n llm_results.append(\n LLMResult(\n generations=[gen_list],\n llm_output=llm_output,\n )\n )\n return llm_results\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, LLMResult):\n return NotImplemented\n return (\n self.generations == other.generations\n and self.llm_output == other.llm_output\n )\n[docs]class PromptValue(Serializable, ABC):\n[docs] @abstractmethod\n def to_string(self) -> str:\n \"\"\"Return prompt as string.\"\"\"\n[docs] @abstractmethod\n def to_messages(self) -> List[BaseMessage]:\n \"\"\"Return prompt as messages.\"\"\"\n[docs]class BaseMemory(Serializable, ABC):\n \"\"\"Base interface for memory in chains.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @property\n @abstractmethod\n def memory_variables(self) -> List[str]:\n \"\"\"Input keys this memory class will load dynamically.\"\"\"\n[docs] @abstractmethod\n def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return key-value pairs given the text input to the chain.\n If None, 
return all memories\n \"\"\"\n[docs] @abstractmethod\n def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "f89ccc5b6942-5", "text": "\"\"\"Save the context of this model run to memory.\"\"\"\n[docs] @abstractmethod\n def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n[docs]class BaseChatMessageHistory(ABC):\n \"\"\"Base interface for chat message history\n See `ChatMessageHistory` for default implementation.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n class FileChatMessageHistory(BaseChatMessageHistory):\n storage_path: str\n session_id: str\n @property\n def messages(self):\n with open(os.path.join(storage_path, session_id), 'r:utf-8') as f:\n messages = json.loads(f.read())\n return messages_from_dict(messages)\n def add_message(self, message: BaseMessage) -> None:\n messages = self.messages.append(_message_to_dict(message))\n with open(os.path.join(storage_path, session_id), 'w') as f:\n json.dump(f, messages)\n \n def clear(self):\n with open(os.path.join(storage_path, session_id), 'w') as f:\n f.write(\"[]\")\n \"\"\"\n messages: List[BaseMessage]\n[docs] def add_user_message(self, message: str) -> None:\n \"\"\"Add a user message to the store\"\"\"\n self.add_message(HumanMessage(content=message))\n[docs] def add_ai_message(self, message: str) -> None:\n \"\"\"Add an AI message to the store\"\"\"\n self.add_message(AIMessage(content=message))\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Add a self-created message to the store\"\"\"\n raise NotImplementedError\n[docs] @abstractmethod\n def clear(self) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "f89ccc5b6942-6", "text": "raise NotImplementedError\n[docs] @abstractmethod\n def clear(self) -> None:\n \"\"\"Remove all messages from the store\"\"\"\n[docs]class 
Document(Serializable):\n \"\"\"Interface for interacting with a document.\"\"\"\n page_content: str\n metadata: dict = Field(default_factory=dict)\n[docs]class BaseRetriever(ABC):\n \"\"\"Base interface for retrievers.\"\"\"\n[docs] @abstractmethod\n def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n[docs] @abstractmethod\n async def aget_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n# For backwards compatibility\nMemory = BaseMemory\nT = TypeVar(\"T\")\n[docs]class BaseLLMOutputParser(Serializable, ABC, Generic[T]):\n[docs] @abstractmethod\n def parse_result(self, result: List[Generation]) -> T:\n \"\"\"Parse LLM Result.\"\"\"\n[docs]class BaseOutputParser(BaseLLMOutputParser, ABC, Generic[T]):\n \"\"\"Class to parse the output of an LLM call.\n Output parsers help structure language model responses.\n \"\"\"\n[docs] def parse_result(self, result: List[Generation]) -> T:\n return self.parse(result[0].text)\n[docs] @abstractmethod\n def parse(self, text: str) -> T:\n \"\"\"Parse the output of an LLM call.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "f89ccc5b6942-7", "text": "\"\"\"Parse the output of an LLM call.\n A method which takes in a string (assumed output of a language model )\n and parses it into some structure.\n Args:\n text: output of language model\n Returns:\n structured output\n \"\"\"\n[docs] def parse_with_prompt(self, completion: str, prompt: PromptValue) -> Any:\n \"\"\"Optional method to parse the output of an LLM call with a prompt.\n The prompt is largely provided in the event the OutputParser wants\n to retry or fix the output in some way, and needs information from\n the 
prompt to do so.\n Args:\n completion: output of language model\n prompt: prompt value\n Returns:\n structured output\n \"\"\"\n return self.parse(completion)\n[docs] def get_format_instructions(self) -> str:\n \"\"\"Instructions on how the LLM output should be formatted.\"\"\"\n raise NotImplementedError\n @property\n def _type(self) -> str:\n \"\"\"Return the type key.\"\"\"\n raise NotImplementedError(\n f\"_type property is not implemented in class {self.__class__.__name__}.\"\n \" This is required for serialization.\"\n )\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of output parser.\"\"\"\n output_parser_dict = super().dict()\n output_parser_dict[\"_type\"] = self._type\n return output_parser_dict\n[docs]class NoOpOutputParser(BaseOutputParser[str]):\n \"\"\"Output parser that just returns the text as is.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n return True\n @property\n def _type(self) -> str:\n return \"default\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "f89ccc5b6942-8", "text": "@property\n def _type(self) -> str:\n return \"default\"\n[docs] def parse(self, text: str) -> str:\n return text\n[docs]class OutputParserException(ValueError):\n \"\"\"Exception that output parsers should raise to signify a parsing error.\n This exists to differentiate parsing errors from other code or execution errors\n that also may arise inside the output parser. 
OutputParserExceptions will be\n available to catch and handle in ways to fix the parsing error, while other\n errors will be raised.\n \"\"\"\n def __init__(\n self,\n error: Any,\n observation: str | None = None,\n llm_output: str | None = None,\n send_to_llm: bool = False,\n ):\n super(OutputParserException, self).__init__(error)\n if send_to_llm:\n if observation is None or llm_output is None:\n raise ValueError(\n \"Arguments 'observation' & 'llm_output'\"\n \" are required if 'send_to_llm' is True\"\n )\n self.observation = observation\n self.llm_output = llm_output\n self.send_to_llm = send_to_llm\n[docs]class BaseDocumentTransformer(ABC):\n \"\"\"Base interface for transforming documents.\"\"\"\n[docs] @abstractmethod\n def transform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Transform a list of documents.\"\"\"\n[docs] @abstractmethod\n async def atransform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Asynchronously transform a list of documents.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/schema.html"} +{"id": "649767acfe47-0", "text": "Source code for langchain.document_transformers\n\"\"\"Transform documents\"\"\"\nfrom typing import Any, Callable, List, Sequence\nimport numpy as np\nfrom pydantic import BaseModel, Field\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.math_utils import cosine_similarity\nfrom langchain.schema import BaseDocumentTransformer, Document\nclass _DocumentWithState(Document):\n \"\"\"Wrapper for a document that includes arbitrary state.\"\"\"\n state: dict = Field(default_factory=dict)\n \"\"\"State associated with the document.\"\"\"\n def to_document(self) -> Document:\n \"\"\"Convert the DocumentWithState to a Document.\"\"\"\n return Document(page_content=self.page_content, metadata=self.metadata)\n @classmethod\n def from_document(cls, doc: Document) -> 
\"_DocumentWithState\":\n \"\"\"Create a DocumentWithState from a Document.\"\"\"\n if isinstance(doc, cls):\n return doc\n return cls(page_content=doc.page_content, metadata=doc.metadata)\n[docs]def get_stateful_documents(\n documents: Sequence[Document],\n) -> Sequence[_DocumentWithState]:\n \"\"\"Convert a list of documents to a list of documents with state.\n Args:\n documents: The documents to convert.\n Returns:\n A list of documents with state.\n \"\"\"\n return [_DocumentWithState.from_document(doc) for doc in documents]\ndef _filter_similar_embeddings(\n embedded_documents: List[List[float]], similarity_fn: Callable, threshold: float\n) -> List[int]:\n \"\"\"Filter redundant documents based on the similarity of their embeddings.\"\"\"\n similarity = np.tril(similarity_fn(embedded_documents, embedded_documents), k=-1)\n redundant = np.where(similarity > threshold)\n redundant_stacked = np.column_stack(redundant)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html"} +{"id": "649767acfe47-1", "text": "redundant_stacked = np.column_stack(redundant)\n redundant_sorted = np.argsort(similarity[redundant])[::-1]\n included_idxs = set(range(len(embedded_documents)))\n for first_idx, second_idx in redundant_stacked[redundant_sorted]:\n if first_idx in included_idxs and second_idx in included_idxs:\n # Default to dropping the second document of any highly similar pair.\n included_idxs.remove(second_idx)\n return list(sorted(included_idxs))\ndef _get_embeddings_from_stateful_docs(\n embeddings: Embeddings, documents: Sequence[_DocumentWithState]\n) -> List[List[float]]:\n if len(documents) and \"embedded_doc\" in documents[0].state:\n embedded_documents = [doc.state[\"embedded_doc\"] for doc in documents]\n else:\n embedded_documents = embeddings.embed_documents(\n [d.page_content for d in documents]\n )\n for doc, embedding in zip(documents, embedded_documents):\n doc.state[\"embedded_doc\"] = embedding\n return 
embedded_documents\n[docs]class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):\n \"\"\"Filter that drops redundant documents by comparing their embeddings.\"\"\"\n embeddings: Embeddings\n \"\"\"Embeddings to use for embedding document contents.\"\"\"\n similarity_fn: Callable = cosine_similarity\n \"\"\"Similarity function for comparing documents. Function expected to take as input\n two matrices (List[List[float]]) and return a matrix of scores where higher values\n indicate greater similarity.\"\"\"\n similarity_threshold: float = 0.95\n \"\"\"Threshold for determining when two documents are similar enough\n to be considered redundant.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def transform_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html"} +{"id": "649767acfe47-2", "text": "arbitrary_types_allowed = True\n[docs] def transform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n stateful_documents = get_stateful_documents(documents)\n embedded_documents = _get_embeddings_from_stateful_docs(\n self.embeddings, stateful_documents\n )\n included_idxs = _filter_similar_embeddings(\n embedded_documents, self.similarity_fn, self.similarity_threshold\n )\n return [stateful_documents[i] for i in sorted(included_idxs)]\n[docs] async def atransform_documents(\n self, documents: Sequence[Document], **kwargs: Any\n ) -> Sequence[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html"} +{"id": "af5ade709a82-0", "text": "Source code for langchain.vectorstores.clickhouse\n\"\"\"Wrapper around open source ClickHouse VectorSearch capability.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nfrom hashlib import sha1\nfrom threading import 
Thread\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple, Union\nfrom pydantic import BaseSettings\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\ndef has_mul_sub_str(s: str, *args: Any) -> bool:\n \"\"\"\n Check if a string contains multiple substrings.\n Args:\n s: string to check.\n *args: substrings to check.\n Returns:\n True if all substrings are in the string, False otherwise.\n \"\"\"\n for a in args:\n if a not in s:\n return False\n return True\n[docs]class ClickhouseSettings(BaseSettings):\n \"\"\"ClickHouse Client Configuration\n Attribute:\n clickhouse_host (str) : The URL to connect to the ClickHouse backend.\n Defaults to 'localhost'.\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8123.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (list): index build parameter.\n index_query_params(dict): index query parameters.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
{"id": "af5ade709a82-1", "text": "Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\n 'dot'). Defaults to 'angular'.\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be the same size as the number of columns. For example:\n .. 
code-block:: python\n {\n 'id': 'text_id',\n 'uuid': 'global_unique_id',\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n Defaults to identity map.\n \"\"\"\n host: str = \"localhost\"\n port: int = 8123\n username: Optional[str] = None\n password: Optional[str] = None\n index_type: str = \"annoy\"\n # Annoy supports L2Distance and cosineDistance.\n index_param: Optional[Union[List, Dict]] = [\"'L2Distance'\", 100]\n index_query_params: Dict[str, str] = {}\n column_map: Dict[str, str] = {\n \"id\": \"id\",\n \"uuid\": \"uuid\",\n \"document\": \"document\",\n \"embedding\": \"embedding\",\n \"metadata\": \"metadata\",\n }\n database: str = \"default\"\n table: str = \"langchain\"\n metric: str = \"angular\"\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)\n class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
{"id": "af5ade709a82-2", "text": "return getattr(self, item)\n class Config:\n env_file = \".env\"\n env_prefix = \"clickhouse_\"\n env_file_encoding = \"utf-8\"\n[docs]class Clickhouse(VectorStore):\n \"\"\"Wrapper around ClickHouse vector database\n You need a `clickhouse-connect` python package, and a valid account\n to connect to ClickHouse.\n ClickHouse can not only search with simple vector indexes,\n it also supports complex queries with multiple conditions,\n constraints and even sub-queries.\n For more information, please visit\n [ClickHouse official site](https://clickhouse.com/clickhouse)\n \"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n config: Optional[ClickhouseSettings] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"ClickHouse Wrapper to LangChain\n embedding_function (Embeddings):\n config (ClickHouseSettings): Configuration to ClickHouse Client\n Other keyword arguments will pass into\n [clickhouse-connect](https://docs.clickhouse.com/)\n \"\"\"\n try:\n from clickhouse_connect import 
get_client\n except ImportError:\n raise ValueError(\n \"Could not import clickhouse connect python package. \"\n \"Please install it with `pip install clickhouse-connect`.\"\n )\n try:\n from tqdm import tqdm\n self.pgbar = tqdm\n except ImportError:\n # Just in case if tqdm is not installed\n self.pgbar = lambda x, **kwargs: x\n super().__init__()\n if config is not None:\n self.config = config\n else:\n self.config = ClickhouseSettings()\n assert self.config\n assert self.config.host and self.config.port\n assert (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} +{"id": "af5ade709a82-3", "text": "assert self.config\n assert self.config.host and self.config.port\n assert (\n self.config.column_map\n and self.config.database\n and self.config.table\n and self.config.metric\n )\n for k in [\"id\", \"embedding\", \"document\", \"metadata\", \"uuid\"]:\n assert k in self.config.column_map\n assert self.config.metric in [\n \"angular\",\n \"euclidean\",\n \"manhattan\",\n \"hamming\",\n \"dot\",\n ]\n # initialize the schema\n dim = len(embedding.embed_query(\"test\"))\n index_params = (\n (\n \",\".join([f\"'{k}={v}'\" for k, v in self.config.index_param.items()])\n if self.config.index_param\n else \"\"\n )\n if isinstance(self.config.index_param, Dict)\n else \",\".join([str(p) for p in self.config.index_param])\n if isinstance(self.config.index_param, List)\n else self.config.index_param\n )\n self.schema = f\"\"\"\\\nCREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(\n {self.config.column_map['id']} Nullable(String),\n {self.config.column_map['document']} Nullable(String),\n {self.config.column_map['embedding']} Array(Float32),\n {self.config.column_map['metadata']} JSON,\n {self.config.column_map['uuid']} UUID DEFAULT generateUUIDv4(),\n CONSTRAINT cons_vec_len CHECK length({self.config.column_map['embedding']}) = {dim},\n INDEX vec_idx {self.config.column_map['embedding']} TYPE 
\\\n{self.config.index_type}({index_params}) GRANULARITY 1000\n) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192\\\n\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
{"id": "af5ade709a82-4", "text": "\"\"\"\n self.dim = dim\n self.BS = \"\\\\\"\n self.must_escape = (\"\\\\\", \"'\")\n self.embedding_function = embedding\n self.dist_order = \"ASC\" # Only supports cosineDistance and L2Distance\n # Create a connection to clickhouse\n self.client = get_client(\n host=self.config.host,\n port=self.config.port,\n username=self.config.username,\n password=self.config.password,\n **kwargs,\n )\n # Enable JSON type\n self.client.command(\"SET allow_experimental_object_type=1\")\n # Enable Annoy index\n self.client.command(\"SET allow_experimental_annoy_index=1\")\n self.client.command(self.schema)\n[docs] def escape_str(self, value: str) -> str:\n return \"\".join(f\"{self.BS}{c}\" if c in self.must_escape else c for c in value)\n def _build_insert_sql(self, transac: Iterable, column_names: Iterable[str]) -> str:\n ks = \",\".join(column_names)\n _data = []\n for n in transac:\n n = \",\".join([f\"'{self.escape_str(str(_n))}'\" for _n in n])\n _data.append(f\"({n})\")\n i_str = f\"\"\"\n INSERT INTO TABLE \n {self.config.database}.{self.config.table}({ks})\n VALUES\n {','.join(_data)}\n \"\"\"\n return i_str\n def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:\n _insert_query = self._build_insert_sql(transac, column_names)\n self.client.command(_insert_query)\n[docs] def add_texts(\n self,\n texts: Iterable[str],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
{"id": "af5ade709a82-5", "text": "[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n batch_size: int = 32,\n ids: Optional[Iterable[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Insert more texts 
through the embeddings and add to the VectorStore.\n Args:\n texts: Iterable of strings to add to the VectorStore.\n ids: Optional list of ids to associate with the texts.\n batch_size: Batch size of insertion\n metadata: Optional column data to be inserted\n Returns:\n List of ids from adding the texts into the VectorStore.\n \"\"\"\n # Embed and create the documents\n ids = ids or [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n colmap_ = self.config.column_map\n transac = []\n column_names = {\n colmap_[\"id\"]: ids,\n colmap_[\"document\"]: texts,\n colmap_[\"embedding\"]: self.embedding_function.embed_documents(list(texts)),\n }\n metadatas = metadatas or [{} for _ in texts]\n column_names[colmap_[\"metadata\"]] = map(json.dumps, metadatas)\n assert len(set(colmap_) - set(column_names)) >= 0\n keys, values = zip(*column_names.items())\n try:\n t = None\n for v in self.pgbar(\n zip(*values), desc=\"Inserting data...\", total=len(metadatas)\n ):\n assert (\n len(v[keys.index(self.config.column_map[\"embedding\"])]) == self.dim\n )\n transac.append(v)\n if len(transac) == batch_size:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} +{"id": "af5ade709a82-6", "text": "transac.append(v)\n if len(transac) == batch_size:\n if t:\n t.join()\n t = Thread(target=self._insert, args=[transac, keys])\n t.start()\n transac = []\n if len(transac) > 0:\n if t:\n t.join()\n self._insert(transac, keys)\n return [i for i in ids]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n config: Optional[ClickhouseSettings] = None,\n text_ids: Optional[Iterable[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> Clickhouse:\n \"\"\"Create ClickHouse wrapper with existing texts\n Args:\n embedding_function 
(Embeddings): Function to extract text embedding\n texts (Iterable[str]): List or tuple of strings to be added\n config (ClickHouseSettings, Optional): ClickHouse configuration\n text_ids (Optional[Iterable], optional): IDs for the texts.\n Defaults to None.\n batch_size (int, optional): Batch size when transmitting data to ClickHouse.\n Defaults to 32.\n metadatas (List[dict], optional): metadata for the texts. Defaults to None.\n Other keyword arguments will pass into\n [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
{"id": "af5ade709a82-7", "text": "Returns:\n ClickHouse Index\n \"\"\"\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)\n return ctx\n def __repr__(self) -> str:\n \"\"\"Text representation for ClickHouse Vector Store, prints backends, username\n and schemas. 
Easy to use with `str(ClickHouse())`\n Returns:\n repr: string to show connection info and data schema\n \"\"\"\n _repr = f\"\\033[92m\\033[1m{self.config.database}.{self.config.table} @ \"\n _repr += f\"{self.config.host}:{self.config.port}\\033[0m\\n\\n\"\n _repr += f\"\\033[1musername: {self.config.username}\\033[0m\\n\\nTable Schema:\\n\"\n _repr += \"-\" * 51 + \"\\n\"\n for r in self.client.query(\n f\"DESC {self.config.database}.{self.config.table}\"\n ).named_results():\n _repr += (\n f\"|\\033[94m{r['name']:24s}\\033[0m|\\033[96m{r['type']:24s}\\033[0m|\\n\"\n )\n _repr += \"-\" * 51 + \"\\n\"\n return _repr\n def _build_query_sql(\n self, q_emb: List[float], topk: int, where_str: Optional[str] = None\n ) -> str:\n q_emb_str = \",\".join(map(str, q_emb))\n if where_str:\n where_str = f\"PREWHERE {where_str}\"\n else:\n where_str = \"\"\n settings_strs = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"} +{"id": "af5ade709a82-8", "text": "else:\n where_str = \"\"\n settings_strs = []\n if self.config.index_query_params:\n for k in self.config.index_query_params:\n settings_strs.append(f\"SETTING {k}={self.config.index_query_params[k]}\")\n q_str = f\"\"\"\n SELECT {self.config.column_map['document']}, \n {self.config.column_map['metadata']}, dist\n FROM {self.config.database}.{self.config.table}\n {where_str}\n ORDER BY L2Distance({self.config.column_map['embedding']}, [{q_emb_str}]) \n AS dist {self.dist_order}\n LIMIT {topk} {' '.join(settings_strs)}\n \"\"\"\n return q_str\n[docs] def similarity_search(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Perform a similarity search with ClickHouse\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. 
Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end users fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n return self.similarity_search_by_vector(\n self.embedding_function.embed_query(query), k, where_str, **kwargs\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
{"id": "af5ade709a82-9", "text": "self,\n embedding: List[float],\n k: int = 4,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search with ClickHouse by vectors\n Args:\n embedding (List[float]): query embedding vector\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end users fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. 
The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n q_str = self._build_query_sql(embedding, k, where_str)\n try:\n return [\n Document(\n page_content=r[self.config.column_map[\"document\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with ClickHouse\n Args:\n query (str): query string", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
{"id": "af5ade709a82-10", "text": "Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end users fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. 
The default name for it is `metadata`.\n Returns:\n List[Tuple[Document, float]]: List of (Document, relevance score) pairs\n \"\"\"\n q_str = self._build_query_sql(\n self.embedding_function.embed_query(query), k, where_str\n )\n try:\n return [\n (\n Document(\n page_content=r[self.config.column_map[\"document\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n ),\n r[\"dist\"],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def drop(self) -> None:\n \"\"\"\n Helper function: Drop data\n \"\"\"\n self.client.command(\n f\"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}\"\n )\n @property\n def metadata_column(self) -> str:\n return self.config.column_map[\"metadata\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clickhouse.html"}
{"id": "0b748921a2ca-0", "text": "Source code for langchain.vectorstores.alibabacloud_opensearch\nimport json\nimport logging\nimport numbers\nfrom hashlib import sha1\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\n[docs]class AlibabaCloudOpenSearchSettings:\n \"\"\"Opensearch Client Configuration\n Attribute:\n endpoint (str) : The endpoint of the opensearch instance. You can find it\n in the console of Alibaba Cloud OpenSearch.\n instance_id (str) : The ID of the opensearch instance. You can find\n it in the console of Alibaba Cloud OpenSearch.\n datasource_name (str): The name of the data source specified when creating it.\n username (str) : The username specified when purchasing the instance.\n password (str) : The password specified when purchasing the instance.\n embedding_index_name (str) : The name of the vector attribute specified\n when configuring the instance attributes.\n 
field_name_mapping (Dict) : Using field name mapping between opensearch\n vector store and opensearch instance configuration table field names:\n {\n 'id': 'The id field name map of index document.',\n 'document': 'The text field name map of index document.',\n 'embedding': 'In the embedding field of the opensearch instance,\n the values must be in float16 multivalue type and separated by commas.',\n 'metadata_field_x': 'Metadata field mapping includes the mapped\n field name and operator in the mapping value, separated by a comma\n between the mapped field name and the operator.',\n }\n \"\"\"\n endpoint: str\n instance_id: str\n username: str\n password: str", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} +{"id": "0b748921a2ca-1", "text": "instance_id: str\n username: str\n password: str\n datasource_name: str\n embedding_index_name: str\n field_name_mapping: Dict[str, str] = {\n \"id\": \"id\",\n \"document\": \"document\",\n \"embedding\": \"embedding\",\n \"metadata_field_x\": \"metadata_field_x,operator\",\n }\n def __init__(\n self,\n endpoint: str,\n instance_id: str,\n username: str,\n password: str,\n datasource_name: str,\n embedding_index_name: str,\n field_name_mapping: Dict[str, str],\n ) -> None:\n self.endpoint = endpoint\n self.instance_id = instance_id\n self.username = username\n self.password = password\n self.datasource_name = datasource_name\n self.embedding_index_name = embedding_index_name\n self.field_name_mapping = field_name_mapping\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)\ndef create_metadata(fields: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Create metadata from fields.\n Args:\n fields: The fields of the document. The fields must be a dict.\n Returns:\n metadata: The metadata of the document. 
The metadata must be a dict.\n \"\"\"\n metadata: Dict[str, Any] = {}\n for key, value in fields.items():\n if key == \"id\" or key == \"document\" or key == \"embedding\":\n continue\n metadata[key] = value\n return metadata\n[docs]class AlibabaCloudOpenSearch(VectorStore):\n \"\"\"Alibaba Cloud OpenSearch Vector Store\"\"\"\n def __init__(\n self,\n embedding: Embeddings,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} +{"id": "0b748921a2ca-2", "text": "def __init__(\n self,\n embedding: Embeddings,\n config: AlibabaCloudOpenSearchSettings,\n **kwargs: Any,\n ) -> None:\n try:\n from alibabacloud_ha3engine import client, models\n from alibabacloud_tea_util import models as util_models\n except ImportError:\n raise ValueError(\n \"Could not import alibaba cloud opensearch python package. \"\n \"Please install it with `pip install alibabacloud-ha3engine`.\"\n )\n self.config = config\n self.embedding = embedding\n self.runtime = util_models.RuntimeOptions(\n connect_timeout=5000,\n read_timeout=10000,\n autoretry=False,\n ignore_ssl=False,\n max_idle_conns=50,\n )\n self.ha3EngineClient = client.Client(\n models.Config(\n endpoint=config.endpoint,\n instance_id=config.instance_id,\n protocol=\"http\",\n access_user_name=config.username,\n access_pass_word=config.password,\n )\n )\n self.options_headers: Dict[str, str] = {}\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n def _upsert(push_doc_list: List[Dict]) -> List[str]:\n if push_doc_list is None or len(push_doc_list) == 0:\n return []\n try:\n push_request = models.PushDocumentsRequestModel(\n self.options_headers, push_doc_list\n )\n push_response = self.ha3EngineClient.push_documents(\n self.config.datasource_name, field_name_map[\"id\"], push_request", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} +{"id": "0b748921a2ca-3", "text": "self.config.datasource_name, field_name_map[\"id\"], push_request\n )\n json_response = json.loads(push_response.body)\n if json_response[\"status\"] == \"OK\":\n return [\n push_doc[\"fields\"][field_name_map[\"id\"]]\n for push_doc in push_doc_list\n ]\n return []\n except Exception as e:\n logger.error(\n f\"add doc to endpoint:{self.config.endpoint} \"\n f\"instance_id:{self.config.instance_id} failed.\",\n e,\n )\n raise e\n from alibabacloud_ha3engine import models\n ids = [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n embeddings = self.embedding.embed_documents(list(texts))\n metadatas = metadatas or [{} for _ in texts]\n field_name_map = self.config.field_name_mapping\n add_doc_list = []\n text_list = list(texts)\n for idx, doc_id in enumerate(ids):\n embedding = embeddings[idx] if idx < len(embeddings) else None\n metadata = metadatas[idx] if idx < len(metadatas) else None\n text = text_list[idx] if idx < len(text_list) else None\n add_doc: Dict[str, Any] = dict()\n add_doc_fields: Dict[str, Any] = dict()\n add_doc_fields.__setitem__(field_name_map[\"id\"], doc_id)\n add_doc_fields.__setitem__(field_name_map[\"document\"], text)\n if embedding is not None:\n add_doc_fields.__setitem__(\n field_name_map[\"embedding\"],\n \",\".join(str(unit) for unit in embedding),\n )\n if metadata is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} +{"id": "0b748921a2ca-4", "text": ")\n if metadata is not None:\n for md_key, md_value in metadata.items():\n add_doc_fields.__setitem__(\n field_name_map[md_key].split(\",\")[0], md_value\n )\n add_doc.__setitem__(\"fields\", add_doc_fields)\n add_doc.__setitem__(\"cmd\", \"add\")\n add_doc_list.append(add_doc)\n return _upsert(add_doc_list)\n[docs] def similarity_search(\n self,\n query: str,\n k: 
int = 4,\n search_filter: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n embedding = self.embedding.embed_query(query)\n return self.create_results(\n self.inner_embedding_query(\n embedding=embedding, search_filter=search_filter, k=k\n )\n )\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n search_filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n embedding: List[float] = self.embedding.embed_query(query)\n return self.create_results_with_score(\n self.inner_embedding_query(\n embedding=embedding, search_filter=search_filter, k=k\n )\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n search_filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n return self.create_results(\n self.inner_embedding_query(\n embedding=embedding, search_filter=search_filter, k=k\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} +{"id": "0b748921a2ca-5", "text": "embedding=embedding, search_filter=search_filter, k=k\n )\n )\n[docs] def inner_embedding_query(\n self,\n embedding: List[float],\n search_filter: Optional[Dict[str, Any]] = None,\n k: int = 4,\n ) -> Dict[str, Any]:\n def generate_embedding_query() -> str:\n tmp_search_config_str = (\n f\"config=start:0,hit:{k},format:json&&cluster=general&&kvpairs=\"\n f\"first_formula:proxima_score({self.config.embedding_index_name})&&sort=+RANK\"\n )\n tmp_query_str = (\n f\"&&query={self.config.embedding_index_name}:\"\n + \"'\"\n + \",\".join(str(x) for x in embedding)\n + \"'\"\n )\n if search_filter is not None:\n filter_clause = \"&&filter=\" + \" AND \".join(\n [\n create_filter(md_key, md_value)\n for md_key, md_value in search_filter.items()\n ]\n )\n tmp_query_str += filter_clause\n return tmp_search_config_str + tmp_query_str\n def create_filter(md_key: str, md_value: Any) -> str:\n 
md_filter_expr = self.config.field_name_mapping[md_key]\n if md_filter_expr is None:\n return \"\"\n expr = md_filter_expr.split(\",\")\n if len(expr) != 2:\n logger.error(\n f\"filter {md_filter_expr} expression is not correct, \"\n f\"must contain mapping field and operator.\"\n )\n return \"\"\n md_filter_key = expr[0].strip()\n md_filter_operator = expr[1].strip()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"}
{"id": "0b748921a2ca-6", "text": "md_filter_operator = expr[1].strip()\n if isinstance(md_value, numbers.Number):\n return f\"{md_filter_key} {md_filter_operator} {md_value}\"\n return f'{md_filter_key}{md_filter_operator}\"{md_value}\"'\n def search_data(single_query_str: str) -> Dict[str, Any]:\n search_query = models.SearchQuery(query=single_query_str)\n search_request = models.SearchRequestModel(\n self.options_headers, search_query\n )\n return json.loads(self.ha3EngineClient.search(search_request).body)\n from alibabacloud_ha3engine import models\n try:\n query_str = generate_embedding_query()\n json_response = search_data(query_str)\n if len(json_response[\"errors\"]) != 0:\n logger.error(\n f\"query {self.config.endpoint} {self.config.instance_id} \"\n f\"errors:{json_response['errors']} failed.\"\n )\n else:\n return json_response\n except Exception as e:\n logger.error(\n f\"query instance endpoint:{self.config.endpoint} \"\n f\"instance_id:{self.config.instance_id} failed.\",\n e,\n )\n return {}\n[docs] def create_results(self, json_result: Dict[str, Any]) -> List[Document]:\n items = json_result[\"result\"][\"items\"]\n query_result_list: List[Document] = []\n for item in items:\n fields = item[\"fields\"]\n query_result_list.append(\n Document(\n page_content=fields[self.config.field_name_mapping[\"document\"]],\n metadata=create_metadata(fields),\n )\n )\n return query_result_list\n[docs] def create_results_with_score(\n self, json_result: Dict[str, Any]", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} +{"id": "0b748921a2ca-7", "text": "self, json_result: Dict[str, Any]\n ) -> List[Tuple[Document, float]]:\n items = json_result[\"result\"][\"items\"]\n query_result_list: List[Tuple[Document, float]] = []\n for item in items:\n fields = item[\"fields\"]\n query_result_list.append(\n (\n Document(\n page_content=fields[self.config.field_name_mapping[\"document\"]],\n metadata=create_metadata(fields),\n ),\n float(item[\"sortExprValues\"][0]),\n )\n )\n return query_result_list\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n config: Optional[AlibabaCloudOpenSearchSettings] = None,\n **kwargs: Any,\n ) -> \"AlibabaCloudOpenSearch\":\n if config is None:\n raise Exception(\"config can't be none\")\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts=texts, metadatas=metadatas)\n return ctx\n[docs] @classmethod\n def from_documents(\n cls,\n documents: List[Document],\n embedding: Embeddings,\n ids: Optional[List[str]] = None,\n config: Optional[AlibabaCloudOpenSearchSettings] = None,\n **kwargs: Any,\n ) -> \"AlibabaCloudOpenSearch\":\n if config is None:\n raise Exception(\"config can't be none\")\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return cls.from_texts(\n texts=texts,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} +{"id": "0b748921a2ca-8", "text": "return cls.from_texts(\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n config=config,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/alibabacloud_opensearch.html"} +{"id": "026424a18e61-0", "text": "Source code for langchain.vectorstores.rocksetdb\n\"\"\"Wrapper around Rockset vector database.\"\"\"\nfrom __future__ import 
annotations\nimport logging\nfrom enum import Enum\nfrom typing import Any, Iterable, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\n[docs]class Rockset(VectorStore):\n \"\"\"Wrapper around Rockset vector database.\n To use, you should have the `rockset` python package installed. Note that to use\n this, the collection being used must already exist in your Rockset instance.\n You must also ensure you use a Rockset ingest transformation to apply\n `VECTOR_ENFORCE` on the column being used to store `embedding_key` in the\n collection.\n See: https://rockset.com/blog/introducing-vector-search-on-rockset/ for more details\n Everything below assumes `commons` Rockset workspace.\n TODO: Add support for workspace args.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Rockset\n from langchain.embeddings.openai import OpenAIEmbeddings\n import rockset\n # Make sure you use the right host (region) for your Rockset instance\n # and APIKEY has both read-write access to your collection.\n rs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key=\"***\")\n collection_name = \"langchain_demo\"\n embeddings = OpenAIEmbeddings()\n vectorstore = Rockset(rs, collection_name, embeddings,\n \"description\", \"description_embedding\")\n \"\"\"\n def __init__(\n self,\n client: Any,\n embeddings: Embeddings,\n collection_name: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"}
{"id": "026424a18e61-1", "text": "client: Any,\n embeddings: Embeddings,\n collection_name: str,\n text_key: str,\n embedding_key: str,\n ):\n \"\"\"Initialize with Rockset client.\n Args:\n client: Rockset client object\n collection: Rockset collection to insert docs / query\n embeddings: Langchain Embeddings object to use to generate\n embedding for given 
text.\n text_key: column in Rockset collection to use to store the text\n embedding_key: column in Rockset collection to use to store the embedding.\n Note: We must apply `VECTOR_ENFORCE()` on this column via\n Rockset ingest transformation.\n \"\"\"\n try:\n from rockset import RocksetClient\n except ImportError:\n raise ImportError(\n \"Could not import rockset client python package. \"\n \"Please install it with `pip install rockset`.\"\n )\n if not isinstance(client, RocksetClient):\n raise ValueError(\n f\"client should be an instance of rockset.RocksetClient, \"\n f\"got {type(client)}\"\n )\n # TODO: check that `collection_name` exists in rockset. Create if not.\n self._client = client\n self._collection_name = collection_name\n self._embeddings = embeddings\n self._text_key = text_key\n self._embedding_key = embedding_key\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} +{"id": "026424a18e61-2", "text": "\"\"\"Run more texts through the embeddings and add to the vectorstore\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n batch_size: Send documents in batches to rockset.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n batch: list[dict] = []\n stored_ids = []\n for i, text in enumerate(texts):\n if len(batch) == batch_size:\n stored_ids += self._write_documents_to_rockset(batch)\n batch = []\n doc = {}\n if metadatas and len(metadatas) > i:\n doc = metadatas[i]\n if ids and len(ids) > i:\n doc[\"_id\"] = ids[i]\n doc[self._text_key] = text\n doc[self._embedding_key] 
= self._embeddings.embed_query(text)\n batch.append(doc)\n if len(batch) > 0:\n stored_ids += self._write_documents_to_rockset(batch)\n batch = []\n return stored_ids\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n client: Any = None,\n collection_name: str = \"\",\n text_key: str = \"\",\n embedding_key: str = \"\",\n ids: Optional[List[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> Rockset:\n \"\"\"Create Rockset wrapper with existing texts.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"}
+{"id": "026424a18e61-3", "text": ") -> Rockset:\n \"\"\"Create Rockset wrapper with existing texts.\n This is intended as a quicker way to get started.\n \"\"\"\n # Sanitize inputs\n assert client is not None, \"Rockset Client cannot be None\"\n assert collection_name, \"Collection name cannot be empty\"\n assert text_key, \"Text key name cannot be empty\"\n assert embedding_key, \"Embedding key cannot be empty\"\n rockset = cls(client, embedding, collection_name, text_key, embedding_key)\n rockset.add_texts(texts, metadatas, ids, batch_size)\n return rockset\n # Rockset supports these vector distance functions.\n[docs] class DistanceFunction(Enum):\n COSINE_SIM = \"COSINE_SIM\"\n EUCLIDEAN_DIST = \"EUCLIDEAN_DIST\"\n DOT_PRODUCT = \"DOT_PRODUCT\"\n # how to sort results for \"similarity\"\n[docs] def order_by(self) -> str:\n if self.value == \"EUCLIDEAN_DIST\":\n return \"ASC\"\n return \"DESC\"\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with Rockset\n Args:\n query (str): Text to look up documents similar to.\n distance_func (DistanceFunction): how to compute distance between two\n 
vectors in Rockset.\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"}
+{"id": "026424a18e61-4", "text": "k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): Metadata filters supplied as a\n SQL `where` condition string. Defaults to None.\n e.g. \"price<=70.0 AND brand='Nintendo'\"\n NOTE: Please do not let end users fill this in, and always be aware\n of SQL injection.\n Returns:\n List[Tuple[Document, float]]: List of documents with their relevance score\n \"\"\"\n return self.similarity_search_by_vector_with_relevance_scores(\n self._embeddings.embed_query(query),\n k,\n distance_func,\n where_str,\n **kwargs,\n )\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Same as `similarity_search_with_relevance_scores` but\n doesn't return the scores.\n \"\"\"\n return self.similarity_search_by_vector(\n self._embeddings.embed_query(query),\n k,\n distance_func,\n where_str,\n **kwargs,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Accepts a query_embedding (vector), and returns documents with\n similar embeddings.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"}
+{"id": "026424a18e61-5", "text": "\"\"\"Accepts a query_embedding (vector), and returns documents with\n similar embeddings.\"\"\"\n docs_and_scores = self.similarity_search_by_vector_with_relevance_scores(\n embedding, k, distance_func, where_str, **kwargs\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def 
similarity_search_by_vector_with_relevance_scores(\n self,\n embedding: List[float],\n k: int = 4,\n distance_func: DistanceFunction = DistanceFunction.COSINE_SIM,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Accepts a query_embedding (vector), and returns documents with\n similar embeddings along with their relevance scores.\"\"\"\n q_str = self._build_query_sql(embedding, distance_func, k, where_str)\n try:\n query_response = self._client.Queries.query(sql={\"query\": q_str})\n except Exception as e:\n logger.error(\"Exception when querying Rockset: %s\\n\", e)\n return []\n finalResult: list[Tuple[Document, float]] = []\n for document in query_response.results:\n metadata = {}\n assert isinstance(\n document, dict\n ), \"document should be of type `dict[str,Any]`. But found: `{}`\".format(\n type(document)\n )\n for k, v in document.items():\n if k == self._text_key:\n assert isinstance(\n v, str\n ), \"page content stored in column `{}` must be of type `str`. \\\n But found: `{}`\".format(\n self._text_key, type(v)\n )\n page_content = v", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"}
+{"id": "026424a18e61-6", "text": "self._text_key, type(v)\n )\n page_content = v\n elif k == \"dist\":\n assert isinstance(\n v, float\n ), \"Computed distance between vectors must be of type `float`. \\\n But found {}\".format(\n type(v)\n )\n score = v\n elif k not in [\"_id\", \"_event_time\", \"_meta\"]:\n # These columns are populated by Rockset when documents are\n # inserted. 
No need to return them in metadata dict.\n metadata[k] = v\n finalResult.append(\n (Document(page_content=page_content, metadata=metadata), score)\n )\n return finalResult\n # Helper functions\n def _build_query_sql(\n self,\n query_embedding: List[float],\n distance_func: DistanceFunction,\n k: int = 4,\n where_str: Optional[str] = None,\n ) -> str:\n \"\"\"Builds Rockset SQL query to query similar vectors to query_vector\"\"\"\n q_embedding_str = \",\".join(map(str, query_embedding))\n distance_str = f\"\"\"{distance_func.value}({self._embedding_key}, \\\n[{q_embedding_str}]) as dist\"\"\"\n where_str = f\"WHERE {where_str}\\n\" if where_str else \"\"\n return f\"\"\"\\\nSELECT * EXCEPT({self._embedding_key}), {distance_str}\nFROM {self._collection_name}\n{where_str}\\\nORDER BY dist {distance_func.order_by()}\nLIMIT {str(k)}\n\"\"\"\n def _write_documents_to_rockset(self, batch: List[dict]) -> List[str]:\n add_doc_res = self._client.Documents.add_documents(\n collection=self._collection_name, data=batch\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} +{"id": "026424a18e61-7", "text": "collection=self._collection_name, data=batch\n )\n return [doc_status._id for doc_status in add_doc_res.data]\n[docs] def delete_texts(self, ids: List[str]) -> None:\n \"\"\"Delete a list of docs from the Rockset collection\"\"\"\n try:\n from rockset.models import DeleteDocumentsRequestData\n except ImportError:\n raise ImportError(\n \"Could not import rockset client python package. 
\"\n \"Please install it with `pip install rockset`.\"\n )\n self._client.Documents.delete_documents(\n collection=self._collection_name,\n data=[DeleteDocumentsRequestData(id=i) for i in ids],\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/rocksetdb.html"} +{"id": "0245ad0cd55d-0", "text": "Source code for langchain.vectorstores.base\n\"\"\"Interface for vector stores.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport warnings\nfrom abc import ABC, abstractmethod\nfrom functools import partial\nfrom typing import (\n Any,\n ClassVar,\n Collection,\n Dict,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n TypeVar,\n)\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever\nVST = TypeVar(\"VST\", bound=\"VectorStore\")\n[docs]class VectorStore(ABC):\n \"\"\"Interface for vector stores.\"\"\"\n[docs] @abstractmethod\n def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n[docs] def delete(self, ids: List[str]) -> Optional[bool]:\n \"\"\"Delete by vector ID.\n Args:\n ids: List of ids to delete.\n Returns:\n Optional[bool]: True if deletion is successful,\n False otherwise, None if not implemented.\n \"\"\"\n raise NotImplementedError(\n \"delete_by_id method must be implemented by subclass.\"\n )\n[docs] async def aadd_texts(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-1", "text": ")\n[docs] async def 
aadd_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\"\"\"\n raise NotImplementedError\n[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Run more documents through the embeddings and add to the vectorstore.\n Args:\n documents (List[Document]): Documents to add to the vectorstore.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n # TODO: Handle the case where the user doesn't provide ids on the Collection\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return self.add_texts(texts, metadatas, **kwargs)\n[docs] async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Run more documents through the embeddings and add to the vectorstore.\n Args:\n documents (List[Document]): Documents to add to the vectorstore.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return await self.aadd_texts(texts, metadatas, **kwargs)\n[docs] def search(self, query: str, search_type: str, **kwargs: Any) -> List[Document]:\n \"\"\"Return docs most similar to query using specified search type.\"\"\"\n if search_type == \"similarity\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"}
+{"id": "0245ad0cd55d-2", "text": "if search_type == \"similarity\":\n return self.similarity_search(query, **kwargs)\n elif search_type == \"mmr\":\n return self.max_marginal_relevance_search(query, **kwargs)\n else:\n raise ValueError(\n f\"search_type of {search_type} not allowed. 
Expected \"\n \"search_type to be 'similarity' or 'mmr'.\"\n )\n[docs] async def asearch(\n self, query: str, search_type: str, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query using specified search type.\"\"\"\n if search_type == \"similarity\":\n return await self.asimilarity_search(query, **kwargs)\n elif search_type == \"mmr\":\n return await self.amax_marginal_relevance_search(query, **kwargs)\n else:\n raise ValueError(\n f\"search_type of {search_type} not allowed. Expected \"\n \"search_type to be 'similarity' or 'mmr'.\"\n )\n[docs] @abstractmethod\n def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores in the range [0, 1].\n 0 is dissimilar, 1 is most similar.\n Args:\n query: input text\n k: Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-3", "text": "k: Number of Documents to return. Defaults to 4.\n **kwargs: kwargs to be passed to similarity search. 
Should include:\n score_threshold: Optional, a floating point value between 0 to 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of Tuples of (doc, similarity_score)\n \"\"\"\n docs_and_similarities = self._similarity_search_with_relevance_scores(\n query, k=k, **kwargs\n )\n if any(\n similarity < 0.0 or similarity > 1.0\n for _, similarity in docs_and_similarities\n ):\n warnings.warn(\n \"Relevance scores must be between\"\n f\" 0 and 1, got {docs_and_similarities}\"\n )\n score_threshold = kwargs.get(\"score_threshold\")\n if score_threshold is not None:\n docs_and_similarities = [\n (doc, similarity)\n for doc, similarity in docs_and_similarities\n if similarity >= score_threshold\n ]\n if len(docs_and_similarities) == 0:\n warnings.warn(\n \"No relevant docs were retrieved using the relevance score\"\n f\" threshold {score_threshold}\"\n )\n return docs_and_similarities\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n raise NotImplementedError\n[docs] async def asimilarity_search_with_relevance_scores(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-4", "text": "raise NotImplementedError\n[docs] async def asimilarity_search_with_relevance_scores(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. 
The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search_with_relevance_scores, query, k, **kwargs)\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] async def asimilarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search, query, k, **kwargs)\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n raise NotImplementedError\n[docs] async def asimilarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-5", "text": "self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. 
The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(self.similarity_search_by_vector, embedding, k, **kwargs)\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n raise NotImplementedError\n[docs] async def amax_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-6", "text": "lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\"\"\"\n # This is a temporary workaround to make the similarity search\n # asynchronous. 
The proper solution is to make the similarity search\n # asynchronous in the vector store implementations.\n func = partial(\n self.max_marginal_relevance_search, query, k, fetch_k, lambda_mult, **kwargs\n )\n return await asyncio.get_event_loop().run_in_executor(None, func)\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n raise NotImplementedError\n[docs] async def amax_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-7", "text": "k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\"\"\"\n raise NotImplementedError\n[docs] @classmethod\n def from_documents(\n cls: Type[VST],\n documents: List[Document],\n embedding: Embeddings,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from documents and embeddings.\"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)\n[docs] @classmethod\n async def 
afrom_documents(\n cls: Type[VST],\n documents: List[Document],\n embedding: Embeddings,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from documents and embeddings.\"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return await cls.afrom_texts(texts, embedding, metadatas=metadatas, **kwargs)\n[docs] @classmethod\n @abstractmethod\n def from_texts(\n cls: Type[VST],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n[docs] @classmethod\n async def afrom_texts(\n cls: Type[VST],\n texts: List[str],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-8", "text": "cls: Type[VST],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> VST:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n raise NotImplementedError\n[docs] def as_retriever(self, **kwargs: Any) -> VectorStoreRetriever:\n return VectorStoreRetriever(vectorstore=self, **kwargs)\nclass VectorStoreRetriever(BaseRetriever, BaseModel):\n vectorstore: VectorStore\n search_type: str = \"similarity\"\n search_kwargs: dict = Field(default_factory=dict)\n allowed_search_types: ClassVar[Collection[str]] = (\n \"similarity\",\n \"similarity_score_threshold\",\n \"mmr\",\n )\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n search_type = values[\"search_type\"]\n if search_type not in cls.allowed_search_types:\n raise ValueError(\n f\"search_type of {search_type} not allowed. 
Valid values are: \"\n f\"{cls.allowed_search_types}\"\n )\n if search_type == \"similarity_score_threshold\":\n score_threshold = values[\"search_kwargs\"].get(\"score_threshold\")\n if (score_threshold is None) or (not isinstance(score_threshold, float)):\n raise ValueError(\n \"`score_threshold` is not specified with a float value(0~1) \"\n \"in `search_kwargs`.\"\n )\n return values\n def get_relevant_documents(self, query: str) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-9", "text": "def get_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, **self.search_kwargs)\n elif self.search_type == \"similarity_score_threshold\":\n docs_and_similarities = (\n self.vectorstore.similarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n )\n docs = [doc for doc, _ in docs_and_similarities]\n elif self.search_type == \"mmr\":\n docs = self.vectorstore.max_marginal_relevance_search(\n query, **self.search_kwargs\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def aget_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = await self.vectorstore.asimilarity_search(\n query, **self.search_kwargs\n )\n elif self.search_type == \"similarity_score_threshold\":\n docs_and_similarities = (\n await self.vectorstore.asimilarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n )\n docs = [doc for doc, _ in docs_and_similarities]\n elif self.search_type == \"mmr\":\n docs = await self.vectorstore.amax_marginal_relevance_search(\n query, **self.search_kwargs\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add 
documents to vectorstore.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "0245ad0cd55d-10", "text": "\"\"\"Add documents to vectorstore.\"\"\"\n return self.vectorstore.add_documents(documents, **kwargs)\n async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return await self.vectorstore.aadd_documents(documents, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html"} +{"id": "42d60dfd50ce-0", "text": "Source code for langchain.vectorstores.awadb\n\"\"\"Wrapper around AwaDB for embedding vectors\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\n# from pydantic import BaseModel, Field, root_validator\nif TYPE_CHECKING:\n import awadb\nlogger = logging.getLogger()\nDEFAULT_TOPN = 4\n[docs]class AwaDB(VectorStore):\n \"\"\"Interface implemented by AwaDB vector stores.\"\"\"\n _DEFAULT_TABLE_NAME = \"langchain_awadb\"\n def __init__(\n self,\n table_name: str = _DEFAULT_TABLE_NAME,\n embedding_model: Optional[Embeddings] = None,\n log_and_data_dir: Optional[str] = None,\n client: Optional[awadb.Client] = None,\n ) -> None:\n \"\"\"Initialize with AwaDB client.\"\"\"\n try:\n import awadb\n except ImportError:\n raise ValueError(\n \"Could not import awadb python package. 
\"\n \"Please install it with `pip install awadb`.\"\n )\n if client is not None:\n self.awadb_client = client\n else:\n if log_and_data_dir is not None:\n self.awadb_client = awadb.Client(log_and_data_dir)\n else:\n self.awadb_client = awadb.Client()\n if table_name == self._DEFAULT_TABLE_NAME:\n table_name += \"_\"\n table_name += str(uuid.uuid4()).split(\"-\")[-1]\n self.awadb_client.Create(table_name)\n self.table2embeddings: dict[str, Embeddings] = {}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} +{"id": "42d60dfd50ce-1", "text": "self.table2embeddings: dict[str, Embeddings] = {}\n if embedding_model is not None:\n self.table2embeddings[table_name] = embedding_model\n self.using_table_name = table_name\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n is_duplicate_texts: Optional[bool] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n is_duplicate_texts: Optional whether to duplicate texts.\n kwargs: vectorstore specific parameters.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embeddings = None\n if self.using_table_name in self.table2embeddings:\n embeddings = self.table2embeddings[self.using_table_name].embed_documents(\n list(texts)\n )\n return self.awadb_client.AddTexts(\n \"embedding_text\",\n \"text_embedding\",\n texts,\n embeddings,\n metadatas,\n is_duplicate_texts,\n )\n[docs] def load_local(\n self,\n table_name: str,\n **kwargs: Any,\n ) -> bool:\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n return self.awadb_client.Load(table_name)\n[docs] def similarity_search(\n self,", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} +{"id": "42d60dfd50ce-2", "text": "[docs] def similarity_search(\n self,\n query: str,\n k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.using_table_name in self.table2embeddings:\n embedding = self.table2embeddings[self.using_table_name].embed_query(query)\n else:\n from awadb import llm_embedding\n llm = llm_embedding.LLMEmbedding()\n embedding = llm.Embedding(query)\n return self.similarity_search_by_vector(embedding, k)\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.using_table_name in self.table2embeddings:\n embedding = self.table2embeddings[self.using_table_name].embed_query(query)\n else:\n from awadb import llm_embedding\n llm = llm_embedding.LLMEmbedding()\n embedding = llm.Embedding(query)\n results: List[Tuple[Document, float]] = []\n scores: List[float] = []\n retrieval_docs = self.similarity_search_by_vector(embedding, k, scores)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} +{"id": "42d60dfd50ce-3", "text": "retrieval_docs = self.similarity_search_by_vector(embedding, k, scores)\n L2_Norm = 0.0\n for score in scores:\n L2_Norm = L2_Norm + score * score\n L2_Norm = pow(L2_Norm, 0.5)\n doc_no = 0\n for doc in retrieval_docs:\n doc_tuple = (doc, 1 - (scores[doc_no] / L2_Norm))\n results.append(doc_tuple)\n doc_no = doc_no + 1\n return results\n[docs] def similarity_search_with_relevance_scores(\n self,\n query: str,\n 
k: int = DEFAULT_TOPN,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n embedding = None\n if self.using_table_name in self.table2embeddings:\n embedding = self.table2embeddings[self.using_table_name].embed_query(query)\n show_results = self.awadb_client.Search(embedding, k)\n results: List[Tuple[Document, float]] = []\n if show_results.__len__() == 0:\n return results\n scores: List[float] = []\n retrieval_docs = self.similarity_search_by_vector(embedding, k, scores)\n L2_Norm = 0.0\n for score in scores:\n L2_Norm = L2_Norm + score * score", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} +{"id": "42d60dfd50ce-4", "text": "L2_Norm = L2_Norm + score * score\n L2_Norm = pow(L2_Norm, 0.5)\n doc_no = 0\n for doc in retrieval_docs:\n doc_tuple = (doc, 1 - scores[doc_no] / L2_Norm)\n results.append(doc_tuple)\n doc_no = doc_no + 1\n return results\n[docs] def similarity_search_by_vector(\n self,\n embedding: Optional[List[float]] = None,\n k: int = DEFAULT_TOPN,\n scores: Optional[list] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n if self.awadb_client is None:\n raise ValueError(\"AwaDB client is None!!!\")\n results: List[Document] = []\n if embedding is None:\n return results\n show_results = self.awadb_client.Search(embedding, k)\n if show_results.__len__() == 0:\n return results\n for item_detail in show_results[0][\"ResultItems\"]:\n content = \"\"\n meta_data = {}\n for item_key in item_detail:\n if (\n item_key == \"Field@0\"\n and self.using_table_name in self.table2embeddings\n ): # text for the document\n content = item_detail[item_key]\n elif item_key == \"embedding_text\":\n content = item_detail[item_key]\n elif (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} +{"id": "42d60dfd50ce-5", "text": "content = item_detail[item_key]\n elif (\n item_key == \"Field@1\" or item_key == \"text_embedding\"\n ): # embedding field for the document\n continue\n elif item_key == \"score\": # L2 distance\n if scores is not None:\n score = item_detail[item_key]\n scores.append(score)\n else:\n meta_data[item_key] = item_detail[item_key]\n results.append(Document(page_content=content, metadata=meta_data))\n return results\n[docs] def create_table(\n self,\n table_name: str,\n **kwargs: Any,\n ) -> bool:\n \"\"\"Create a new table.\"\"\"\n if self.awadb_client is None:\n return False\n ret = self.awadb_client.Create(table_name)\n if ret:\n self.using_table_name = table_name\n return ret\n[docs] def use(\n self,\n table_name: str,\n **kwargs: Any,\n ) -> bool:\n \"\"\"Use the specified table. 
If you don't know the table names, invoke list_tables.\"\"\"\n if self.awadb_client is None:\n return False\n ret = self.awadb_client.Use(table_name)\n if ret:\n self.using_table_name = table_name\n return ret\n[docs] def list_tables(\n self,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"List all the tables created by the client.\"\"\"\n if self.awadb_client is None:\n return []\n return self.awadb_client.ListAllTables()\n[docs] def get_current_table(\n self,\n **kwargs: Any,\n ) -> str:\n \"\"\"Get the current table.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"}
+{"id": "42d60dfd50ce-6", "text": ") -> str:\n \"\"\"Get the current table.\"\"\"\n return self.using_table_name\n[docs] @classmethod\n def from_texts(\n cls: Type[AwaDB],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n table_name: str = _DEFAULT_TABLE_NAME,\n logging_and_data_dir: Optional[str] = None,\n client: Optional[awadb.Client] = None,\n **kwargs: Any,\n ) -> AwaDB:\n \"\"\"Create an AwaDB vectorstore from raw documents.\n Args:\n texts (List[str]): List of texts to add to the table.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. 
Defaults to None.\n table_name (str): Name of the table to create.\n logging_and_data_dir (Optional[str]): Directory of logging and persistence.\n client (Optional[awadb.Client]): AwaDB client\n Returns:\n AwaDB: AwaDB vectorstore.\n \"\"\"\n awadb_client = cls(\n table_name=table_name,\n embedding_model=embedding,\n log_and_data_dir=logging_and_data_dir,\n client=client,\n )\n awadb_client.add_texts(texts=texts, metadatas=metadatas)\n return awadb_client\n[docs] @classmethod\n def from_documents(\n cls: Type[AwaDB],\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n table_name: str = _DEFAULT_TABLE_NAME,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} +{"id": "42d60dfd50ce-7", "text": "table_name: str = _DEFAULT_TABLE_NAME,\n logging_and_data_dir: Optional[str] = None,\n client: Optional[awadb.Client] = None,\n **kwargs: Any,\n ) -> AwaDB:\n \"\"\"Create an AwaDB vectorstore from a list of documents.\n If a logging_and_data_dir is specified, the table will be persisted there.\n Args:\n documents (List[Document]): List of documents to add to the vectorstore.\n embedding (Optional[Embeddings]): Embedding function.
Defaults to None.\n table_name (str): Name of the table to create.\n logging_and_data_dir (Optional[str]): Directory to persist the table.\n client (Optional[awadb.Client]): AwaDB client\n Returns:\n AwaDB: AwaDB vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n table_name=table_name,\n logging_and_data_dir=logging_and_data_dir,\n client=client,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/awadb.html"} +{"id": "43606bb90b19-0", "text": "Source code for langchain.vectorstores.milvus\n\"\"\"Wrapper around the Milvus vector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Iterable, List, Optional, Tuple, Union\nfrom uuid import uuid4\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nlogger = logging.getLogger(__name__)\nDEFAULT_MILVUS_CONNECTION = {\n \"host\": \"localhost\",\n \"port\": \"19530\",\n \"user\": \"\",\n \"password\": \"\",\n \"secure\": False,\n}\n[docs]class Milvus(VectorStore):\n \"\"\"Wrapper around the Milvus vector database.\"\"\"\n def __init__(\n self,\n embedding_function: Embeddings,\n collection_name: str = \"LangChainCollection\",\n connection_args: Optional[dict[str, Any]] = None,\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: Optional[bool] = False,\n ):\n \"\"\"Initialize wrapper around the milvus vector database.\n In order to use this you need to have `pymilvus` installed and a\n running Milvus/Zilliz Cloud instance.\n See the following documentation for how to run a Milvus instance:\n 
https://milvus.io/docs/install_standalone-docker.md\n If looking for a hosted Milvus, take a look at this documentation:\n https://zilliz.com/cloud\n IF USING L2/IP metric IT IS HIGHLY SUGGESTED TO NORMALIZE YOUR DATA.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-1", "text": "The connection args used for this class come in the form of a dict,\n here are a few of the options:\n address (str): The actual address of Milvus\n instance. Example address: \"localhost:19530\"\n uri (str): The uri of Milvus instance. Example uri:\n \"http://randomwebsite:19530\",\n \"tcp:foobarsite:19530\",\n \"https://ok.s3.south.com:19530\".\n host (str): The host of Milvus instance. Default at \"localhost\",\n PyMilvus will fill in the default host if only port is provided.\n port (str/int): The port of Milvus instance. Default at 19530, PyMilvus\n will fill in the default port if only host is provided.\n user (str): Use which user to connect to Milvus instance. If user and\n password are provided, we will add related header in every RPC call.\n password (str): Required when user is provided. The password\n corresponding to the user.\n secure (bool): Default is false. If set to true, tls will be enabled.\n client_key_path (str): If use tls two-way authentication, need to\n write the client.key path.\n client_pem_path (str): If use tls two-way authentication, need to\n write the client.pem path.\n ca_pem_path (str): If use tls two-way authentication, need to write\n the ca.pem path.\n server_pem_path (str): If use tls one-way authentication, need to\n write the server.pem path.\n server_name (str): If use tls, need to write the common name.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-2", "text": "Args:\n embedding_function (Embeddings): Function used to embed the text.\n collection_name (str): Which Milvus collection to use.
Defaults to\n \"LangChainCollection\".\n connection_args (Optional[dict[str, any]]): The arguments for connection to\n Milvus/Zilliz instance. Defaults to DEFAULT_MILVUS_CONNECTION.\n consistency_level (str): The consistency level to use for a collection.\n Defaults to \"Session\".\n index_params (Optional[dict]): Which index params to use. Defaults to\n HNSW/AUTOINDEX depending on service.\n search_params (Optional[dict]): Which search params to use. Defaults to\n default of index.\n drop_old (Optional[bool]): Whether to drop the current collection. Defaults\n to False.\n \"\"\"\n try:\n from pymilvus import Collection, utility\n except ImportError:\n raise ValueError(\n \"Could not import pymilvus python package. \"\n \"Please install it with `pip install pymilvus`.\"\n )\n # Default search params when one is not provided.\n self.default_search_params = {\n \"IVF_FLAT\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"IVF_SQ8\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"IVF_PQ\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10}},\n \"HNSW\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"RHNSW_FLAT\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-3", "text": "\"RHNSW_SQ\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"RHNSW_PQ\": {\"metric_type\": \"L2\", \"params\": {\"ef\": 10}},\n \"IVF_HNSW\": {\"metric_type\": \"L2\", \"params\": {\"nprobe\": 10, \"ef\": 10}},\n \"ANNOY\": {\"metric_type\": \"L2\", \"params\": {\"search_k\": 10}},\n \"AUTOINDEX\": {\"metric_type\": \"L2\", \"params\": {}},\n }\n self.embedding_func = embedding_function\n self.collection_name = collection_name\n self.index_params = index_params\n self.search_params = search_params\n self.consistency_level = consistency_level\n # In order for a collection to be compatible, pk needs to be auto'id and int\n 
self._primary_field = \"pk\"\n # In order for compatiblility, the text field will need to be called \"text\"\n self._text_field = \"text\"\n # In order for compatbility, the vector field needs to be called \"vector\"\n self._vector_field = \"vector\"\n self.fields: list[str] = []\n # Create the connection to the server\n if connection_args is None:\n connection_args = DEFAULT_MILVUS_CONNECTION\n self.alias = self._create_connection_alias(connection_args)\n self.col: Optional[Collection] = None\n # Grab the existing colection if it exists\n if utility.has_collection(self.collection_name, using=self.alias):\n self.col = Collection(\n self.collection_name,\n using=self.alias,\n )\n # If need to drop old, drop it\n if drop_old and isinstance(self.col, Collection):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-4", "text": "if drop_old and isinstance(self.col, Collection):\n self.col.drop()\n self.col = None\n # Initialize the vector store\n self._init()\n def _create_connection_alias(self, connection_args: dict) -> str:\n \"\"\"Create the connection to the Milvus server.\"\"\"\n from pymilvus import MilvusException, connections\n # Grab the connection arguments that are used for checking existing connection\n host: str = connection_args.get(\"host\", None)\n port: Union[str, int] = connection_args.get(\"port\", None)\n address: str = connection_args.get(\"address\", None)\n uri: str = connection_args.get(\"uri\", None)\n user = connection_args.get(\"user\", None)\n # Order of use is host/port, uri, address\n if host is not None and port is not None:\n given_address = str(host) + \":\" + str(port)\n elif uri is not None:\n given_address = uri.split(\"https://\")[1]\n elif address is not None:\n given_address = address\n else:\n given_address = None\n logger.debug(\"Missing standard address type for reuse atttempt\")\n # User defaults to empty string when getting connection info\n if user is not 
None:\n tmp_user = user\n else:\n tmp_user = \"\"\n # If a valid address was given, then check if a connection exists\n if given_address is not None:\n for con in connections.list_connections():\n addr = connections.get_connection_addr(con[0])\n if (\n con[1]\n and (\"address\" in addr)\n and (addr[\"address\"] == given_address)\n and (\"user\" in addr)\n and (addr[\"user\"] == tmp_user)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-5", "text": "and (addr[\"user\"] == tmp_user)\n ):\n logger.debug(\"Using previous connection: %s\", con[0])\n return con[0]\n # Generate a new connection if one doesn't exist\n alias = uuid4().hex\n try:\n connections.connect(alias=alias, **connection_args)\n logger.debug(\"Created new connection using: %s\", alias)\n return alias\n except MilvusException as e:\n logger.error(\"Failed to create new connection using: %s\", alias)\n raise e\n def _init(\n self, embeddings: Optional[list] = None, metadatas: Optional[list[dict]] = None\n ) -> None:\n if embeddings is not None:\n self._create_collection(embeddings, metadatas)\n self._extract_fields()\n self._create_index()\n self._create_search_params()\n self._load()\n def _create_collection(\n self, embeddings: list, metadatas: Optional[list[dict]] = None\n ) -> None:\n from pymilvus import (\n Collection,\n CollectionSchema,\n DataType,\n FieldSchema,\n MilvusException,\n )\n from pymilvus.orm.types import infer_dtype_bydata\n # Determine embedding dim\n dim = len(embeddings[0])\n fields = []\n # Determine metadata schema\n if metadatas:\n # Create FieldSchema for each entry in metadata.\n for key, value in metadatas[0].items():\n # Infer the corresponding datatype of the metadata\n dtype = infer_dtype_bydata(value)\n # Datatype isn't compatible\n if dtype == DataType.UNKNOWN or dtype == DataType.NONE:\n logger.error(", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-6", "text": "if dtype == DataType.UNKNOWN or dtype == DataType.NONE:\n logger.error(\n \"Failure to create collection, unrecognized dtype for key: %s\",\n key,\n )\n raise ValueError(f\"Unrecognized datatype for {key}.\")\n # Datatype is a string/varchar equivalent\n elif dtype == DataType.VARCHAR:\n fields.append(FieldSchema(key, DataType.VARCHAR, max_length=65_535))\n else:\n fields.append(FieldSchema(key, dtype))\n # Create the text field\n fields.append(\n FieldSchema(self._text_field, DataType.VARCHAR, max_length=65_535)\n )\n # Create the primary key field\n fields.append(\n FieldSchema(\n self._primary_field, DataType.INT64, is_primary=True, auto_id=True\n )\n )\n # Create the vector field, supports binary or float vectors\n fields.append(\n FieldSchema(self._vector_field, infer_dtype_bydata(embeddings[0]), dim=dim)\n )\n # Create the schema for the collection\n schema = CollectionSchema(fields)\n # Create the collection\n try:\n self.col = Collection(\n name=self.collection_name,\n schema=schema,\n consistency_level=self.consistency_level,\n using=self.alias,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create collection: %s error: %s\", self.collection_name, e\n )\n raise e\n def _extract_fields(self) -> None:\n \"\"\"Grab the existing fields from the Collection\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection):\n schema = self.col.schema\n for x in schema.fields:\n self.fields.append(x.name)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-7", "text": "for x in schema.fields:\n self.fields.append(x.name)\n # Since primary field is auto-id, no need to track it\n self.fields.remove(self._primary_field)\n def _get_index(self) -> Optional[dict[str, Any]]:\n \"\"\"Return the vector index information if it exists\"\"\"\n from
pymilvus import Collection\n if isinstance(self.col, Collection):\n for x in self.col.indexes:\n if x.field_name == self._vector_field:\n return x.to_dict()\n return None\n def _create_index(self) -> None:\n \"\"\"Create an index on the collection\"\"\"\n from pymilvus import Collection, MilvusException\n if isinstance(self.col, Collection) and self._get_index() is None:\n try:\n # If no index params, use a default HNSW based one\n if self.index_params is None:\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"HNSW\",\n \"params\": {\"M\": 8, \"efConstruction\": 64},\n }\n try:\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n # If default did not work, most likely on Zilliz Cloud\n except MilvusException:\n # Use AUTOINDEX based index\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"AUTOINDEX\",\n \"params\": {},\n }\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n logger.debug(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-8", "text": "using=self.alias,\n )\n logger.debug(\n \"Successfully created an index on collection: %s\",\n self.collection_name,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create an index on collection: %s\", self.collection_name\n )\n raise e\n def _create_search_params(self) -> None:\n \"\"\"Generate search params based on the current index type\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection) and self.search_params is None:\n index = self._get_index()\n if index is not None:\n index_type: str = index[\"index_param\"][\"index_type\"]\n metric_type: str = index[\"index_param\"][\"metric_type\"]\n self.search_params = self.default_search_params[index_type]\n self.search_params[\"metric_type\"] = metric_type\n def _load(self) -> None:\n \"\"\"Load the collection 
if available.\"\"\"\n from pymilvus import Collection\n if isinstance(self.col, Collection) and self._get_index() is not None:\n self.col.load()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n timeout: Optional[int] = None,\n batch_size: int = 1000,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Insert text data into Milvus.\n Inserting data when the collection has not been made yet will result\n in creating a new Collection. The data of the first entity decides\n the schema of the new collection, the dim is extracted from the first\n embedding and the columns are decided by the first metadata dict.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-9", "text": "embedding and the columns are decided by the first metadata dict.\n Metadata keys will need to be present for all inserted values. At\n the moment there is no None equivalent in Milvus.\n Args:\n texts (Iterable[str]): The texts to embed, it is assumed\n that they all fit in memory.\n metadatas (Optional[List[dict]]): Metadata dicts attached to each of\n the texts. Defaults to None.\n timeout (Optional[int]): Timeout for each batch insert.
Defaults\n to None.\n batch_size (int, optional): Batch size to use for insertion.\n Defaults to 1000.\n Raises:\n MilvusException: Failure to add texts\n Returns:\n List[str]: The resulting keys for each inserted element.\n \"\"\"\n from pymilvus import Collection, MilvusException\n texts = list(texts)\n try:\n embeddings = self.embedding_func.embed_documents(texts)\n except NotImplementedError:\n embeddings = [self.embedding_func.embed_query(x) for x in texts]\n if len(embeddings) == 0:\n logger.debug(\"Nothing to insert, skipping.\")\n return []\n # If the collection hasn't been initialized yet, perform all steps to do so\n if not isinstance(self.col, Collection):\n self._init(embeddings, metadatas)\n # Dict to hold all insert columns\n insert_dict: dict[str, list] = {\n self._text_field: texts,\n self._vector_field: embeddings,\n }\n # Collect the metadata into the insert dict.\n if metadatas is not None:\n for d in metadatas:\n for key, value in d.items():\n if key in self.fields:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-10", "text": "for key, value in d.items():\n if key in self.fields:\n insert_dict.setdefault(key, []).append(value)\n # Total insert count\n vectors: list = insert_dict[self._vector_field]\n total_count = len(vectors)\n pks: list[str] = []\n assert isinstance(self.col, Collection)\n for i in range(0, total_count, batch_size):\n # Grab end index\n end = min(i + batch_size, total_count)\n # Convert dict to list of lists batch for insertion\n insert_list = [insert_dict[x][i:end] for x in self.fields]\n # Insert into the collection.\n try:\n res: Collection\n res = self.col.insert(insert_list, timeout=timeout, **kwargs)\n pks.extend(res.primary_keys)\n except MilvusException as e:\n logger.error(\n \"Failed to insert batch starting at entity: %s/%s\", i, total_count\n )\n raise e\n return pks\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n param: 
Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search against the query string.\n Args:\n query (str): The text to search.\n k (int, optional): How many results to return. Defaults to 4.\n param (dict, optional): The search params for the index type.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-11", "text": "expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n res = self.similarity_search_with_score(\n query=query, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return [doc for doc, _ in res]\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search against an embedding vector.\n Args:\n embedding (List[float]): The embedding vector to search.\n k (int, optional): How many results to return. Defaults to 4.\n param (dict, optional): The search params for the index type.\n Defaults to None.\n expr (str, optional): Filtering expression. 
Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n res = self.similarity_search_with_score_by_vector(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-12", "text": "return []\n res = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return [doc for doc, _ in res]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a search on a query string and return results with score.\n For more information about the search parameters, take a look at the pymilvus\n documentation found here:\n https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\n Args:\n query (str): The text being searched.\n k (int, optional): The number of results to return. Defaults to 4.\n param (dict): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. 
Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Tuple[Document, float]]: Result doc and score.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n # Embed the query text.\n embedding = self.embedding_func.embed_query(query)\n res = self.similarity_search_with_score_by_vector(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-13", "text": "res = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, param=param, expr=expr, timeout=timeout, **kwargs\n )\n return res\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a search on an embedding vector and return results with score.\n For more information about the search parameters, take a look at the pymilvus\n documentation found here:\n https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md\n Args:\n embedding (List[float]): The embedding vector being searched.\n k (int, optional): The number of results to return. Defaults to 4.\n param (dict): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. 
Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Tuple[Document, float]]: Result doc and score.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n if param is None:\n param = self.search_params\n # Determine result metadata fields.\n output_fields = self.fields[:]\n output_fields.remove(self._vector_field)\n # Perform the search.\n res = self.col.search(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-14", "text": "# Perform the search.\n res = self.col.search(\n data=[embedding],\n anns_field=self._vector_field,\n param=param,\n limit=k,\n expr=expr,\n output_fields=output_fields,\n timeout=timeout,\n **kwargs,\n )\n # Organize results.\n ret = []\n for result in res[0]:\n meta = {x: result.entity.get(x) for x in output_fields}\n doc = Document(page_content=meta.pop(self._text_field), metadata=meta)\n pair = (doc, result.score)\n ret.append(pair)\n return ret\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a search and return results that are reordered by MMR.\n Args:\n query (str): The text being searched.\n k (int, optional): How many results to give. 
Defaults to 4.\n fetch_k (int, optional): Total results to select k from.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5\n param (dict, optional): The search params for the specified index.\n Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-15", "text": "Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n embedding = self.embedding_func.embed_query(query)\n return self.max_marginal_relevance_search_by_vector(\n embedding=embedding,\n k=k,\n fetch_k=fetch_k,\n lambda_mult=lambda_mult,\n param=param,\n expr=expr,\n timeout=timeout,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: list[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n param: Optional[dict] = None,\n expr: Optional[str] = None,\n timeout: Optional[int] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a search and return results that are reordered by MMR.\n Args:\n embedding (str): The embedding vector being searched.\n k (int, optional): How many results to give. 
Defaults to 4.\n fetch_k (int, optional): Total results to select k from.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-16", "text": "to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5\n param (dict, optional): The search params for the specified index.\n Defaults to None.\n expr (str, optional): Filtering expression. Defaults to None.\n timeout (int, optional): How long to wait before timeout error.\n Defaults to None.\n kwargs: Collection.search() keyword arguments.\n Returns:\n List[Document]: Document results for search.\n \"\"\"\n if self.col is None:\n logger.debug(\"No existing collection to search.\")\n return []\n if param is None:\n param = self.search_params\n # Determine result metadata fields.\n output_fields = self.fields[:]\n output_fields.remove(self._vector_field)\n # Perform the search.\n res = self.col.search(\n data=[embedding],\n anns_field=self._vector_field,\n param=param,\n limit=fetch_k,\n expr=expr,\n output_fields=output_fields,\n timeout=timeout,\n **kwargs,\n )\n # Organize results.\n ids = []\n documents = []\n scores = []\n for result in res[0]:\n meta = {x: result.entity.get(x) for x in output_fields}\n doc = Document(page_content=meta.pop(self._text_field), metadata=meta)\n documents.append(doc)\n scores.append(result.score)\n ids.append(result.id)\n vectors = self.col.query(\n expr=f\"{self._primary_field} in {ids}\",\n output_fields=[self._primary_field, self._vector_field],\n timeout=timeout,\n )\n # Reorganize the results from query to match search order.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-17", "text": ")\n # Reorganize the results from query to 
match search order.\n vectors = {x[self._primary_field]: x[self._vector_field] for x in vectors}\n ordered_result_embeddings = [vectors[x] for x in ids]\n # Get the new order of results.\n new_ordering = maximal_marginal_relevance(\n np.array(embedding), ordered_result_embeddings, k=k, lambda_mult=lambda_mult\n )\n # Reorder the values and return.\n ret = []\n for x in new_ordering:\n # Function can return -1 index\n if x == -1:\n break\n else:\n ret.append(documents[x])\n return ret\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = \"LangChainCollection\",\n connection_args: dict[str, Any] = DEFAULT_MILVUS_CONNECTION,\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: bool = False,\n **kwargs: Any,\n ) -> Milvus:\n \"\"\"Create a Milvus collection, index it with HNSW, and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n collection_name (str, optional): Collection name to use. Defaults to\n \"LangChainCollection\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "43606bb90b19-18", "text": "\"LangChainCollection\".\n connection_args (dict[str, Any], optional): Connection args to use. Defaults\n to DEFAULT_MILVUS_CONNECTION.\n consistency_level (str, optional): Which consistency level to use. Defaults\n to \"Session\".\n index_params (Optional[dict], optional): Which index_params to use. Defaults\n to None.\n search_params (Optional[dict], optional): Which search params to use.\n Defaults to None.\n drop_old (Optional[bool], optional): Whether to drop the collection with\n that name if it exists. 
Defaults to False.\n Returns:\n Milvus: Milvus Vector Store\n \"\"\"\n vector_db = cls(\n embedding_function=embedding,\n collection_name=collection_name,\n connection_args=connection_args,\n consistency_level=consistency_level,\n index_params=index_params,\n search_params=search_params,\n drop_old=drop_old,\n **kwargs,\n )\n vector_db.add_texts(texts=texts, metadatas=metadatas)\n return vector_db", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html"} +{"id": "5d704991e9bc-0", "text": "Source code for langchain.vectorstores.elastic_vector_search\n\"\"\"Wrapper around Elasticsearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom abc import ABC\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n Iterable,\n List,\n Mapping,\n Optional,\n Tuple,\n Union,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_env\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from elasticsearch import Elasticsearch\ndef _default_text_mapping(dim: int) -> Dict:\n return {\n \"properties\": {\n \"text\": {\"type\": \"text\"},\n \"vector\": {\"type\": \"dense_vector\", \"dims\": dim},\n }\n }\ndef _default_script_query(query_vector: List[float], filter: Optional[dict]) -> Dict:\n if filter:\n ((key, value),) = filter.items()\n filter = {\"match\": {f\"metadata.{key}.keyword\": f\"{value}\"}}\n else:\n filter = {\"match_all\": {}}\n return {\n \"script_score\": {\n \"query\": filter,\n \"script\": {\n \"source\": \"cosineSimilarity(params.query_vector, 'vector') + 1.0\",\n \"params\": {\"query_vector\": query_vector},\n },\n }\n }\n# ElasticVectorSearch is a concrete implementation of the abstract base class\n# VectorStore, which defines a common interface for all vector database\n# implementations. 
By inheriting from the ABC class, ElasticVectorSearch can be\n# defined as an abstract base class itself, allowing the creation of subclasses with", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-1", "text": "# defined as an abstract base class itself, allowing the creation of subclasses with\n# their own specific implementations. If you plan to subclass ElasticVectorSearch,\n# you can inherit from it and define your own implementation of the necessary methods\n# and attributes.\n[docs]class ElasticVectorSearch(VectorStore, ABC):\n \"\"\"Wrapper around Elasticsearch as a vector database.\n To connect to an Elasticsearch instance that does not require\n login credentials, pass the Elasticsearch URL and index name along with the\n embedding object to the constructor.\n Example:\n .. code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n elastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=\"http://localhost:9200\",\n index_name=\"test_index\",\n embedding=embedding\n )\n To connect to an Elasticsearch instance that requires login credentials,\n including Elastic Cloud, use the Elasticsearch URL format\n https://username:password@es_host:9243. For example, to connect to Elastic\n Cloud, create the Elasticsearch URL with the required authentication details and\n pass it to the ElasticVectorSearch constructor as the named parameter\n elasticsearch_url.\n You can obtain your Elastic Cloud URL and login credentials by logging in to the\n Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\n navigating to the \"Deployments\" page.\n To obtain your Elastic Cloud password for the default \"elastic\" user:\n 1. Log in to the Elastic Cloud console at https://cloud.elastic.co\n 2. Go to \"Security\" > \"Users\"\n 3. 
Locate the \"elastic\" user and click \"Edit\"\n 4. Click \"Reset password\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-2", "text": "4. Click \"Reset password\"\n 5. Follow the prompts to reset the password\n The format for Elastic Cloud URLs is\n https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\n Example:\n .. code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embedding = OpenAIEmbeddings()\n elastic_host = \"cluster_id.region_id.gcp.cloud.es.io\"\n elasticsearch_url = f\"https://username:password@{elastic_host}:9243\"\n elastic_vector_search = ElasticVectorSearch(\n elasticsearch_url=elasticsearch_url,\n index_name=\"test_index\",\n embedding=embedding\n )\n Args:\n elasticsearch_url (str): The URL for the Elasticsearch instance.\n index_name (str): The name of the Elasticsearch index for the embeddings.\n embedding (Embeddings): An object that provides the ability to embed text.\n It should be an instance of a class that subclasses the Embeddings\n abstract base class, such as OpenAIEmbeddings()\n Raises:\n ValueError: If the elasticsearch python package is not installed.\n \"\"\"\n def __init__(\n self,\n elasticsearch_url: str,\n index_name: str,\n embedding: Embeddings,\n *,\n ssl_verify: Optional[Dict[str, Any]] = None,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import elasticsearch\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. 
\"\n \"Please install it with `pip install elasticsearch`.\"\n )\n self.embedding = embedding\n self.index_name = index_name\n _ssl_verify = ssl_verify or {}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-3", "text": "self.index_name = index_name\n _ssl_verify = ssl_verify or {}\n try:\n self.client = elasticsearch.Elasticsearch(elasticsearch_url, **_ssl_verify)\n except ValueError as e:\n raise ValueError(\n f\"Your elasticsearch client string is mis-formatted. Got error: {e} \"\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n refresh_indices: bool = True,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n refresh_indices: bool to refresh ElasticSearch indices\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n try:\n from elasticsearch.exceptions import NotFoundError\n from elasticsearch.helpers import bulk\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. 
\"\n \"Please install it with `pip install elasticsearch`.\"\n )\n requests = []\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n embeddings = self.embedding.embed_documents(list(texts))\n dim = len(embeddings[0])\n mapping = _default_text_mapping(dim)\n # check to see if the index already exists\n try:\n self.client.indices.get(index=self.index_name)\n except NotFoundError:\n # TODO would be nice to create index before embedding,\n # just to save expensive steps for last", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-4", "text": "# just to save expensive steps for last\n self.create_index(self.client, self.index_name, mapping)\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n request = {\n \"_op_type\": \"index\",\n \"_index\": self.index_name,\n \"vector\": embeddings[i],\n \"text\": text,\n \"metadata\": metadata,\n \"_id\": ids[i],\n }\n requests.append(request)\n bulk(self.client, requests)\n if refresh_indices:\n self.client.indices.refresh(index=self.index_name)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)\n documents = [d[0] for d in docs_and_scores]\n return documents\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-5", "text": "Returns:\n List of Documents most similar to the query.\n \"\"\"\n embedding = self.embedding.embed_query(query)\n script_query = _default_script_query(embedding, filter)\n response = self.client_search(\n self.client, self.index_name, script_query, size=k\n )\n hits = [hit for hit in response[\"hits\"][\"hits\"]]\n docs_and_scores = [\n (\n Document(\n page_content=hit[\"_source\"][\"text\"],\n metadata=hit[\"_source\"][\"metadata\"],\n ),\n hit[\"_score\"],\n )\n for hit in hits\n ]\n return docs_and_scores\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n elasticsearch_url: Optional[str] = None,\n index_name: Optional[str] = None,\n refresh_indices: bool = True,\n **kwargs: Any,\n ) -> ElasticVectorSearch:\n \"\"\"Construct ElasticVectorSearch wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in the Elasticsearch instance.\n 3. Adds the documents to the newly created Elasticsearch index.\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain import ElasticVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n elastic_vector_search = ElasticVectorSearch.from_texts(\n texts,\n embeddings,\n elasticsearch_url=\"http://localhost:9200\"\n )\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-6", "text": "elasticsearch_url=\"http://localhost:9200\"\n )\n \"\"\"\n elasticsearch_url = elasticsearch_url or get_from_env(\n \"elasticsearch_url\", \"ELASTICSEARCH_URL\"\n )\n index_name = index_name or uuid.uuid4().hex\n vectorsearch = cls(elasticsearch_url, index_name, embedding, **kwargs)\n vectorsearch.add_texts(\n texts, metadatas=metadatas, refresh_indices=refresh_indices\n )\n return vectorsearch\n[docs] def create_index(self, client: Any, index_name: str, mapping: Dict) -> None:\n version_num = client.info()[\"version\"][\"number\"][0]\n version_num = int(version_num)\n if version_num >= 8:\n client.indices.create(index=index_name, mappings=mapping)\n else:\n client.indices.create(index=index_name, body={\"mappings\": mapping})\n[docs] def client_search(\n self, client: Any, index_name: str, script_query: Dict, size: int\n ) -> Any:\n version_num = client.info()[\"version\"][\"number\"][0]\n version_num = int(version_num)\n if version_num >= 8:\n response = client.search(index=index_name, query=script_query, size=size)\n else:\n response = client.search(\n index=index_name, body={\"query\": script_query, \"size\": size}\n )\n return response\n[docs] def delete(self, ids: List[str]) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n # TODO: Check if this can be done in bulk\n for id in ids:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-7", "text": "# TODO: Check if this can be done in bulk\n for id in 
ids:\n self.client.delete(index=self.index_name, id=id)\nclass ElasticKnnSearch(ElasticVectorSearch):\n \"\"\"\n A class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index.\n The class is designed for a text search scenario where documents are text strings\n and their embeddings are vector representations of those strings.\n \"\"\"\n def __init__(\n self,\n index_name: str,\n embedding: Embeddings,\n es_connection: Optional[\"Elasticsearch\"] = None,\n es_cloud_id: Optional[str] = None,\n es_user: Optional[str] = None,\n es_password: Optional[str] = None,\n vector_query_field: Optional[str] = \"vector\",\n query_field: Optional[str] = \"text\",\n ):\n \"\"\"\n Initializes an instance of the ElasticKnnSearch class and sets up the\n Elasticsearch client.\n Args:\n index_name: The name of the Elasticsearch index.\n embedding: An instance of the Embeddings class, used to generate vector\n representations of text strings.\n es_connection: An existing Elasticsearch connection.\n es_cloud_id: The Cloud ID of the Elasticsearch instance. Required if\n creating a new connection.\n es_user: The username for the Elasticsearch instance. Required if\n creating a new connection.\n es_password: The password for the Elasticsearch instance. Required if\n creating a new connection.\n \"\"\"\n try:\n import elasticsearch\n except ImportError:\n raise ImportError(\n \"Could not import elasticsearch python package. 
\"\n \"Please install it with `pip install elasticsearch`.\"\n )\n self.embedding = embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-8", "text": ")\n self.embedding = embedding\n self.index_name = index_name\n self.query_field = query_field\n self.vector_query_field = vector_query_field\n # If a pre-existing Elasticsearch connection is provided, use it.\n if es_connection is not None:\n self.client = es_connection\n else:\n # If credentials for a new Elasticsearch connection are provided,\n # create a new connection.\n if es_cloud_id and es_user and es_password:\n self.client = elasticsearch.Elasticsearch(\n cloud_id=es_cloud_id, basic_auth=(es_user, es_password)\n )\n else:\n raise ValueError(\n \"\"\"Either provide a pre-existing Elasticsearch connection, \\\n or valid credentials for creating a new connection.\"\"\"\n )\n @staticmethod\n def _default_knn_mapping(dims: int) -> Dict:\n \"\"\"Generates a default index mapping for kNN search.\"\"\"\n return {\n \"properties\": {\n \"text\": {\"type\": \"text\"},\n \"vector\": {\n \"type\": \"dense_vector\",\n \"dims\": dims,\n \"index\": True,\n \"similarity\": \"dot_product\",\n },\n }\n }\n def _default_knn_query(\n self,\n query_vector: Optional[List[float]] = None,\n query: Optional[str] = None,\n model_id: Optional[str] = None,\n k: Optional[int] = 10,\n num_candidates: Optional[int] = 10,\n ) -> Dict:\n knn: Dict = {\n \"field\": self.vector_query_field,\n \"k\": k,\n \"num_candidates\": num_candidates,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-9", "text": "\"k\": k,\n \"num_candidates\": num_candidates,\n }\n # Case 1: `query_vector` is provided, but not `model_id` -> use query_vector\n if query_vector and not model_id:\n knn[\"query_vector\"] = query_vector\n # Case 2: `query` and `model_id` are provided, -> 
use query_vector_builder\n elif query and model_id:\n knn[\"query_vector_builder\"] = {\n \"text_embedding\": {\n \"model_id\": model_id, # use 'model_id' argument\n \"model_text\": query, # use 'query' argument\n }\n }\n else:\n raise ValueError(\n \"Either `query_vector` or `model_id` must be provided, but not both.\"\n )\n return knn\n def knn_search(\n self,\n query: Optional[str] = None,\n k: Optional[int] = 10,\n query_vector: Optional[List[float]] = None,\n model_id: Optional[str] = None,\n size: Optional[int] = 10,\n source: Optional[bool] = True,\n fields: Optional[\n Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]\n ] = None,\n ) -> Dict:\n \"\"\"\n Performs a k-nearest neighbor (k-NN) search on the Elasticsearch index.\n The search can be conducted using either a raw query vector or a model ID.\n The method first generates\n the body of the search query, which can be interpreted by Elasticsearch.\n It then performs the k-NN\n search on the Elasticsearch index and returns the results.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-10", "text": "search on the Elasticsearch index and returns the results.\n Args:\n query: The query or queries to be used for the search. Required if\n `query_vector` is not provided.\n k: The number of nearest neighbors to return. Defaults to 10.\n query_vector: The query vector to be used for the search. Required if\n `query` is not provided.\n model_id: The ID of the model to use for generating the query vector, if\n `query` is provided.\n size: The number of search hits to return. Defaults to 10.\n source: Whether to include the source of each hit in the results.\n fields: The fields to include in the source of each hit. 
If None, all\n fields are included.\n vector_query_field: Field name to use in knn search if not default 'vector'\n Returns:\n The search results.\n Raises:\n ValueError: If neither `query_vector` nor `model_id` is provided, or if\n both are provided.\n \"\"\"\n knn_query_body = self._default_knn_query(\n query_vector=query_vector, query=query, model_id=model_id, k=k\n )\n # Perform the kNN search on the Elasticsearch index and return the results.\n res = self.client.search(\n index=self.index_name,\n knn=knn_query_body,\n size=size,\n source=source,\n fields=fields,\n )\n return dict(res)\n def knn_hybrid_search(\n self,\n query: Optional[str] = None,\n k: Optional[int] = 10,\n query_vector: Optional[List[float]] = None,\n model_id: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-11", "text": "model_id: Optional[str] = None,\n size: Optional[int] = 10,\n source: Optional[bool] = True,\n knn_boost: Optional[float] = 0.9,\n query_boost: Optional[float] = 0.1,\n fields: Optional[\n Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]\n ] = None,\n ) -> Dict[Any, Any]:\n \"\"\"Performs a hybrid k-nearest neighbor (k-NN) and text-based search on the\n Elasticsearch index.\n The search can be conducted using either a raw query vector or a model ID.\n The method first generates\n the body of the k-NN search query and the text-based query, which can be\n interpreted by Elasticsearch.\n It then performs the hybrid search on the Elasticsearch index and returns the\n results.\n Args:\n query: The query or queries to be used for the search. Required if\n `query_vector` is not provided.\n k: The number of nearest neighbors to return. Defaults to 10.\n query_vector: The query vector to be used for the search. 
Required if\n `query` is not provided.\n model_id: The ID of the model to use for generating the query vector, if\n `query` is provided.\n size: The number of search hits to return. Defaults to 10.\n source: Whether to include the source of each hit in the results.\n knn_boost: The boost factor for the k-NN part of the search.\n query_boost: The boost factor for the text-based part of the search.\n fields\n The fields to include in the source of each hit. If None, all fields are\n included. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "5d704991e9bc-12", "text": "included. Defaults to None.\n vector_query_field: Field name to use in knn search if not default 'vector'\n query_field: Field name to use in search if not default 'text'\n Returns:\n The search results.\n Raises:\n ValueError: If neither `query_vector` nor `model_id` is provided, or if\n both are provided.\n \"\"\"\n knn_query_body = self._default_knn_query(\n query_vector=query_vector, query=query, model_id=model_id, k=k\n )\n # Modify the knn_query_body to add a \"boost\" parameter\n knn_query_body[\"boost\"] = knn_boost\n # Generate the body of the standard Elasticsearch query\n match_query_body = {\n \"match\": {self.query_field: {\"query\": query, \"boost\": query_boost}}\n }\n # Perform the hybrid search on the Elasticsearch index and return the results.\n res = self.client.search(\n index=self.index_name,\n query=match_query_body,\n knn=knn_query_body,\n fields=fields,\n size=size,\n source=source,\n )\n return dict(res)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html"} +{"id": "f0755b0d2bd6-0", "text": "Source code for langchain.vectorstores.mongodb_atlas\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n Generator,\n Iterable,\n List,\n Optional,\n Tuple,\n TypeVar,\n 
Union,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from pymongo.collection import Collection\nMongoDBDocumentType = TypeVar(\"MongoDBDocumentType\", bound=Dict[str, Any])\nlogger = logging.getLogger(__name__)\nDEFAULT_INSERT_BATCH_SIZE = 100\n[docs]class MongoDBAtlasVectorSearch(VectorStore):\n \"\"\"Wrapper around MongoDB Atlas Vector Search.\n To use, you should have both:\n - the ``pymongo`` python package installed\n - a connection string associated with a MongoDB Atlas Cluster having deployed an\n Atlas Search index\n Example:\n .. code-block:: python\n from langchain.vectorstores import MongoDBAtlasVectorSearch\n from langchain.embeddings.openai import OpenAIEmbeddings\n from pymongo import MongoClient\n mongo_client = MongoClient(\"\")\n collection = mongo_client[\"\"][\"\"]\n embeddings = OpenAIEmbeddings()\n vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)\n \"\"\"\n def __init__(\n self,\n collection: Collection[MongoDBDocumentType],\n embedding: Embeddings,\n *,\n index_name: str = \"default\",\n text_key: str = \"text\",\n embedding_key: str = \"embedding\",\n ):\n \"\"\"\n Args:\n collection: MongoDB collection to add the texts to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} +{"id": "f0755b0d2bd6-1", "text": "\"\"\"\n Args:\n collection: MongoDB collection to add the texts to.\n embedding: Text embedding model to use.\n text_key: MongoDB field that will contain the text for each\n document.\n embedding_key: MongoDB field that will contain the embedding for\n each document.\n \"\"\"\n self._collection = collection\n self._embedding = embedding\n self._index_name = index_name\n self._text_key = text_key\n self._embedding_key = embedding_key\n[docs] @classmethod\n def from_connection_string(\n cls,\n connection_string: str,\n namespace: str,\n 
embedding: Embeddings,\n **kwargs: Any,\n ) -> MongoDBAtlasVectorSearch:\n try:\n from pymongo import MongoClient\n except ImportError:\n raise ImportError(\n \"Could not import pymongo, please install it with \"\n \"`pip install pymongo`.\"\n )\n client: MongoClient = MongoClient(connection_string)\n db_name, collection_name = namespace.split(\".\")\n collection = client[db_name][collection_name]\n return cls(collection, embedding, **kwargs)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[Dict[str, Any]]] = None,\n **kwargs: Any,\n ) -> List:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n batch_size = kwargs.get(\"batch_size\", DEFAULT_INSERT_BATCH_SIZE)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} +{"id": "f0755b0d2bd6-2", "text": "\"\"\"\n batch_size = kwargs.get(\"batch_size\", DEFAULT_INSERT_BATCH_SIZE)\n _metadatas: Union[List, Generator] = metadatas or ({} for _ in texts)\n texts_batch = []\n metadatas_batch = []\n result_ids = []\n for i, (text, metadata) in enumerate(zip(texts, _metadatas)):\n texts_batch.append(text)\n metadatas_batch.append(metadata)\n if (i + 1) % batch_size == 0:\n result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))\n texts_batch = []\n metadatas_batch = []\n if texts_batch:\n result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))\n return result_ids\n def _insert_texts(self, texts: List[str], metadatas: List[Dict[str, Any]]) -> List:\n if not texts:\n return []\n # Embed and create the documents\n embeddings = self._embedding.embed_documents(texts)\n to_insert = [\n {self._text_key: t, self._embedding_key: embedding, **m}\n for t, m, embedding in zip(texts, metadatas, 
embeddings)\n ]\n # insert the documents in MongoDB Atlas\n insert_result = self._collection.insert_many(to_insert)\n return insert_result.inserted_ids\n[docs] def similarity_search_with_score(\n self,\n query: str,\n *,\n k: int = 4,\n pre_filter: Optional[dict] = None,\n post_filter_pipeline: Optional[List[Dict]] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return MongoDB documents most similar to query, along with scores.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} +{"id": "f0755b0d2bd6-3", "text": "\"\"\"Return MongoDB documents most similar to query, along with scores.\n Use the knnBeta Operator available in MongoDB Atlas Search\n This feature is in early access and available only for evaluation purposes, to\n validate functionality, and to gather feedback from a small closed group of\n early access users. It is not recommended for production deployments as we\n may introduce breaking changes.\n For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\n Args:\n query: Text to look up documents similar to.\n k: Optional Number of Documents to return. 
Defaults to 4.\n pre_filter: Optional Dictionary of argument(s) to prefilter on document\n fields.\n post_filter_pipeline: Optional Pipeline of MongoDB aggregation stages\n following the knnBeta search.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n knn_beta = {\n \"vector\": self._embedding.embed_query(query),\n \"path\": self._embedding_key,\n \"k\": k,\n }\n if pre_filter:\n knn_beta[\"filter\"] = pre_filter\n pipeline = [\n {\n \"$search\": {\n \"index\": self._index_name,\n \"knnBeta\": knn_beta,\n }\n },\n {\"$project\": {\"score\": {\"$meta\": \"searchScore\"}, self._embedding_key: 0}},\n ]\n if post_filter_pipeline is not None:\n pipeline.extend(post_filter_pipeline)\n cursor = self._collection.aggregate(pipeline)\n docs = []\n for res in cursor:\n text = res.pop(self._text_key)\n score = res.pop(\"score\")\n docs.append((Document(page_content=text, metadata=res), score))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} +{"id": "f0755b0d2bd6-4", "text": "docs.append((Document(page_content=text, metadata=res), score))\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n pre_filter: Optional[dict] = None,\n post_filter_pipeline: Optional[List[Dict]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return MongoDB documents most similar to query.\n Use the knnBeta Operator available in MongoDB Atlas Search\n This feature is in early access and available only for evaluation purposes, to\n validate functionality, and to gather feedback from a small closed group of\n early access users. It is not recommended for production deployments as we may\n introduce breaking changes.\n For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta\n Args:\n query: Text to look up documents similar to.\n k: Optional Number of Documents to return. 
Defaults to 4.\n pre_filter: Optional Dictionary of argument(s) to prefilter on document\n fields.\n post_filter_pipeline: Optional Pipeline of MongoDB aggregation stages\n following the knnBeta search.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query,\n k=k,\n pre_filter=pre_filter,\n post_filter_pipeline=post_filter_pipeline,\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection: Optional[Collection[MongoDBDocumentType]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} +{"id": "f0755b0d2bd6-5", "text": "collection: Optional[Collection[MongoDBDocumentType]] = None,\n **kwargs: Any,\n ) -> MongoDBAtlasVectorSearch:\n \"\"\"Construct MongoDBAtlasVectorSearch wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Adds the documents to a provided MongoDB Atlas Vector Search index\n (Lucene)\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from pymongo import MongoClient\n from langchain.vectorstores import MongoDBAtlasVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n client = MongoClient(\"\")\n collection = client[\"\"][\"\"]\n embeddings = OpenAIEmbeddings()\n vectorstore = MongoDBAtlasVectorSearch.from_texts(\n texts,\n embeddings,\n metadatas=metadatas,\n collection=collection\n )\n \"\"\"\n if collection is None:\n raise ValueError(\"Must provide 'collection' named parameter.\")\n vecstore = cls(collection, embedding, **kwargs)\n vecstore.add_texts(texts, metadatas=metadatas)\n return vecstore", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/mongodb_atlas.html"} +{"id": "93d38bec2ae3-0", "text": "Source code for langchain.vectorstores.clarifai\nfrom __future__ import annotations\nimport logging\nimport os\nimport traceback\nfrom typing import Any, Iterable, List, Optional, Tuple\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\n[docs]class Clarifai(VectorStore):\n \"\"\"Wrapper around Clarifai AI platform's vector store.\n To use, you should have the ``clarifai`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Clarifai\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = Clarifai(\"langchain_store\", embeddings.embed_query)\n \"\"\"\n def __init__(\n self,\n user_id: Optional[str] = None,\n app_id: Optional[str] = None,\n pat: Optional[str] = None,\n number_of_docs: Optional[int] = None,\n api_base: Optional[str] = None,\n ) -> None:\n \"\"\"Initialize with Clarifai client.\n Args:\n user_id (Optional[str], optional): User ID. Defaults to None.\n app_id (Optional[str], optional): App ID. 
Defaults to None.\n pat (Optional[str], optional): Personal access token. Defaults to None.\n number_of_docs (Optional[int], optional): Number of documents to return\n during vector search. Defaults to None.\n api_base (Optional[str], optional): API base. Defaults to None.\n Raises:\n ValueError: If user ID, app ID or personal access token is not provided.\n \"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} +{"id": "93d38bec2ae3-1", "text": "\"\"\"\n try:\n from clarifai.auth.helper import DEFAULT_BASE, ClarifaiAuthHelper\n from clarifai.client import create_stub\n except ImportError:\n raise ValueError(\n \"Could not import clarifai python package. \"\n \"Please install it with `pip install clarifai`.\"\n )\n self._api_base = api_base or DEFAULT_BASE\n self._user_id = user_id or os.environ.get(\"CLARIFAI_USER_ID\")\n self._app_id = app_id or os.environ.get(\"CLARIFAI_APP_ID\")\n self._pat = pat or os.environ.get(\"CLARIFAI_PAT_KEY\")\n if self._user_id is None or self._app_id is None or self._pat is None:\n raise ValueError(\n \"Could not find CLARIFAI_USER_ID, CLARIFAI_APP_ID or\\\n CLARIFAI_PAT in your environment. 
\"\n \"Please set those env variables with a valid user ID, \\\n app ID and personal access token \\\n from https://clarifai.com/settings/security.\"\n )\n self._auth = ClarifaiAuthHelper(\n user_id=self._user_id,\n app_id=self._app_id,\n pat=self._pat,\n base=self._api_base,\n )\n self._stub = create_stub(self._auth)\n self._userDataObject = self._auth.get_user_app_id_proto()\n self._number_of_docs = number_of_docs\n def _post_text_input(self, text: str, metadata: dict) -> str:\n \"\"\"Post text to Clarifai and return the ID of the input.\n Args:\n text (str): Text to post.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} +{"id": "93d38bec2ae3-2", "text": "Args:\n text (str): Text to post.\n metadata (dict): Metadata to post.\n Returns:\n str: ID of the input.\n \"\"\"\n try:\n from clarifai_grpc.grpc.api import resources_pb2, service_pb2\n from clarifai_grpc.grpc.api.status import status_code_pb2\n from google.protobuf.struct_pb2 import Struct # type: ignore\n except ImportError as e:\n raise ImportError(\n \"Could not import clarifai python package. 
\"\n \"Please install it with `pip install clarifai`.\"\n ) from e\n input_metadata = Struct()\n input_metadata.update(metadata)\n post_inputs_response = self._stub.PostInputs(\n service_pb2.PostInputsRequest(\n user_app_id=self._userDataObject,\n inputs=[\n resources_pb2.Input(\n data=resources_pb2.Data(\n text=resources_pb2.Text(raw=text),\n metadata=input_metadata,\n )\n )\n ],\n )\n )\n if post_inputs_response.status.code != status_code_pb2.SUCCESS:\n logger.error(post_inputs_response.status)\n raise Exception(\n \"Post inputs failed, status: \" + post_inputs_response.status.description\n )\n input_id = post_inputs_response.inputs[0].id\n return input_id\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add texts to the Clarifai vectorstore. This will push the text\n to a Clarifai application.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} +{"id": "93d38bec2ae3-3", "text": "to a Clarifai application.\n Applications use a base workflow that creates and stores an embedding for each text.\n Make sure you are using a base workflow that is compatible with text\n (such as Language Understanding).\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n assert len(list(texts)) > 0, \"No texts provided to add to the vectorstore.\"\n if metadatas is not None:\n assert len(list(texts)) == len(\n metadatas\n ), \"Number of texts and metadatas should be the same.\"\n input_ids = []\n for idx, text in enumerate(texts):\n try:\n metadata = metadatas[idx] if metadatas else {}\n input_id = self._post_text_input(text, metadata)\n input_ids.append(input_id)\n logger.debug(f\"Input {input_id} posted 
successfully.\")\n except Exception as error:\n logger.warning(f\"Post inputs failed: {error}\")\n traceback.print_exc()\n return input_ids\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n namespace: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with score using Clarifai.\n Args:\n query (str): Query text to search for.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} +{"id": "93d38bec2ae3-4", "text": "Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata.\n Defaults to None.\n Returns:\n List[Document]: List of documents most similar to the query text.\n \"\"\"\n try:\n from clarifai_grpc.grpc.api import resources_pb2, service_pb2\n from clarifai_grpc.grpc.api.status import status_code_pb2\n from google.protobuf import json_format # type: ignore\n except ImportError as e:\n raise ImportError(\n \"Could not import clarifai python package. 
\"\n \"Please install it with `pip install clarifai`.\"\n ) from e\n # Get number of docs to return\n if self._number_of_docs is not None:\n k = self._number_of_docs\n post_annotations_searches_response = self._stub.PostAnnotationsSearches(\n service_pb2.PostAnnotationsSearchesRequest(\n user_app_id=self._userDataObject,\n searches=[\n resources_pb2.Search(\n query=resources_pb2.Query(\n ranks=[\n resources_pb2.Rank(\n annotation=resources_pb2.Annotation(\n data=resources_pb2.Data(\n text=resources_pb2.Text(raw=query),\n )\n )\n )\n ]\n )\n )\n ],\n pagination=service_pb2.Pagination(page=1, per_page=k),\n )\n )\n # Check if search was successful\n if post_annotations_searches_response.status.code != status_code_pb2.SUCCESS:\n raise Exception(\n \"Post searches failed, status: \"\n + post_annotations_searches_response.status.description", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} +{"id": "93d38bec2ae3-5", "text": "\"Post searches failed, status: \"\n + post_annotations_searches_response.status.description\n )\n # Retrieve hits\n hits = post_annotations_searches_response.hits\n docs_and_scores = []\n # Iterate over hits and retrieve metadata and text\n for hit in hits:\n metadata = json_format.MessageToDict(hit.input.data.metadata)\n request = requests.get(hit.input.data.text.url)\n # override encoding by real educated guess as provided by chardet\n request.encoding = request.apparent_encoding\n requested_text = request.text\n logger.debug(\n f\"\\tScore {hit.score:.2f} for annotation: {hit.annotation.id}\\\n off input: {hit.input.id}, text: {requested_text[:125]}\"\n )\n docs_and_scores.append(\n (Document(page_content=requested_text, metadata=metadata), hit.score)\n )\n return docs_and_scores\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search using Clarifai.\n Args:\n query: Text to look up documents similar to.\n k: 
Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, **kwargs)\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n user_id: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} +{"id": "93d38bec2ae3-6", "text": "user_id: Optional[str] = None,\n app_id: Optional[str] = None,\n pat: Optional[str] = None,\n number_of_docs: Optional[int] = None,\n api_base: Optional[str] = None,\n **kwargs: Any,\n ) -> Clarifai:\n \"\"\"Create a Clarifai vectorstore from a list of texts.\n Args:\n user_id (str): User ID.\n app_id (str): App ID.\n texts (List[str]): List of texts to add.\n pat (Optional[str]): Personal access token. Defaults to None.\n number_of_docs (Optional[int]): Number of documents to return\n during vector search. Defaults to None.\n api_base (Optional[str]): API base. 
Defaults to None.\n metadatas (Optional[List[dict]]): Optional list of metadatas.\n Defaults to None.\n Returns:\n Clarifai: Clarifai vectorstore.\n \"\"\"\n clarifai_vector_db = cls(\n user_id=user_id,\n app_id=app_id,\n pat=pat,\n number_of_docs=number_of_docs,\n api_base=api_base,\n )\n clarifai_vector_db.add_texts(texts=texts, metadatas=metadatas)\n return clarifai_vector_db\n[docs] @classmethod\n def from_documents(\n cls,\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n user_id: Optional[str] = None,\n app_id: Optional[str] = None,\n pat: Optional[str] = None,\n number_of_docs: Optional[int] = None,\n api_base: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} +{"id": "93d38bec2ae3-7", "text": "api_base: Optional[str] = None,\n **kwargs: Any,\n ) -> Clarifai:\n \"\"\"Create a Clarifai vectorstore from a list of documents.\n Args:\n user_id (str): User ID.\n app_id (str): App ID.\n documents (List[Document]): List of documents to add.\n pat (Optional[str]): Personal access token. Defaults to None.\n number_of_docs (Optional[int]): Number of documents to return\n during vector search. Defaults to None.\n api_base (Optional[str]): API base. 
Defaults to None.\n Returns:\n Clarifai: Clarifai vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n user_id=user_id,\n app_id=app_id,\n texts=texts,\n pat=pat,\n number_of_docs=number_of_docs,\n api_base=api_base,\n metadatas=metadatas,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/clarifai.html"} +{"id": "5a87e9b3d92c-0", "text": "Source code for langchain.vectorstores.chroma\n\"\"\"Wrapper around ChromaDB embeddings platform.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Type\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import xor_args\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n import chromadb\n import chromadb.config\n from chromadb.api.types import ID, OneOrMany, Where, WhereDocument\nlogger = logging.getLogger()\nDEFAULT_K = 4 # Number of Documents to return.\ndef _results_to_docs(results: Any) -> List[Document]:\n return [doc for doc, _ in _results_to_docs_and_scores(results)]\ndef _results_to_docs_and_scores(results: Any) -> List[Tuple[Document, float]]:\n return [\n # TODO: Chroma can do batch querying,\n # we shouldn't hard code to the 1st result\n (Document(page_content=result[0], metadata=result[1] or {}), result[2])\n for result in zip(\n results[\"documents\"][0],\n results[\"metadatas\"][0],\n results[\"distances\"][0],\n )\n ]\n[docs]class Chroma(VectorStore):\n \"\"\"Wrapper around ChromaDB embeddings platform.\n To use, you should have the ``chromadb`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import Chroma\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} +{"id": "5a87e9b3d92c-1", "text": "embeddings = OpenAIEmbeddings()\n vectorstore = Chroma(\"langchain_store\", embeddings)\n \"\"\"\n _LANGCHAIN_DEFAULT_COLLECTION_NAME = \"langchain\"\n def __init__(\n self,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n embedding_function: Optional[Embeddings] = None,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n collection_metadata: Optional[Dict] = None,\n client: Optional[chromadb.Client] = None,\n ) -> None:\n \"\"\"Initialize with Chroma client.\"\"\"\n try:\n import chromadb\n import chromadb.config\n except ImportError:\n raise ValueError(\n \"Could not import chromadb python package. \"\n \"Please install it with `pip install chromadb`.\"\n )\n if client is not None:\n self._client = client\n else:\n if client_settings:\n self._client_settings = client_settings\n else:\n self._client_settings = chromadb.config.Settings()\n if persist_directory is not None:\n self._client_settings = chromadb.config.Settings(\n chroma_db_impl=\"duckdb+parquet\",\n persist_directory=persist_directory,\n )\n self._client = chromadb.Client(self._client_settings)\n self._embedding_function = embedding_function\n self._persist_directory = persist_directory\n self._collection = self._client.get_or_create_collection(\n name=collection_name,\n embedding_function=self._embedding_function.embed_documents\n if self._embedding_function is not None\n else None,\n metadata=collection_metadata,\n )\n @xor_args((\"query_texts\", \"query_embeddings\"))\n def __query_collection(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} +{"id": "5a87e9b3d92c-2", "text": 
"@xor_args((\"query_texts\", \"query_embeddings\"))\n def __query_collection(\n self,\n query_texts: Optional[List[str]] = None,\n query_embeddings: Optional[List[List[float]]] = None,\n n_results: int = 4,\n where: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Query the chroma collection.\"\"\"\n try:\n import chromadb # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import chromadb python package. \"\n \"Please install it with `pip install chromadb`.\"\n )\n return self._collection.query(\n query_texts=query_texts,\n query_embeddings=query_embeddings,\n n_results=n_results,\n where=where,\n **kwargs,\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n # TODO: Handle the case where the user doesn't provide ids on the Collection\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} +{"id": "5a87e9b3d92c-3", "text": "ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = None\n if self._embedding_function is not None:\n embeddings = self._embedding_function.embed_documents(list(texts))\n self._collection.upsert(\n metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids\n )\n return ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with Chroma.\n Args:\n query (str): Query text to 
search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List[Document]: List of documents most similar to the query text.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding (List[float]): Embedding to look up documents similar to.\n k (int): Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5a87e9b3d92c-4", "text": "Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n results = self.__query_collection(\n query_embeddings=embedding, n_results=k, where=filter\n )\n return _results_to_docs(results)\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = DEFAULT_K,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with Chroma with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to\n the query text and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if self._embedding_function is None:\n results = self.__query_collection(\n query_texts=[query], n_results=k, where=filter\n )\n else:\n query_embedding = self._embedding_function.embed_query(query)\n results = self.__query_collection(\n query_embeddings=[query_embedding], n_results=k, where=filter\n )\n return _results_to_docs_and_scores(results)\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n return self.similarity_search_with_score(query, k, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} +{"id": "5a87e9b3d92c-5", "text": "return self.similarity_search_with_score(query, k, **kwargs)\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n results = self.__query_collection(\n query_embeddings=embedding,\n n_results=fetch_k,\n where=filter,\n include=[\"metadatas\", \"documents\", \"distances\", \"embeddings\"],\n )\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding, dtype=np.float32),\n results[\"embeddings\"][0],\n k=k,\n lambda_mult=lambda_mult,\n )\n candidates = _results_to_docs(results)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} +{"id": "5a87e9b3d92c-6", "text": "lambda_mult=lambda_mult,\n )\n candidates = _results_to_docs(results)\n selected_results = [r for i, r in enumerate(candidates) if i in mmr_selected]\n return selected_results\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, str]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on creation.\"\n )\n embedding = self._embedding_function.embed_query(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult, filter=filter\n )\n return docs\n[docs] def delete_collection(self) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5a87e9b3d92c-7", "text": ")\n return docs\n[docs] def delete_collection(self) -> None:\n \"\"\"Delete the collection.\"\"\"\n self._client.delete_collection(self._collection.name)\n[docs] def get(\n self,\n ids: Optional[OneOrMany[ID]] = None,\n where: Optional[Where] = None,\n limit: Optional[int] = None,\n offset: Optional[int] = None,\n where_document: Optional[WhereDocument] = None,\n include: Optional[List[str]] = None,\n ) -> Dict[str, Any]:\n \"\"\"Gets the collection.\n Args:\n ids: The ids of the embeddings to get. Optional.\n where: A Where type dict used to filter results by.\n E.g. `{\"color\" : \"red\", \"price\": 4.20}`. Optional.\n limit: The number of documents to return. Optional.\n offset: The offset to start returning results from.\n Useful for paging results with limit. Optional.\n where_document: A WhereDocument type dict used to filter by the documents.\n E.g. `{$contains: {\"text\": \"hello\"}}`. Optional.\n include: A list of what to include in the results.\n Can contain `\"embeddings\"`, `\"metadatas\"`, `\"documents\"`.\n Ids are always included.\n Defaults to `[\"metadatas\", \"documents\"]`. 
Optional.\n \"\"\"\n kwargs = {\n \"ids\": ids,\n \"where\": where,\n \"limit\": limit,\n \"offset\": offset,\n \"where_document\": where_document,\n }\n if include is not None:\n kwargs[\"include\"] = include\n return self._collection.get(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} +{"id": "5a87e9b3d92c-8", "text": "kwargs[\"include\"] = include\n return self._collection.get(**kwargs)\n[docs] def persist(self) -> None:\n \"\"\"Persist the collection.\n This can be used to explicitly persist the data to disk.\n It will also be called automatically when the object is destroyed.\n \"\"\"\n if self._persist_directory is None:\n raise ValueError(\n \"You must specify a persist_directory on\"\n \"creation to persist the collection.\"\n )\n self._client.persist()\n[docs] def update_document(self, document_id: str, document: Document) -> None:\n \"\"\"Update a document in the collection.\n Args:\n document_id (str): ID of the document to update.\n document (Document): Document to update.\n \"\"\"\n text = document.page_content\n metadata = document.metadata\n if self._embedding_function is None:\n raise ValueError(\n \"For update, you must specify an embedding function on creation.\"\n )\n embeddings = self._embedding_function.embed_documents([text])\n self._collection.update(\n ids=[document_id],\n embeddings=embeddings,\n documents=[text],\n metadatas=[metadata],\n )\n[docs] @classmethod\n def from_texts(\n cls: Type[Chroma],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n client: Optional[chromadb.Client] = None,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} +{"id": 
"5a87e9b3d92c-9", "text": "client: Optional[chromadb.Client] = None,\n **kwargs: Any,\n ) -> Chroma:\n \"\"\"Create a Chroma vectorstore from a list of texts.\n If a persist_directory is specified, the collection will be persisted there.\n Otherwise, the data will be ephemeral in-memory.\n Args:\n texts (List[str]): List of texts to add to the collection.\n collection_name (str): Name of the collection to create.\n persist_directory (Optional[str]): Directory to persist the collection.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.\n ids (Optional[List[str]]): List of document IDs. Defaults to None.\n client_settings (Optional[chromadb.config.Settings]): Chroma client settings\n Returns:\n Chroma: Chroma vectorstore.\n \"\"\"\n chroma_collection = cls(\n collection_name=collection_name,\n embedding_function=embedding,\n persist_directory=persist_directory,\n client_settings=client_settings,\n client=client,\n )\n chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n return chroma_collection\n[docs] @classmethod\n def from_documents(\n cls: Type[Chroma],\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n ids: Optional[List[str]] = None,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n persist_directory: Optional[str] = None,\n client_settings: Optional[chromadb.config.Settings] = None,\n client: Optional[chromadb.Client] = None, # Add this line", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"}
+{"id": "5a87e9b3d92c-10", "text": "client: Optional[chromadb.Client] = None, # Add this line\n **kwargs: Any,\n ) -> Chroma:\n \"\"\"Create a Chroma vectorstore from a list of documents.\n If a persist_directory is specified, the collection will be persisted there.\n Otherwise, the data will be ephemeral in-memory.\n Args:\n collection_name (str): Name of the collection to 
create.\n persist_directory (Optional[str]): Directory to persist the collection.\n ids (Optional[List[str]]): List of document IDs. Defaults to None.\n documents (List[Document]): List of documents to add to the vectorstore.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n client_settings (Optional[chromadb.config.Settings]): Chroma client settings\n Returns:\n Chroma: Chroma vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n persist_directory=persist_directory,\n client_settings=client_settings,\n client=client,\n )\n[docs] def delete(self, ids: List[str]) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n self._collection.delete(ids=ids)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/chroma.html"} +{"id": "bab0078bf4c2-0", "text": "Source code for langchain.vectorstores.qdrant\n\"\"\"Wrapper around Qdrant vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nimport warnings\nfrom itertools import islice\nfrom operator import itemgetter\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Optional,\n Sequence,\n Tuple,\n Type,\n Union,\n)\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n from qdrant_client.conversions import common_types\n from qdrant_client.http import models as rest\n DictFilter = Dict[str, Union[str, int, bool, dict, list]]\n MetadataFilter = Union[DictFilter, common_types.Filter]\n[docs]class Qdrant(VectorStore):\n \"\"\"Wrapper around Qdrant vector database.\n To use you should 
have the ``qdrant-client`` package installed.\n Example:\n .. code-block:: python\n from qdrant_client import QdrantClient\n from langchain import Qdrant\n client = QdrantClient()\n collection_name = \"MyCollection\"\n qdrant = Qdrant(client, collection_name, embedding_function)\n \"\"\"\n CONTENT_KEY = \"page_content\"\n METADATA_KEY = \"metadata\"\n def __init__(\n self,\n client: Any,\n collection_name: str,\n embeddings: Optional[Embeddings] = None,\n content_payload_key: str = CONTENT_KEY,\n metadata_payload_key: str = METADATA_KEY,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-1", "text": "metadata_payload_key: str = METADATA_KEY,\n embedding_function: Optional[Callable] = None, # deprecated\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import qdrant_client\n except ImportError:\n raise ValueError(\n \"Could not import qdrant-client python package. \"\n \"Please install it with `pip install qdrant-client`.\"\n )\n if not isinstance(client, qdrant_client.QdrantClient):\n raise ValueError(\n f\"client should be an instance of qdrant_client.QdrantClient, \"\n f\"got {type(client)}\"\n )\n if embeddings is None and embedding_function is None:\n raise ValueError(\n \"`embeddings` value can't be None. Pass `Embeddings` instance.\"\n )\n if embeddings is not None and embedding_function is not None:\n raise ValueError(\n \"Both `embeddings` and `embedding_function` are passed. \"\n \"Use `embeddings` only.\"\n )\n self.embeddings = embeddings\n self._embeddings_function = embedding_function\n self.client: qdrant_client.QdrantClient = client\n self.collection_name = collection_name\n self.content_payload_key = content_payload_key or self.CONTENT_KEY\n self.metadata_payload_key = metadata_payload_key or self.METADATA_KEY\n if embedding_function is not None:\n warnings.warn(\n \"Using `embedding_function` is deprecated. 
\"\n \"Pass `Embeddings` instance to `embeddings` instead.\"\n )\n if not isinstance(embeddings, Embeddings):\n warnings.warn(\n \"`embeddings` should be an instance of `Embeddings`.\"\n \"Using `embeddings` as `embedding_function` which is deprecated\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-2", "text": "\"Using `embeddings` as `embedding_function` which is deprecated\"\n )\n self._embeddings_function = embeddings\n self.embeddings = None\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[Sequence[str]] = None,\n batch_size: int = 64,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids:\n Optional list of ids to associate with the texts. Ids have to be\n uuid-like strings.\n batch_size:\n How many vectors upload per-request.\n Default: 64\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n from qdrant_client.http import models as rest\n added_ids = []\n texts_iterator = iter(texts)\n metadatas_iterator = iter(metadatas or [])\n ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)])\n while batch_texts := list(islice(texts_iterator, batch_size)):\n # Take the corresponding metadata and id for each text in a batch\n batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None\n batch_ids = list(islice(ids_iterator, batch_size))\n self.client.upsert(\n collection_name=self.collection_name,\n points=rest.Batch.construct(\n ids=batch_ids,\n vectors=self._embed_texts(batch_texts),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-3", "text": "ids=batch_ids,\n vectors=self._embed_texts(batch_texts),\n 
payloads=self._build_payloads(\n batch_texts,\n batch_metadatas,\n self.content_payload_key,\n self.metadata_payload_key,\n ),\n ),\n )\n added_ids.extend(batch_ids)\n return added_ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. 
Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-4", "text": "- int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n results = self.similarity_search_with_score(\n query,\n k,\n filter=filter,\n search_params=search_params,\n offset=offset,\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n return list(map(itemgetter(0), results))\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. 
Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-5", "text": "score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of documents most similar to the query text and cosine\n distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n return self.similarity_search_with_score_by_vector(\n self._embed_query(query),\n k,\n filter=filter,\n search_params=search_params,\n offset=offset,\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Document]:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-6", "text": "**kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding vector to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n results = self.similarity_search_with_score_by_vector(\n embedding,\n k,\n filter=filter,\n search_params=search_params,\n offset=offset,\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n return list(map(itemgetter(0), results))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-7", "text": "**kwargs,\n )\n return list(map(itemgetter(0), results))\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: 
Optional[MetadataFilter] = None,\n search_params: Optional[common_types.SearchParams] = None,\n offset: int = 0,\n score_threshold: Optional[float] = None,\n consistency: Optional[common_types.ReadConsistency] = None,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding vector to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Filter by metadata. Defaults to None.\n search_params: Additional search params\n offset:\n Offset of the first result to return.\n May be used to paginate results.\n Note: large offset values may cause performance issues.\n score_threshold:\n Define a minimal score threshold for the result.\n If defined, less similar results will not be returned.\n Score of the returned result might be higher or smaller than the\n threshold depending on the Distance function used.\n E.g. for cosine similarity only higher scores will be returned.\n consistency:\n Read consistency of the search. Defines how many replicas should be\n queried before returning the result.\n Values:\n - int - number of replicas to query, values should be present in all\n queried replicas\n - 'majority' - query all replicas, but return values present in the\n majority of replicas\n - 'quorum' - query the majority of replicas, return values present in\n all of them", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-8", "text": "all of them\n - 'all' - query all replicas, and return values present in all replicas\n Returns:\n List of documents most similar to the query text and cosine\n distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if filter is not None and isinstance(filter, dict):\n warnings.warn(\n \"Using dict as a `filter` is deprecated. 
Please use qdrant-client \"\n \"filters directly: \"\n \"https://qdrant.tech/documentation/concepts/filtering/\",\n DeprecationWarning,\n )\n qdrant_filter = self._qdrant_filter_from_dict(filter)\n else:\n qdrant_filter = filter\n results = self.client.search(\n collection_name=self.collection_name,\n query_vector=embedding,\n query_filter=qdrant_filter,\n search_params=search_params,\n limit=k,\n offset=offset,\n with_payload=True,\n with_vectors=False, # Langchain does not expect vectors to be returned\n score_threshold=score_threshold,\n consistency=consistency,\n **kwargs,\n )\n return [\n (\n self._document_from_scored_point(\n result, self.content_payload_key, self.metadata_payload_key\n ),\n result.score,\n )\n for result in results\n ]\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores in the range [0, 1].\n 0 is dissimilar, 1 is most similar.\n Args:\n query: input text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-9", "text": "Args:\n query: input text\n k: Number of Documents to return. Defaults to 4.\n **kwargs: kwargs to be passed to similarity search. Should include:\n score_threshold: Optional, a floating point value between 0 and 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of Tuples of (doc, similarity_score)\n \"\"\"\n return self.similarity_search_with_score(query, k, **kwargs)\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self._embed_query(query)\n results = self.client.search(\n collection_name=self.collection_name,\n query_vector=embedding,\n with_payload=True,\n with_vectors=True,\n limit=fetch_k,\n )\n embeddings = [result.vector for result in results]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-10", "text": ")\n embeddings = [result.vector for result in results]\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult\n )\n return [\n self._document_from_scored_point(\n results[i], self.content_payload_key, self.metadata_payload_key\n )\n for i in mmr_selected\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Qdrant],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[Sequence[str]] = None,\n location: Optional[str] = None,\n url: Optional[str] = None,\n port: Optional[int] = 6333,\n grpc_port: int = 6334,\n prefer_grpc: bool = False,\n https: Optional[bool] = None,\n api_key: Optional[str] = None,\n prefix: Optional[str] = None,\n timeout: Optional[float] = None,\n host: Optional[str] = None,\n path: Optional[str] = None,\n collection_name: Optional[str] = None,\n distance_func: str = \"Cosine\",\n content_payload_key: str = CONTENT_KEY,\n metadata_payload_key: str = METADATA_KEY,\n batch_size: int = 64,\n shard_number: Optional[int] = None,\n replication_factor: Optional[int] = None,\n write_consistency_factor: Optional[int] = None,\n on_disk_payload: Optional[bool] = None,\n hnsw_config: 
Optional[common_types.HnswConfigDiff] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-11", "text": "hnsw_config: Optional[common_types.HnswConfigDiff] = None,\n optimizers_config: Optional[common_types.OptimizersConfigDiff] = None,\n wal_config: Optional[common_types.WalConfigDiff] = None,\n quantization_config: Optional[common_types.QuantizationConfig] = None,\n init_from: Optional[common_types.InitFrom] = None,\n **kwargs: Any,\n ) -> Qdrant:\n \"\"\"Construct Qdrant wrapper from a list of texts.\n Args:\n texts: A list of texts to be indexed in Qdrant.\n embedding: A subclass of `Embeddings`, responsible for text vectorization.\n metadatas:\n An optional list of metadata. If provided it has to be of the same\n length as a list of texts.\n ids:\n Optional list of ids to associate with the texts. Ids have to be\n uuid-like strings.\n location:\n If `:memory:` - use in-memory Qdrant instance.\n If `str` - use it as a `url` parameter.\n If `None` - fallback to relying on `host` and `port` parameters.\n url: either host or str of \"Optional[scheme], host, Optional[port],\n Optional[prefix]\". Default: `None`\n port: Port of the REST API interface. Default: 6333\n grpc_port: Port of the gRPC interface. Default: 6334\n prefer_grpc:\n If true - use gRPC interface whenever possible in custom methods.\n Default: False\n https: If true - use HTTPS(SSL) protocol. Default: None\n api_key: API key for authentication in Qdrant Cloud. 
Default: None\n prefix:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-12", "text": "prefix:\n If not None - add prefix to the REST URL path.\n Example: service/v1 will result in\n http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.\n Default: None\n timeout:\n Timeout for REST and gRPC API requests.\n Default: 5.0 seconds for REST and unlimited for gRPC\n host:\n Host name of Qdrant service. If url and host are None, set to\n 'localhost'. Default: None\n path:\n Path in which the vectors will be stored while using local mode.\n Default: None\n collection_name:\n Name of the Qdrant collection to be used. If not provided,\n it will be created randomly. Default: None\n distance_func:\n Distance function. One of: \"Cosine\" / \"Euclid\" / \"Dot\".\n Default: \"Cosine\"\n content_payload_key:\n A payload key used to store the content of the document.\n Default: \"page_content\"\n metadata_payload_key:\n A payload key used to store the metadata of the document.\n Default: \"metadata\"\n batch_size:\n How many vectors to upload per request.\n Default: 64\n shard_number: Number of shards in collection. Default is 1, minimum is 1.\n replication_factor:\n Replication factor for collection. Default is 1, minimum is 1.\n Defines how many copies of each shard will be created.\n Has an effect only in distributed mode.\n write_consistency_factor:\n Write consistency factor for collection. Default is 1, minimum is 1.\n Defines how many replicas should apply the operation for us to consider", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-13", "text": "Defines how many replicas should apply the operation for us to consider\n it successful. 
Increasing this number will make the collection more\n resilient to inconsistencies, but will also make it fail if not enough\n replicas are available.\n Does not have any performance impact.\n Has an effect only in distributed mode.\n on_disk_payload:\n If true - point's payload will not be stored in memory.\n It will be read from the disk every time it is requested.\n This setting saves RAM by (slightly) increasing the response time.\n Note: those payload values that are involved in filtering and are\n indexed - remain in RAM.\n hnsw_config: Params for HNSW index\n optimizers_config: Params for optimizer\n wal_config: Params for Write-Ahead-Log\n quantization_config:\n Params for quantization, if None - quantization will be disabled\n init_from:\n Use data stored in another collection to initialize this collection\n **kwargs:\n Additional arguments passed directly into REST client initialization\n This is a user-friendly interface that:\n 1. Creates embeddings, one for each text\n 2. Initializes the Qdrant database as an in-memory docstore by default\n (and overridable to a remote docstore)\n 3. Adds the text embeddings to the Qdrant database\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Qdrant\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n qdrant = Qdrant.from_texts(texts, embeddings, \"localhost\")\n \"\"\"\n try:\n import qdrant_client\n except ImportError:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-14", "text": "import qdrant_client\n except ImportError:\n raise ValueError(\n \"Could not import qdrant-client python package. 
\"\n \"Please install it with `pip install qdrant-client`.\"\n )\n from qdrant_client.http import models as rest\n # Just do a single quick embedding to get vector size\n partial_embeddings = embedding.embed_documents(texts[:1])\n vector_size = len(partial_embeddings[0])\n collection_name = collection_name or uuid.uuid4().hex\n distance_func = distance_func.upper()\n client = qdrant_client.QdrantClient(\n location=location,\n url=url,\n port=port,\n grpc_port=grpc_port,\n prefer_grpc=prefer_grpc,\n https=https,\n api_key=api_key,\n prefix=prefix,\n timeout=timeout,\n host=host,\n path=path,\n **kwargs,\n )\n client.recreate_collection(\n collection_name=collection_name,\n vectors_config=rest.VectorParams(\n size=vector_size,\n distance=rest.Distance[distance_func],\n ),\n shard_number=shard_number,\n replication_factor=replication_factor,\n write_consistency_factor=write_consistency_factor,\n on_disk_payload=on_disk_payload,\n hnsw_config=hnsw_config,\n optimizers_config=optimizers_config,\n wal_config=wal_config,\n quantization_config=quantization_config,\n init_from=init_from,\n timeout=timeout, # type: ignore[arg-type]\n )\n texts_iterator = iter(texts)\n metadatas_iterator = iter(metadatas or [])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-15", "text": "metadatas_iterator = iter(metadatas or [])\n ids_iterator = iter(ids or [uuid.uuid4().hex for _ in iter(texts)])\n while batch_texts := list(islice(texts_iterator, batch_size)):\n # Take the corresponding metadata and id for each text in a batch\n batch_metadatas = list(islice(metadatas_iterator, batch_size)) or None\n batch_ids = list(islice(ids_iterator, batch_size))\n # Generate the embeddings for all the texts in a batch\n batch_embeddings = embedding.embed_documents(batch_texts)\n client.upsert(\n collection_name=collection_name,\n points=rest.Batch.construct(\n ids=batch_ids,\n vectors=batch_embeddings,\n 
payloads=cls._build_payloads(\n batch_texts,\n batch_metadatas,\n content_payload_key,\n metadata_payload_key,\n ),\n ),\n )\n return cls(\n client=client,\n collection_name=collection_name,\n embeddings=embedding,\n content_payload_key=content_payload_key,\n metadata_payload_key=metadata_payload_key,\n )\n @classmethod\n def _build_payloads(\n cls,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n content_payload_key: str,\n metadata_payload_key: str,\n ) -> List[dict]:\n payloads = []\n for i, text in enumerate(texts):\n if text is None:\n raise ValueError(\n \"At least one of the texts is None. Please remove it before \"\n \"calling .from_texts or .add_texts on Qdrant instance.\"\n )\n metadata = metadatas[i] if metadatas is not None else None\n payloads.append(\n {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-16", "text": "payloads.append(\n {\n content_payload_key: text,\n metadata_payload_key: metadata,\n }\n )\n return payloads\n @classmethod\n def _document_from_scored_point(\n cls,\n scored_point: Any,\n content_payload_key: str,\n metadata_payload_key: str,\n ) -> Document:\n return Document(\n page_content=scored_point.payload.get(content_payload_key),\n metadata=scored_point.payload.get(metadata_payload_key) or {},\n )\n def _build_condition(self, key: str, value: Any) -> List[rest.FieldCondition]:\n from qdrant_client.http import models as rest\n out = []\n if isinstance(value, dict):\n for _key, value in value.items():\n out.extend(self._build_condition(f\"{key}.{_key}\", value))\n elif isinstance(value, list):\n for _value in value:\n if isinstance(_value, dict):\n out.extend(self._build_condition(f\"{key}[]\", _value))\n else:\n out.extend(self._build_condition(f\"{key}\", _value))\n else:\n out.append(\n rest.FieldCondition(\n key=f\"{self.metadata_payload_key}.{key}\",\n match=rest.MatchValue(value=value),\n )\n )\n return out\n def 
_qdrant_filter_from_dict(\n self, filter: Optional[DictFilter]\n ) -> Optional[rest.Filter]:\n from qdrant_client.http import models as rest\n if not filter:\n return None\n return rest.Filter(\n must=[\n condition\n for key, value in filter.items()\n for condition in self._build_condition(key, value)\n ]\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "bab0078bf4c2-17", "text": "for condition in self._build_condition(key, value)\n ]\n )\n def _embed_query(self, query: str) -> List[float]:\n \"\"\"Embed query text.\n Used to provide backward compatibility with `embedding_function` argument.\n Args:\n query: Query text.\n Returns:\n List of floats representing the query embedding.\n \"\"\"\n if self.embeddings is not None:\n embedding = self.embeddings.embed_query(query)\n else:\n if self._embeddings_function is not None:\n embedding = self._embeddings_function(query)\n else:\n raise ValueError(\"Neither of embeddings or embedding_function is set\")\n return embedding.tolist() if hasattr(embedding, \"tolist\") else embedding\n def _embed_texts(self, texts: Iterable[str]) -> List[List[float]]:\n \"\"\"Embed search texts.\n Used to provide backward compatibility with `embedding_function` argument.\n Args:\n texts: Iterable of texts to embed.\n Returns:\n List of floats representing the texts embedding.\n \"\"\"\n if self.embeddings is not None:\n embeddings = self.embeddings.embed_documents(list(texts))\n if hasattr(embeddings, \"tolist\"):\n embeddings = embeddings.tolist()\n elif self._embeddings_function is not None:\n embeddings = []\n for text in texts:\n embedding = self._embeddings_function(text)\n if hasattr(embedding, \"tolist\"):\n embedding = embedding.tolist()\n embeddings.append(embedding)\n else:\n raise ValueError(\"Neither of embeddings or embedding_function is set\")\n return embeddings", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/qdrant.html"} +{"id": "e7b997dd36d6-0", "text": "Source code for langchain.vectorstores.azuresearch\n\"\"\"Wrapper around Azure Cognitive Search.\"\"\"\nfrom __future__ import annotations\nimport base64\nimport json\nimport logging\nimport uuid\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n)\nimport numpy as np\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever\nfrom langchain.utils import get_from_env\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\nif TYPE_CHECKING:\n from azure.search.documents import SearchClient\n# Allow overriding field names for Azure Search\nFIELDS_ID = get_from_env(\n key=\"AZURESEARCH_FIELDS_ID\", env_key=\"AZURESEARCH_FIELDS_ID\", default=\"id\"\n)\nFIELDS_CONTENT = get_from_env(\n key=\"AZURESEARCH_FIELDS_CONTENT\",\n env_key=\"AZURESEARCH_FIELDS_CONTENT\",\n default=\"content\",\n)\nFIELDS_CONTENT_VECTOR = get_from_env(\n key=\"AZURESEARCH_FIELDS_CONTENT_VECTOR\",\n env_key=\"AZURESEARCH_FIELDS_CONTENT_VECTOR\",\n default=\"content_vector\",\n)\nFIELDS_METADATA = get_from_env(\n key=\"AZURESEARCH_FIELDS_TAG\", env_key=\"AZURESEARCH_FIELDS_TAG\", default=\"metadata\"\n)\nMAX_UPLOAD_BATCH_SIZE = 1000\ndef _get_search_client(\n endpoint: str,\n key: str,\n index_name: str,\n embedding_function: Callable,\n semantic_configuration_name: Optional[str] = None,\n) -> SearchClient:\n from azure.core.credentials import AzureKeyCredential\n from azure.core.exceptions import ResourceNotFoundError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-1", "text": "from azure.core.credentials import AzureKeyCredential\n from azure.core.exceptions import 
ResourceNotFoundError\n from azure.identity import DefaultAzureCredential\n from azure.search.documents import SearchClient\n from azure.search.documents.indexes import SearchIndexClient\n from azure.search.documents.indexes.models import (\n PrioritizedFields,\n SearchableField,\n SearchField,\n SearchFieldDataType,\n SearchIndex,\n SemanticConfiguration,\n SemanticField,\n SemanticSettings,\n SimpleField,\n VectorSearch,\n VectorSearchAlgorithmConfiguration,\n )\n if key is None:\n credential = DefaultAzureCredential()\n else:\n credential = AzureKeyCredential(key)\n index_client: SearchIndexClient = SearchIndexClient(\n endpoint=endpoint, credential=credential\n )\n try:\n index_client.get_index(name=index_name)\n except ResourceNotFoundError:\n # Fields configuration\n fields = [\n SimpleField(\n name=FIELDS_ID,\n type=SearchFieldDataType.String,\n key=True,\n filterable=True,\n ),\n SearchableField(\n name=FIELDS_CONTENT,\n type=SearchFieldDataType.String,\n searchable=True,\n retrievable=True,\n ),\n SearchField(\n name=FIELDS_CONTENT_VECTOR,\n type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n searchable=True,\n dimensions=len(embedding_function(\"Text\")),\n vector_search_configuration=\"default\",\n ),\n SearchableField(\n name=FIELDS_METADATA,\n type=SearchFieldDataType.String,\n searchable=True,\n retrievable=True,\n ),\n ]\n # Vector search configuration\n vector_search = VectorSearch(\n algorithm_configurations=[\n VectorSearchAlgorithmConfiguration(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-2", "text": "algorithm_configurations=[\n VectorSearchAlgorithmConfiguration(\n name=\"default\",\n kind=\"hnsw\",\n hnsw_parameters={\n \"m\": 4,\n \"efConstruction\": 400,\n \"efSearch\": 500,\n \"metric\": \"cosine\",\n },\n )\n ]\n )\n # Create the semantic settings with the configuration\n semantic_settings = (\n None\n if semantic_configuration_name is 
None\n else SemanticSettings(\n configurations=[\n SemanticConfiguration(\n name=semantic_configuration_name,\n prioritized_fields=PrioritizedFields(\n prioritized_content_fields=[\n SemanticField(field_name=FIELDS_CONTENT)\n ],\n ),\n )\n ]\n )\n )\n # Create the search index with the semantic settings and vector search\n index = SearchIndex(\n name=index_name,\n fields=fields,\n vector_search=vector_search,\n semantic_settings=semantic_settings,\n )\n index_client.create_index(index)\n # Create the search client\n return SearchClient(endpoint=endpoint, index_name=index_name, credential=credential)\n[docs]class AzureSearch(VectorStore):\n def __init__(\n self,\n azure_search_endpoint: str,\n azure_search_key: str,\n index_name: str,\n embedding_function: Callable,\n search_type: str = \"hybrid\",\n semantic_configuration_name: Optional[str] = None,\n semantic_query_language: str = \"en-us\",\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n # Initialize base class\n self.embedding_function = embedding_function\n self.client = _get_search_client(\n azure_search_endpoint,\n azure_search_key,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-3", "text": "azure_search_endpoint,\n azure_search_key,\n index_name,\n embedding_function,\n semantic_configuration_name,\n )\n self.search_type = search_type\n self.semantic_configuration_name = semantic_configuration_name\n self.semantic_query_language = semantic_query_language\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add texts data to an existing index.\"\"\"\n keys = kwargs.get(\"keys\")\n ids = []\n # Write data to index\n data = []\n for i, text in enumerate(texts):\n # Use provided key otherwise use default key\n key = keys[i] if keys else str(uuid.uuid4())\n # Encoding key for Azure Search valid characters\n key = 
base64.urlsafe_b64encode(bytes(key, \"utf-8\")).decode(\"ascii\")\n metadata = metadatas[i] if metadatas else {}\n # Add data to index\n data.append(\n {\n \"@search.action\": \"upload\",\n FIELDS_ID: key,\n FIELDS_CONTENT: text,\n FIELDS_CONTENT_VECTOR: np.array(\n self.embedding_function(text), dtype=np.float32\n ).tolist(),\n FIELDS_METADATA: json.dumps(metadata),\n }\n )\n ids.append(key)\n # Upload data in batches\n if len(data) == MAX_UPLOAD_BATCH_SIZE:\n response = self.client.upload_documents(documents=data)\n # Check if all documents were successfully uploaded\n if not all([r.succeeded for r in response]):\n raise Exception(response)\n # Reset data\n data = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-4", "text": "raise Exception(response)\n # Reset data\n data = []\n # Considering case where data is an exact multiple of batch-size entries\n if len(data) == 0:\n return ids\n # Upload data to index\n response = self.client.upload_documents(documents=data)\n # Check if all documents were successfully uploaded\n if all([r.succeeded for r in response]):\n return ids\n else:\n raise Exception(response)\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n search_type = kwargs.get(\"search_type\", self.search_type)\n if search_type == \"similarity\":\n docs = self.vector_search(query, k=k, **kwargs)\n elif search_type == \"hybrid\":\n docs = self.hybrid_search(query, k=k, **kwargs)\n elif search_type == \"semantic_hybrid\":\n docs = self.semantic_hybrid_search(query, k=k, **kwargs)\n else:\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return docs\n[docs] def vector_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of 
documents to return. Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.vector_search_with_score(\n query, k=k, filters=kwargs.get(\"filters\", None)\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-5", "text": "query, k=k, filters=kwargs.get(\"filters\", None)\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def vector_search_with_score(\n self, query: str, k: int = 4, filters: Optional[str] = None\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n from azure.search.documents.models import Vector\n results = self.client.search(\n search_text=\"\",\n vector=Vector(\n value=np.array(\n self.embedding_function(query), dtype=np.float32\n ).tolist(),\n k=k,\n fields=FIELDS_CONTENT_VECTOR,\n ),\n select=[f\"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}\"],\n filter=filters,\n )\n # Convert results to Document objects\n docs = [\n (\n Document(\n page_content=result[FIELDS_CONTENT],\n metadata=json.loads(result[FIELDS_METADATA]),\n ),\n float(result[\"@search.score\"]),\n )\n for result in results\n ]\n return docs\n[docs] def hybrid_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. 
Default is 4.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-6", "text": "Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.hybrid_search_with_score(\n query, k=k, filters=kwargs.get(\"filters\", None)\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def hybrid_search_with_score(\n self, query: str, k: int = 4, filters: Optional[str] = None\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query with a hybrid query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n from azure.search.documents.models import Vector\n results = self.client.search(\n search_text=query,\n vector=Vector(\n value=np.array(\n self.embedding_function(query), dtype=np.float32\n ).tolist(),\n k=k,\n fields=FIELDS_CONTENT_VECTOR,\n ),\n select=[f\"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}\"],\n filter=filters,\n top=k,\n )\n # Convert results to Document objects\n docs = [\n (\n Document(\n page_content=result[FIELDS_CONTENT],\n metadata=json.loads(result[FIELDS_METADATA]),\n ),\n float(result[\"@search.score\"]),\n )\n for result in results\n ]\n return docs\n[docs] def semantic_hybrid_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-7", "text": ") -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. 
Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.semantic_hybrid_search_with_score(\n query, k=k, filters=kwargs.get(\"filters\", None)\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def semantic_hybrid_search_with_score(\n self, query: str, k: int = 4, filters: Optional[str] = None\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query with a hybrid query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n from azure.search.documents.models import Vector\n results = self.client.search(\n search_text=query,\n vector=Vector(\n value=np.array(\n self.embedding_function(query), dtype=np.float32\n ).tolist(),\n k=50, # Hardcoded value to maximize L2 retrieval\n fields=FIELDS_CONTENT_VECTOR,\n ),\n select=[f\"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}\"],\n filter=filters,\n query_type=\"semantic\",\n query_language=self.semantic_query_language,\n semantic_configuration_name=self.semantic_configuration_name,\n query_caption=\"extractive\",\n query_answer=\"extractive\",\n top=k,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-8", "text": "query_answer=\"extractive\",\n top=k,\n )\n # Get Semantic Answers\n semantic_answers = results.get_answers()\n semantic_answers_dict = {}\n for semantic_answer in semantic_answers:\n semantic_answers_dict[semantic_answer.key] = {\n \"text\": semantic_answer.text,\n \"highlights\": semantic_answer.highlights,\n }\n # Convert results to Document objects\n docs = [\n (\n Document(\n page_content=result[\"content\"],\n metadata={\n **json.loads(result[\"metadata\"]),\n **{\n \"captions\": {\n \"text\": result.get(\"@search.captions\", [{}])[0].text,\n \"highlights\": 
result.get(\"@search.captions\", [{}])[\n 0\n ].highlights,\n }\n if result.get(\"@search.captions\")\n else {},\n \"answers\": semantic_answers_dict.get(\n json.loads(result[\"metadata\"]).get(\"key\"), \"\"\n ),\n },\n },\n ),\n float(result[\"@search.score\"]),\n )\n for result in results\n ]\n return docs\n[docs] @classmethod\n def from_texts(\n cls: Type[AzureSearch],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n azure_search_endpoint: str = \"\",\n azure_search_key: str = \"\",\n index_name: str = \"langchain-index\",\n **kwargs: Any,\n ) -> AzureSearch:\n # Creating a new Azure Search instance\n azure_search = cls(\n azure_search_endpoint,\n azure_search_key,\n index_name,\n embedding.embed_query,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "e7b997dd36d6-9", "text": "azure_search_key,\n index_name,\n embedding.embed_query,\n )\n azure_search.add_texts(texts, metadatas, **kwargs)\n return azure_search\nclass AzureSearchVectorStoreRetriever(BaseRetriever, BaseModel):\n vectorstore: AzureSearch\n search_type: str = \"hybrid\"\n k: int = 4\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n if \"search_type\" in values:\n search_type = values[\"search_type\"]\n if search_type not in (\"similarity\", \"hybrid\", \"semantic_hybrid\"):\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return values\n def get_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.vector_search(query, k=self.k)\n elif self.search_type == \"hybrid\":\n docs = self.vectorstore.hybrid_search(query, k=self.k)\n elif self.search_type == \"semantic_hybrid\":\n docs = self.vectorstore.semantic_hybrid_search(query, k=self.k)\n 
else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\n \"AzureSearchVectorStoreRetriever does not support async\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/azuresearch.html"} +{"id": "f4a04940c247-0", "text": "Source code for langchain.vectorstores.cassandra\n\"\"\"Wrapper around Cassandra vector-store capabilities, based on cassIO.\"\"\"\nfrom __future__ import annotations\nimport hashlib\nimport typing\nfrom typing import Any, Iterable, List, Optional, Tuple, Type, TypeVar\nimport numpy as np\nif typing.TYPE_CHECKING:\n from cassandra.cluster import Session\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nCVST = TypeVar(\"CVST\", bound=\"Cassandra\")\n# a positive number of seconds to expire entries, or None for no expiration.\nCASSANDRA_VECTORSTORE_DEFAULT_TTL_SECONDS = None\ndef _hash(_input: str) -> str:\n \"\"\"Use a deterministic hashing approach.\"\"\"\n return hashlib.md5(_input.encode()).hexdigest()\n[docs]class Cassandra(VectorStore):\n \"\"\"Wrapper around Cassandra embeddings platform.\n There is no notion of a default table name, since each embedding\n function implies its own vector dimension, which is part of the schema.\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import Cassandra\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n session = ...\n keyspace = 'my_keyspace'\n vectorstore = Cassandra(embeddings, session, keyspace, 'my_doc_archive')\n \"\"\"\n _embedding_dimension: int | None\n def _getEmbeddingDimension(self) -> int:\n if self._embedding_dimension is None:\n self._embedding_dimension = len(\n self.embedding.embed_query(\"This is a sample sentence.\")\n )\n return self._embedding_dimension\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "f4a04940c247-1", "text": ")\n return self._embedding_dimension\n def __init__(\n self,\n embedding: Embeddings,\n session: Session,\n keyspace: str,\n table_name: str,\n ttl_seconds: int | None = CASSANDRA_VECTORSTORE_DEFAULT_TTL_SECONDS,\n ) -> None:\n try:\n from cassio.vector import VectorTable\n except (ImportError, ModuleNotFoundError):\n raise ImportError(\n \"Could not import cassio python package. 
\"\n \"Please install it with `pip install cassio`.\"\n )\n \"\"\"Create a vector table.\"\"\"\n self.embedding = embedding\n self.session = session\n self.keyspace = keyspace\n self.table_name = table_name\n self.ttl_seconds = ttl_seconds\n #\n self._embedding_dimension = None\n #\n self.table = VectorTable(\n session=session,\n keyspace=keyspace,\n table=table_name,\n embedding_dimension=self._getEmbeddingDimension(),\n auto_id=False, # the `add_texts` contract admits user-provided ids\n )\n[docs] def delete_collection(self) -> None:\n \"\"\"\n Just an alias for `clear`\n (to better align with other VectorStore implementations).\n \"\"\"\n self.clear()\n[docs] def clear(self) -> None:\n \"\"\"Empty the collection.\"\"\"\n self.table.clear()\n[docs] def delete_by_document_id(self, document_id: str) -> None:\n return self.table.delete(document_id)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "f4a04940c247-2", "text": "ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n _texts = list(texts) # lest it be a generator or something\n if ids is None:\n # unless otherwise specified, we have deterministic IDs:\n # re-inserting an existing document will not create a duplicate.\n # (and effectively update the metadata)\n ids = [_hash(text) for text in _texts]\n if metadatas is None:\n metadatas = [{} for _ in _texts]\n #\n ttl_seconds = kwargs.get(\"ttl_seconds\", self.ttl_seconds)\n #\n embedding_vectors = 
self.embedding.embed_documents(_texts)\n for text, embedding_vector, text_id, metadata in zip(\n _texts, embedding_vectors, ids, metadatas\n ):\n self.table.put(\n document=text,\n embedding_vector=embedding_vector,\n document_id=text_id,\n metadata=metadata,\n ttl_seconds=ttl_seconds,\n )\n #\n return ids\n # id-returning search facilities\n[docs] def similarity_search_with_score_id_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n ) -> List[Tuple[Document, float, str]]:\n \"\"\"Return docs most similar to embedding vector.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "f4a04940c247-3", "text": "\"\"\"Return docs most similar to embedding vector.\n No support for `filter` query (on metadata) along with vector search.\n Args:\n embedding (str): Embedding to look up documents similar to.\n k (int): Number of Documents to return. Defaults to 4.\n Returns:\n List of (Document, score, id), the most similar to the query vector.\n \"\"\"\n hits = self.table.search(\n embedding_vector=embedding,\n top_k=k,\n metric=\"cos\",\n metric_threshold=None,\n )\n # We stick to 'cos' distance as it can be normalized on a 0-1 axis\n # (1=most relevant), as required by this class' contract.\n return [\n (\n Document(\n page_content=hit[\"document\"],\n metadata=hit[\"metadata\"],\n ),\n 0.5 + 0.5 * hit[\"distance\"],\n hit[\"document_id\"],\n )\n for hit in hits\n ]\n[docs] def similarity_search_with_score_id(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float, str]]:\n embedding_vector = self.embedding.embed_query(query)\n return self.similarity_search_with_score_id_by_vector(\n embedding=embedding_vector,\n k=k,\n )\n # id-unaware search facilities\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to embedding vector.", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "f4a04940c247-4", "text": "\"\"\"Return docs most similar to embedding vector.\n No support for `filter` query (on metadata) along with vector search.\n Args:\n embedding (str): Embedding to look up documents similar to.\n k (int): Number of Documents to return. Defaults to 4.\n Returns:\n List of (Document, score), the most similar to the query vector.\n \"\"\"\n return [\n (doc, score)\n for (doc, score, docId) in self.similarity_search_with_score_id_by_vector(\n embedding=embedding,\n k=k,\n )\n ]\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n #\n embedding_vector = self.embedding.embed_query(query)\n return self.similarity_search_by_vector(\n embedding_vector,\n k,\n **kwargs,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n return [\n doc\n for doc, _ in self.similarity_search_with_score_by_vector(\n embedding,\n k,\n )\n ]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n embedding_vector = self.embedding.embed_query(query)\n return self.similarity_search_with_score_by_vector(\n embedding_vector,\n k,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "f4a04940c247-5", "text": "embedding_vector,\n k,\n )\n # Even though this is a `_`-method,\n # it is apparently used by VectorSearch parent class\n # in an exposed method (`similarity_search_with_relevance_scores`).\n # So we implement it (hmm).\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n return self.similarity_search_with_score(\n query,\n k,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n 
embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n prefetchHits = self.table.search(\n embedding_vector=embedding,\n top_k=fetch_k,\n metric=\"cos\",\n metric_threshold=None,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "f4a04940c247-6", "text": "metric=\"cos\",\n metric_threshold=None,\n )\n # let the mmr utility pick the *indices* in the above array\n mmrChosenIndices = maximal_marginal_relevance(\n np.array(embedding, dtype=np.float32),\n [pfHit[\"embedding_vector\"] for pfHit in prefetchHits],\n k=k,\n lambda_mult=lambda_mult,\n )\n mmrHits = [\n pfHit\n for pfIndex, pfHit in enumerate(prefetchHits)\n if pfIndex in mmrChosenIndices\n ]\n return [\n Document(\n page_content=hit[\"document\"],\n metadata=hit[\"metadata\"],\n )\n for hit in mmrHits\n ]\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return.\n fetch_k: Number of Documents to fetch to pass to MMR 
algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Optional.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding_vector = self.embedding.embed_query(query)\n return self.max_marginal_relevance_search_by_vector(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "f4a04940c247-7", "text": "return self.max_marginal_relevance_search_by_vector(\n embedding_vector,\n k,\n fetch_k,\n lambda_mult=lambda_mult,\n )\n[docs] @classmethod\n def from_texts(\n cls: Type[CVST],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> CVST:\n \"\"\"Create a Cassandra vectorstore from raw texts.\n No support for specifying text IDs\n Returns:\n a Cassandra vectorstore.\n \"\"\"\n session: Session = kwargs[\"session\"]\n keyspace: str = kwargs[\"keyspace\"]\n table_name: str = kwargs[\"table_name\"]\n cassandraStore = cls(\n embedding=embedding,\n session=session,\n keyspace=keyspace,\n table_name=table_name,\n )\n cassandraStore.add_texts(texts=texts, metadatas=metadatas)\n return cassandraStore\n[docs] @classmethod\n def from_documents(\n cls: Type[CVST],\n documents: List[Document],\n embedding: Embeddings,\n **kwargs: Any,\n ) -> CVST:\n \"\"\"Create a Cassandra vectorstore from a document list.\n No support for specifying text IDs\n Returns:\n a Cassandra vectorstore.\n \"\"\"\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n session: Session = kwargs[\"session\"]\n keyspace: str = kwargs[\"keyspace\"]\n table_name: str = kwargs[\"table_name\"]\n return cls.from_texts(\n texts=texts,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "f4a04940c247-8", "text": "return cls.from_texts(\n 
texts=texts,\n metadatas=metadatas,\n embedding=embedding,\n session=session,\n keyspace=keyspace,\n table_name=table_name,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/cassandra.html"} +{"id": "5cf61cd6372d-0", "text": "Source code for langchain.vectorstores.lancedb\n\"\"\"Wrapper around LanceDB vector database\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Iterable, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\n[docs]class LanceDB(VectorStore):\n \"\"\"Wrapper around LanceDB vector database.\n To use, you should have ``lancedb`` python package installed.\n Example:\n .. code-block:: python\n db = lancedb.connect('./lancedb')\n table = db.open_table('my_table')\n vectorstore = LanceDB(table, embedding_function)\n vectorstore.add_texts(['text1', 'text2'])\n result = vectorstore.similarity_search('text1')\n \"\"\"\n def __init__(\n self,\n connection: Any,\n embedding: Embeddings,\n vector_key: Optional[str] = \"vector\",\n id_key: Optional[str] = \"id\",\n text_key: Optional[str] = \"text\",\n ):\n \"\"\"Initialize with Lance DB connection\"\"\"\n try:\n import lancedb\n except ImportError:\n raise ValueError(\n \"Could not import lancedb python package. 
\"\n \"Please install it with `pip install lancedb`.\"\n )\n if not isinstance(connection, lancedb.db.LanceTable):\n raise ValueError(\n \"connection should be an instance of lancedb.db.LanceTable, \",\n f\"got {type(connection)}\",\n )\n self._connection = connection\n self._embedding = embedding\n self._vector_key = vector_key\n self._id_key = id_key\n self._text_key = text_key", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"} +{"id": "5cf61cd6372d-1", "text": "self._id_key = id_key\n self._text_key = text_key\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Turn texts into embedding and add it to the database\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n Returns:\n List of ids of the added texts.\n \"\"\"\n # Embed texts and create documents\n docs = []\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n embeddings = self._embedding.embed_documents(list(texts))\n for idx, text in enumerate(texts):\n embedding = embeddings[idx]\n metadata = metadatas[idx] if metadatas else {}\n docs.append(\n {\n self._vector_key: embedding,\n self._id_key: ids[idx],\n self._text_key: text,\n **metadata,\n }\n )\n self._connection.add(docs)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return documents most similar to the query\n Args:\n query: String to query the vectorstore with.\n k: Number of documents to return.\n Returns:\n List of documents most similar to the query.\n \"\"\"\n embedding = self._embedding.embed_query(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"} +{"id": "5cf61cd6372d-2", "text": "\"\"\"\n embedding = 
self._embedding.embed_query(query)\n docs = self._connection.search(embedding).limit(k).to_df()\n return [\n Document(\n page_content=row[self._text_key],\n metadata=row[docs.columns != self._text_key],\n )\n for _, row in docs.iterrows()\n ]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n connection: Any = None,\n vector_key: Optional[str] = \"vector\",\n id_key: Optional[str] = \"id\",\n text_key: Optional[str] = \"text\",\n **kwargs: Any,\n ) -> LanceDB:\n instance = LanceDB(\n connection,\n embedding,\n vector_key,\n id_key,\n text_key,\n )\n instance.add_texts(texts, metadatas=metadatas, **kwargs)\n return instance", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/lancedb.html"} +{"id": "fce4160e68f6-0", "text": "Source code for langchain.vectorstores.sklearn\n\"\"\" Wrapper around scikit-learn NearestNeighbors implementation.\nThe vector store can be persisted in json, bson or parquet format.\n\"\"\"\nimport json\nimport math\nimport os\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, Iterable, List, Literal, Optional, Tuple, Type\nfrom uuid import uuid4\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import guard_import\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nDEFAULT_K = 4 # Number of Documents to return.\nDEFAULT_FETCH_K = 20 # Number of Documents to initially fetch during MMR search.\nclass BaseSerializer(ABC):\n \"\"\"Abstract base class for saving and loading data.\"\"\"\n def __init__(self, persist_path: str) -> None:\n self.persist_path = persist_path\n @classmethod\n @abstractmethod\n def extension(cls) -> str:\n \"\"\"The file extension suggested by this serializer (without dot).\"\"\"\n @abstractmethod\n def save(self, data: Any) -> None:\n \"\"\"Saves 
the data to the persist_path\"\"\"\n @abstractmethod\n def load(self) -> Any:\n \"\"\"Loads the data from the persist_path\"\"\"\nclass JsonSerializer(BaseSerializer):\n \"\"\"Serializes data in json using the json package from python standard library.\"\"\"\n @classmethod\n def extension(cls) -> str:\n return \"json\"\n def save(self, data: Any) -> None:\n with open(self.persist_path, \"w\") as fp:\n json.dump(data, fp)\n def load(self) -> Any:\n with open(self.persist_path, \"r\") as fp:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} +{"id": "fce4160e68f6-1", "text": "with open(self.persist_path, \"r\") as fp:\n return json.load(fp)\nclass BsonSerializer(BaseSerializer):\n \"\"\"Serializes data in binary json using the bson python package.\"\"\"\n def __init__(self, persist_path: str) -> None:\n super().__init__(persist_path)\n self.bson = guard_import(\"bson\")\n @classmethod\n def extension(cls) -> str:\n return \"bson\"\n def save(self, data: Any) -> None:\n with open(self.persist_path, \"wb\") as fp:\n fp.write(self.bson.dumps(data))\n def load(self) -> Any:\n with open(self.persist_path, \"rb\") as fp:\n return self.bson.loads(fp.read())\nclass ParquetSerializer(BaseSerializer):\n \"\"\"Serializes data in Apache Parquet format using the pyarrow package.\"\"\"\n def __init__(self, persist_path: str) -> None:\n super().__init__(persist_path)\n self.pd = guard_import(\"pandas\")\n self.pa = guard_import(\"pyarrow\")\n self.pq = guard_import(\"pyarrow.parquet\")\n @classmethod\n def extension(cls) -> str:\n return \"parquet\"\n def save(self, data: Any) -> None:\n df = self.pd.DataFrame(data)\n table = self.pa.Table.from_pandas(df)\n if os.path.exists(self.persist_path):\n backup_path = str(self.persist_path) + \"-backup\"\n os.rename(self.persist_path, backup_path)\n try:\n self.pq.write_table(table, self.persist_path)\n except Exception as exc:\n os.rename(backup_path, self.persist_path)\n raise exc\n 
else:\n os.remove(backup_path)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} +{"id": "fce4160e68f6-2", "text": "raise exc\n else:\n os.remove(backup_path)\n else:\n self.pq.write_table(table, self.persist_path)\n def load(self) -> Any:\n table = self.pq.read_table(self.persist_path)\n df = table.to_pandas()\n return {col: series.tolist() for col, series in df.items()}\nSERIALIZER_MAP: Dict[str, Type[BaseSerializer]] = {\n \"json\": JsonSerializer,\n \"bson\": BsonSerializer,\n \"parquet\": ParquetSerializer,\n}\nclass SKLearnVectorStoreException(RuntimeError):\n \"\"\"Exception raised by SKLearnVectorStore.\"\"\"\n pass\n[docs]class SKLearnVectorStore(VectorStore):\n \"\"\"A simple in-memory vector store based on the scikit-learn library\n NearestNeighbors implementation.\"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n *,\n persist_path: Optional[str] = None,\n serializer: Literal[\"json\", \"bson\", \"parquet\"] = \"json\",\n metric: str = \"cosine\",\n **kwargs: Any,\n ) -> None:\n np = guard_import(\"numpy\")\n sklearn_neighbors = guard_import(\"sklearn.neighbors\", pip_name=\"scikit-learn\")\n # non-persistent properties\n self._np = np\n self._neighbors = sklearn_neighbors.NearestNeighbors(metric=metric, **kwargs)\n self._neighbors_fitted = False\n self._embedding_function = embedding\n self._persist_path = persist_path\n self._serializer: Optional[BaseSerializer] = None\n if self._persist_path is not None:\n serializer_cls = SERIALIZER_MAP[serializer]\n self._serializer = serializer_cls(persist_path=self._persist_path)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} +{"id": "fce4160e68f6-3", "text": "self._serializer = serializer_cls(persist_path=self._persist_path)\n # data properties\n self._embeddings: List[List[float]] = []\n self._texts: List[str] = []\n self._metadatas: List[dict] = []\n self._ids: List[str] = []\n # cache 
properties\n self._embeddings_np: Any = np.asarray([])\n if self._persist_path is not None and os.path.isfile(self._persist_path):\n self._load()\n[docs] def persist(self) -> None:\n if self._serializer is None:\n raise SKLearnVectorStoreException(\n \"You must specify a persist_path on creation to persist the \"\n \"collection.\"\n )\n data = {\n \"ids\": self._ids,\n \"texts\": self._texts,\n \"metadatas\": self._metadatas,\n \"embeddings\": self._embeddings,\n }\n self._serializer.save(data)\n def _load(self) -> None:\n if self._serializer is None:\n raise SKLearnVectorStoreException(\n \"You must specify a persist_path on creation to load the \" \"collection.\"\n )\n data = self._serializer.load()\n self._embeddings = data[\"embeddings\"]\n self._texts = data[\"texts\"]\n self._metadatas = data[\"metadatas\"]\n self._ids = data[\"ids\"]\n self._update_neighbors()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} +{"id": "fce4160e68f6-4", "text": "**kwargs: Any,\n ) -> List[str]:\n _texts = list(texts)\n _ids = ids or [str(uuid4()) for _ in _texts]\n self._texts.extend(_texts)\n self._embeddings.extend(self._embedding_function.embed_documents(_texts))\n self._metadatas.extend(metadatas or ([{}] * len(_texts)))\n self._ids.extend(_ids)\n self._update_neighbors()\n return _ids\n def _update_neighbors(self) -> None:\n if len(self._embeddings) == 0:\n raise SKLearnVectorStoreException(\n \"No data was added to SKLearnVectorStore.\"\n )\n self._embeddings_np = self._np.asarray(self._embeddings)\n self._neighbors.fit(self._embeddings_np)\n self._neighbors_fitted = True\n def _similarity_index_search_with_score(\n self, query_embedding: List[float], *, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[int, float]]:\n \"\"\"Search k embeddings similar 
to the query embedding. Returns a list of\n (index, distance) tuples.\"\"\"\n if not self._neighbors_fitted:\n raise SKLearnVectorStoreException(\n \"No data was added to SKLearnVectorStore.\"\n )\n neigh_dists, neigh_idxs = self._neighbors.kneighbors(\n [query_embedding], n_neighbors=k\n )\n return list(zip(neigh_idxs[0], neigh_dists[0]))\n[docs] def similarity_search_with_score(\n self, query: str, *, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n query_embedding = self._embedding_function.embed_query(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} +{"id": "fce4160e68f6-5", "text": "query_embedding = self._embedding_function.embed_query(query)\n indices_dists = self._similarity_index_search_with_score(\n query_embedding, k=k, **kwargs\n )\n return [\n (\n Document(\n page_content=self._texts[idx],\n metadata={\"id\": self._ids[idx], **self._metadatas[idx]},\n ),\n dist,\n )\n for idx, dist in indices_dists\n ]\n[docs] def similarity_search(\n self, query: str, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Document]:\n docs_scores = self.similarity_search_with_score(query, k=k, **kwargs)\n return [doc for doc, _ in docs_scores]\n def _similarity_search_with_relevance_scores(\n self, query: str, k: int = DEFAULT_K, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n docs_dists = self.similarity_search_with_score(query, k=k, **kwargs)\n docs, dists = zip(*docs_dists)\n scores = [1 / math.exp(dist) for dist in dists]\n return list(zip(list(docs), scores))\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = DEFAULT_K,\n fetch_k: int = DEFAULT_FETCH_K,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up 
documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} +{"id": "fce4160e68f6-6", "text": "Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n indices_dists = self._similarity_index_search_with_score(\n embedding, k=fetch_k, **kwargs\n )\n indices, _ = zip(*indices_dists)\n result_embeddings = self._embeddings_np[indices,]\n mmr_selected = maximal_marginal_relevance(\n self._np.array(embedding, dtype=self._np.float32),\n result_embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n mmr_indices = [indices[i] for i in mmr_selected]\n return [\n Document(\n page_content=self._texts[idx],\n metadata={\"id\": self._ids[idx], **self._metadatas[idx]},\n )\n for idx in mmr_indices\n ]\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = DEFAULT_K,\n fetch_k: int = DEFAULT_FETCH_K,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} +{"id": "fce4160e68f6-7", "text": "among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on creation.\"\n )\n embedding = self._embedding_function.embed_query(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult\n )\n return docs\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n persist_path: Optional[str] = None,\n **kwargs: Any,\n ) -> \"SKLearnVectorStore\":\n vs = SKLearnVectorStore(embedding, persist_path=persist_path, **kwargs)\n vs.add_texts(texts, metadatas=metadatas, ids=ids)\n return vs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/sklearn.html"} +{"id": "075545cdf350-0", "text": "Source code for langchain.vectorstores.analyticdb\n\"\"\"VectorStore wrapper around a Postgres/PGVector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Sequence, Tuple, Type\nfrom sqlalchemy import REAL, Column, String, Table, create_engine, insert, text\nfrom sqlalchemy.dialects.postgresql import ARRAY, JSON, TEXT\nfrom sqlalchemy.engine import Row\ntry:\n from sqlalchemy.orm import declarative_base\nexcept ImportError:\n from sqlalchemy.ext.declarative import declarative_base\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import 
VectorStore\n_LANGCHAIN_DEFAULT_EMBEDDING_DIM = 1536\n_LANGCHAIN_DEFAULT_COLLECTION_NAME = \"langchain_document\"\nBase = declarative_base() # type: Any\n[docs]class AnalyticDB(VectorStore):\n \"\"\"VectorStore implementation using AnalyticDB.\n AnalyticDB is a distributed full PostgresSQL syntax cloud-native database.\n - `connection_string` is a postgres connection string.\n - `embedding_function` any embedding function implementing\n `langchain.embeddings.base.Embeddings` interface.\n - `collection_name` is the name of the collection to use. (default: langchain)\n - NOTE: This is not the name of the table, but the name of the collection.\n The tables will be created when initializing the store (if not exists)\n So, make sure the user has the right permissions to create tables.\n - `pre_delete_collection` if True, will delete the collection if it exists.\n (default: False)\n - Useful for testing.\n \"\"\"\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "075545cdf350-1", "text": "- Useful for testing.\n \"\"\"\n def __init__(\n self,\n connection_string: str,\n embedding_function: Embeddings,\n embedding_dimension: int = _LANGCHAIN_DEFAULT_EMBEDDING_DIM,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n pre_delete_collection: bool = False,\n logger: Optional[logging.Logger] = None,\n ) -> None:\n self.connection_string = connection_string\n self.embedding_function = embedding_function\n self.embedding_dimension = embedding_dimension\n self.collection_name = collection_name\n self.pre_delete_collection = pre_delete_collection\n self.logger = logger or logging.getLogger(__name__)\n self.__post_init__()\n def __post_init__(\n self,\n ) -> None:\n \"\"\"\n Initialize the store.\n \"\"\"\n self.engine = create_engine(self.connection_string)\n self.create_collection()\n[docs] def create_table_if_not_exists(self) -> None:\n # Define the dynamic table\n Table(\n 
self.collection_name,\n Base.metadata,\n Column(\"id\", TEXT, primary_key=True, default=uuid.uuid4),\n Column(\"embedding\", ARRAY(REAL)),\n Column(\"document\", String, nullable=True),\n Column(\"metadata\", JSON, nullable=True),\n extend_existing=True,\n )\n with self.engine.connect() as conn:\n with conn.begin():\n # Create the table\n Base.metadata.create_all(conn)\n # Check if the index exists\n index_name = f\"{self.collection_name}_embedding_idx\"\n index_query = text(\n f\"\"\"\n SELECT 1\n FROM pg_indexes\n WHERE indexname = '{index_name}';\n \"\"\"\n )\n result = conn.execute(index_query).scalar()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "075545cdf350-2", "text": "\"\"\"\n )\n result = conn.execute(index_query).scalar()\n # Create the index if it doesn't exist\n if not result:\n index_statement = text(\n f\"\"\"\n CREATE INDEX {index_name}\n ON {self.collection_name} USING ann(embedding)\n WITH (\n \"dim\" = {self.embedding_dimension},\n \"hnsw_m\" = 100\n );\n \"\"\"\n )\n conn.execute(index_statement)\n[docs] def create_collection(self) -> None:\n if self.pre_delete_collection:\n self.delete_collection()\n self.create_table_if_not_exists()\n[docs] def delete_collection(self) -> None:\n self.logger.debug(\"Trying to delete collection\")\n drop_statement = text(f\"DROP TABLE IF EXISTS {self.collection_name};\")\n with self.engine.connect() as conn:\n with conn.begin():\n conn.execute(drop_statement)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n batch_size: int = 500,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into 
the vectorstore.\n \"\"\"\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = self.embedding_function.embed_documents(list(texts))\n if not metadatas:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "075545cdf350-3", "text": "if not metadatas:\n metadatas = [{} for _ in texts]\n # Define the table schema\n chunks_table = Table(\n self.collection_name,\n Base.metadata,\n Column(\"id\", TEXT, primary_key=True),\n Column(\"embedding\", ARRAY(REAL)),\n Column(\"document\", String, nullable=True),\n Column(\"metadata\", JSON, nullable=True),\n extend_existing=True,\n )\n chunks_table_data = []\n with self.engine.connect() as conn:\n with conn.begin():\n for document, metadata, chunk_id, embedding in zip(\n texts, metadatas, ids, embeddings\n ):\n chunks_table_data.append(\n {\n \"id\": chunk_id,\n \"embedding\": embedding,\n \"document\": document,\n \"metadata\": metadata,\n }\n )\n # Execute the batch insert when the batch size is reached\n if len(chunks_table_data) == batch_size:\n conn.execute(insert(chunks_table).values(chunks_table_data))\n # Clear the chunks_table_data list for the next batch\n chunks_table_data.clear()\n # Insert any remaining records that didn't make up a full batch\n if chunks_table_data:\n conn.execute(insert(chunks_table).values(chunks_table_data))\n return ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with AnalyticDB with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "075545cdf350-4", "text": "k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n embedding = self.embedding_function.embed_query(text=query)\n return self.similarity_search_by_vector(\n embedding=embedding,\n k=k,\n filter=filter,\n )\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedding = self.embedding_function.embed_query(query)\n docs = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return docs\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores in the range [0, 1].\n 0 is dissimilar, 1 is most similar.\n Args:\n query: input text\n k: Number of Documents to return. Defaults to 4.\n **kwargs: kwargs to be passed to similarity search. Should include:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "075545cdf350-5", "text": "**kwargs: kwargs to be passed to similarity search. 
Should include:\n score_threshold: Optional, a floating point value between 0 and 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of Tuples of (doc, similarity_score)\n \"\"\"\n return self.similarity_search_with_score(query, k, **kwargs)\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n # Add the filter if provided\n filter_condition = \"\"\n if filter is not None:\n conditions = [\n f\"metadata->>{key!r} = {value!r}\" for key, value in filter.items()\n ]\n filter_condition = f\"WHERE {' AND '.join(conditions)}\"\n # Define the base query\n sql_query = f\"\"\"\n SELECT *, l2_distance(embedding, :embedding) as distance\n FROM {self.collection_name}\n {filter_condition}\n ORDER BY embedding <-> :embedding\n LIMIT :k\n \"\"\"\n # Set up the query parameters\n params = {\"embedding\": embedding, \"k\": k}\n # Execute the query and fetch the results\n with self.engine.connect() as conn:\n results: Sequence[Row] = conn.execute(text(sql_query), params).fetchall()\n documents_with_scores = [\n (\n Document(\n page_content=result.document,\n metadata=result.metadata,\n ),\n result.distance if self.embedding_function is not None else None,\n )\n for result in results\n ]\n return documents_with_scores", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "075545cdf350-6", "text": ")\n for result in results\n ]\n return documents_with_scores\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls: Type[AnalyticDB],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n embedding_dimension: int = _LANGCHAIN_DEFAULT_EMBEDDING_DIM,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> AnalyticDB:\n \"\"\"\n Return VectorStore initialized from texts and embeddings.\n Postgres connection string is required.\n Either pass it as a parameter\n or set the PG_CONNECTION_STRING environment variable.\n \"\"\"\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n collection_name=collection_name,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "075545cdf350-7", "text": "connection_string=connection_string,\n collection_name=collection_name,\n embedding_function=embedding,\n embedding_dimension=embedding_dimension,\n pre_delete_collection=pre_delete_collection,\n )\n store.add_texts(texts=texts, metadatas=metadatas, ids=ids, **kwargs)\n return store\n[docs] @classmethod\n def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:\n connection_string: str = get_from_dict_or_env(\n data=kwargs,\n key=\"connection_string\",\n env_key=\"PG_CONNECTION_STRING\",\n )\n if not connection_string:\n raise ValueError(\n \"Postgres connection string is required. \"\n \"Either pass it as a parameter \"\n \"or set the PG_CONNECTION_STRING environment variable.\"\n )\n return connection_string\n[docs] @classmethod\n def from_documents(\n cls: Type[AnalyticDB],\n documents: List[Document],\n embedding: Embeddings,\n embedding_dimension: int = 
_LANGCHAIN_DEFAULT_EMBEDDING_DIM,\n collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> AnalyticDB:\n \"\"\"\n Return VectorStore initialized from documents and embeddings.\n Postgres connection string is required.\n Either pass it as a parameter\n or set the PG_CONNECTION_STRING environment variable.\n \"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n connection_string = cls.get_connection_string(kwargs)\n kwargs[\"connection_string\"] = connection_string\n return cls.from_texts(\n texts=texts,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "075545cdf350-8", "text": "return cls.from_texts(\n texts=texts,\n pre_delete_collection=pre_delete_collection,\n embedding=embedding,\n embedding_dimension=embedding_dimension,\n metadatas=metadatas,\n ids=ids,\n collection_name=collection_name,\n **kwargs,\n )\n[docs] @classmethod\n def connection_string_from_db_params(\n cls,\n driver: str,\n host: str,\n port: int,\n database: str,\n user: str,\n password: str,\n ) -> str:\n \"\"\"Return connection string from database parameters.\"\"\"\n return f\"postgresql+{driver}://{user}:{password}@{host}:{port}/{database}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/analyticdb.html"} +{"id": "73bbcb23f714-0", "text": "Source code for langchain.vectorstores.opensearch_vector_search\n\"\"\"Wrapper around OpenSearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import 
maximal_marginal_relevance\nIMPORT_OPENSEARCH_PY_ERROR = (\n \"Could not import OpenSearch. Please install it with `pip install opensearch-py`.\"\n)\nSCRIPT_SCORING_SEARCH = \"script_scoring\"\nPAINLESS_SCRIPTING_SEARCH = \"painless_scripting\"\nMATCH_ALL_QUERY = {\"match_all\": {}} # type: Dict\ndef _import_opensearch() -> Any:\n \"\"\"Import OpenSearch if available, otherwise raise error.\"\"\"\n try:\n from opensearchpy import OpenSearch\n except ImportError:\n raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)\n return OpenSearch\ndef _import_bulk() -> Any:\n \"\"\"Import bulk if available, otherwise raise error.\"\"\"\n try:\n from opensearchpy.helpers import bulk\n except ImportError:\n raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)\n return bulk\ndef _import_not_found_error() -> Any:\n \"\"\"Import not found error if available, otherwise raise error.\"\"\"\n try:\n from opensearchpy.exceptions import NotFoundError\n except ImportError:\n raise ValueError(IMPORT_OPENSEARCH_PY_ERROR)\n return NotFoundError\ndef _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:\n \"\"\"Get OpenSearch client from the opensearch_url, otherwise raise error.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-1", "text": "\"\"\"Get OpenSearch client from the opensearch_url, otherwise raise error.\"\"\"\n try:\n opensearch = _import_opensearch()\n client = opensearch(opensearch_url, **kwargs)\n except ValueError as e:\n raise ValueError(\n f\"OpenSearch client string provided is not in proper format. 
\"\n f\"Got error: {e} \"\n )\n return client\ndef _validate_embeddings_and_bulk_size(embeddings_length: int, bulk_size: int) -> None:\n \"\"\"Validate Embeddings Length and Bulk Size.\"\"\"\n if embeddings_length == 0:\n raise RuntimeError(\"Embeddings size is zero\")\n if bulk_size < embeddings_length:\n raise RuntimeError(\n f\"The embeddings count, {embeddings_length}, is more than the \"\n f\"[bulk_size], {bulk_size}. Increase the value of [bulk_size].\"\n )\ndef _bulk_ingest_embeddings(\n client: Any,\n index_name: str,\n embeddings: List[List[float]],\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n vector_field: str = \"vector_field\",\n text_field: str = \"text\",\n mapping: Optional[Dict] = None,\n) -> List[str]:\n \"\"\"Bulk Ingest Embeddings into the given index.\"\"\"\n if not mapping:\n mapping = dict()\n bulk = _import_bulk()\n not_found_error = _import_not_found_error()\n requests = []\n return_ids = []\n try:\n client.indices.get(index=index_name)\n except not_found_error:\n client.indices.create(index=index_name, body=mapping)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-2", "text": "except not_found_error:\n client.indices.create(index=index_name, body=mapping)\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n _id = ids[i] if ids else str(uuid.uuid4())\n request = {\n \"_op_type\": \"index\",\n \"_index\": index_name,\n vector_field: embeddings[i],\n text_field: text,\n \"metadata\": metadata,\n \"_id\": _id,\n }\n requests.append(request)\n return_ids.append(_id)\n bulk(client, requests)\n client.indices.refresh(index=index_name)\n return return_ids\ndef _default_scripting_text_mapping(\n dim: int,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Painless Scripting or Script Scoring, the default mapping to create index.\"\"\"\n return 
{\n \"mappings\": {\n \"properties\": {\n vector_field: {\"type\": \"knn_vector\", \"dimension\": dim},\n }\n }\n }\ndef _default_text_mapping(\n dim: int,\n engine: str = \"nmslib\",\n space_type: str = \"l2\",\n ef_search: int = 512,\n ef_construction: int = 512,\n m: int = 16,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Approximate k-NN Search, this is the default mapping to create index.\"\"\"\n return {\n \"settings\": {\"index\": {\"knn\": True, \"knn.algo_param.ef_search\": ef_search}},\n \"mappings\": {\n \"properties\": {\n vector_field: {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-3", "text": "\"mappings\": {\n \"properties\": {\n vector_field: {\n \"type\": \"knn_vector\",\n \"dimension\": dim,\n \"method\": {\n \"name\": \"hnsw\",\n \"space_type\": space_type,\n \"engine\": engine,\n \"parameters\": {\"ef_construction\": ef_construction, \"m\": m},\n },\n }\n }\n },\n }\ndef _default_approximate_search_query(\n query_vector: List[float],\n k: int = 4,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Approximate k-NN Search, this is the default query.\"\"\"\n return {\n \"size\": k,\n \"query\": {\"knn\": {vector_field: {\"vector\": query_vector, \"k\": k}}},\n }\ndef _approximate_search_query_with_boolean_filter(\n query_vector: List[float],\n boolean_filter: Dict,\n k: int = 4,\n vector_field: str = \"vector_field\",\n subquery_clause: str = \"must\",\n) -> Dict:\n \"\"\"For Approximate k-NN Search, with Boolean Filter.\"\"\"\n return {\n \"size\": k,\n \"query\": {\n \"bool\": {\n \"filter\": boolean_filter,\n subquery_clause: [\n {\"knn\": {vector_field: {\"vector\": query_vector, \"k\": k}}}\n ],\n }\n },\n }\ndef _approximate_search_query_with_lucene_filter(\n query_vector: List[float],\n lucene_filter: Dict,\n k: int = 4,\n vector_field: str = \"vector_field\",\n) -> Dict:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-4", "text": "vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Approximate k-NN Search, with Lucene Filter.\"\"\"\n search_query = _default_approximate_search_query(\n query_vector, k=k, vector_field=vector_field\n )\n search_query[\"query\"][\"knn\"][vector_field][\"filter\"] = lucene_filter\n return search_query\ndef _default_script_query(\n query_vector: List[float],\n space_type: str = \"l2\",\n pre_filter: Optional[Dict] = None,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Script Scoring Search, this is the default query.\"\"\"\n if not pre_filter:\n pre_filter = MATCH_ALL_QUERY\n return {\n \"query\": {\n \"script_score\": {\n \"query\": pre_filter,\n \"script\": {\n \"source\": \"knn_score\",\n \"lang\": \"knn\",\n \"params\": {\n \"field\": vector_field,\n \"query_value\": query_vector,\n \"space_type\": space_type,\n },\n },\n }\n }\n }\ndef __get_painless_scripting_source(\n space_type: str, query_vector: List[float], vector_field: str = \"vector_field\"\n) -> str:\n \"\"\"For Painless Scripting, it returns the script source based on space type.\"\"\"\n source_value = (\n \"(1.0 + \"\n + space_type\n + \"(\"\n + str(query_vector)\n + \", doc['\"\n + vector_field\n + \"']))\"\n )\n if space_type == \"cosineSimilarity\":\n return source_value\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-5", "text": "return source_value\n else:\n return \"1/\" + source_value\ndef _default_painless_scripting_query(\n query_vector: List[float],\n space_type: str = \"l2Squared\",\n pre_filter: Optional[Dict] = None,\n vector_field: str = \"vector_field\",\n) -> Dict:\n \"\"\"For Painless Scripting Search, this is the default query.\"\"\"\n if not pre_filter:\n pre_filter = MATCH_ALL_QUERY\n source = 
__get_painless_scripting_source(space_type, query_vector)\n return {\n \"query\": {\n \"script_score\": {\n \"query\": pre_filter,\n \"script\": {\n \"source\": source,\n \"params\": {\n \"field\": vector_field,\n \"query_value\": query_vector,\n },\n },\n }\n }\n }\ndef _get_kwargs_value(kwargs: Any, key: str, default_value: Any) -> Any:\n \"\"\"Get the value of the key if present. Else get the default_value.\"\"\"\n if key in kwargs:\n return kwargs.get(key)\n return default_value\n[docs]class OpenSearchVectorSearch(VectorStore):\n \"\"\"Wrapper around OpenSearch as a vector database.\n Example:\n .. code-block:: python\n from langchain import OpenSearchVectorSearch\n opensearch_vector_search = OpenSearchVectorSearch(\n \"http://localhost:9200\",\n \"embeddings\",\n embedding_function\n )\n \"\"\"\n def __init__(\n self,\n opensearch_url: str,\n index_name: str,\n embedding_function: Embeddings,\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-6", "text": "**kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function\n self.index_name = index_name\n self.client = _get_opensearch_client(opensearch_url, **kwargs)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n bulk_size: int = 500,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n bulk_size: Bulk API request count; Default: 500\n Returns:\n List of ids from adding the texts into the vectorstore.\n Optional Args:\n vector_field: Document field embeddings are stored 
in. Defaults to\n \"vector_field\".\n text_field: Document field the text of the document is stored in. Defaults\n to \"text\".\n \"\"\"\n embeddings = self.embedding_function.embed_documents(list(texts))\n _validate_embeddings_and_bulk_size(len(embeddings), bulk_size)\n text_field = _get_kwargs_value(kwargs, \"text_field\", \"text\")\n dim = len(embeddings[0])\n engine = _get_kwargs_value(kwargs, \"engine\", \"nmslib\")\n space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2\")\n ef_search = _get_kwargs_value(kwargs, \"ef_search\", 512)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-7", "text": "ef_search = _get_kwargs_value(kwargs, \"ef_search\", 512)\n ef_construction = _get_kwargs_value(kwargs, \"ef_construction\", 512)\n m = _get_kwargs_value(kwargs, \"m\", 16)\n vector_field = _get_kwargs_value(kwargs, \"vector_field\", \"vector_field\")\n mapping = _default_text_mapping(\n dim, engine, space_type, ef_search, ef_construction, m, vector_field\n )\n return _bulk_ingest_embeddings(\n self.client,\n self.index_name,\n embeddings,\n texts,\n metadatas=metadatas,\n ids=ids,\n vector_field=vector_field,\n text_field=text_field,\n mapping=mapping,\n )\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n By default, supports Approximate Search.\n Also supports Script Scoring and Painless Scripting.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n Optional Args:\n vector_field: Document field embeddings are stored in. Defaults to\n \"vector_field\".\n text_field: Document field the text of the document is stored in. Defaults\n to \"text\".\n metadata_field: Document field that metadata is stored in. 
Defaults to\n \"metadata\".\n Can be set to a special value \"*\" to include the entire document.\n Optional Args for Approximate Search:\n search_type: \"approximate_search\"; default: \"approximate_search\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-8", "text": "search_type: \"approximate_search\"; default: \"approximate_search\"\n boolean_filter: A Boolean filter consists of a Boolean query that\n contains a k-NN query and a filter.\n subquery_clause: Query clause on the knn vector field; default: \"must\"\n lucene_filter: the Lucene algorithm decides whether to perform an exact\n k-NN search with pre-filtering or an approximate search with modified\n post-filtering.\n Optional Args for Script Scoring Search:\n search_type: \"script_scoring\"; default: \"approximate_search\"\n space_type: \"l2\", \"l1\", \"linf\", \"cosinesimil\", \"innerproduct\",\n \"hammingbit\"; default: \"l2\"\n pre_filter: script_score query to pre-filter documents before identifying\n nearest neighbors; default: {\"match_all\": {}}\n Optional Args for Painless Scripting Search:\n search_type: \"painless_scripting\"; default: \"approximate_search\"\n space_type: \"l2Squared\", \"l1Norm\", \"cosineSimilarity\"; default: \"l2Squared\"\n pre_filter: script_score query to pre-filter documents before identifying\n nearest neighbors; default: {\"match_all\": {}}\n \"\"\"\n docs_with_scores = self.similarity_search_with_score(query, k, **kwargs)\n return [doc[0] for doc in docs_with_scores]\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and its scores most similar to query.\n By default, supports Approximate Search.\n Also supports Script Scoring and Painless Scripting.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} 
+{"id": "73bbcb23f714-9", "text": "Also supports Script Scoring and Painless Scripting.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents along with their scores most similar to the query.\n Optional Args:\n same as `similarity_search`\n \"\"\"\n text_field = _get_kwargs_value(kwargs, \"text_field\", \"text\")\n metadata_field = _get_kwargs_value(kwargs, \"metadata_field\", \"metadata\")\n hits = self._raw_similarity_search_with_score(query=query, k=k, **kwargs)\n documents_with_scores = [\n (\n Document(\n page_content=hit[\"_source\"][text_field],\n metadata=hit[\"_source\"]\n if metadata_field == \"*\" or metadata_field not in hit[\"_source\"]\n else hit[\"_source\"][metadata_field],\n ),\n hit[\"_score\"],\n )\n for hit in hits\n ]\n return documents_with_scores\n def _raw_similarity_search_with_score(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[dict]:\n \"\"\"Return raw opensearch documents (dict) including vectors,\n scores most similar to query.\n By default, supports Approximate Search.\n Also supports Script Scoring and Painless Scripting.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n Returns:\n List of dict with its scores most similar to the query.\n Optional Args:\n same as `similarity_search`\n \"\"\"\n embedding = self.embedding_function.embed_query(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-10", "text": "\"\"\"\n embedding = self.embedding_function.embed_query(query)\n search_type = _get_kwargs_value(kwargs, \"search_type\", \"approximate_search\")\n vector_field = _get_kwargs_value(kwargs, \"vector_field\", \"vector_field\")\n if search_type == \"approximate_search\":\n boolean_filter = _get_kwargs_value(kwargs, \"boolean_filter\", {})\n subquery_clause = _get_kwargs_value(kwargs, \"subquery_clause\", \"must\")\n lucene_filter = _get_kwargs_value(kwargs, \"lucene_filter\", {})\n if boolean_filter != {} and lucene_filter != {}:\n raise ValueError(\n \"Both `boolean_filter` and `lucene_filter` are provided which \"\n \"is invalid\"\n )\n if boolean_filter != {}:\n search_query = _approximate_search_query_with_boolean_filter(\n embedding,\n boolean_filter,\n k=k,\n vector_field=vector_field,\n subquery_clause=subquery_clause,\n )\n elif lucene_filter != {}:\n search_query = _approximate_search_query_with_lucene_filter(\n embedding, lucene_filter, k=k, vector_field=vector_field\n )\n else:\n search_query = _default_approximate_search_query(\n embedding, k=k, vector_field=vector_field\n )\n elif search_type == SCRIPT_SCORING_SEARCH:\n space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2\")\n pre_filter = _get_kwargs_value(kwargs, \"pre_filter\", MATCH_ALL_QUERY)\n search_query = _default_script_query(\n embedding, space_type, pre_filter, vector_field\n )\n elif search_type == PAINLESS_SCRIPTING_SEARCH:\n space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2Squared\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": 
"73bbcb23f714-11", "text": "space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2Squared\")\n pre_filter = _get_kwargs_value(kwargs, \"pre_filter\", MATCH_ALL_QUERY)\n search_query = _default_painless_scripting_query(\n embedding, space_type, pre_filter, vector_field\n )\n else:\n raise ValueError(\"Invalid `search_type` provided as an argument\")\n response = self.client.search(index=self.index_name, body=search_query)\n return [hit for hit in response[\"hits\"][\"hits\"][:k]]\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> list[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n Defaults to 20.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n vector_field = _get_kwargs_value(kwargs, \"vector_field\", \"vector_field\")\n text_field = _get_kwargs_value(kwargs, \"text_field\", \"text\")\n metadata_field = _get_kwargs_value(kwargs, \"metadata_field\", \"metadata\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-12", "text": "metadata_field = _get_kwargs_value(kwargs, \"metadata_field\", \"metadata\")\n # Get embedding of the user query\n embedding = self.embedding_function.embed_query(query)\n # Do ANN/KNN search to get top fetch_k results where fetch_k >= k\n results = self._raw_similarity_search_with_score(query, fetch_k, **kwargs)\n embeddings = 
[result[\"_source\"][vector_field] for result in results]\n # Rerank top k results using MMR (mmr_selected is a list of indices)\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult\n )\n return [\n Document(\n page_content=results[i][\"_source\"][text_field],\n metadata=results[i][\"_source\"][metadata_field],\n )\n for i in mmr_selected\n ]\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n bulk_size: int = 500,\n **kwargs: Any,\n ) -> OpenSearchVectorSearch:\n \"\"\"Construct OpenSearchVectorSearch wrapper from raw documents.\n Example:\n .. code-block:: python\n from langchain import OpenSearchVectorSearch\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n opensearch_vector_search = OpenSearchVectorSearch.from_texts(\n texts,\n embeddings,\n opensearch_url=\"http://localhost:9200\"\n )\n OpenSearch by default supports Approximate Search powered by the nmslib, faiss\n and lucene engines, which are recommended for large datasets. Also supports brute force", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-13", "text": "and lucene engines, which are recommended for large datasets. Also supports brute force\n search through Script Scoring and Painless Scripting.\n Optional Args:\n vector_field: Document field embeddings are stored in. Defaults to\n \"vector_field\".\n text_field: Document field the text of the document is stored in. Defaults\n to \"text\".\n Optional Keyword Args for Approximate Search:\n engine: \"nmslib\", \"faiss\", \"lucene\"; default: \"nmslib\"\n space_type: \"l2\", \"l1\", \"cosinesimil\", \"linf\", \"innerproduct\"; default: \"l2\"\n ef_search: Size of the dynamic list used during k-NN searches. 
Higher values\n lead to more accurate but slower searches; default: 512\n ef_construction: Size of the dynamic list used during k-NN graph creation.\n Higher values lead to more accurate graph but slower indexing speed;\n default: 512\n m: Number of bidirectional links created for each new element. Large impact\n on memory consumption. Between 2 and 100; default: 16\n Keyword Args for Script Scoring or Painless Scripting:\n is_appx_search: False\n \"\"\"\n opensearch_url = get_from_dict_or_env(\n kwargs, \"opensearch_url\", \"OPENSEARCH_URL\"\n )\n # List of arguments that needs to be removed from kwargs\n # before passing kwargs to get opensearch client\n keys_list = [\n \"opensearch_url\",\n \"index_name\",\n \"is_appx_search\",\n \"vector_field\",\n \"text_field\",\n \"engine\",\n \"space_type\",\n \"ef_search\",\n \"ef_construction\",\n \"m\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-14", "text": "\"ef_search\",\n \"ef_construction\",\n \"m\",\n ]\n embeddings = embedding.embed_documents(texts)\n _validate_embeddings_and_bulk_size(len(embeddings), bulk_size)\n dim = len(embeddings[0])\n # Get the index name from either from kwargs or ENV Variable\n # before falling back to random generation\n index_name = get_from_dict_or_env(\n kwargs, \"index_name\", \"OPENSEARCH_INDEX_NAME\", default=uuid.uuid4().hex\n )\n is_appx_search = _get_kwargs_value(kwargs, \"is_appx_search\", True)\n vector_field = _get_kwargs_value(kwargs, \"vector_field\", \"vector_field\")\n text_field = _get_kwargs_value(kwargs, \"text_field\", \"text\")\n if is_appx_search:\n engine = _get_kwargs_value(kwargs, \"engine\", \"nmslib\")\n space_type = _get_kwargs_value(kwargs, \"space_type\", \"l2\")\n ef_search = _get_kwargs_value(kwargs, \"ef_search\", 512)\n ef_construction = _get_kwargs_value(kwargs, \"ef_construction\", 512)\n m = _get_kwargs_value(kwargs, \"m\", 16)\n mapping = 
_default_text_mapping(\n dim, engine, space_type, ef_search, ef_construction, m, vector_field\n )\n else:\n mapping = _default_scripting_text_mapping(dim)\n [kwargs.pop(key, None) for key in keys_list]\n client = _get_opensearch_client(opensearch_url, **kwargs)\n _bulk_ingest_embeddings(\n client,\n index_name,\n embeddings,\n texts,\n metadatas=metadatas,\n vector_field=vector_field,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "73bbcb23f714-15", "text": "metadatas=metadatas,\n vector_field=vector_field,\n text_field=text_field,\n mapping=mapping,\n )\n return cls(opensearch_url, index_name, embedding, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/opensearch_vector_search.html"} +{"id": "89d4fa1cf376-0", "text": "Source code for langchain.vectorstores.faiss\n\"\"\"Wrapper around FAISS vector database.\"\"\"\nfrom __future__ import annotations\nimport math\nimport os\nimport pickle\nimport uuid\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.docstore.base import AddableMixin, Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.docstore.in_memory import InMemoryDocstore\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\ndef dependable_faiss_import(no_avx2: Optional[bool] = None) -> Any:\n \"\"\"\n Import faiss if available, otherwise raise error.\n If FAISS_NO_AVX2 environment variable is set, it will be considered\n to load FAISS with no AVX2 optimization.\n Args:\n no_avx2: Load FAISS strictly with no AVX2 optimization\n so that the vectorstore is portable and compatible with other devices.\n \"\"\"\n if no_avx2 is None and \"FAISS_NO_AVX2\" in os.environ:\n no_avx2 = bool(os.getenv(\"FAISS_NO_AVX2\"))\n 
try:\n        if no_avx2:\n            from faiss import swigfaiss as faiss\n        else:\n            import faiss\n    except ImportError:\n        raise ValueError(\n            \"Could not import faiss python package. \"\n            \"Please install it with `pip install faiss-gpu` \"\n            \"or `pip install faiss-cpu` (depending on GPU support).\"\n        )\n    return faiss\ndef _default_relevance_score_fn(score: float) -> float:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-1", "text": "return faiss\ndef _default_relevance_score_fn(score: float) -> float:\n    \"\"\"Return a similarity score on a scale [0, 1].\"\"\"\n    # The 'correct' relevance function\n    # may differ depending on a few things, including:\n    # - the distance / similarity metric used by the VectorStore\n    # - the scale of your embeddings (OpenAI's are unit normed. Many others are not!)\n    # - embedding dimensionality\n    # - etc.\n    # This function converts the euclidean norm of normalized embeddings\n    # (0 is most similar, sqrt(2) most dissimilar)\n    # to a similarity function (0 to 1)\n    return 1.0 - score / math.sqrt(2)\n[docs]class FAISS(VectorStore):\n    \"\"\"Wrapper around FAISS vector database.\n    To use, you should have the ``faiss`` python package installed.\n    Example:\n        ..
code-block:: python\n from langchain import FAISS\n faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)\n \"\"\"\n def __init__(\n self,\n embedding_function: Callable,\n index: Any,\n docstore: Docstore,\n index_to_docstore_id: Dict[int, str],\n relevance_score_fn: Optional[\n Callable[[float], float]\n ] = _default_relevance_score_fn,\n normalize_L2: bool = False,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function\n self.index = index\n self.docstore = docstore\n self.index_to_docstore_id = index_to_docstore_id\n self.relevance_score_fn = relevance_score_fn\n self._normalize_L2 = normalize_L2", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-2", "text": "self._normalize_L2 = normalize_L2\n def __add(\n self,\n texts: Iterable[str],\n embeddings: Iterable[List[float]],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n documents = []\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n if ids is None:\n ids = [str(uuid.uuid4()) for _ in texts]\n # Add to the index, the index_to_id mapping, and the docstore.\n starting_len = len(self.index_to_docstore_id)\n faiss = dependable_faiss_import()\n vector = np.array(embeddings, dtype=np.float32)\n if self._normalize_L2:\n faiss.normalize_L2(vector)\n self.index.add(vector)\n # Get list of index, id, and docs.\n full_info = [(starting_len + i, ids[i], doc) for i, doc in enumerate(documents)]\n # Add information to docstore and index.\n self.docstore.add({_id: doc for _, _id, doc in full_info})\n index_to_id = {index: _id for 
index, _id, _ in full_info}\n self.index_to_docstore_id.update(index_to_id)\n return [_id for _, _id, _ in full_info]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-3", "text": "return [_id for _, _id, _ in full_info]\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of unique IDs.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n # Embed and create the documents.\n embeddings = [self.embedding_function(text) for text in texts]\n return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)\n[docs] def add_embeddings(\n self,\n text_embeddings: Iterable[Tuple[str, List[float]]],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n text_embeddings: Iterable pairs of string and embedding to\n add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of unique IDs.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-4", "text": "ids: Optional list of unique IDs.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\n \"If trying to add texts, the 
underlying docstore should support \"\n f\"adding items, which {self.docstore} does not\"\n )\n # Embed and create the documents.\n texts, embeddings = zip(*text_embeddings)\n return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n embedding: Embedding vector to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, Any]]): Filter by metadata. Defaults to None.\n fetch_k: (Optional[int]) Number of Documents to fetch before filtering.\n Defaults to 20.\n **kwargs: kwargs to be passed to similarity search. Can include:\n score_threshold: Optional, a floating point value between 0 to 1 to\n filter the resulting set of retrieved docs\n Returns:\n List of documents most similar to the query text and L2 distance\n in float for each. 
Lower score represents more similarity.\n \"\"\"\n faiss = dependable_faiss_import()\n vector = np.array([embedding], dtype=np.float32)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-5", "text": "vector = np.array([embedding], dtype=np.float32)\n if self._normalize_L2:\n faiss.normalize_L2(vector)\n scores, indices = self.index.search(vector, k if filter is None else fetch_k)\n docs = []\n for j, i in enumerate(indices[0]):\n if i == -1:\n # This happens when not enough docs are returned.\n continue\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n if filter is not None:\n filter = {\n key: [value] if not isinstance(value, list) else value\n for key, value in filter.items()\n }\n if all(doc.metadata.get(key) in value for key, value in filter.items()):\n docs.append((doc, scores[0][j]))\n else:\n docs.append((doc, scores[0][j]))\n score_threshold = kwargs.get(\"score_threshold\")\n if score_threshold is not None:\n docs = [\n (doc, similarity)\n for doc, similarity in docs\n if similarity >= score_threshold\n ]\n return docs[:k]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-6", "text": "Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n fetch_k: (Optional[int]) Number of Documents to fetch before filtering.\n Defaults to 20.\n Returns:\n List of documents most similar to the query text with\n L2 distance in float. Lower score represents more similarity.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.similarity_search_with_score_by_vector(\n embedding,\n k,\n filter=filter,\n fetch_k=fetch_k,\n **kwargs,\n )\n return docs\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n fetch_k: (Optional[int]) Number of Documents to fetch before filtering.\n Defaults to 20.\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding,\n k,\n filter=filter,\n fetch_k=fetch_k,\n **kwargs,\n )\n return [doc for doc, _ in docs_and_scores]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-7", "text": ")\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[Dict[str, Any]] = None,\n fetch_k: int = 20,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n fetch_k: (Optional[int]) Number of Documents to fetch before filtering.\n Defaults to 20.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query, k, filter=filter, fetch_k=fetch_k, **kwargs\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch before filtering to", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-8", "text": "fetch_k: Number of Documents to fetch before filtering to\n pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n _, indices = self.index.search(\n np.array([embedding], dtype=np.float32),\n fetch_k if filter is None else fetch_k * 2,\n )\n if filter is not None:\n filtered_indices = []\n for i in indices[0]:\n if i == -1:\n # This happens when not enough docs are returned.\n continue\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n if all(doc.metadata.get(key) == value for key, value in filter.items()):\n filtered_indices.append(i)\n indices = 
np.array([filtered_indices])\n # -1 happens when not enough docs are returned.\n embeddings = [self.index.reconstruct(int(i)) for i in indices[0] if i != -1]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n selected_indices = [indices[0][i] for i in mmr_selected]\n docs = []\n for i in selected_indices:\n if i == -1:\n # This happens when not enough docs are returned.\n continue", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-9", "text": "# This happens when not enough docs are returned.\n continue\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append(doc)\n return docs\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch before filtering (if needed) to\n pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding,\n k,\n fetch_k,\n lambda_mult=lambda_mult,\n filter=filter,\n **kwargs,\n )\n return docs\n[docs] def merge_from(self, target: FAISS) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-10", "text": "[docs] def merge_from(self, target: FAISS) -> None:\n \"\"\"Merge another FAISS object with the current one.\n Add the target FAISS to the current one.\n Args:\n target: FAISS object you wish to merge into the current one\n Returns:\n None.\n \"\"\"\n if not isinstance(self.docstore, AddableMixin):\n raise ValueError(\"Cannot merge with this type of docstore\")\n # Numerical index for target docs are incremental on existing ones\n starting_len = len(self.index_to_docstore_id)\n # Merge two IndexFlatL2\n self.index.merge_from(target.index)\n # Get id and docs from target FAISS object\n full_info = []\n for i, target_id in target.index_to_docstore_id.items():\n doc = target.docstore.search(target_id)\n if not isinstance(doc, Document):\n raise ValueError(\"Document should be returned\")\n full_info.append((starting_len + i, target_id, doc))\n # Add information to docstore and index_to_docstore_id.\n self.docstore.add({_id: doc for _, _id, doc in full_info})\n index_to_id = {index: _id for index, _id, _ in full_info}\n self.index_to_docstore_id.update(index_to_id)\n @classmethod\n def __from(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] 
= None,\n ids: Optional[List[str]] = None,\n normalize_L2: bool = False,\n **kwargs: Any,\n ) -> FAISS:\n faiss = dependable_faiss_import()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-11", "text": ") -> FAISS:\n faiss = dependable_faiss_import()\n index = faiss.IndexFlatL2(len(embeddings[0]))\n vector = np.array(embeddings, dtype=np.float32)\n if normalize_L2:\n faiss.normalize_L2(vector)\n index.add(vector)\n documents = []\n if ids is None:\n ids = [str(uuid.uuid4()) for _ in texts]\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n index_to_id = dict(enumerate(ids))\n docstore = InMemoryDocstore(dict(zip(index_to_id.values(), documents)))\n return cls(\n embedding.embed_query,\n index,\n docstore,\n index_to_id,\n normalize_L2=normalize_L2,\n **kwargs,\n )\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> FAISS:\n \"\"\"Construct FAISS wrapper from raw documents.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the FAISS database\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain import FAISS\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n faiss = FAISS.from_texts(texts, embeddings)\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-12", "text": "faiss = FAISS.from_texts(texts, embeddings)\n \"\"\"\n embeddings = embedding.embed_documents(texts)\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n **kwargs,\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> FAISS:\n \"\"\"Construct FAISS wrapper from raw documents.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the FAISS database\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain import FAISS\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n text_embeddings = embeddings.embed_documents(texts)\n text_embedding_pairs = list(zip(texts, text_embeddings))\n faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)\n \"\"\"\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n **kwargs,\n )\n[docs] def save_local(self, folder_path: str, index_name: str = \"index\") -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-13", "text": "\"\"\"Save FAISS index, docstore, and index_to_docstore_id to disk.\n Args:\n folder_path: folder path to save index, docstore,\n and index_to_docstore_id to.\n index_name: for saving with a specific index file name\n \"\"\"\n path = Path(folder_path)\n path.mkdir(exist_ok=True, parents=True)\n # save index separately since it is not picklable\n faiss = dependable_faiss_import()\n faiss.write_index(\n self.index, str(path / \"{index_name}.faiss\".format(index_name=index_name))\n )\n # save docstore and index_to_docstore_id\n with open(path / \"{index_name}.pkl\".format(index_name=index_name), \"wb\") as f:\n pickle.dump((self.docstore, self.index_to_docstore_id), f)\n[docs] @classmethod\n def load_local(\n cls, folder_path: str, embeddings: Embeddings, index_name: str = \"index\"\n ) -> FAISS:\n \"\"\"Load FAISS index, docstore, and index_to_docstore_id from disk.\n Args:\n folder_path: folder path to load index, docstore,\n and index_to_docstore_id from.\n embeddings: Embeddings to use when generating queries\n index_name: for saving with a specific index file name\n \"\"\"\n path = Path(folder_path)\n # load index separately since it is not picklable\n faiss = dependable_faiss_import()\n index = faiss.read_index(\n str(path / 
\"{index_name}.faiss\".format(index_name=index_name))\n        )\n        # load docstore and index_to_docstore_id", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "89d4fa1cf376-14", "text": ")\n        # load docstore and index_to_docstore_id\n        with open(path / \"{index_name}.pkl\".format(index_name=index_name), \"rb\") as f:\n            docstore, index_to_docstore_id = pickle.load(f)\n        return cls(embeddings.embed_query, index, docstore, index_to_docstore_id)\n    def _similarity_search_with_relevance_scores(\n        self,\n        query: str,\n        k: int = 4,\n        filter: Optional[Dict[str, Any]] = None,\n        fetch_k: int = 20,\n        **kwargs: Any,\n    ) -> List[Tuple[Document, float]]:\n        \"\"\"Return docs and their similarity scores on a scale from 0 to 1.\"\"\"\n        if self.relevance_score_fn is None:\n            raise ValueError(\n                \"relevance_score_fn must be provided to\"\n                \" FAISS constructor to normalize scores\"\n            )\n        docs_and_scores = self.similarity_search_with_score(\n            query,\n            k=k,\n            filter=filter,\n            fetch_k=fetch_k,\n            **kwargs,\n        )\n        return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html"} +{"id": "8a894195acf4-0", "text": "Source code for langchain.vectorstores.matching_engine\n\"\"\"Vertex Matching Engine implementation of the vector store.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport time\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings import TensorflowHubEmbeddings\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n    from google.cloud import storage\n    from google.cloud.aiplatform import MatchingEngineIndex, MatchingEngineIndexEndpoint\n    from google.oauth2.service_account import Credentials\nlogger = 
logging.getLogger()\n[docs]class MatchingEngine(VectorStore):\n \"\"\"Vertex Matching Engine implementation of the vector store.\n While the embeddings are stored in the Matching Engine, the embedded\n documents will be stored in GCS.\n An existing Index and corresponding Endpoint are preconditions for\n using this module.\n See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb\n Note that this implementation is mostly meant for reading if you are\n planning to do a real time implementation. While reading is a real time\n operation, updating the index takes close to one hour.\"\"\"\n def __init__(\n self,\n project_id: str,\n index: MatchingEngineIndex,\n endpoint: MatchingEngineIndexEndpoint,\n embedding: Embeddings,\n gcs_client: storage.Client,\n gcs_bucket_name: str,\n credentials: Optional[Credentials] = None,\n ):\n \"\"\"Vertex Matching Engine implementation of the vector store.\n While the embeddings are stored in the Matching Engine, the embedded\n documents will be stored in GCS.\n An existing Index and corresponding Endpoint are preconditions for\n using this module.\n See usage in", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "8a894195acf4-1", "text": "using this module.\n See usage in\n docs/modules/indexes/vectorstores/examples/matchingengine.ipynb.\n Note that this implementation is mostly meant for reading if you are\n planning to do a real time implementation. While reading is a real time\n operation, updating the index takes close to one hour.\n Attributes:\n project_id: The GCS project id.\n index: The created index class. See\n ~:func:`MatchingEngine.from_components`.\n endpoint: The created endpoint class. See\n ~:func:`MatchingEngine.from_components`.\n embedding: A :class:`Embeddings` that will be used for\n embedding the text sent. 
If none is sent, then the\n            multilingual Tensorflow Universal Sentence Encoder will be used.\n        gcs_client: The GCS client.\n        gcs_bucket_name: The GCS bucket name.\n        credentials (Optional): Created GCP credentials.\n        \"\"\"\n        super().__init__()\n        self._validate_google_libraries_installation()\n        self.project_id = project_id\n        self.index = index\n        self.endpoint = endpoint\n        self.embedding = embedding\n        self.gcs_client = gcs_client\n        self.credentials = credentials\n        self.gcs_bucket_name = gcs_bucket_name\n    def _validate_google_libraries_installation(self) -> None:\n        \"\"\"Validates that Google libraries that are needed are installed.\"\"\"\n        try:\n            from google.cloud import aiplatform, storage  # noqa: F401\n            from google.oauth2 import service_account  # noqa: F401\n        except ImportError:\n            raise ImportError(\n                \"You must run `pip install --upgrade \"\n                \"google-cloud-aiplatform google-cloud-storage` \"\n                \"to use the MatchingEngine Vectorstore.\"\n            )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "8a894195acf4-2", "text": "\"to use the MatchingEngine Vectorstore.\"\n            )\n[docs]    def add_texts(\n        self,\n        texts: Iterable[str],\n        metadatas: Optional[List[dict]] = None,\n        **kwargs: Any,\n    ) -> List[str]:\n        \"\"\"Run more texts through the embeddings and add to the vectorstore.\n        Args:\n            texts: Iterable of strings to add to the vectorstore.\n            metadatas: Optional list of metadatas associated with the texts.\n            kwargs: vectorstore specific parameters.\n        Returns:\n            List of ids from adding the texts into the vectorstore.\n        \"\"\"\n        logger.debug(\"Embedding documents.\")\n        embeddings = self.embedding.embed_documents(list(texts))\n        jsons = []\n        ids = []\n        # Could be improved with async.\n        for embedding, text in zip(embeddings, texts):\n            id = str(uuid.uuid4())\n            ids.append(id)\n            jsons.append({\"id\": id, \"embedding\": embedding})\n            self._upload_to_gcs(text, f\"documents/{id}\")\n        logger.debug(f\"Uploaded {len(ids)} 
documents to GCS.\")\n        # Creating json lines from the embedded documents.\n        result_str = \"\\n\".join([json.dumps(x) for x in jsons])\n        filename_prefix = f\"indexes/{uuid.uuid4()}\"\n        filename = f\"{filename_prefix}/{time.time()}.json\"\n        self._upload_to_gcs(result_str, filename)\n        logger.debug(\n            f\"Uploaded updated json with embeddings to \"\n            f\"{self.gcs_bucket_name}/(unknown).\"\n        )\n        self.index = self.index.update_embeddings(\n            contents_delta_uri=f\"gs://{self.gcs_bucket_name}/{filename_prefix}/\"\n        )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "8a894195acf4-3", "text": ")\n        logger.debug(\"Updated index with new configuration.\")\n        return ids\n    def _upload_to_gcs(self, data: str, gcs_location: str) -> None:\n        \"\"\"Uploads data to gcs_location.\n        Args:\n            data: The data that will be stored.\n            gcs_location: The location where the data will be stored.\n        \"\"\"\n        bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)\n        blob = bucket.blob(gcs_location)\n        blob.upload_from_string(data)\n[docs]    def similarity_search(\n        self, query: str, k: int = 4, **kwargs: Any\n    ) -> List[Document]:\n        \"\"\"Return docs most similar to query.\n        Args:\n            query: The string that will be used to search for similar documents.\n            k: The amount of neighbors that will be retrieved.\n        Returns:\n            A list of k matching documents.\n        \"\"\"\n        logger.debug(f\"Embedding query {query}.\")\n        embedding_query = self.embedding.embed_documents([query])\n        response = self.endpoint.match(\n            deployed_index_id=self._get_index_id(),\n            queries=embedding_query,\n            num_neighbors=k,\n        )\n        if len(response) == 0:\n            return []\n        logger.debug(f\"Found {len(response)} matches for the query {query}.\")\n        results = []\n        # I'm only getting the first one because queries receives an array\n        # and the similarity_search method only receives one query. 
This\n # means that the match method will always return an array with only\n # one element.\n for doc in response[0]:\n page_content = self._download_from_gcs(f\"documents/{doc.id}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "8a894195acf4-4", "text": "page_content = self._download_from_gcs(f\"documents/{doc.id}\")\n results.append(Document(page_content=page_content))\n logger.debug(\"Downloaded documents for query.\")\n return results\n def _get_index_id(self) -> str:\n \"\"\"Gets the correct index id for the endpoint.\n Returns:\n The index id if found (which should be found) or throws\n ValueError otherwise.\n \"\"\"\n for index in self.endpoint.deployed_indexes:\n if index.index == self.index.resource_name:\n return index.id\n raise ValueError(\n f\"No index with id {self.index.resource_name} \"\n f\"deployed on endpoint \"\n f\"{self.endpoint.display_name}.\"\n )\n def _download_from_gcs(self, gcs_location: str) -> str:\n \"\"\"Downloads from GCS in text format.\n Args:\n gcs_location: The location where the file is located.\n Returns:\n The string contents of the file.\n \"\"\"\n bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)\n blob = bucket.blob(gcs_location)\n return blob.download_as_string()\n[docs] @classmethod\n def from_texts(\n cls: Type[\"MatchingEngine\"],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> \"MatchingEngine\":\n \"\"\"Use from components instead.\"\"\"\n raise NotImplementedError(\n \"This method is not implemented. 
Instead, you should initialize the class\"\n \" with `MatchingEngine.from_components(...)` and then call \"\n \"`add_texts`\"\n )\n[docs] @classmethod\n def from_components(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "8a894195acf4-5", "text": ")\n[docs] @classmethod\n def from_components(\n cls: Type[\"MatchingEngine\"],\n project_id: str,\n region: str,\n gcs_bucket_name: str,\n index_id: str,\n endpoint_id: str,\n credentials_path: Optional[str] = None,\n embedding: Optional[Embeddings] = None,\n ) -> \"MatchingEngine\":\n \"\"\"Takes the object creation out of the constructor.\n Args:\n project_id: The GCP project id.\n region: The default location making the API calls. It must have\n the same location as the GCS bucket and must be regional.\n gcs_bucket_name: The location where the vectors will be stored in\n order for the index to be created.\n index_id: The id of the created index.\n endpoint_id: The id of the created endpoint.\n credentials_path: (Optional) The path of the Google credentials on\n the local file system.\n embedding: The :class:`Embeddings` that will be used for\n embedding the texts.\n Returns:\n A configured MatchingEngine with the texts added to the index.\n \"\"\"\n gcs_bucket_name = cls._validate_gcs_bucket(gcs_bucket_name)\n credentials = cls._create_credentials_from_file(credentials_path)\n index = cls._create_index_by_id(index_id, project_id, region, credentials)\n endpoint = cls._create_endpoint_by_id(\n endpoint_id, project_id, region, credentials\n )\n gcs_client = cls._get_gcs_client(credentials, project_id)\n cls._init_aiplatform(project_id, region, gcs_bucket_name, credentials)\n return cls(\n project_id=project_id,\n index=index,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "8a894195acf4-6", "text": "return cls(\n project_id=project_id,\n index=index,\n endpoint=endpoint,\n 
embedding=embedding or cls._get_default_embeddings(),\n gcs_client=gcs_client,\n credentials=credentials,\n gcs_bucket_name=gcs_bucket_name,\n )\n @classmethod\n def _validate_gcs_bucket(cls, gcs_bucket_name: str) -> str:\n \"\"\"Validates the gcs_bucket_name as a bucket name.\n Args:\n gcs_bucket_name: The received bucket uri.\n Returns:\n A valid gcs_bucket_name or throws ValueError if full path is\n provided.\n \"\"\"\n gcs_bucket_name = gcs_bucket_name.replace(\"gs://\", \"\")\n if \"/\" in gcs_bucket_name:\n raise ValueError(\n f\"The argument gcs_bucket_name should only be \"\n f\"the bucket name. Received {gcs_bucket_name}\"\n )\n return gcs_bucket_name\n @classmethod\n def _create_credentials_from_file(\n cls, json_credentials_path: Optional[str]\n ) -> Optional[Credentials]:\n \"\"\"Creates credentials for GCP.\n Args:\n json_credentials_path: The path on the file system where the\n credentials are stored.\n Returns:\n An optional of Credentials or None, in which case the default\n will be used.\n \"\"\"\n from google.oauth2 import service_account\n credentials = None\n if json_credentials_path is not None:\n credentials = service_account.Credentials.from_service_account_file(\n json_credentials_path\n )\n return credentials\n @classmethod\n def _create_index_by_id(\n cls, index_id: str, project_id: str, region: str, credentials: \"Credentials\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "8a894195acf4-7", "text": ") -> MatchingEngineIndex:\n \"\"\"Creates a MatchingEngineIndex object by id.\n Args:\n index_id: The created index id.\n project_id: The project to retrieve index from.\n region: Location to retrieve index from.\n credentials: GCS credentials.\n Returns:\n A configured MatchingEngineIndex.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(f\"Creating matching engine index with id {index_id}.\")\n return aiplatform.MatchingEngineIndex(\n index_name=index_id,\n 
project=project_id,\n location=region,\n credentials=credentials,\n )\n @classmethod\n def _create_endpoint_by_id(\n cls, endpoint_id: str, project_id: str, region: str, credentials: \"Credentials\"\n ) -> MatchingEngineIndexEndpoint:\n \"\"\"Creates a MatchingEngineIndexEndpoint object by id.\n Args:\n endpoint_id: The created endpoint id.\n project_id: The project to retrieve index from.\n region: Location to retrieve index from.\n credentials: GCS credentials.\n Returns:\n A configured MatchingEngineIndexEndpoint.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(f\"Creating endpoint with id {endpoint_id}.\")\n return aiplatform.MatchingEngineIndexEndpoint(\n index_endpoint_name=endpoint_id,\n project=project_id,\n location=region,\n credentials=credentials,\n )\n @classmethod\n def _get_gcs_client(\n cls, credentials: \"Credentials\", project_id: str\n ) -> \"storage.Client\":\n \"\"\"Lazily creates a GCS client.\n Returns:\n A configured GCS client.\n \"\"\"\n from google.cloud import storage", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "8a894195acf4-8", "text": "A configured GCS client.\n \"\"\"\n from google.cloud import storage\n return storage.Client(credentials=credentials, project=project_id)\n @classmethod\n def _init_aiplatform(\n cls,\n project_id: str,\n region: str,\n gcs_bucket_name: str,\n credentials: \"Credentials\",\n ) -> None:\n \"\"\"Configures the aiplatform library.\n Args:\n project_id: The GCP project id.\n region: The default location making the API calls. 
It must have\n the same location as the GCS bucket and must be regional.\n gcs_bucket_name: GCS staging location.\n credentials: The GCS Credentials object.\n \"\"\"\n from google.cloud import aiplatform\n logger.debug(\n f\"Initializing AI Platform for project {project_id} on \"\n f\"{region} and for {gcs_bucket_name}.\"\n )\n aiplatform.init(\n project=project_id,\n location=region,\n staging_bucket=gcs_bucket_name,\n credentials=credentials,\n )\n @classmethod\n def _get_default_embeddings(cls) -> TensorflowHubEmbeddings:\n \"\"\"This function returns the default embedding.\n Returns:\n Default TensorflowHubEmbeddings to use.\n \"\"\"\n return TensorflowHubEmbeddings()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/matching_engine.html"} +{"id": "e87e3913e0d0-0", "text": "Source code for langchain.vectorstores.tair\n\"\"\"Wrapper around Tair Vector.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport uuid\nfrom typing import Any, Iterable, List, Optional, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\ndef _uuid_key() -> str:\n return uuid.uuid4().hex\n[docs]class Tair(VectorStore):\n \"\"\"Wrapper around Tair Vector store.\"\"\"\n def __init__(\n self,\n embedding_function: Embeddings,\n url: str,\n index_name: str,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n search_params: Optional[dict] = None,\n **kwargs: Any,\n ):\n self.embedding_function = embedding_function\n self.index_name = index_name\n try:\n from tair import Tair as TairClient\n except ImportError:\n raise ImportError(\n \"Could not import tair python package. 
\"\n \"Please install it with `pip install tair`.\"\n )\n try:\n # connect to tair from url\n client = TairClient.from_url(url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Tair failed to connect: {e}\")\n self.client = client\n self.content_key = content_key\n self.metadata_key = metadata_key\n self.search_params = search_params\n[docs] def create_index_if_not_exist(\n self,\n dim: int,\n distance_type: str,\n index_type: str,\n data_type: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} +{"id": "e87e3913e0d0-1", "text": "index_type: str,\n data_type: str,\n **kwargs: Any,\n ) -> bool:\n index = self.client.tvs_get_index(self.index_name)\n if index is not None:\n logger.info(\"Index already exists\")\n return False\n self.client.tvs_create_index(\n self.index_name,\n dim,\n distance_type,\n index_type,\n data_type,\n **kwargs,\n )\n return True\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add texts data to an existing index.\"\"\"\n ids = []\n keys = kwargs.get(\"keys\", None)\n # Write data to tair\n pipeline = self.client.pipeline(transaction=False)\n embeddings = self.embedding_function.embed_documents(list(texts))\n for i, text in enumerate(texts):\n # Use provided key otherwise use default key\n key = keys[i] if keys else _uuid_key()\n metadata = metadatas[i] if metadatas else {}\n pipeline.tvs_hset(\n self.index_name,\n key,\n embeddings[i],\n False,\n **{\n self.content_key: text,\n self.metadata_key: json.dumps(metadata),\n },\n )\n ids.append(key)\n pipeline.execute()\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} +{"id": "e87e3913e0d0-2", "text": 
"\"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n # Creates embedding vector from user query\n embedding = self.embedding_function.embed_query(query)\n keys_and_scores = self.client.tvs_knnsearch(\n self.index_name, k, embedding, False, None, **kwargs\n )\n pipeline = self.client.pipeline(transaction=False)\n for key, _ in keys_and_scores:\n pipeline.tvs_hmget(\n self.index_name, key, self.metadata_key, self.content_key\n )\n docs = pipeline.execute()\n return [\n Document(\n page_content=d[1],\n metadata=json.loads(d[0]),\n )\n for d in docs\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Tair],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n try:\n from tair import tairvector\n except ImportError:\n raise ValueError(\n \"Could not import tair python package. 
"\n \"Please install it with `pip install tair`.\"\n )\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} +{"id": "e87e3913e0d0-3", "text": "if \"tair_url\" in kwargs:\n kwargs.pop(\"tair_url\")\n distance_type = tairvector.DistanceMetric.InnerProduct\n if \"distance_type\" in kwargs:\n distance_type = kwargs.pop(\"distance_type\")\n index_type = tairvector.IndexType.HNSW\n if \"index_type\" in kwargs:\n index_type = kwargs.pop(\"index_type\")\n data_type = tairvector.DataType.Float32\n if \"data_type\" in kwargs:\n data_type = kwargs.pop(\"data_type\")\n index_params = {}\n if \"index_params\" in kwargs:\n index_params = kwargs.pop(\"index_params\")\n search_params = {}\n if \"search_params\" in kwargs:\n search_params = kwargs.pop(\"search_params\")\n keys = None\n if \"keys\" in kwargs:\n keys = kwargs.pop(\"keys\")\n try:\n tair_vector_store = cls(\n embedding,\n url,\n index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n search_params=search_params,\n **kwargs,\n )\n except ValueError as e:\n raise ValueError(f\"tair failed to connect: {e}\")\n # Create embeddings for documents\n embeddings = embedding.embed_documents(texts)\n tair_vector_store.create_index_if_not_exist(\n len(embeddings[0]),\n distance_type,\n index_type,\n data_type,\n **index_params,\n )\n tair_vector_store.add_texts(texts, metadatas, keys=keys)\n return tair_vector_store\n[docs] @classmethod\n def from_documents(\n cls,\n documents: List[Document],\n embedding: Embeddings,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} +{"id": "e87e3913e0d0-4", "text": "cls,\n documents: List[Document],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n texts = 
[d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n return cls.from_texts(\n texts, embedding, metadatas, index_name, content_key, metadata_key, **kwargs\n )\n[docs] @staticmethod\n def drop_index(\n index_name: str = \"langchain\",\n **kwargs: Any,\n ) -> bool:\n \"\"\"\n Drop an existing index.\n Args:\n index_name (str): Name of the index to drop.\n Returns:\n bool: True if the index is dropped successfully.\n \"\"\"\n try:\n from tair import Tair as TairClient\n except ImportError:\n raise ValueError(\n \"Could not import tair python package. \"\n \"Please install it with `pip install tair`.\"\n )\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")\n try:\n if \"tair_url\" in kwargs:\n kwargs.pop(\"tair_url\")\n client = TairClient.from_url(url=url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Tair connection error: {e}\")\n # delete index\n ret = client.tvs_del_index(index_name)\n if ret == 0:\n # index not exist\n logger.info(\"Index does not exist\")\n return False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} +{"id": "e87e3913e0d0-5", "text": "# index not exist\n logger.info(\"Index does not exist\")\n return False\n return True\n[docs] @classmethod\n def from_existing_index(\n cls,\n embedding: Embeddings,\n index_name: str = \"langchain\",\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n **kwargs: Any,\n ) -> Tair:\n \"\"\"Connect to an existing Tair index.\"\"\"\n url = get_from_dict_or_env(kwargs, \"tair_url\", \"TAIR_URL\")\n search_params = {}\n if \"search_params\" in kwargs:\n search_params = kwargs.pop(\"search_params\")\n return cls(\n embedding,\n url,\n index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n search_params=search_params,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tair.html"} +{"id": "540259259ab4-0", "text": "Source code 
for langchain.vectorstores.atlas\n\"\"\"Wrapper around Atlas by Nomic.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import Any, Iterable, List, Optional, Type\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger(__name__)\n[docs]class AtlasDB(VectorStore):\n \"\"\"Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.\n To use, you should have the ``nomic`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import AtlasDB\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = AtlasDB(\"my_project\", embeddings.embed_query)\n \"\"\"\n _ATLAS_DEFAULT_ID_FIELD = \"atlas_id\"\n def __init__(\n self,\n name: str,\n embedding_function: Optional[Embeddings] = None,\n api_key: Optional[str] = None,\n description: str = \"A description for your project\",\n is_public: bool = True,\n reset_project_if_exists: bool = False,\n ) -> None:\n \"\"\"\n Initialize the Atlas Client\n Args:\n name (str): The name of your project. If the project already exists,\n it will be loaded.\n embedding_function (Optional[Callable]): An optional function used for\n embedding your data. If None, data will be embedded with\n Nomic's embed model.\n api_key (str): Your nomic API key\n description (str): A description for your project.\n is_public (bool): Whether your project is publicly accessible.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} +{"id": "540259259ab4-1", "text": "is_public (bool): Whether your project is publicly accessible.\n True by default.\n reset_project_if_exists (bool): Whether to reset this project if it\n already exists. 
Default False.\n Generally useful during development and testing.\n \"\"\"\n try:\n import nomic\n from nomic import AtlasProject\n except ImportError:\n raise ValueError(\n \"Could not import nomic python package. \"\n \"Please install it with `pip install nomic`.\"\n )\n if api_key is None:\n raise ValueError(\"No API key provided. Sign up at atlas.nomic.ai!\")\n nomic.login(api_key)\n self._embedding_function = embedding_function\n modality = \"text\"\n if self._embedding_function is not None:\n modality = \"embedding\"\n # Check if the project exists, create it if not\n self.project = AtlasProject(\n name=name,\n description=description,\n modality=modality,\n is_public=is_public,\n reset_project_if_exists=reset_project_if_exists,\n unique_id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD,\n )\n self.project._latest_project_state()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n refresh: bool = True,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} +{"id": "540259259ab4-2", "text": "metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]]): An optional list of ids.\n refresh(bool): Whether or not to refresh indices with the updated data.\n Default True.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n if (\n metadatas is not None\n and len(metadatas) > 0\n and \"text\" in metadatas[0].keys()\n ):\n raise ValueError(\"Cannot accept key text in metadata!\")\n texts = list(texts)\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n # Embedding upload case\n if self._embedding_function is not None:\n _embeddings = 
self._embedding_function.embed_documents(texts)\n embeddings = np.stack(_embeddings)\n if metadatas is None:\n data = [\n {AtlasDB._ATLAS_DEFAULT_ID_FIELD: ids[i], \"text\": texts[i]}\n for i, _ in enumerate(texts)\n ]\n else:\n for i in range(len(metadatas)):\n metadatas[i][AtlasDB._ATLAS_DEFAULT_ID_FIELD] = ids[i]\n metadatas[i][\"text\"] = texts[i]\n data = metadatas\n self.project._validate_map_data_inputs(\n [], id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD, data=data\n )\n with self.project.wait_for_project_lock():\n self.project.add_embeddings(embeddings=embeddings, data=data)\n # Text upload case\n else:\n if metadatas is None:\n data = [", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} +{"id": "540259259ab4-3", "text": "else:\n if metadatas is None:\n data = [\n {\"text\": text, AtlasDB._ATLAS_DEFAULT_ID_FIELD: ids[i]}\n for i, text in enumerate(texts)\n ]\n else:\n for i, text in enumerate(texts):\n metadatas[i][\"text\"] = text\n metadatas[i][AtlasDB._ATLAS_DEFAULT_ID_FIELD] = ids[i]\n data = metadatas\n self.project._validate_map_data_inputs(\n [], id_field=AtlasDB._ATLAS_DEFAULT_ID_FIELD, data=data\n )\n with self.project.wait_for_project_lock():\n self.project.add_text(data)\n if refresh:\n if len(self.project.indices) > 0:\n with self.project.wait_for_project_lock():\n self.project.rebuild_maps()\n return ids\n[docs] def create_index(self, **kwargs: Any) -> Any:\n \"\"\"Creates an index in your project.\n See\n https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index\n for full detail.\n \"\"\"\n with self.project.wait_for_project_lock():\n return self.project.create_index(**kwargs)\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with AtlasDB\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. 
Defaults to 4.\n Returns:\n List[Document]: List of documents most similar to the query text.\n \"\"\"\n if self._embedding_function is None:\n raise NotImplementedError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} +{"id": "540259259ab4-4", "text": "\"\"\"\n if self._embedding_function is None:\n raise NotImplementedError(\n \"AtlasDB requires an embedding_function for text similarity search!\"\n )\n _embedding = self._embedding_function.embed_documents([query])[0]\n embedding = np.array(_embedding).reshape(1, -1)\n with self.project.wait_for_project_lock():\n neighbors, _ = self.project.projections[0].vector_search(\n queries=embedding, k=k\n )\n datas = self.project.get_data(ids=neighbors[0])\n docs = [\n Document(page_content=datas[i][\"text\"], metadata=datas[i])\n for i, neighbor in enumerate(neighbors)\n ]\n return docs\n[docs] @classmethod\n def from_texts(\n cls: Type[AtlasDB],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n name: Optional[str] = None,\n api_key: Optional[str] = None,\n description: str = \"A description for your project\",\n is_public: bool = True,\n reset_project_if_exists: bool = False,\n index_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> AtlasDB:\n \"\"\"Create an AtlasDB vectorstore from raw documents.\n Args:\n texts (List[str]): The list of texts to ingest.\n name (str): Name of the project to create.\n api_key (str): Your nomic API key,\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} +{"id": "540259259ab4-5", "text": "ids (Optional[List[str]]): Optional list of document IDs. 
If None,\n ids will be auto created\n description (str): A description for your project.\n is_public (bool): Whether your project is publicly accessible.\n True by default.\n reset_project_if_exists (bool): Whether to reset this project if it\n already exists. Default False.\n Generally useful during development and testing.\n index_kwargs (Optional[dict]): Dict of kwargs for index creation.\n See https://docs.nomic.ai/atlas_api.html\n Returns:\n AtlasDB: Nomic's neural database and finest rhizomatic instrument\n \"\"\"\n if name is None or api_key is None:\n raise ValueError(\"`name` and `api_key` cannot be None.\")\n # Inject relevant kwargs\n all_index_kwargs = {\"name\": name + \"_index\", \"indexed_field\": \"text\"}\n if index_kwargs is not None:\n for k, v in index_kwargs.items():\n all_index_kwargs[k] = v\n # Build project\n atlasDB = cls(\n name,\n embedding_function=embedding,\n api_key=api_key,\n description=description,\n is_public=is_public,\n reset_project_if_exists=reset_project_if_exists,\n )\n with atlasDB.project.wait_for_project_lock():\n atlasDB.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n atlasDB.create_index(**all_index_kwargs)\n return atlasDB\n[docs] @classmethod\n def from_documents(\n cls: Type[AtlasDB],\n documents: List[Document],\n embedding: Optional[Embeddings] = None,\n ids: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} +{"id": "540259259ab4-6", "text": "ids: Optional[List[str]] = None,\n name: Optional[str] = None,\n api_key: Optional[str] = None,\n persist_directory: Optional[str] = None,\n description: str = \"A description for your project\",\n is_public: bool = True,\n reset_project_if_exists: bool = False,\n index_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> AtlasDB:\n \"\"\"Create an AtlasDB vectorstore from a list of documents.\n Args:\n name (str): Name of the collection to create.\n api_key (str): 
Your nomic API key,\n documents (List[Document]): List of documents to add to the vectorstore.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n ids (Optional[List[str]]): Optional list of document IDs. If None,\n ids will be auto created\n description (str): A description for your project.\n is_public (bool): Whether your project is publicly accessible.\n True by default.\n reset_project_if_exists (bool): Whether to reset this project if\n it already exists. Default False.\n Generally useful during development and testing.\n index_kwargs (Optional[dict]): Dict of kwargs for index creation.\n See https://docs.nomic.ai/atlas_api.html\n Returns:\n AtlasDB: Nomic's neural database and finest rhizomatic instrument\n \"\"\"\n if name is None or api_key is None:\n raise ValueError(\"`name` and `api_key` cannot be None.\")\n texts = [doc.page_content for doc in documents]\n metadatas = [doc.metadata for doc in documents]\n return cls.from_texts(\n name=name,\n api_key=api_key,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} +{"id": "540259259ab4-7", "text": "return cls.from_texts(\n name=name,\n api_key=api_key,\n texts=texts,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,\n description=description,\n is_public=is_public,\n reset_project_if_exists=reset_project_if_exists,\n index_kwargs=index_kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/atlas.html"} +{"id": "3836f701208f-0", "text": "Source code for langchain.vectorstores.singlestoredb\n\"\"\"Wrapper around SingleStore DB.\"\"\"\nfrom __future__ import annotations\nimport enum\nimport json\nfrom typing import (\n Any,\n ClassVar,\n Collection,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n)\nfrom sqlalchemy.pool import QueuePool\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import 
VectorStore, VectorStoreRetriever\nclass DistanceStrategy(str, enum.Enum):\n \"\"\"Enumerator of the Distance strategies for SingleStoreDB.\"\"\"\n EUCLIDEAN_DISTANCE = \"EUCLIDEAN_DISTANCE\"\n DOT_PRODUCT = \"DOT_PRODUCT\"\nDEFAULT_DISTANCE_STRATEGY = DistanceStrategy.DOT_PRODUCT\nORDERING_DIRECTIVE: dict = {\n DistanceStrategy.EUCLIDEAN_DISTANCE: \"\",\n DistanceStrategy.DOT_PRODUCT: \"DESC\",\n}\n[docs]class SingleStoreDB(VectorStore):\n \"\"\"\n This class serves as a Pythonic interface to the SingleStore DB database.\n The prerequisite for using this class is the installation of the ``singlestoredb``\n Python package.\n The SingleStoreDB vectorstore can be created by providing an embedding function and\n the relevant parameters for the database connection, connection pool, and\n optionally, the names of the table and the fields to use.\n \"\"\"\n def _get_connection(self: SingleStoreDB) -> Any:\n try:\n import singlestoredb as s2\n except ImportError:\n raise ImportError(\n \"Could not import singlestoredb python package. 
\"\n \"Please install it with `pip install singlestoredb`.\"\n )\n return s2.connect(**self.connection_kwargs)\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-1", "text": "def __init__(\n self,\n embedding: Embeddings,\n *,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n table_name: str = \"embeddings\",\n content_field: str = \"content\",\n metadata_field: str = \"metadata\",\n vector_field: str = \"vector\",\n pool_size: int = 5,\n max_overflow: int = 10,\n timeout: float = 30,\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\n Args:\n embedding (Embeddings): A text embedding model.\n distance_strategy (DistanceStrategy, optional):\n Determines the strategy employed for calculating\n the distance between vectors in the embedding space.\n Defaults to DOT_PRODUCT.\n Available options are:\n - DOT_PRODUCT: Computes the scalar product of two vectors.\n This is the default behavior\n - EUCLIDEAN_DISTANCE: Computes the Euclidean distance between\n two vectors. This metric considers the geometric distance in\n the vector space, and might be more suitable for embeddings\n that rely on spatial relationships.\n table_name (str, optional): Specifies the name of the table in use.\n Defaults to \"embeddings\".\n content_field (str, optional): Specifies the field to store the content.\n Defaults to \"content\".\n metadata_field (str, optional): Specifies the field to store metadata.\n Defaults to \"metadata\".\n vector_field (str, optional): Specifies the field to store the vector.\n Defaults to \"vector\".\n Following arguments pertain to the connection pool:\n pool_size (int, optional): Determines the number of active connections in\n the pool. 
Defaults to 5.\n max_overflow (int, optional): Determines the maximum number of connections", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-2", "text": "max_overflow (int, optional): Determines the maximum number of connections\n allowed beyond the pool_size. Defaults to 10.\n timeout (float, optional): Specifies the maximum wait time in seconds for\n establishing a connection. Defaults to 30.\n Following arguments pertain to the database connection:\n host (str, optional): Specifies the hostname, IP address, or URL for the\n database connection. The default scheme is \"mysql\".\n user (str, optional): Database username.\n password (str, optional): Database password.\n port (int, optional): Database port. Defaults to 3306 for non-HTTP\n connections, 80 for HTTP connections, and 443 for HTTPS connections.\n database (str, optional): Database name.\n Additional optional arguments provide further customization over the\n database connection:\n pure_python (bool, optional): Toggles the connector mode. 
If True,\n operates in pure Python mode.\n local_infile (bool, optional): Allows local file uploads.\n charset (str, optional): Specifies the character set for string values.\n ssl_key (str, optional): Specifies the path of the file containing the SSL\n key.\n ssl_cert (str, optional): Specifies the path of the file containing the SSL\n certificate.\n ssl_ca (str, optional): Specifies the path of the file containing the SSL\n certificate authority.\n ssl_cipher (str, optional): Sets the SSL cipher list.\n ssl_disabled (bool, optional): Disables SSL usage.\n ssl_verify_cert (bool, optional): Verifies the server's certificate.\n Automatically enabled if ``ssl_ca`` is specified.\n ssl_verify_identity (bool, optional): Verifies the server's identity.\n conv (dict[int, Callable], optional): A dictionary of data conversion", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-3", "text": "conv (dict[int, Callable], optional): A dictionary of data conversion\n functions.\n credential_type (str, optional): Specifies the type of authentication to\n use: auth.PASSWORD, auth.JWT, or auth.BROWSER_SSO.\n autocommit (bool, optional): Enables autocommits.\n results_type (str, optional): Determines the structure of the query results:\n tuples, namedtuples, dicts.\n results_format (str, optional): Deprecated. This option has been renamed to\n results_type.\n Examples:\n Basic Usage:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB\n vectorstore = SingleStoreDB(\n OpenAIEmbeddings(),\n host=\"https://user:password@127.0.0.1:3306/database\"\n )\n Advanced Usage:\n .. 
code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB\n vectorstore = SingleStoreDB(\n OpenAIEmbeddings(),\n distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,\n host=\"127.0.0.1\",\n port=3306,\n user=\"user\",\n password=\"password\",\n database=\"db\",\n table_name=\"my_custom_table\",\n pool_size=10,\n timeout=60,\n )\n Using environment variables:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n from langchain.vectorstores import SingleStoreDB\n os.environ['SINGLESTOREDB_URL'] = 'me:p455w0rd@s2-host.com/my_db'\n vectorstore = SingleStoreDB(OpenAIEmbeddings())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-4", "text": "vectorstore = SingleStoreDB(OpenAIEmbeddings())\n \"\"\"\n self.embedding = embedding\n self.distance_strategy = distance_strategy\n self.table_name = table_name\n self.content_field = content_field\n self.metadata_field = metadata_field\n self.vector_field = vector_field\n \"\"\"Pass the rest of the kwargs to the connection.\"\"\"\n self.connection_kwargs = kwargs\n \"\"\"Add program name and version to connection attributes.\"\"\"\n if \"conn_attrs\" not in self.connection_kwargs:\n self.connection_kwargs[\"conn_attrs\"] = dict()\n if \"program_name\" not in self.connection_kwargs[\"conn_attrs\"]:\n self.connection_kwargs[\"conn_attrs\"][\n \"program_name\"\n ] = \"langchain python sdk\"\n self.connection_kwargs[\"conn_attrs\"][\n \"program_version\"\n ] = \"0.0.205\" # the version of SingleStoreDB VectorStore implementation\n \"\"\"Create connection pool.\"\"\"\n self.connection_pool = QueuePool(\n self._get_connection,\n max_overflow=max_overflow,\n pool_size=pool_size,\n timeout=timeout,\n )\n self._create_table()\n def _create_table(self: SingleStoreDB) -> None:\n \"\"\"Create table if it doesn't exist.\"\"\"\n conn = self.connection_pool.connect()\n try:\n 
cur = conn.cursor()\n try:\n cur.execute(\n \"\"\"CREATE TABLE IF NOT EXISTS {}\n ({} TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,\n {} BLOB, {} JSON);\"\"\".format(\n self.table_name,\n self.content_field,\n self.vector_field,\n self.metadata_field,\n ),\n )\n finally:\n cur.close()\n finally:\n conn.close()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-5", "text": "finally:\n cur.close()\n finally:\n conn.close()\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n embeddings: Optional[List[List[float]]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add more texts to the vectorstore.\n Args:\n texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n Defaults to None.\n embeddings (Optional[List[List[float]]], optional): Optional pre-generated\n embeddings. 
Defaults to None.\n Returns:\n List[str]: empty list\n \"\"\"\n conn = self.connection_pool.connect()\n try:\n cur = conn.cursor()\n try:\n # Write data to singlestore db\n for i, text in enumerate(texts):\n # Use provided values by default or fallback\n metadata = metadatas[i] if metadatas else {}\n embedding = (\n embeddings[i]\n if embeddings\n else self.embedding.embed_documents([text])[0]\n )\n cur.execute(\n \"INSERT INTO {} VALUES (%s, JSON_ARRAY_PACK(%s), %s)\".format(\n self.table_name\n ),\n (\n text,\n \"[{}]\".format(\",\".join(map(str, embedding))),\n json.dumps(metadata),\n ),\n )\n finally:\n cur.close()\n finally:\n conn.close()\n return []\n[docs] def similarity_search(\n self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-6", "text": ") -> List[Document]:\n \"\"\"Returns the most similar indexed documents to the query text.\n Uses cosine similarity.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n filter (dict): A dictionary of metadata fields and values to filter by.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n Examples:\n .. 
code-block:: python\n from langchain.vectorstores import SingleStoreDB\n from langchain.embeddings import OpenAIEmbeddings\n s2 = SingleStoreDB.from_documents(\n docs,\n OpenAIEmbeddings(),\n host=\"username:password@localhost:3306/database\"\n )\n s2.similarity_search(\"query text\", 1,\n {\"metadata_field\": \"metadata_value\"})\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query=query, k=k, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, filter: Optional[dict] = None\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query. Uses cosine similarity.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: A dictionary of metadata fields and values to filter by.\n Defaults to None.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n # Creates embedding vector from user query\n embedding = self.embedding.embed_query(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-7", "text": "# Creates embedding vector from user query\n embedding = self.embedding.embed_query(query)\n conn = self.connection_pool.connect()\n result = []\n where_clause: str = \"\"\n where_clause_values: List[Any] = []\n if filter:\n where_clause = \"WHERE \"\n arguments = []\n def build_where_clause(\n where_clause_values: List[Any],\n sub_filter: dict,\n prefix_args: List[str] = [],\n ) -> None:\n for key in sub_filter.keys():\n if isinstance(sub_filter[key], dict):\n build_where_clause(\n where_clause_values, sub_filter[key], prefix_args + [key]\n )\n else:\n arguments.append(\n \"JSON_EXTRACT_JSON({}, {}) = %s\".format(\n self.metadata_field,\n \", \".join([\"%s\"] * (len(prefix_args) + 1)),\n )\n )\n where_clause_values += prefix_args + [key]\n 
where_clause_values.append(json.dumps(sub_filter[key]))\n build_where_clause(where_clause_values, filter)\n where_clause += \" AND \".join(arguments)\n try:\n cur = conn.cursor()\n try:\n cur.execute(\n \"\"\"SELECT {}, {}, {}({}, JSON_ARRAY_PACK(%s)) as __score\n FROM {} {} ORDER BY __score {} LIMIT %s\"\"\".format(\n self.content_field,\n self.metadata_field,\n self.distance_strategy,\n self.vector_field,\n self.table_name,\n where_clause,\n ORDERING_DIRECTIVE[self.distance_strategy],\n ),\n (\"[{}]\".format(\",\".join(map(str, embedding))),)\n + tuple(where_clause_values)\n + (k,),\n )\n for row in cur.fetchall():", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-8", "text": "+ (k,),\n )\n for row in cur.fetchall():\n doc = Document(page_content=row[0], metadata=row[1])\n result.append((doc, float(row[2])))\n finally:\n cur.close()\n finally:\n conn.close()\n return result\n[docs] @classmethod\n def from_texts(\n cls: Type[SingleStoreDB],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,\n table_name: str = \"embeddings\",\n content_field: str = \"content\",\n metadata_field: str = \"metadata\",\n vector_field: str = \"vector\",\n pool_size: int = 5,\n max_overflow: int = 10,\n timeout: float = 30,\n **kwargs: Any,\n ) -> SingleStoreDB:\n \"\"\"Create a SingleStoreDB vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new table for the embeddings in SingleStoreDB.\n 3. Adds the documents to the newly created table.\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import SingleStoreDB\n from langchain.embeddings import OpenAIEmbeddings\n s2 = SingleStoreDB.from_texts(\n texts,\n OpenAIEmbeddings(),\n host=\"username:password@localhost:3306/database\"\n )\n \"\"\"\n instance = cls(\n embedding,\n distance_strategy=distance_strategy,\n table_name=table_name,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "3836f701208f-9", "text": "embedding,\n distance_strategy=distance_strategy,\n table_name=table_name,\n content_field=content_field,\n metadata_field=metadata_field,\n vector_field=vector_field,\n pool_size=pool_size,\n max_overflow=max_overflow,\n timeout=timeout,\n **kwargs,\n )\n instance.add_texts(texts, metadatas, embedding.embed_documents(texts), **kwargs)\n return instance\n[docs] def as_retriever(self, **kwargs: Any) -> SingleStoreDBRetriever:\n return SingleStoreDBRetriever(vectorstore=self, **kwargs)\nclass SingleStoreDBRetriever(VectorStoreRetriever):\n \"\"\"Retriever for SingleStoreDB vector stores.\"\"\"\n vectorstore: SingleStoreDB\n k: int = 4\n allowed_search_types: ClassVar[Collection[str]] = (\"similarity\",)\n def get_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, k=self.k)\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\n \"SingleStoreDBVectorStoreRetriever does not support async\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/singlestoredb.html"} +{"id": "b8d4bdf4b23b-0", "text": "Source code for langchain.vectorstores.weaviate\n\"\"\"Wrapper around weaviate vector database.\"\"\"\nfrom __future__ import annotations\nimport datetime\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, 
Tuple, Type\nfrom uuid import uuid4\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\ndef _default_schema(index_name: str) -> Dict:\n return {\n \"class\": index_name,\n \"properties\": [\n {\n \"name\": \"text\",\n \"dataType\": [\"text\"],\n }\n ],\n }\ndef _create_weaviate_client(**kwargs: Any) -> Any:\n client = kwargs.get(\"client\")\n if client is not None:\n return client\n weaviate_url = get_from_dict_or_env(kwargs, \"weaviate_url\", \"WEAVIATE_URL\")\n try:\n # the weaviate api key param should not be mandatory\n weaviate_api_key = get_from_dict_or_env(\n kwargs, \"weaviate_api_key\", \"WEAVIATE_API_KEY\", None\n )\n except ValueError:\n weaviate_api_key = None\n try:\n import weaviate\n except ImportError:\n raise ValueError(\n \"Could not import weaviate python package. \"\n \"Please install it with `pip install weaviate-client`\"\n )\n auth = (\n weaviate.auth.AuthApiKey(api_key=weaviate_api_key)\n if weaviate_api_key is not None\n else None\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-1", "text": "if weaviate_api_key is not None\n else None\n )\n client = weaviate.Client(weaviate_url, auth_client_secret=auth)\n return client\ndef _default_score_normalizer(val: float) -> float:\n return 1 - 1 / (1 + np.exp(val))\ndef _json_serializable(value: Any) -> Any:\n if isinstance(value, datetime.datetime):\n return value.isoformat()\n return value\n[docs]class Weaviate(VectorStore):\n \"\"\"Wrapper around Weaviate vector database.\n To use, you should have the ``weaviate-client`` python package installed.\n Example:\n .. 
code-block:: python\n import weaviate\n from langchain.vectorstores import Weaviate\n client = weaviate.Client(url=os.environ[\"WEAVIATE_URL\"], ...)\n weaviate = Weaviate(client, index_name, text_key)\n \"\"\"\n def __init__(\n self,\n client: Any,\n index_name: str,\n text_key: str,\n embedding: Optional[Embeddings] = None,\n attributes: Optional[List[str]] = None,\n relevance_score_fn: Optional[\n Callable[[float], float]\n ] = _default_score_normalizer,\n by_text: bool = True,\n ):\n \"\"\"Initialize with Weaviate client.\"\"\"\n try:\n import weaviate\n except ImportError:\n raise ValueError(\n \"Could not import weaviate python package. \"\n \"Please install it with `pip install weaviate-client`.\"\n )\n if not isinstance(client, weaviate.Client):\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-2", "text": ")\n if not isinstance(client, weaviate.Client):\n raise ValueError(\n f\"client should be an instance of weaviate.Client, got {type(client)}\"\n )\n self._client = client\n self._index_name = index_name\n self._embedding = embedding\n self._text_key = text_key\n self._query_attrs = [self._text_key]\n self._relevance_score_fn = relevance_score_fn\n self._by_text = by_text\n if attributes is not None:\n self._query_attrs.extend(attributes)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Upload texts with metadata (properties) to Weaviate.\"\"\"\n from weaviate.util import get_valid_uuid\n ids = []\n with self._client.batch as batch:\n for i, text in enumerate(texts):\n data_properties = {self._text_key: text}\n if metadatas is not None:\n for key, val in metadatas[i].items():\n data_properties[key] = _json_serializable(val)\n # Allow for ids (consistent w/ other methods)\n # # Or uuids (backwards compatible w/ existing arg)\n # If the UUID of one of the objects already 
exists\n # then the existing object will be replaced by the new object.\n _id = get_valid_uuid(uuid4())\n if \"uuids\" in kwargs:\n _id = kwargs[\"uuids\"][i]\n elif \"ids\" in kwargs:\n _id = kwargs[\"ids\"][i]\n if self._embedding is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-3", "text": "if self._embedding is not None:\n vector = self._embedding.embed_documents([text])[0]\n else:\n vector = None\n batch.add_data_object(\n data_object=data_properties,\n class_name=self._index_name,\n uuid=_id,\n vector=vector,\n )\n ids.append(_id)\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n if self._by_text:\n return self.similarity_search_by_text(query, k, **kwargs)\n else:\n if self._embedding is None:\n raise ValueError(\n \"_embedding cannot be None for similarity_search when \"\n \"_by_text=False\"\n )\n embedding = self._embedding.embed_query(query)\n return self.similarity_search_by_vector(embedding, k, **kwargs)\n[docs] def similarity_search_by_text(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n content: Dict[str, Any] = {\"concepts\": [query]}\n if kwargs.get(\"search_distance\"):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-4", "text": "if kwargs.get(\"search_distance\"):\n content[\"certainty\"] = kwargs.get(\"search_distance\")\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n if kwargs.get(\"additional\"):\n query_obj = query_obj.with_additional(kwargs.get(\"additional\"))\n result = query_obj.with_near_text(content).with_limit(k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Look up similar documents by embedding vector in Weaviate.\"\"\"\n vector = {\"vector\": embedding}\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n if kwargs.get(\"additional\"):\n query_obj = query_obj.with_additional(kwargs.get(\"additional\"))\n result = query_obj.with_near_vector(vector).with_limit(k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-5", "text": 
"docs.append(Document(page_content=text, metadata=res))\n return docs\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if self._embedding is not None:\n embedding = self._embedding.embed_query(query)\n else:\n raise ValueError(\n \"max_marginal_relevance_search requires a suitable Embeddings object\"\n )\n return self.max_marginal_relevance_search_by_vector(\n embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, **kwargs\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-6", "text": "**kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n vector = {\"vector\": embedding}\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if kwargs.get(\"where_filter\"):\n query_obj = query_obj.with_where(kwargs.get(\"where_filter\"))\n results = (\n query_obj.with_additional(\"vector\")\n .with_near_vector(vector)\n .with_limit(fetch_k)\n .do()\n )\n payload = results[\"data\"][\"Get\"][self._index_name]\n embeddings = [result[\"_additional\"][\"vector\"] for result in payload]\n mmr_selected = maximal_marginal_relevance(\n np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult\n )\n docs = []\n for idx in mmr_selected:\n text = payload[idx].pop(self._text_key)\n payload[idx].pop(\"_additional\")\n meta = payload[idx]\n docs.append(Document(page_content=text, metadata=meta))\n return docs\n[docs] def similarity_search_with_score(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-7", "text": "return docs\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"\n Return list of documents most similar to the query\n text and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n if self._embedding is None:\n raise ValueError(\n \"_embedding cannot be None for similarity_search_with_score\"\n )\n content: Dict[str, Any] = {\"concepts\": [query]}\n if kwargs.get(\"search_distance\"):\n content[\"certainty\"] = kwargs.get(\"search_distance\")\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if not self._by_text:\n embedding = 
self._embedding.embed_query(query)\n vector = {\"vector\": embedding}\n result = (\n query_obj.with_near_vector(vector)\n .with_limit(k)\n .with_additional(\"vector\")\n .do()\n )\n else:\n result = (\n query_obj.with_near_text(content)\n .with_limit(k)\n .with_additional(\"vector\")\n .do()\n )\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs_and_scores = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n score = np.dot(\n res[\"_additional\"][\"vector\"], self._embedding.embed_query(query)\n )\n docs_and_scores.append((Document(page_content=text, metadata=res), score))\n return docs_and_scores\n def _similarity_search_with_relevance_scores(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-8", "text": "return docs_and_scores\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"\n if self._relevance_score_fn is None:\n raise ValueError(\n \"relevance_score_fn must be provided to\"\n \" Weaviate constructor to normalize scores\"\n )\n docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)\n return [\n (doc, self._relevance_score_fn(score)) for doc, score in docs_and_scores\n ]\n[docs] @classmethod\n def from_texts(\n cls: Type[Weaviate],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> Weaviate:\n \"\"\"Construct Weaviate wrapper from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in the Weaviate instance.\n 3. Adds the documents to the newly created Weaviate index.\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain.vectorstores.weaviate import Weaviate\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n weaviate = Weaviate.from_texts(\n texts,\n embeddings,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-9", "text": "weaviate = Weaviate.from_texts(\n texts,\n embeddings,\n weaviate_url=\"http://localhost:8080\"\n )\n \"\"\"\n client = _create_weaviate_client(**kwargs)\n from weaviate.util import get_valid_uuid\n index_name = kwargs.get(\"index_name\", f\"LangChain_{uuid4().hex}\")\n embeddings = embedding.embed_documents(texts) if embedding else None\n text_key = \"text\"\n schema = _default_schema(index_name)\n attributes = list(metadatas[0].keys()) if metadatas else None\n # check whether the index already exists\n if not client.schema.contains(schema):\n client.schema.create_class(schema)\n with client.batch as batch:\n for i, text in enumerate(texts):\n data_properties = {\n text_key: text,\n }\n if metadatas is not None:\n for key in metadatas[i].keys():\n data_properties[key] = metadatas[i][key]\n # If the UUID of one of the objects already exists\n # then the existing object will be replaced by the new object.\n if \"uuids\" in kwargs:\n _id = kwargs[\"uuids\"][i]\n else:\n _id = get_valid_uuid(uuid4())\n # if an embedding strategy is not provided, we let\n # weaviate create the embedding. 
Note that this will only\n # work if weaviate has been installed with a vectorizer module\n # like text2vec-contextionary for example\n params = {\n \"uuid\": _id,\n \"data_object\": data_properties,\n \"class_name\": index_name,\n }\n if embeddings is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "b8d4bdf4b23b-10", "text": "\"class_name\": index_name,\n }\n if embeddings is not None:\n params[\"vector\"] = embeddings[i]\n batch.add_data_object(**params)\n batch.flush()\n relevance_score_fn = kwargs.get(\"relevance_score_fn\")\n by_text: bool = kwargs.get(\"by_text\", False)\n return cls(\n client,\n index_name,\n text_key,\n embedding=embedding,\n attributes=attributes,\n relevance_score_fn=relevance_score_fn,\n by_text=by_text,\n )\n[docs] def delete(self, ids: List[str]) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n # TODO: Check if this can be done in bulk\n for id in ids:\n self._client.data_object.delete(uuid=id)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html"} +{"id": "5652a56a6ff1-0", "text": "Source code for langchain.vectorstores.myscale\n\"\"\"Wrapper around MyScale vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nfrom hashlib import sha1\nfrom threading import Thread\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nfrom pydantic import BaseSettings\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\ndef has_mul_sub_str(s: str, *args: Any) -> bool:\n \"\"\"\n Check if a string contains multiple substrings.\n Args:\n s: string to check.\n *args: substrings to check.\n Returns:\n True if all substrings are in the string, False otherwise.\n \"\"\"\n for a in args:\n if a not in s:\n return False\n return 
True\n[docs]class MyScaleSettings(BaseSettings):\n \"\"\"MyScale Client Configuration\n Attribute:\n myscale_host (str) : A URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-1", "text": "column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be the same size as the number of columns. For example:\n .. 
code-block:: python\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n Defaults to identity map.\n \"\"\"\n host: str = \"localhost\"\n port: int = 8443\n username: Optional[str] = None\n password: Optional[str] = None\n index_type: str = \"IVFFLAT\"\n index_param: Optional[Dict[str, str]] = None\n column_map: Dict[str, str] = {\n \"id\": \"id\",\n \"text\": \"text\",\n \"vector\": \"vector\",\n \"metadata\": \"metadata\",\n }\n database: str = \"default\"\n table: str = \"langchain\"\n metric: str = \"cosine\"\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)\n class Config:\n env_file = \".env\"\n env_prefix = \"myscale_\"\n env_file_encoding = \"utf-8\"\n[docs]class MyScale(VectorStore):\n \"\"\"Wrapper around MyScale vector database\n You need a `clickhouse-connect` python package, and a valid account\n to connect to MyScale.\n MyScale can not only search with simple vector indexes,\n it also supports complex query with multiple conditions,\n constraints and even sub-queries.\n For more information, please visit", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-2", "text": "constraints and even sub-queries.\n For more information, please visit\n [myscale official site](https://docs.myscale.com/en/overview/)\n \"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n config: Optional[MyScaleSettings] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"MyScale Wrapper to LangChain\n embedding_function (Embeddings):\n config (MyScaleSettings): Configuration to MyScale Client\n Other keyword arguments will pass into\n [clickhouse-connect](https://docs.myscale.com/)\n \"\"\"\n try:\n from clickhouse_connect import get_client\n except ImportError:\n raise ValueError(\n \"Could not import clickhouse connect python package. 
\"\n \"Please install it with `pip install clickhouse-connect`.\"\n )\n try:\n from tqdm import tqdm\n self.pgbar = tqdm\n except ImportError:\n # Just in case if tqdm is not installed\n self.pgbar = lambda x: x\n super().__init__()\n if config is not None:\n self.config = config\n else:\n self.config = MyScaleSettings()\n assert self.config\n assert self.config.host and self.config.port\n assert (\n self.config.column_map\n and self.config.database\n and self.config.table\n and self.config.metric\n )\n for k in [\"id\", \"vector\", \"text\", \"metadata\"]:\n assert k in self.config.column_map\n assert self.config.metric in [\"ip\", \"cosine\", \"l2\"]\n # initialize the schema\n dim = len(embedding.embed_query(\"try this out\"))\n index_params = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-3", "text": "dim = len(embedding.embed_query(\"try this out\"))\n index_params = (\n \", \" + \",\".join([f\"'{k}={v}'\" for k, v in self.config.index_param.items()])\n if self.config.index_param\n else \"\"\n )\n schema_ = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}(\n {self.config.column_map['id']} String,\n {self.config.column_map['text']} String,\n {self.config.column_map['vector']} Array(Float32),\n {self.config.column_map['metadata']} JSON,\n CONSTRAINT cons_vec_len CHECK length(\\\n {self.config.column_map['vector']}) = {dim},\n VECTOR INDEX vidx {self.config.column_map['vector']} \\\n TYPE {self.config.index_type}(\\\n 'metric_type={self.config.metric}'{index_params})\n ) ENGINE = MergeTree ORDER BY {self.config.column_map['id']}\n \"\"\"\n self.dim = dim\n self.BS = \"\\\\\"\n self.must_escape = (\"\\\\\", \"'\")\n self.embedding_function = embedding.embed_query\n self.dist_order = \"ASC\" if self.config.metric in [\"cosine\", \"l2\"] else \"DESC\"\n # Create a connection to myscale\n self.client = get_client(\n host=self.config.host,\n 
port=self.config.port,\n username=self.config.username,\n password=self.config.password,\n **kwargs,\n )\n self.client.command(\"SET allow_experimental_object_type=1\")\n self.client.command(schema_)\n[docs] def escape_str(self, value: str) -> str:\n return \"\".join(f\"{self.BS}{c}\" if c in self.must_escape else c for c in value)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-4", "text": "def _build_istr(self, transac: Iterable, column_names: Iterable[str]) -> str:\n ks = \",\".join(column_names)\n _data = []\n for n in transac:\n n = \",\".join([f\"'{self.escape_str(str(_n))}'\" for _n in n])\n _data.append(f\"({n})\")\n i_str = f\"\"\"\n INSERT INTO TABLE \n {self.config.database}.{self.config.table}({ks})\n VALUES\n {','.join(_data)}\n \"\"\"\n return i_str\n def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:\n _i_str = self._build_istr(transac, column_names)\n self.client.command(_i_str)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n batch_size: int = 32,\n ids: Optional[Iterable[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n ids: Optional list of ids to associate with the texts.\n batch_size: Batch size of insertion\n metadata: Optional column data to be inserted\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n # Embed and create the documents\n ids = ids or [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n colmap_ = self.config.column_map\n transac = []\n column_names = {\n colmap_[\"id\"]: ids,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-5", "text": "column_names = {\n colmap_[\"id\"]: ids,\n colmap_[\"text\"]: texts,\n colmap_[\"vector\"]: 
map(self.embedding_function, texts),\n }\n metadatas = metadatas or [{} for _ in texts]\n column_names[colmap_[\"metadata\"]] = map(json.dumps, metadatas)\n assert len(set(colmap_) - set(column_names)) >= 0\n keys, values = zip(*column_names.items())\n try:\n t = None\n for v in self.pgbar(\n zip(*values), desc=\"Inserting data...\", total=len(metadatas)\n ):\n assert len(v[keys.index(self.config.column_map[\"vector\"])]) == self.dim\n transac.append(v)\n if len(transac) == batch_size:\n if t:\n t.join()\n t = Thread(target=self._insert, args=[transac, keys])\n t.start()\n transac = []\n if len(transac) > 0:\n if t:\n t.join()\n self._insert(transac, keys)\n return [i for i in ids]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n config: Optional[MyScaleSettings] = None,\n text_ids: Optional[Iterable[str]] = None,\n batch_size: int = 32,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-6", "text": "batch_size: int = 32,\n **kwargs: Any,\n ) -> MyScale:\n \"\"\"Create Myscale wrapper with existing texts\n Args:\n embedding_function (Embeddings): Function to extract text embedding\n texts (Iterable[str]): List or tuple of strings to be added\n config (MyScaleSettings, Optional): Myscale configuration\n text_ids (Optional[Iterable], optional): IDs for the texts.\n Defaults to None.\n batch_size (int, optional): Batch size when transmitting data to MyScale.\n Defaults to 32.\n metadata (List[dict], optional): metadata to texts. 
Defaults to None.\n Other keyword arguments will pass into\n [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)\n Returns:\n MyScale Index\n \"\"\"\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)\n return ctx\n def __repr__(self) -> str:\n \"\"\"Text representation for myscale, prints backends, username and schemas.\n Easy to use with `str(Myscale())`\n Returns:\n repr: string to show connection info and data schema\n \"\"\"\n _repr = f\"\\033[92m\\033[1m{self.config.database}.{self.config.table} @ \"\n _repr += f\"{self.config.host}:{self.config.port}\\033[0m\\n\\n\"\n _repr += f\"\\033[1musername: {self.config.username}\\033[0m\\n\\nTable Schema:\\n\"\n _repr += \"-\" * 51 + \"\\n\"\n for r in self.client.query(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-7", "text": "for r in self.client.query(\n f\"DESC {self.config.database}.{self.config.table}\"\n ).named_results():\n _repr += (\n f\"|\\033[94m{r['name']:24s}\\033[0m|\\033[96m{r['type']:24s}\\033[0m|\\n\"\n )\n _repr += \"-\" * 51 + \"\\n\"\n return _repr\n def _build_qstr(\n self, q_emb: List[float], topk: int, where_str: Optional[str] = None\n ) -> str:\n q_emb_str = \",\".join(map(str, q_emb))\n if where_str:\n where_str = f\"PREWHERE {where_str}\"\n else:\n where_str = \"\"\n q_str = f\"\"\"\n SELECT {self.config.column_map['text']}, \n {self.config.column_map['metadata']}, dist\n FROM {self.config.database}.{self.config.table}\n {where_str}\n ORDER BY distance({self.config.column_map['vector']}, [{q_emb_str}]) \n AS dist {self.dist_order}\n LIMIT {topk}\n \"\"\"\n return q_str\n[docs] def similarity_search(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Perform a similarity search with MyScale\n Args:\n query (str): query string\n k (int, 
optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end users fill this, and always be aware", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-8", "text": "NOTE: Please do not let end users fill this, and always be aware\n of SQL injection. When dealing with metadata, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n return self.similarity_search_by_vector(\n self.embedding_function(query), k, where_str, **kwargs\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search with MyScale by vector\n Args:\n embedding (List[float]): query embedding vector\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end users fill this, and always be aware\n of SQL injection. When dealing with metadata, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. 
The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n q_str = self._build_qstr(embedding, k, where_str)\n try:\n return [\n Document(\n page_content=r[self.config.column_map[\"text\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-9", "text": "]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with MyScale\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end users fill this, and always be aware\n of SQL injection. When dealing with metadata, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. 
The default name for it is `metadata`.\n Returns:\n List[Document]: List of documents most similar to the query text\n and cosine distance in float for each.\n Lower score represents more similarity.\n \"\"\"\n q_str = self._build_qstr(self.embedding_function(query), k, where_str)\n try:\n return [\n (\n Document(\n page_content=r[self.config.column_map[\"text\"]],\n metadata=r[self.config.column_map[\"metadata\"]],\n ),\n r[\"dist\"],\n )\n for r in self.client.query(q_str).named_results()\n ]\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "5652a56a6ff1-10", "text": "]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def drop(self) -> None:\n \"\"\"\n Helper function: Drop data\n \"\"\"\n self.client.command(\n f\"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}\"\n )\n @property\n def metadata_column(self) -> str:\n return self.config.column_map[\"metadata\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/myscale.html"} +{"id": "e4ee70c4b3b9-0", "text": "Source code for langchain.vectorstores.deeplake\n\"\"\"Wrapper around Activeloop Deep Lake.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union\nimport numpy as np\ntry:\n import deeplake\n from deeplake.core.fast_forwarding import version_compare\n from deeplake.core.vectorstore import DeepLakeVectorStore\n _DEEPLAKE_INSTALLED = True\nexcept ImportError:\n _DEEPLAKE_INSTALLED = False\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nlogger = logging.getLogger(__name__)\n[docs]class DeepLake(VectorStore):\n \"\"\"Wrapper around Deep Lake, a 
data lake for deep learning applications.\n We integrated deeplake's similarity search and filtering for fast prototyping.\n It now supports Tensor Query Language (TQL) for production use cases\n over billions of rows.\n Why Deep Lake?\n - Not only stores embeddings, but also the original data with version control.\n - Serverless, doesn't require another service and can be used with major\n cloud providers (S3, GCS, etc.)\n - More than just a multi-modal vector store. You can use the dataset\n to fine-tune your own LLM models.\n To use, you should have the ``deeplake`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import DeepLake\n from langchain.embeddings.openai import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = DeepLake(\"langchain_store\", embeddings.embed_query)\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-1", "text": "vectorstore = DeepLake(\"langchain_store\", embeddings.embed_query)\n \"\"\"\n _LANGCHAIN_DEFAULT_DEEPLAKE_PATH = \"./deeplake/\"\n def __init__(\n self,\n dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,\n token: Optional[str] = None,\n embedding_function: Optional[Embeddings] = None,\n read_only: bool = False,\n ingestion_batch_size: int = 1000,\n num_workers: int = 0,\n verbose: bool = True,\n exec_option: str = \"python\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Creates an empty DeepLakeVectorStore or loads an existing one.\n The DeepLakeVectorStore is located at the specified ``path``.\n Examples:\n >>> # Create a vector store with default tensors\n >>> deeplake_vectorstore = DeepLake(\n ... path = ,\n ... )\n >>>\n >>> # Create a vector store in the Deep Lake Managed Tensor Database\n >>> data = DeepLake(\n ... path = \"hub://org_id/dataset_name\",\n ... exec_option = \"tensor_db\",\n ... )\n Args:\n dataset_path (str): Path to existing dataset or where to create\n a new one. 
Defaults to _LANGCHAIN_DEFAULT_DEEPLAKE_PATH.\n token (str, optional): Activeloop token, for fetching credentials\n to the dataset at path if it is a Deep Lake dataset.\n Tokens are normally autogenerated. Optional.\n embedding_function (str, optional): Function to convert\n either documents or query. Optional.\n read_only (bool): Open dataset in read-only mode. Default is False.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-2", "text": "read_only (bool): Open dataset in read-only mode. Default is False.\n ingestion_batch_size (int): During data ingestion, data is divided\n into batches. Batch size is the size of each batch.\n Default is 1000.\n num_workers (int): Number of workers to use during data ingestion.\n Default is 0.\n verbose (bool): Print dataset summary after each operation.\n Default is True.\n exec_option (str): DeepLakeVectorStore supports 3 ways to perform\n searching - \"python\", \"compute_engine\", \"tensor_db\".\n Default is \"python\".\n - ``python`` - Pure-python implementation that runs on the client.\n WARNING: using this with big datasets can lead to memory\n issues. Data can be stored anywhere.\n - ``compute_engine`` - C++ implementation of the Deep Lake Compute\n Engine that runs on the client. Can be used for any data stored in\n or connected to Deep Lake. Not for in-memory or local datasets.\n - ``tensor_db`` - Hosted Managed Tensor Database that is\n responsible for storage and query execution. Only for data stored in\n the Deep Lake Managed Database. Use runtime = {\"db_engine\": True} during\n dataset creation.\n **kwargs: Other optional keyword arguments.\n Raises:\n ValueError: If some condition is not met.\n \"\"\"\n self.ingestion_batch_size = ingestion_batch_size\n self.num_workers = num_workers\n self.verbose = verbose\n if _DEEPLAKE_INSTALLED is False:\n raise ValueError(\n \"Could not import deeplake python package. 
\"\n \"Please install it with `pip install deeplake`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-3", "text": "\"Please install it with `pip install deeplake`.\"\n )\n if version_compare(deeplake.__version__, \"3.6.2\") == -1:\n raise ValueError(\n \"deeplake version should be >= 3.6.3, but you've installed\"\n f\" {deeplake.__version__}. Consider upgrading deeplake version \\\n pip install --upgrade deeplake.\"\n )\n self.dataset_path = dataset_path\n self.vectorstore = DeepLakeVectorStore(\n path=self.dataset_path,\n embedding_function=embedding_function,\n read_only=read_only,\n token=token,\n exec_option=exec_option,\n verbose=verbose,\n **kwargs,\n )\n self._embedding_function = embedding_function\n self._id_tensor_name = \"ids\" if \"ids\" in self.vectorstore.tensors() else \"id\"\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Examples:\n >>> ids = deeplake_vectorstore.add_texts(\n ... texts = ,\n ... metadatas = ,\n ... ids = ,\n ... 
)\n Args:\n texts (Iterable[str]): Texts to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n ids (Optional[List[str]], optional): Optional list of IDs.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-4", "text": "ids (Optional[List[str]], optional): Optional list of IDs.\n **kwargs: other optional keyword arguments.\n Returns:\n List[str]: List of IDs of the added texts.\n \"\"\"\n kwargs = {}\n if ids:\n if self._id_tensor_name == \"ids\": # for backwards compatibility\n kwargs[\"ids\"] = ids\n else:\n kwargs[\"id\"] = ids\n if metadatas is None:\n metadatas = [{}] * len(list(texts))\n return self.vectorstore.add(\n text=texts,\n metadata=metadatas,\n embedding_data=texts,\n embedding_tensor=\"embedding\",\n embedding_function=kwargs.get(\"embedding_function\")\n or self._embedding_function.embed_documents, # type: ignore\n return_ids=True,\n **kwargs,\n )\n def _search_tql(\n self,\n tql_query: Optional[str],\n exec_option: Optional[str] = None,\n return_score: bool = False,\n ) -> Any[List[Document], List[Tuple[Document, float]]]:\n \"\"\"Function for performing tql_search.\n Args:\n tql_query (str): TQL Query string for direct evaluation.\n Available only for `compute_engine` and `tensor_db`.\n exec_option (str, optional): Supports 3 ways to search.\n Could be \"python\", \"compute_engine\" or \"tensor_db\". Default is \"python\".\n - ``python`` - Pure-python implementation for the client.\n WARNING: not recommended for big datasets due to potential memory\n issues.\n - ``compute_engine`` - C++ implementation of Deep Lake Compute\n Engine for the client. Not for in-memory or local datasets.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-5", "text": "Engine for the client. 
Not for in-memory or local datasets.\n - ``tensor_db`` - Hosted Managed Tensor Database for storage\n and query execution. Only for data in Deep Lake Managed Database.\n Use runtime = {\"db_engine\": True} during dataset creation.\n return_score (bool): Return score with document. Default is False.\n Returns:\n List[Document] - A list of documents\n Raises:\n ValueError: If return_score is True but some condition is not met.\n \"\"\"\n result = self.vectorstore.search(\n query=tql_query,\n exec_option=exec_option,\n )\n metadatas = result[\"metadata\"]\n texts = result[\"text\"]\n docs = [\n Document(\n page_content=text,\n metadata=metadata,\n )\n for text, metadata in zip(texts, metadatas)\n ]\n if return_score:\n raise ValueError(\"scores can't be returned with tql search\")\n return docs\n def _search(\n self,\n query: Optional[str] = None,\n embedding: Optional[Union[List[float], np.ndarray]] = None,\n embedding_function: Optional[Callable] = None,\n k: int = 4,\n distance_metric: str = \"L2\",\n use_maximal_marginal_relevance: bool = False,\n fetch_k: Optional[int] = 20,\n filter: Optional[Union[Dict, Callable]] = None,\n return_score: bool = False,\n exec_option: Optional[str] = None,\n **kwargs: Any,\n ) -> Any[List[Document], List[Tuple[Document, float]]]:\n \"\"\"\n Return docs similar to query.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-6", "text": "\"\"\"\n Return docs similar to query.\n Args:\n query (str, optional): Text to look up similar docs.\n embedding (Union[List[float], np.ndarray], optional): Query's embedding.\n embedding_function (Callable, optional): Function to convert `query`\n into embedding.\n k (int): Number of Documents to return.\n distance_metric (str): `L2` for Euclidean, `L1` for Nuclear, `max`\n for L-infinity distance, `cos` for cosine similarity, 'dot' for dot\n product.\n filter (Union[Dict, Callable], optional): Additional filter 
prior\n to the embedding search.\n - ``Dict`` - Key-value search on tensors of htype json, on an\n AND basis (a sample must satisfy all key-value filters to be True)\n Dict = {\"tensor_name_1\": {\"key\": value},\n \"tensor_name_2\": {\"key\": value}}\n - ``Function`` - Any function compatible with `deeplake.filter`.\n use_maximal_marginal_relevance (bool): Use maximal marginal relevance.\n fetch_k (int): Number of Documents for MMR algorithm.\n return_score (bool): Return the score.\n exec_option (str, optional): Supports 3 ways to perform searching.\n Could be \"python\", \"compute_engine\" or \"tensor_db\".\n - ``python`` - Pure-python implementation for the client.\n WARNING: not recommended for big datasets.\n - ``compute_engine`` - C++ implementation of Deep Lake Compute\n Engine for the client. Not for in-memory or local datasets.\n - ``tensor_db`` - Hosted Managed Tensor Database for storage\n and query execution. Only for data in Deep Lake Managed Database.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-7", "text": "and query execution. 
Only for data in Deep Lake Managed Database.\n Use runtime = {\"db_engine\": True} during dataset creation.\n **kwargs: Additional keyword arguments.\n Returns:\n List of Documents by the specified distance metric,\n if return_score True, return a tuple of (Document, score)\n Raises:\n ValueError: if both `embedding` and `embedding_function` are not specified.\n \"\"\"\n if kwargs.get(\"tql_query\"):\n return self._search_tql(\n tql_query=kwargs[\"tql_query\"],\n exec_option=exec_option,\n return_score=return_score,\n )\n if embedding_function:\n if isinstance(embedding_function, Embeddings):\n _embedding_function = embedding_function.embed_query\n else:\n _embedding_function = embedding_function\n elif self._embedding_function:\n _embedding_function = self._embedding_function.embed_query\n else:\n _embedding_function = None\n if embedding is None:\n if _embedding_function is None:\n raise ValueError(\n \"Either `embedding` or `embedding_function` needs to be\"\n \" specified.\"\n )\n embedding = _embedding_function(query) if query else None\n if isinstance(embedding, list):\n embedding = np.array(embedding, dtype=np.float32)\n if len(embedding.shape) > 1:\n embedding = embedding[0]\n result = self.vectorstore.search(\n embedding=embedding,\n k=fetch_k if use_maximal_marginal_relevance else k,\n distance_metric=distance_metric,\n filter=filter,\n exec_option=exec_option,\n return_tensors=[\"embedding\", \"metadata\", \"text\"],\n )\n scores = result[\"score\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-8", "text": ")\n scores = result[\"score\"]\n embeddings = result[\"embedding\"]\n metadatas = result[\"metadata\"]\n texts = result[\"text\"]\n if use_maximal_marginal_relevance:\n lambda_mult = kwargs.get(\"lambda_mult\", 0.5)\n indices = maximal_marginal_relevance( # type: ignore\n embedding, # type: ignore\n embeddings,\n k=min(k, len(texts)),\n lambda_mult=lambda_mult,\n )\n 
scores = [scores[i] for i in indices]\n texts = [texts[i] for i in indices]\n metadatas = [metadatas[i] for i in indices]\n docs = [\n Document(\n page_content=text,\n metadata=metadata,\n )\n for text, metadata in zip(texts, metadatas)\n ]\n if return_score:\n return [(doc, score) for doc, score in zip(docs, scores)]\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"\n Return docs most similar to query.\n Examples:\n >>> # Search using an embedding\n >>> data = vector_store.similarity_search(\n ... query=,\n ... k=,\n ... exec_option=,\n ... )\n >>> # Run tql search:\n >>> data = vector_store.tql_search(\n ... tql_query=\"SELECT * WHERE id == \",\n ... exec_option=\"compute_engine\",\n ... )\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-9", "text": "... exec_option=\"compute_engine\",\n ... )\n Args:\n k (int): Number of Documents to return. Defaults to 4.\n query (str): Text to look up similar documents.\n **kwargs: Additional keyword arguments include:\n embedding (Callable): Embedding function to use. Defaults to None.\n distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max'\n for L-infinity, 'cos' for cosine, 'dot' for dot product.\n Defaults to 'L2'.\n filter (Union[Dict, Callable], optional): Additional filter\n before embedding search.\n - Dict: Key-value search on tensors of htype json,\n (sample must satisfy all key-value filters)\n Dict = {\"tensor_1\": {\"key\": value}, \"tensor_2\": {\"key\": value}}\n - Function: Compatible with `deeplake.filter`.\n Defaults to None.\n exec_option (str): Supports 3 ways to perform searching.\n 'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'.\n - 'python': Pure-python implementation for the client.\n WARNING: not recommended for big datasets.\n - 'compute_engine': C++ implementation of the Compute Engine for\n the client. 
Not for in-memory or local datasets.\n - 'tensor_db': Managed Tensor Database for storage and query.\n Only for data in Deep Lake Managed Database.\n Use `runtime = {\"db_engine\": True}` during dataset creation.\n Returns:\n List[Document]: List of Documents most similar to the query vector.\n \"\"\"\n return self._search(\n query=query,\n k=k,\n use_maximal_marginal_relevance=False,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-10", "text": "k=k,\n use_maximal_marginal_relevance=False,\n return_score=False,\n **kwargs,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: Union[List[float], np.ndarray],\n k: int = 4,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"\n Return docs most similar to embedding vector.\n Examples:\n >>> # Search using an embedding\n >>> data = vector_store.similarity_search_by_vector(\n ... embedding=,\n ... k=,\n ... exec_option=,\n ... )\n Args:\n embedding (Union[List[float], np.ndarray]):\n Embedding to find similar docs.\n k (int): Number of Documents to return. Defaults to 4.\n **kwargs: Additional keyword arguments including:\n filter (Union[Dict, Callable], optional):\n Additional filter before embedding search.\n - ``Dict`` - Key-value search on tensors of htype json. True\n if all key-value filters are satisfied.\n Dict = {\"tensor_name_1\": {\"key\": value},\n \"tensor_name_2\": {\"key\": value}}\n - ``Function`` - Any function compatible with\n `deeplake.filter`.\n Defaults to None.\n exec_option (str): Options for search execution include\n \"python\", \"compute_engine\", or \"tensor_db\". Defaults to\n \"python\".\n - \"python\" - Pure-python implementation running on the client.\n Can be used for data stored anywhere. 
WARNING: using this\n option with big datasets is discouraged due to potential\n memory issues.\n - \"compute_engine\" - Performant C++ implementation of the Deep", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-11", "text": "- \"compute_engine\" - Performant C++ implementation of the Deep\n Lake Compute Engine. Runs on the client and can be used for\n any data stored in or connected to Deep Lake. It cannot be\n used with in-memory or local datasets.\n - \"tensor_db\" - Performant, fully-hosted Managed Tensor Database.\n Responsible for storage and query execution. Only available\n for data stored in the Deep Lake Managed Database.\n To store datasets in this database, specify\n `runtime = {\"db_engine\": True}` during dataset creation.\n distance_metric (str): `L2` for Euclidean, `L1` for Nuclear,\n `max` for L-infinity distance, `cos` for cosine similarity,\n 'dot' for dot product. Defaults to `L2`.\n Returns:\n List[Document]: List of Documents most similar to the query vector.\n \"\"\"\n return self._search(\n embedding=embedding,\n k=k,\n use_maximal_marginal_relevance=False,\n return_score=False,\n **kwargs,\n )\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"\n Run similarity search with Deep Lake with distance returned.\n Examples:\n >>> data = vector_store.similarity_search_with_score(\n ... query=,\n ... embedding=\n ... k=,\n ... exec_option=,\n ... )\n Args:\n query (str): Query text to search for.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-12", "text": "... )\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n **kwargs: Additional keyword arguments. 
Some of these arguments are:\n distance_metric: `L2` for Euclidean, `L1` for Nuclear, `max` L-infinity\n distance, `cos` for cosine similarity, 'dot' for dot product.\n Defaults to `L2`.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n embedding_function (Callable): Embedding function to use. Defaults\n to None.\n exec_option (str): DeepLakeVectorStore supports 3 ways to perform\n searching. It could be either \"python\", \"compute_engine\" or\n \"tensor_db\". Defaults to \"python\".\n - \"python\" - Pure-python implementation running on the client.\n Can be used for data stored anywhere. WARNING: using this\n option with big datasets is discouraged due to potential\n memory issues.\n - \"compute_engine\" - Performant C++ implementation of the Deep\n Lake Compute Engine. Runs on the client and can be used for\n any data stored in or connected to Deep Lake. It cannot be used\n with in-memory or local datasets.\n - \"tensor_db\" - Performant, fully-hosted Managed Tensor Database.\n Responsible for storage and query execution. Only available for\n data stored in the Deep Lake Managed Database. To store datasets\n in this database, specify `runtime = {\"db_engine\": True}`\n during dataset creation.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to the query\n text with distance in float.\"\"\"\n return self._search(\n query=query,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-13", "text": "text with distance in float.\"\"\"\n return self._search(\n query=query,\n k=k,\n return_score=True,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n exec_option: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"\n Return docs selected using the maximal marginal relevance. 
Maximal marginal\n relevance optimizes for similarity to query AND diversity among selected docs.\n Examples:\n >>> data = vector_store.max_marginal_relevance_search_by_vector(\n ... embedding=,\n ... fetch_k=,\n ... k=,\n ... exec_option=,\n ... )\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch for MMR algorithm.\n lambda_mult: Number between 0 and 1 determining the degree of diversity.\n 0 corresponds to max diversity and 1 to min diversity. Defaults to 0.5.\n exec_option (str): DeepLakeVectorStore supports 3 ways for searching.\n Could be \"python\", \"compute_engine\" or \"tensor_db\". Defaults to\n \"python\".\n - \"python\" - Pure-python implementation running on the client.\n Can be used for data stored anywhere. WARNING: using this\n option with big datasets is discouraged due to potential\n memory issues.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-14", "text": "option with big datasets is discouraged due to potential\n memory issues.\n - \"compute_engine\" - Performant C++ implementation of the Deep\n Lake Compute Engine. Runs on the client and can be used for\n any data stored in or connected to Deep Lake. It cannot be used\n with in-memory or local datasets.\n - \"tensor_db\" - Performant, fully-hosted Managed Tensor Database.\n Responsible for storage and query execution. Only available for\n data stored in the Deep Lake Managed Database. 
To store datasets\n in this database, specify `runtime = {\"db_engine\": True}`\n during dataset creation.\n **kwargs: Additional keyword arguments.\n Returns:\n List[Documents] - A list of documents.\n \"\"\"\n return self._search(\n embedding=embedding,\n k=k,\n fetch_k=fetch_k,\n use_maximal_marginal_relevance=True,\n lambda_mult=lambda_mult,\n exec_option=exec_option,\n **kwargs,\n )\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n exec_option: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Examples:\n >>> # Search using an embedding\n >>> data = vector_store.max_marginal_relevance_search(\n ... query = ,\n ... embedding_function = ,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-15", "text": "... embedding_function = ,\n ... k = ,\n ... exec_option = ,\n ... )\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents for MMR algorithm.\n lambda_mult: Value between 0 and 1. 0 corresponds\n to maximum diversity and 1 to minimum.\n Defaults to 0.5.\n exec_option (str): Supports 3 ways to perform searching.\n - \"python\" - Pure-python implementation running on the client.\n Can be used for data stored anywhere. WARNING: using this\n option with big datasets is discouraged due to potential\n memory issues.\n - \"compute_engine\" - Performant C++ implementation of the Deep\n Lake Compute Engine. Runs on the client and can be used for\n any data stored in or connected to Deep Lake. It cannot be\n used with in-memory or local datasets.\n - \"tensor_db\" - Performant, fully-hosted Managed Tensor Database.\n Responsible for storage and query execution. 
Only available\n for data stored in the Deep Lake Managed Database. To store\n datasets in this database, specify\n `runtime = {\"db_engine\": True}` during dataset creation.\n **kwargs: Additional keyword arguments\n Returns:\n List of Documents selected by maximal marginal relevance.\n Raises:\n ValueError: when MMR search is on but the embedding function is\n not specified.\n \"\"\"\n embedding_function = kwargs.get(\"embedding\") or self._embedding_function\n if embedding_function is None:\n raise ValueError(\n \"For MMR search, you must specify an embedding function on\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-16", "text": "\"For MMR search, you must specify an embedding function on\"\n \" `creation` or during add call.\"\n )\n return self._search(\n query=query,\n k=k,\n fetch_k=fetch_k,\n use_maximal_marginal_relevance=True,\n lambda_mult=lambda_mult,\n exec_option=exec_option,\n embedding_function=embedding_function, # type: ignore\n **kwargs,\n )\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,\n **kwargs: Any,\n ) -> DeepLake:\n \"\"\"Create a Deep Lake dataset from raw documents.\n If a dataset_path is specified, the dataset will be persisted in that location,\n otherwise by default at `./deeplake`\n Examples:\n >>> # Search using an embedding\n >>> vector_store = DeepLake.from_texts(\n ... texts = ,\n ... embedding_function = ,\n ... k = ,\n ... exec_option = ,\n ... )\n Args:\n dataset_path (str): The full path to the dataset. 
Can be:\n - Deep Lake cloud path of the form ``hub://username/dataset_name``.\n To write to Deep Lake cloud datasets,\n ensure that you are logged in to Deep Lake\n (use 'activeloop login' from command line)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-17", "text": "(use 'activeloop login' from command line)\n - AWS S3 path of the form ``s3://bucketname/path/to/dataset``.\n Credentials are required in either the environment\n - Google Cloud Storage path of the form\n ``gcs://bucketname/path/to/dataset`` Credentials are required\n in either the environment\n - Local file system path of the form ``./path/to/dataset`` or\n ``~/path/to/dataset`` or ``path/to/dataset``.\n - In-memory path of the form ``mem://path/to/dataset`` which doesn't\n save the dataset, but keeps it in memory instead.\n Should be used only for testing as it does not persist.\n texts (List[str]): List of texts to add.\n embedding (Optional[Embeddings]): Embedding function. Defaults to None.\n Note, in other places, it is called embedding_function.\n metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.\n ids (Optional[List[str]]): List of document IDs. Defaults to None.\n **kwargs: Additional keyword arguments.\n Returns:\n DeepLake: Deep Lake dataset.\n Raises:\n ValueError: If 'embedding' is provided in kwargs. This is deprecated,\n please use `embedding_function` instead.\n \"\"\"\n if kwargs.get(\"embedding\"):\n raise ValueError(\n \"using embedding as embedding_function is deprecated. 
\"\n \"Please use `embedding_function` instead.\"\n )\n deeplake_dataset = cls(\n dataset_path=dataset_path, embedding_function=embedding, **kwargs\n )\n deeplake_dataset.add_texts(\n texts=texts,\n metadatas=metadatas,\n ids=ids,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "e4ee70c4b3b9-18", "text": "metadatas=metadatas,\n ids=ids,\n embedding_function=embedding.embed_documents, # type: ignore\n )\n return deeplake_dataset\n[docs] def delete(\n self,\n ids: Any[List[str], None] = None,\n filter: Any[Dict[str, str], None] = None,\n delete_all: Any[bool, None] = None,\n ) -> bool:\n \"\"\"Delete the entities in the dataset.\n Args:\n ids (Optional[List[str]], optional): The document_ids to delete.\n Defaults to None.\n filter (Optional[Dict[str, str]], optional): The filter to delete by.\n Defaults to None.\n delete_all (Optional[bool], optional): Whether to drop the dataset.\n Defaults to None.\n Returns:\n bool: Whether the delete operation was successful.\n \"\"\"\n self.vectorstore.delete(\n ids=ids,\n filter=filter,\n delete_all=delete_all,\n )\n return True\n[docs] @classmethod\n def force_delete_by_path(cls, path: str) -> None:\n \"\"\"Force delete dataset by path.\n Args:\n path (str): path of the dataset to delete.\n Raises:\n ValueError: if deeplake is not installed.\n \"\"\"\n try:\n import deeplake\n except ImportError:\n raise ValueError(\n \"Could not import deeplake python package. 
\"\n \"Please install it with `pip install deeplake`.\"\n )\n deeplake.delete(path, large_ok=True, force=True)\n[docs] def delete_dataset(self) -> None:\n \"\"\"Delete the collection.\"\"\"\n self.delete(delete_all=True)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/deeplake.html"} +{"id": "9f819fb5da00-0", "text": "Source code for langchain.vectorstores.annoy\n\"\"\"Wrapper around Annoy vector database.\"\"\"\nfrom __future__ import annotations\nimport os\nimport pickle\nimport uuid\nfrom configparser import ConfigParser\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.docstore.in_memory import InMemoryDocstore\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nINDEX_METRICS = frozenset([\"angular\", \"euclidean\", \"manhattan\", \"hamming\", \"dot\"])\nDEFAULT_METRIC = \"angular\"\ndef dependable_annoy_import() -> Any:\n \"\"\"Import annoy if available, otherwise raise error.\"\"\"\n try:\n import annoy\n except ImportError:\n raise ValueError(\n \"Could not import annoy python package. \"\n \"Please install it with `pip install --user annoy` \"\n )\n return annoy\n[docs]class Annoy(VectorStore):\n \"\"\"Wrapper around Annoy vector database.\n To use, you should have the ``annoy`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain import Annoy\n db = Annoy(embedding_function, index, docstore, index_to_docstore_id)\n \"\"\"\n def __init__(\n self,\n embedding_function: Callable,\n index: Any,\n metric: str,\n docstore: Docstore,\n index_to_docstore_id: Dict[int, str],\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-1", "text": "):\n \"\"\"Initialize with necessary components.\"\"\"\n self.embedding_function = embedding_function\n self.index = index\n self.metric = metric\n self.docstore = docstore\n self.index_to_docstore_id = index_to_docstore_id\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n raise NotImplementedError(\n \"Annoy does not allow adding new data once the index is built.\"\n )\n[docs] def process_index_results(\n self, idxs: List[int], dists: List[float]\n ) -> List[Tuple[Document, float]]:\n \"\"\"Turns annoy results into a list of documents and scores.\n Args:\n idxs: List of indices of the documents in the index.\n dists: List of distances of the documents in the index.\n Returns:\n List of Documents and scores.\n \"\"\"\n docs = []\n for idx, dist in zip(idxs, dists):\n _id = self.index_to_docstore_id[idx]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append((doc, dist))\n return docs\n[docs] def similarity_search_with_score_by_vector(\n self, embedding: List[float], k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-2", "text": "Args:\n 
embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n idxs, dists = self.index.get_nns_by_vector(\n embedding, k, search_k=search_k, include_distances=True\n )\n return self.process_index_results(idxs, dists)\n[docs] def similarity_search_with_score_by_index(\n self, docstore_index: int, k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n docstore_index: Index of document in docstore to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n idxs, dists = self.index.get_nns_by_item(\n docstore_index, k, search_k=search_k, include_distances=True\n )\n return self.process_index_results(idxs, dists)\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4, search_k: int = -1\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-3", "text": "k: Number of Documents to return. 
Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.similarity_search_with_score_by_vector(embedding, k, search_k)\n return docs\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding, k, search_k\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_by_index(\n self, docstore_index: int, k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to docstore_index.\n Args:\n docstore_index: Index of document in docstore\n k: Number of Documents to return. Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the embedding.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-4", "text": "Returns:\n List of Documents most similar to the embedding.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_index(\n docstore_index, k, search_k\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search(\n self, query: str, k: int = 4, search_k: int = -1, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n search_k: inspect up to search_k nodes which defaults\n to n_trees * n if not provided\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k, search_k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n k: Number of Documents to return. Defaults to 4.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-5", "text": "of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n idxs = self.index.get_nns_by_vector(\n embedding, fetch_k, search_k=-1, include_distances=False\n )\n embeddings = [self.index.get_item_vector(i) for i in idxs]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n # ignore the -1's if not enough docs are returned/indexed\n selected_indices = [idxs[i] for i in mmr_selected if i != -1]\n docs = []\n for i in selected_indices:\n _id = self.index_to_docstore_id[i]\n doc = self.docstore.search(_id)\n if not isinstance(doc, Document):\n raise ValueError(f\"Could not find document for id {_id}, got {doc}\")\n docs.append(doc)\n return docs\n[docs] def 
max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-6", "text": "k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self.embedding_function(query)\n docs = self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult=lambda_mult\n )\n return docs\n @classmethod\n def __from(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n if metric not in INDEX_METRICS:\n raise ValueError(\n (\n f\"Unsupported distance metric: {metric}. 
\"\n f\"Expected one of {list(INDEX_METRICS)}\"\n )\n )\n annoy = dependable_annoy_import()\n if not embeddings:\n raise ValueError(\"embeddings must be provided to build AnnoyIndex\")\n f = len(embeddings[0])\n index = annoy.AnnoyIndex(f, metric=metric)\n for i, emb in enumerate(embeddings):\n index.add_item(i, emb)\n index.build(trees, n_jobs=n_jobs)\n documents = []\n for i, text in enumerate(texts):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-7", "text": "documents = []\n for i, text in enumerate(texts):\n metadata = metadatas[i] if metadatas else {}\n documents.append(Document(page_content=text, metadata=metadata))\n index_to_id = {i: str(uuid.uuid4()) for i in range(len(documents))}\n docstore = InMemoryDocstore(\n {index_to_id[i]: doc for i, doc in enumerate(documents)}\n )\n return cls(embedding.embed_query, index, metric, docstore, index_to_id)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n \"\"\"Construct Annoy wrapper from raw documents.\n Args:\n texts: List of documents to index.\n embedding: Embedding function to use.\n metadatas: List of metadata dictionaries to associate with documents.\n metric: Metric to use for indexing. Defaults to \"angular\".\n trees: Number of trees to use for indexing. Defaults to 100.\n n_jobs: Number of jobs to use for indexing. Defaults to -1.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Creates an in memory docstore\n 3. Initializes the Annoy database\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain import Annoy\n from langchain.embeddings import OpenAIEmbeddings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-8", "text": "from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n index = Annoy.from_texts(texts, embeddings)\n \"\"\"\n embeddings = embedding.embed_documents(texts)\n return cls.__from(\n texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n metric: str = DEFAULT_METRIC,\n trees: int = 100,\n n_jobs: int = -1,\n **kwargs: Any,\n ) -> Annoy:\n \"\"\"Construct Annoy wrapper from embeddings.\n Args:\n text_embeddings: List of tuples of (text, embedding)\n embedding: Embedding function to use.\n metadatas: List of metadata dictionaries to associate with documents.\n metric: Metric to use for indexing. Defaults to \"angular\".\n trees: Number of trees to use for indexing. Defaults to 100.\n n_jobs: Number of jobs to use for indexing. Defaults to -1\n This is a user friendly interface that:\n 1. Creates an in memory docstore with provided embeddings\n 2. Initializes the Annoy database\n This is intended to be a quick way to get started.\n Example:\n .. 
code-block:: python\n from langchain import Annoy\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n text_embeddings = embeddings.embed_documents(texts)\n text_embedding_pairs = list(zip(texts, text_embeddings))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-9", "text": "text_embedding_pairs = list(zip(texts, text_embeddings))\n db = Annoy.from_embeddings(text_embedding_pairs, embeddings)\n \"\"\"\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls.__from(\n texts, embeddings, embedding, metadatas, metric, trees, n_jobs, **kwargs\n )\n[docs] def save_local(self, folder_path: str, prefault: bool = False) -> None:\n \"\"\"Save Annoy index, docstore, and index_to_docstore_id to disk.\n Args:\n folder_path: folder path to save index, docstore,\n and index_to_docstore_id to.\n prefault: Whether to pre-load the index into memory.\n \"\"\"\n path = Path(folder_path)\n os.makedirs(path, exist_ok=True)\n # save index, index config, docstore and index_to_docstore_id\n config_object = ConfigParser()\n config_object[\"ANNOY\"] = {\n \"f\": self.index.f,\n \"metric\": self.metric,\n }\n self.index.save(str(path / \"index.annoy\"), prefault=prefault)\n with open(path / \"index.pkl\", \"wb\") as file:\n pickle.dump((self.docstore, self.index_to_docstore_id, config_object), file)\n[docs] @classmethod\n def load_local(\n cls,\n folder_path: str,\n embeddings: Embeddings,\n ) -> Annoy:\n \"\"\"Load Annoy index, docstore, and index_to_docstore_id to disk.\n Args:\n folder_path: folder path to load index, docstore,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "9f819fb5da00-10", "text": "Args:\n folder_path: folder path to load index, docstore,\n and index_to_docstore_id from.\n embeddings: Embeddings to use when generating queries.\n \"\"\"\n path = 
Path(folder_path)\n # load index separately since it is not picklable\n annoy = dependable_annoy_import()\n # load docstore and index_to_docstore_id\n with open(path / \"index.pkl\", \"rb\") as file:\n docstore, index_to_docstore_id, config_object = pickle.load(file)\n f = int(config_object[\"ANNOY\"][\"f\"])\n metric = config_object[\"ANNOY\"][\"metric\"]\n index = annoy.AnnoyIndex(f, metric=metric)\n index.load(str(path / \"index.annoy\"))\n return cls(\n embeddings.embed_query, index, metric, docstore, index_to_docstore_id\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/annoy.html"} +{"id": "0fc245448f98-0", "text": "Source code for langchain.vectorstores.typesense\n\"\"\"Wrapper around Typesense vector search\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple, Union\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_env\nfrom langchain.vectorstores.base import VectorStore\nif TYPE_CHECKING:\n from typesense.client import Client\n from typesense.collection import Collection\n[docs]class Typesense(VectorStore):\n \"\"\"Wrapper around Typesense vector search.\n To use, you should have the ``typesense`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain.embeddings.openai import OpenAIEmbeddings\n from langchain.vectorstores import Typesense\n import typesense\n node = {\n \"host\": \"localhost\", # For Typesense Cloud use xxx.a1.typesense.net\n \"port\": \"8108\", # For Typesense Cloud use 443\n \"protocol\": \"http\" # For Typesense Cloud use https\n }\n typesense_client = typesense.Client(\n {\n \"nodes\": [node],\n \"api_key\": \"\",\n \"connection_timeout_seconds\": 2\n }\n )\n typesense_collection_name = \"langchain-memory\"\n embedding = OpenAIEmbeddings()\n vectorstore = Typesense(\n typesense_client=typesense_client,\n embedding=embedding,\n typesense_collection_name=typesense_collection_name,\n text_key=\"text\",\n )\n \"\"\"\n def __init__(\n self,\n typesense_client: Client,\n embedding: Embeddings,\n *,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} +{"id": "0fc245448f98-1", "text": "typesense_client: Client,\n embedding: Embeddings,\n *,\n typesense_collection_name: Optional[str] = None,\n text_key: str = \"text\",\n ):\n \"\"\"Initialize with Typesense client.\"\"\"\n try:\n from typesense import Client\n except ImportError:\n raise ValueError(\n \"Could not import typesense python package. 
\"\n \"Please install it with `pip install typesense`.\"\n )\n if not isinstance(typesense_client, Client):\n raise ValueError(\n f\"typesense_client should be an instance of typesense.Client, \"\n f\"got {type(typesense_client)}\"\n )\n self._typesense_client = typesense_client\n self._embedding = embedding\n self._typesense_collection_name = (\n typesense_collection_name or f\"langchain-{str(uuid.uuid4())}\"\n )\n self._text_key = text_key\n @property\n def _collection(self) -> Collection:\n return self._typesense_client.collections[self._typesense_collection_name]\n def _prep_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n ids: Optional[List[str]],\n ) -> List[dict]:\n \"\"\"Embed and create the documents\"\"\"\n _ids = ids or (str(uuid.uuid4()) for _ in texts)\n _metadatas: Iterable[dict] = metadatas or ({} for _ in texts)\n embedded_texts = self._embedding.embed_documents(list(texts))\n return [\n {\"id\": _id, \"vec\": vec, f\"{self._text_key}\": text, \"metadata\": metadata}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} +{"id": "0fc245448f98-2", "text": "for _id, vec, text, metadata in zip(_ids, embedded_texts, texts, _metadatas)\n ]\n def _create_collection(self, num_dim: int) -> None:\n fields = [\n {\"name\": \"vec\", \"type\": \"float[]\", \"num_dim\": num_dim},\n {\"name\": f\"{self._text_key}\", \"type\": \"string\"},\n {\"name\": \".*\", \"type\": \"auto\"},\n ]\n self._typesense_client.collections.create(\n {\"name\": self._typesense_collection_name, \"fields\": fields}\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embedding and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids 
to associate with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n from typesense.exceptions import ObjectNotFound\n docs = self._prep_texts(texts, metadatas, ids)\n try:\n self._collection.documents.import_(docs, {\"action\": \"upsert\"})\n except ObjectNotFound:\n # Create the collection if it doesn't already exist\n self._create_collection(len(docs[0][\"vec\"]))\n self._collection.documents.import_(docs, {\"action\": \"upsert\"})\n return [doc[\"id\"] for doc in docs]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} +{"id": "0fc245448f98-3", "text": "return [doc[\"id\"] for doc in docs]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 10,\n filter: Optional[str] = \"\",\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return typesense documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 10.\n At least 10 results will be returned.\n filter: typesense filter_by expression to filter documents on\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedded_query = [str(x) for x in self._embedding.embed_query(query)]\n query_obj = {\n \"q\": \"*\",\n \"vector_query\": f'vec:([{\",\".join(embedded_query)}], k:{k})',\n \"filter_by\": filter,\n \"collection\": self._typesense_collection_name,\n }\n docs = []\n response = self._typesense_client.multi_search.perform(\n {\"searches\": [query_obj]}, {}\n )\n for hit in response[\"results\"][0][\"hits\"]:\n document = hit[\"document\"]\n metadata = document[\"metadata\"]\n text = document[self._text_key]\n score = hit[\"vector_distance\"]\n docs.append((Document(page_content=text, metadata=metadata), score))\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 10,\n filter: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return typesense documents most similar to query.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} +{"id": "0fc245448f98-4", "text": ") -> List[Document]:\n \"\"\"Return typesense documents most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 10.\n At least 10 results will be returned.\n filter: typesense filter_by expression to filter documents on\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_score = self.similarity_search_with_score(query, k=k, filter=filter)\n return [doc for doc, _ in docs_and_score]\n[docs] @classmethod\n def from_client_params(\n cls,\n embedding: Embeddings,\n *,\n host: str = \"localhost\",\n port: Union[str, int] = \"8108\",\n protocol: str = \"http\",\n typesense_api_key: Optional[str] = None,\n connection_timeout_seconds: int = 2,\n **kwargs: Any,\n ) -> Typesense:\n \"\"\"Initialize Typesense directly from client parameters.\n Example:\n .. code-block:: python\n from langchain.embeddings.openai import OpenAIEmbeddings\n from langchain.vectorstores import Typesense\n # Pass in typesense_api_key as kwarg or set env var \"TYPESENSE_API_KEY\".\n vectorstore = Typesense.from_client_params(\n OpenAIEmbeddings(),\n host=\"localhost\",\n port=\"8108\",\n protocol=\"http\",\n typesense_collection_name=\"langchain-memory\",\n )\n \"\"\"\n try:\n from typesense import Client\n except ImportError:\n raise ValueError(\n \"Could not import typesense python package. 
\"\n \"Please install it with `pip install typesense`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} +{"id": "0fc245448f98-5", "text": "\"Please install it with `pip install typesense`.\"\n )\n node = {\n \"host\": host,\n \"port\": str(port),\n \"protocol\": protocol,\n }\n typesense_api_key = typesense_api_key or get_from_env(\n \"typesense_api_key\", \"TYPESENSE_API_KEY\"\n )\n client_config = {\n \"nodes\": [node],\n \"api_key\": typesense_api_key,\n \"connection_timeout_seconds\": connection_timeout_seconds,\n }\n return cls(Client(client_config), embedding, **kwargs)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n typesense_client: Optional[Client] = None,\n typesense_client_params: Optional[dict] = None,\n typesense_collection_name: Optional[str] = None,\n text_key: str = \"text\",\n **kwargs: Any,\n ) -> Typesense:\n \"\"\"Construct Typesense wrapper from raw text.\"\"\"\n if typesense_client:\n vectorstore = cls(typesense_client, embedding, **kwargs)\n elif typesense_client_params:\n vectorstore = cls.from_client_params(\n embedding, **typesense_client_params, **kwargs\n )\n else:\n raise ValueError(\n \"Must specify one of typesense_client or typesense_client_params.\"\n )\n vectorstore.add_texts(texts, metadatas=metadatas, ids=ids)\n return vectorstore", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/typesense.html"} +{"id": "87b9971f5e44-0", "text": "Source code for langchain.vectorstores.pinecone\n\"\"\"Wrapper around Pinecone vector database.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport uuid\nfrom typing import Any, Callable, Iterable, List, Optional, Tuple\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base 
import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nlogger = logging.getLogger(__name__)\n[docs]class Pinecone(VectorStore):\n \"\"\"Wrapper around Pinecone vector database.\n To use, you should have the ``pinecone-client`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Pinecone\n from langchain.embeddings.openai import OpenAIEmbeddings\n import pinecone\n # The environment should be the one specified next to the API key\n # in your Pinecone console\n pinecone.init(api_key=\"***\", environment=\"...\")\n index = pinecone.Index(\"langchain-demo\")\n embeddings = OpenAIEmbeddings()\n vectorstore = Pinecone(index, embeddings.embed_query, \"text\")\n \"\"\"\n def __init__(\n self,\n index: Any,\n embedding_function: Callable,\n text_key: str,\n namespace: Optional[str] = None,\n ):\n \"\"\"Initialize with Pinecone client.\"\"\"\n try:\n import pinecone\n except ImportError:\n raise ValueError(\n \"Could not import pinecone python package. 
\"\n \"Please install it with `pip install pinecone-client`.\"\n )\n if not isinstance(index, pinecone.index.Index):\n raise ValueError(\n f\"client should be an instance of pinecone.index.Index, \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} +{"id": "87b9971f5e44-1", "text": "f\"client should be an instance of pinecone.index.Index, \"\n f\"got {type(index)}\"\n )\n self._index = index\n self._embedding_function = embedding_function\n self._text_key = text_key\n self._namespace = namespace\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n namespace: Optional[str] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids to associate with the texts.\n namespace: Optional pinecone namespace to add the texts to.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n if namespace is None:\n namespace = self._namespace\n # Embed and create the documents\n docs = []\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n for i, text in enumerate(texts):\n embedding = self._embedding_function(text)\n metadata = metadatas[i] if metadatas else {}\n metadata[self._text_key] = text\n docs.append((ids[i], embedding, metadata))\n # upsert to Pinecone\n self._index.upsert(vectors=docs, namespace=namespace, batch_size=batch_size)\n return ids\n[docs] def similarity_search_with_score(\n self,\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} +{"id": "87b9971f5e44-2", "text": "self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n namespace: Optional[str] = None,\n ) -> List[Tuple[Document, float]]:\n 
\"\"\"Return pinecone documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Dictionary of argument(s) to filter on metadata\n namespace: Namespace to search in. Default will search in '' namespace.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n if namespace is None:\n namespace = self._namespace\n query_obj = self._embedding_function(query)\n docs = []\n results = self._index.query(\n [query_obj],\n top_k=k,\n include_metadata=True,\n namespace=namespace,\n filter=filter,\n )\n for res in results[\"matches\"]:\n metadata = res[\"metadata\"]\n if self._text_key in metadata:\n text = metadata.pop(self._text_key)\n score = res[\"score\"]\n docs.append((Document(page_content=text, metadata=metadata), score))\n else:\n logger.warning(\n f\"Found document with no `{self._text_key}` key. Skipping.\"\n )\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n namespace: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return pinecone documents most similar to query.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} +{"id": "87b9971f5e44-3", "text": "\"\"\"Return pinecone documents most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter: Dictionary of argument(s) to filter on metadata\n namespace: Namespace to search in. 
Default will search in '' namespace.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query, k=k, filter=filter, namespace=namespace, **kwargs\n )\n return [doc for doc, _ in docs_and_scores]\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n return self.similarity_search_with_score(query, k)\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[dict] = None,\n namespace: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} +{"id": "87b9971f5e44-4", "text": "lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n if namespace is None:\n namespace = self._namespace\n results = self._index.query(\n [embedding],\n top_k=fetch_k,\n include_values=True,\n include_metadata=True,\n namespace=namespace,\n filter=filter,\n )\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n [item[\"values\"] for item in results[\"matches\"]],\n k=k,\n lambda_mult=lambda_mult,\n )\n selected = [results[\"matches\"][i][\"metadata\"] for i in mmr_selected]\n return [\n Document(page_content=metadata.pop((self._text_key)), metadata=metadata)\n for metadata in selected\n ]\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n filter: Optional[dict] = None,\n namespace: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} +{"id": "87b9971f5e44-5", "text": "k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n embedding = self._embedding_function(query)\n return self.max_marginal_relevance_search_by_vector(\n embedding, k, fetch_k, lambda_mult, filter, namespace\n )\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n batch_size: int = 32,\n text_key: str = \"text\",\n index_name: Optional[str] = None,\n namespace: Optional[str] = None,\n **kwargs: Any,\n ) -> Pinecone:\n \"\"\"Construct Pinecone wrapper from raw documents.\n This is a user friendly interface that:\n 1. Embeds documents.\n 2. Adds the documents to a provided Pinecone index\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Pinecone\n from langchain.embeddings import OpenAIEmbeddings\n import pinecone\n # The environment should be the one specified next to the API key\n # in your Pinecone console\n pinecone.init(api_key=\"***\", environment=\"...\")\n embeddings = OpenAIEmbeddings()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} +{"id": "87b9971f5e44-6", "text": "embeddings = OpenAIEmbeddings()\n pinecone = Pinecone.from_texts(\n texts,\n embeddings,\n index_name=\"langchain-demo\"\n )\n \"\"\"\n try:\n import pinecone\n except ImportError:\n raise ValueError(\n \"Could not import pinecone python package. 
\"\n \"Please install it with `pip install pinecone-client`.\"\n )\n indexes = pinecone.list_indexes() # checks if provided index exists\n if index_name in indexes:\n index = pinecone.Index(index_name)\n elif len(indexes) == 0:\n raise ValueError(\n \"No active indexes found in your Pinecone project, \"\n \"are you sure you're using the right API key and environment?\"\n )\n else:\n raise ValueError(\n f\"Index '{index_name}' not found in your Pinecone project. \"\n f\"Did you mean one of the following indexes: {', '.join(indexes)}\"\n )\n for i in range(0, len(texts), batch_size):\n # set end position of batch\n i_end = min(i + batch_size, len(texts))\n # get batch of texts and ids\n lines_batch = texts[i:i_end]\n # create ids if not provided\n if ids:\n ids_batch = ids[i:i_end]\n else:\n ids_batch = [str(uuid.uuid4()) for n in range(i, i_end)]\n # create embeddings\n embeds = embedding.embed_documents(lines_batch)\n # prep metadata and upsert batch\n if metadatas:\n metadata = metadatas[i:i_end]\n else:\n metadata = [{} for _ in range(i, i_end)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} +{"id": "87b9971f5e44-7", "text": "else:\n metadata = [{} for _ in range(i, i_end)]\n for j, line in enumerate(lines_batch):\n metadata[j][text_key] = line\n to_upsert = zip(ids_batch, embeds, metadata)\n # upsert to Pinecone\n index.upsert(vectors=list(to_upsert), namespace=namespace)\n return cls(index, embedding.embed_query, text_key, namespace)\n[docs] @classmethod\n def from_existing_index(\n cls,\n index_name: str,\n embedding: Embeddings,\n text_key: str = \"text\",\n namespace: Optional[str] = None,\n ) -> Pinecone:\n \"\"\"Load pinecone vectorstore from index name.\"\"\"\n try:\n import pinecone\n except ImportError:\n raise ValueError(\n \"Could not import pinecone python package. 
\"\n \"Please install it with `pip install pinecone-client`.\"\n )\n return cls(\n pinecone.Index(index_name), embedding.embed_query, text_key, namespace\n )\n[docs] def delete(self, ids: List[str]) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n # This is the maximum number of IDs that can be deleted\n chunk_size = 1000\n for i in range(0, len(ids), chunk_size):\n chunk = ids[i : i + chunk_size]\n self._index.delete(ids=chunk)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html"} +{"id": "6b6892ddfe63-0", "text": "Source code for langchain.vectorstores.tigris\nfrom __future__ import annotations\nimport itertools\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional, Tuple\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores import VectorStore\nif TYPE_CHECKING:\n from tigrisdb import TigrisClient\n from tigrisdb import VectorStore as TigrisVectorStore\n from tigrisdb.types.filters import Filter as TigrisFilter\n from tigrisdb.types.vector import Document as TigrisDocument\n[docs]class Tigris(VectorStore):\n def __init__(self, client: TigrisClient, embeddings: Embeddings, index_name: str):\n \"\"\"Initialize Tigris vector store\"\"\"\n try:\n import tigrisdb # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import tigrisdb python package. 
\"\n \"Please install it with `pip install tigrisdb`\"\n )\n self._embed_fn = embeddings\n self._vector_store = TigrisVectorStore(client.get_search(), index_name)\n @property\n def search_index(self) -> TigrisVectorStore:\n return self._vector_store\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"} +{"id": "6b6892ddfe63-1", "text": "metadatas: Optional list of metadatas associated with the texts.\n ids: Optional list of ids for documents.\n Ids will be autogenerated if not provided.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n docs = self._prep_docs(texts, metadatas, ids)\n result = self.search_index.add_documents(docs)\n return [r.id for r in result]\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[TigrisFilter] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to query.\"\"\"\n docs_with_scores = self.similarity_search_with_score(query, k, filter)\n return [doc for doc, _ in docs_with_scores]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[TigrisFilter] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Run similarity search with Chroma with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[TigrisFilter]): Filter by metadata. 
Defaults to None.\n Returns:\n List[Tuple[Document, float]]: List of documents most similar to the query\n text with distance in float.\n \"\"\"\n vector = self._embed_fn.embed_query(query)\n result = self.search_index.similarity_search(\n vector=vector, k=k, filter_by=filter\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"} +{"id": "6b6892ddfe63-2", "text": "vector=vector, k=k, filter_by=filter\n )\n docs: List[Tuple[Document, float]] = []\n for r in result:\n docs.append(\n (\n Document(\n page_content=r.doc[\"text\"], metadata=r.doc.get(\"metadata\")\n ),\n r.score,\n )\n )\n return docs\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n client: Optional[TigrisClient] = None,\n index_name: Optional[str] = None,\n **kwargs: Any,\n ) -> Tigris:\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n if not index_name:\n raise ValueError(\"`index_name` is required\")\n if not client:\n client = TigrisClient()\n store = cls(client, embedding, index_name)\n store.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n return store\n def _prep_docs(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]],\n ids: Optional[List[str]],\n ) -> List[TigrisDocument]:\n embeddings: List[List[float]] = self._embed_fn.embed_documents(list(texts))\n docs: List[TigrisDocument] = []\n for t, m, e, _id in itertools.zip_longest(\n texts, metadatas or [], embeddings or [], ids or []\n ):\n doc: TigrisDocument = {\n \"text\": t,\n \"embeddings\": e or [],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"} +{"id": "6b6892ddfe63-3", "text": "\"text\": t,\n \"embeddings\": e or [],\n \"metadata\": m or {},\n }\n if _id:\n doc[\"id\"] = _id\n docs.append(doc)\n return docs", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/tigris.html"} +{"id": "381602e71446-0", "text": "Source code for langchain.vectorstores.starrocks\n\"\"\"Wrapper around open source StarRocks VectorSearch capability.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nfrom hashlib import sha1\nfrom threading import Thread\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple\nfrom pydantic import BaseSettings\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nlogger = logging.getLogger()\nDEBUG = False\ndef has_mul_sub_str(s: str, *args: Any) -> bool:\n \"\"\"\n Check if a string has multiple substrings.\n Args:\n s: The string to check\n *args: The substrings to check for in the string\n Returns:\n bool: True if all substrings are present in the string, False otherwise\n \"\"\"\n for a in args:\n if a not in s:\n return False\n return True\ndef debug_output(s: Any) -> None:\n \"\"\"\n Print a debug message if DEBUG is True.\n Args:\n s: The message to print\n \"\"\"\n if DEBUG:\n print(s)\ndef get_named_result(connection: Any, query: str) -> List[dict[str, Any]]:\n \"\"\"\n Get a named result from a query.\n Args:\n connection: The connection to the database\n query: The query to execute\n Returns:\n List[dict[str, Any]]: The result of the query\n \"\"\"\n cursor = connection.cursor()\n cursor.execute(query)\n columns = cursor.description\n result = []\n for value in cursor.fetchall():\n r = {}\n for idx, datum in enumerate(value):\n k = columns[idx][0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-1", "text": "for idx, datum in enumerate(value):\n k = columns[idx][0]\n r[k] = datum\n result.append(r)\n debug_output(result)\n cursor.close()\n return result\nclass StarRocksSettings(BaseSettings):\n \"\"\"StarRocks Client 
Configuration\n Attribute:\n StarRocks_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n StarRocks_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n {\n 'id': 'text_id',\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n Defaults to identity map.\n \"\"\"\n host: str = \"localhost\"\n port: int = 9030\n username: str = \"root\"\n password: str = \"\"\n column_map: Dict[str, str] = {\n \"id\": \"id\",\n \"document\": \"document\",\n \"embedding\": \"embedding\",\n \"metadata\": \"metadata\",\n }\n database: str = \"default\"\n table: str = \"langchain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-2", "text": "database: str = \"default\"\n table: str = \"langchain\"\n def __getitem__(self, item: str) -> Any:\n return getattr(self, item)\n class Config:\n env_file = \".env\"\n env_prefix = \"starrocks_\"\n env_file_encoding = \"utf-8\"\n[docs]class StarRocks(VectorStore):\n \"\"\"Wrapper around StarRocks vector database\n You need a `pymysql` python package, and a valid account\n to connect to StarRocks.\n Right now StarRocks has only implemented `cosine_similarity` function to\n compute distance between two vectors. 
And there is no vector inside right now,\n so we have to iterate all vectors and compute spatial distance.\n For more information, please visit\n [StarRocks official site](https://www.starrocks.io/)\n [StarRocks github](https://github.com/StarRocks/starrocks)\n \"\"\"\n def __init__(\n self,\n embedding: Embeddings,\n config: Optional[StarRocksSettings] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"StarRocks Wrapper to LangChain\n embedding_function (Embeddings):\n config (StarRocksSettings): Configuration to StarRocks Client\n \"\"\"\n try:\n import pymysql # type: ignore[import]\n except ImportError:\n raise ImportError(\n \"Could not import pymysql python package. \"\n \"Please install it with `pip install pymysql`.\"\n )\n try:\n from tqdm import tqdm\n self.pgbar = tqdm\n except ImportError:\n # Just in case if tqdm is not installed\n self.pgbar = lambda x, **kwargs: x", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-3", "text": "self.pgbar = lambda x, **kwargs: x\n super().__init__()\n if config is not None:\n self.config = config\n else:\n self.config = StarRocksSettings()\n assert self.config\n assert self.config.host and self.config.port\n assert self.config.column_map and self.config.database and self.config.table\n for k in [\"id\", \"embedding\", \"document\", \"metadata\"]:\n assert k in self.config.column_map\n # initialize the schema\n dim = len(embedding.embed_query(\"test\"))\n self.schema = f\"\"\"\\\nCREATE TABLE IF NOT EXISTS {self.config.database}.{self.config.table}( \n {self.config.column_map['id']} string,\n {self.config.column_map['document']} string,\n {self.config.column_map['embedding']} array,\n {self.config.column_map['metadata']} string\n) ENGINE = OLAP PRIMARY KEY(id) DISTRIBUTED BY HASH(id) \\\n PROPERTIES (\"replication_num\" = \"1\")\\\n\"\"\"\n self.dim = dim\n self.BS = \"\\\\\"\n self.must_escape = (\"\\\\\", \"'\")\n self.embedding_function = 
embedding\n self.dist_order = \"DESC\"\n debug_output(self.config)\n # Create a connection to StarRocks\n self.connection = pymysql.connect(\n host=self.config.host,\n port=self.config.port,\n user=self.config.username,\n password=self.config.password,\n database=self.config.database,\n **kwargs,\n )\n debug_output(self.schema)\n get_named_result(self.connection, self.schema)\n[docs] def escape_str(self, value: str) -> str:\n return \"\".join(f\"{self.BS}{c}\" if c in self.must_escape else c for c in value)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-4", "text": "def _build_insert_sql(self, transac: Iterable, column_names: Iterable[str]) -> str:\n ks = \",\".join(column_names)\n embed_tuple_index = tuple(column_names).index(\n self.config.column_map[\"embedding\"]\n )\n _data = []\n for n in transac:\n n = \",\".join(\n [\n f\"'{self.escape_str(str(_n))}'\"\n if idx != embed_tuple_index\n else f\"array{str(_n)}\"\n for (idx, _n) in enumerate(n)\n ]\n )\n _data.append(f\"({n})\")\n i_str = f\"\"\"\n INSERT INTO\n {self.config.database}.{self.config.table}({ks})\n VALUES\n {','.join(_data)}\n \"\"\"\n return i_str\n def _insert(self, transac: Iterable, column_names: Iterable[str]) -> None:\n _insert_query = self._build_insert_sql(transac, column_names)\n debug_output(_insert_query)\n get_named_result(self.connection, _insert_query)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n batch_size: int = 32,\n ids: Optional[Iterable[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Insert more texts through the embeddings and add to the VectorStore.\n Args:\n texts: Iterable of strings to add to the VectorStore.\n ids: Optional list of ids to associate with the texts.\n batch_size: Batch size of insertion\n metadata: Optional column data to be inserted\n Returns:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-5", "text": "metadata: Optional column data to be inserted\n Returns:\n List of ids from adding the texts into the VectorStore.\n \"\"\"\n # Embed and create the documents\n ids = ids or [sha1(t.encode(\"utf-8\")).hexdigest() for t in texts]\n colmap_ = self.config.column_map\n transac = []\n column_names = {\n colmap_[\"id\"]: ids,\n colmap_[\"document\"]: texts,\n colmap_[\"embedding\"]: self.embedding_function.embed_documents(list(texts)),\n }\n metadatas = metadatas or [{} for _ in texts]\n column_names[colmap_[\"metadata\"]] = map(json.dumps, metadatas)\n assert len(set(colmap_) - set(column_names)) >= 0\n keys, values = zip(*column_names.items())\n try:\n t = None\n for v in self.pgbar(\n zip(*values), desc=\"Inserting data...\", total=len(metadatas)\n ):\n assert (\n len(v[keys.index(self.config.column_map[\"embedding\"])]) == self.dim\n )\n transac.append(v)\n if len(transac) == batch_size:\n if t:\n t.join()\n t = Thread(target=self._insert, args=[transac, keys])\n t.start()\n transac = []\n if len(transac) > 0:\n if t:\n t.join()\n self._insert(transac, keys)\n return [i for i in ids]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-6", "text": "return []\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n config: Optional[StarRocksSettings] = None,\n text_ids: Optional[Iterable[str]] = None,\n batch_size: int = 32,\n **kwargs: Any,\n ) -> StarRocks:\n \"\"\"Create StarRocks wrapper with existing texts\n Args:\n embedding_function (Embeddings): Function to extract text embedding\n texts (Iterable[str]): List or tuple of strings to be added\n config 
(StarRocksSettings, Optional): StarRocks configuration\n text_ids (Optional[Iterable], optional): IDs for the texts.\n Defaults to None.\n batch_size (int, optional): Batchsize when transmitting data to StarRocks.\n Defaults to 32.\n metadata (List[dict], optional): metadata to texts. Defaults to None.\n Returns:\n StarRocks Index\n \"\"\"\n ctx = cls(embedding, config, **kwargs)\n ctx.add_texts(texts, ids=text_ids, batch_size=batch_size, metadatas=metadatas)\n return ctx\n def __repr__(self) -> str:\n \"\"\"Text representation for StarRocks Vector Store, prints backends, username\n and schemas. Easy to use with `str(StarRocks())`\n Returns:\n repr: string to show connection info and data schema\n \"\"\"\n _repr = f\"\\033[92m\\033[1m{self.config.database}.{self.config.table} @ \"\n _repr += f\"{self.config.host}:{self.config.port}\\033[0m\\n\\n\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-7", "text": "_repr += f\"\\033[1musername: {self.config.username}\\033[0m\\n\\nTable Schema:\\n\"\n width = 25\n fields = 3\n _repr += \"-\" * (width * fields + 1) + \"\\n\"\n columns = [\"name\", \"type\", \"key\"]\n _repr += f\"|\\033[94m{columns[0]:24s}\\033[0m|\\033[96m{columns[1]:24s}\"\n _repr += f\"\\033[0m|\\033[96m{columns[2]:24s}\\033[0m|\\n\"\n _repr += \"-\" * (width * fields + 1) + \"\\n\"\n q_str = f\"DESC {self.config.database}.{self.config.table}\"\n debug_output(q_str)\n rs = get_named_result(self.connection, q_str)\n for r in rs:\n _repr += f\"|\\033[94m{r['Field']:24s}\\033[0m|\\033[96m{r['Type']:24s}\"\n _repr += f\"\\033[0m|\\033[96m{r['Key']:24s}\\033[0m|\\n\"\n _repr += \"-\" * (width * fields + 1) + \"\\n\"\n return _repr\n def _build_query_sql(\n self, q_emb: List[float], topk: int, where_str: Optional[str] = None\n ) -> str:\n q_emb_str = \",\".join(map(str, q_emb))\n if where_str:\n where_str = f\"WHERE {where_str}\"\n else:\n where_str = \"\"\n q_str = f\"\"\"\n SELECT 
{self.config.column_map['document']},", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-8", "text": "q_str = f\"\"\"\n SELECT {self.config.column_map['document']}, \n {self.config.column_map['metadata']}, \n cosine_similarity_norm(array[{q_emb_str}],\n {self.config.column_map['embedding']}) as dist\n FROM {self.config.database}.{self.config.table}\n {where_str}\n ORDER BY dist {self.dist_order}\n LIMIT {topk}\n \"\"\"\n debug_output(q_str)\n return q_str\n[docs] def similarity_search(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Document]:\n \"\"\"Perform a similarity search with StarRocks\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of Documents\n \"\"\"\n return self.similarity_search_by_vector(\n self.embedding_function.embed_query(query), k, where_str, **kwargs\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n where_str: Optional[str] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Perform a similarity search with StarRocks by vectors\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-9", "text": "\"\"\"Perform a similarity search with StarRocks by vectors\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. 
Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. The default name for it is `metadata`.\n Returns:\n List[Document]: List of (Document, similarity)\n \"\"\"\n q_str = self._build_query_sql(embedding, k, where_str)\n try:\n return [\n Document(\n page_content=r[self.config.column_map[\"document\"]],\n metadata=json.loads(r[self.config.column_map[\"metadata\"]]),\n )\n for r in get_named_result(self.connection, q_str)\n ]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any\n ) -> List[Tuple[Document, float]]:\n \"\"\"Perform a similarity search with StarRocks\n Args:\n query (str): query string\n k (int, optional): Top K neighbors to retrieve. Defaults to 4.\n where_str (Optional[str], optional): where condition string.\n Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "381602e71446-10", "text": "where_str (Optional[str], optional): where condition string.\n Defaults to None.\n NOTE: Please do not let end-user to fill this and always be aware\n of SQL injection. When dealing with metadatas, remember to\n use `{self.metadata_column}.attribute` instead of `attribute`\n alone. 
The default name for it is `metadata`.\n Returns:\n List[Document]: List of documents\n \"\"\"\n q_str = self._build_query_sql(\n self.embedding_function.embed_query(query), k, where_str\n )\n try:\n return [\n (\n Document(\n page_content=r[self.config.column_map[\"document\"]],\n metadata=json.loads(r[self.config.column_map[\"metadata\"]]),\n ),\n r[\"dist\"],\n )\n for r in get_named_result(self.connection, q_str)\n ]\n except Exception as e:\n logger.error(f\"\\033[91m\\033[1m{type(e)}\\033[0m \\033[95m{str(e)}\\033[0m\")\n return []\n[docs] def drop(self) -> None:\n \"\"\"\n Helper function: Drop data\n \"\"\"\n get_named_result(\n self.connection,\n f\"DROP TABLE IF EXISTS {self.config.database}.{self.config.table}\",\n )\n @property\n def metadata_column(self) -> str:\n return self.config.column_map[\"metadata\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/starrocks.html"} +{"id": "d2ff981ad89c-0", "text": "Source code for langchain.vectorstores.vectara\n\"\"\"Wrapper around Vectara vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport os\nfrom hashlib import md5\nfrom typing import Any, Iterable, List, Optional, Tuple, Type\nimport requests\nfrom pydantic import Field\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import Document\nfrom langchain.vectorstores.base import VectorStore, VectorStoreRetriever\n[docs]class Vectara(VectorStore):\n \"\"\"Implementation of Vector Store using Vectara (https://vectara.com).\n Example:\n .. 
code-block:: python\n from langchain.vectorstores import Vectara\n vectorstore = Vectara(\n vectara_customer_id=vectara_customer_id,\n vectara_corpus_id=vectara_corpus_id,\n vectara_api_key=vectara_api_key\n )\n \"\"\"\n def __init__(\n self,\n vectara_customer_id: Optional[str] = None,\n vectara_corpus_id: Optional[str] = None,\n vectara_api_key: Optional[str] = None,\n ):\n \"\"\"Initialize with Vectara API.\"\"\"\n self._vectara_customer_id = vectara_customer_id or os.environ.get(\n \"VECTARA_CUSTOMER_ID\"\n )\n self._vectara_corpus_id = vectara_corpus_id or os.environ.get(\n \"VECTARA_CORPUS_ID\"\n )\n self._vectara_api_key = vectara_api_key or os.environ.get(\"VECTARA_API_KEY\")\n if (\n self._vectara_customer_id is None\n or self._vectara_corpus_id is None\n or self._vectara_api_key is None\n ):\n logging.warning(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} +{"id": "d2ff981ad89c-1", "text": "or self._vectara_api_key is None\n ):\n logging.warning(\n \"Cant find Vectara credentials, customer_id or corpus_id in \"\n \"environment.\"\n )\n else:\n logging.debug(f\"Using corpus id {self._vectara_corpus_id}\")\n self._session = requests.Session() # to reuse connections\n adapter = requests.adapters.HTTPAdapter(max_retries=3)\n self._session.mount(\"http://\", adapter)\n def _get_post_headers(self) -> dict:\n \"\"\"Returns headers that should be attached to each post request.\"\"\"\n return {\n \"x-api-key\": self._vectara_api_key,\n \"customer-id\": self._vectara_customer_id,\n \"Content-Type\": \"application/json\",\n }\n def _delete_doc(self, doc_id: str) -> bool:\n \"\"\"\n Delete a document from the Vectara corpus.\n Args:\n url (str): URL of the page to delete.\n doc_id (str): ID of the document to delete.\n Returns:\n bool: True if deletion was successful, False otherwise.\n \"\"\"\n body = {\n \"customer_id\": self._vectara_customer_id,\n \"corpus_id\": self._vectara_corpus_id,\n \"document_id\": 
doc_id,\n }\n response = self._session.post(\n \"https://api.vectara.io/v1/delete-doc\",\n data=json.dumps(body),\n verify=True,\n headers=self._get_post_headers(),\n )\n if response.status_code != 200:\n logging.error(\n f\"Delete request failed for doc_id = {doc_id} with status code \"\n f\"{response.status_code}, reason {response.reason}, text \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} +{"id": "d2ff981ad89c-2", "text": "f\"{response.status_code}, reason {response.reason}, text \"\n f\"{response.text}\"\n )\n return False\n return True\n def _index_doc(self, doc: dict) -> bool:\n request: dict[str, Any] = {}\n request[\"customer_id\"] = self._vectara_customer_id\n request[\"corpus_id\"] = self._vectara_corpus_id\n request[\"document\"] = doc\n response = self._session.post(\n headers=self._get_post_headers(),\n url=\"https://api.vectara.io/v1/core/index\",\n data=json.dumps(request),\n timeout=30,\n verify=True,\n )\n status_code = response.status_code\n result = response.json()\n status_str = result[\"status\"][\"code\"] if \"status\" in result else None\n if status_code == 409 or (status_str and status_str == \"ALREADY_EXISTS\"):\n return False\n else:\n return True\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"\n doc_hash = md5()\n for t in texts:\n doc_hash.update(t.encode())\n doc_id = doc_hash.hexdigest()\n if metadatas is None:\n metadatas = [{} for _ in texts]\n doc = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} +{"id": "d2ff981ad89c-3", "text": "metadatas = [{} for _ in 
texts]\n doc = {\n \"document_id\": doc_id,\n \"metadataJson\": json.dumps({\"source\": \"langchain\"}),\n \"parts\": [\n {\"text\": text, \"metadataJson\": json.dumps(md)}\n for text, md in zip(texts, metadatas)\n ],\n }\n succeeded = self._index_doc(doc)\n if not succeeded:\n self._delete_doc(doc_id)\n self._index_doc(doc)\n return [doc_id]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 5,\n lambda_val: float = 0.025,\n filter: Optional[str] = None,\n n_sentence_context: int = 0,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return Vectara documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 5.\n lambda_val: lexical match parameter for hybrid search.\n filter: Dictionary of argument(s) to filter on metadata. For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"} see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview\n for more details.\n n_sentence_context: number of sentences before/after the matching segment\n to add\n Returns:\n List of Documents most similar to the query and score for each.\n \"\"\"\n data = json.dumps(\n {\n \"query\": [\n {\n \"query\": query,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} +{"id": "d2ff981ad89c-4", "text": "{\n \"query\": [\n {\n \"query\": query,\n \"start\": 0,\n \"num_results\": k,\n \"context_config\": {\n \"sentences_before\": n_sentence_context,\n \"sentences_after\": n_sentence_context,\n },\n \"corpus_key\": [\n {\n \"customer_id\": self._vectara_customer_id,\n \"corpus_id\": self._vectara_corpus_id,\n \"metadataFilter\": filter,\n \"lexical_interpolation_config\": {\"lambda\": lambda_val},\n }\n ],\n }\n ]\n }\n )\n response = self._session.post(\n headers=self._get_post_headers(),\n url=\"https://api.vectara.io/v1/query\",\n data=data,\n timeout=10,\n )\n if response.status_code != 
200:\n logging.error(\n \"Query failed %s\",\n f\"(code {response.status_code}, reason {response.reason}, details \"\n f\"{response.text})\",\n )\n return []\n result = response.json()\n responses = result[\"responseSet\"][0][\"response\"]\n vectara_default_metadata = [\"lang\", \"len\", \"offset\"]\n docs = [\n (\n Document(\n page_content=x[\"text\"],\n metadata={\n m[\"name\"]: m[\"value\"]\n for m in x[\"metadata\"]\n if m[\"name\"] not in vectara_default_metadata\n },\n ),\n x[\"score\"],\n )\n for x in responses\n ]\n return docs\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 5,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} +{"id": "d2ff981ad89c-5", "text": "self,\n query: str,\n k: int = 5,\n lambda_val: float = 0.025,\n filter: Optional[str] = None,\n n_sentence_context: int = 0,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return Vectara documents most similar to query, along with scores.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 5.\n filter: Dictionary of argument(s) to filter on metadata. 
For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"; see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview for more\n details.\n n_sentence_context: number of sentences before/after the matching segment\n to add\n Returns:\n List of Documents most similar to the query\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(\n query,\n k=k,\n lambda_val=lambda_val,\n filter=filter,\n n_sentence_context=n_sentence_context,\n **kwargs,\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] @classmethod\n def from_texts(\n cls: Type[Vectara],\n texts: List[str],\n embedding: Optional[Embeddings] = None,\n metadatas: Optional[List[dict]] = None,\n **kwargs: Any,\n ) -> Vectara:\n \"\"\"Construct Vectara wrapper from raw documents.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain import Vectara", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} +{"id": "d2ff981ad89c-6", "text": "Example:\n .. code-block:: python\n from langchain import Vectara\n vectara = Vectara.from_texts(\n texts,\n vectara_customer_id=customer_id,\n vectara_corpus_id=corpus_id,\n vectara_api_key=api_key,\n )\n \"\"\"\n # Note: Vectara generates its own embeddings, so we ignore the provided\n # embeddings (required by interface)\n vectara = cls(**kwargs)\n vectara.add_texts(texts, metadatas)\n return vectara\n[docs] def as_retriever(self, **kwargs: Any) -> VectaraRetriever:\n return VectaraRetriever(vectorstore=self, **kwargs)\nclass VectaraRetriever(VectorStoreRetriever):\n vectorstore: Vectara\n search_kwargs: dict = Field(\n default_factory=lambda: {\n \"lambda_val\": 0.025,\n \"k\": 5,\n \"filter\": \"\",\n \"n_sentence_context\": \"0\",\n }\n )\n \"\"\"Search params.\n k: Number of Documents to return. Defaults to 5.\n lambda_val: lexical match parameter for hybrid search.\n filter: Dictionary of argument(s) to filter on metadata. 
For example a\n filter can be \"doc.rating > 3.0 and part.lang = 'deu'\"; see\n https://docs.vectara.com/docs/search-apis/sql/filter-overview\n for more details.\n n_sentence_context: number of sentences before/after the matching segment to add\n \"\"\"\n def add_texts(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> None:\n \"\"\"Add text to the Vectara vectorstore.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} +{"id": "d2ff981ad89c-7", "text": ") -> None:\n \"\"\"Add text to the Vectara vectorstore.\n Args:\n texts (List[str]): The text\n metadatas (List[dict]): Metadata dicts, must line up with existing store\n \"\"\"\n self.vectorstore.add_texts(texts, metadatas)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/vectara.html"} +{"id": "2c2730649d55-0", "text": "Source code for langchain.vectorstores.hologres\n\"\"\"VectorStore wrapper around a Hologres database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport uuid\nfrom typing import Any, Dict, Iterable, List, Optional, Tuple, Type\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore\nADA_TOKEN_COUNT = 1536\n_LANGCHAIN_DEFAULT_TABLE_NAME = \"langchain_pg_embedding\"\nclass HologresWrapper:\n def __init__(self, connection_string: str, ndims: int, table_name: str) -> None:\n import psycopg2\n self.table_name = table_name\n self.conn = psycopg2.connect(connection_string)\n self.cursor = self.conn.cursor()\n self.conn.autocommit = False\n self.ndims = ndims\n def create_vector_extension(self) -> None:\n self.cursor.execute(\"create extension if not exists proxima\")\n self.conn.commit()\n def create_table(self, drop_if_exist: bool = True) -> None:\n if drop_if_exist:\n self.cursor.execute(f\"drop table if exists 
{self.table_name}\")\n self.conn.commit()\n self.cursor.execute(\n f\"\"\"create table if not exists {self.table_name} (\nid text,\nembedding float4[] check(array_ndims(embedding) = 1 and \\\narray_length(embedding, 1) = {self.ndims}),\nmetadata json,\ndocument text);\"\"\"\n )\n self.cursor.execute(\n f\"call set_table_property('{self.table_name}'\"\n + \"\"\", 'proxima_vectors', \n'{\"embedding\":{\"algorithm\":\"Graph\",\n\"distance_method\":\"SquaredEuclidean\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-1", "text": "'{\"embedding\":{\"algorithm\":\"Graph\",\n\"distance_method\":\"SquaredEuclidean\",\n\"build_params\":{\"min_flush_proxima_row_count\" : 1,\n\"min_compaction_proxima_row_count\" : 1, \n\"max_total_size_to_merge_mb\" : 2000}}}');\"\"\"\n )\n self.conn.commit()\n def get_by_id(self, id: str) -> List[Tuple]:\n statement = (\n f\"select id, embedding, metadata, \"\n f\"document from {self.table_name} where id = %s;\"\n )\n self.cursor.execute(\n statement,\n (id),\n )\n self.conn.commit()\n return self.cursor.fetchall()\n def insert(\n self,\n embedding: List[float],\n metadata: dict,\n document: str,\n id: Optional[str] = None,\n ) -> None:\n self.cursor.execute(\n f'insert into \"{self.table_name}\" '\n f\"values (%s, array{json.dumps(embedding)}::float4[], %s, %s)\",\n (id if id is not None else \"null\", json.dumps(metadata), document),\n )\n self.conn.commit()\n def query_nearest_neighbours(\n self, embedding: List[float], k: int, filter: Optional[Dict[str, str]] = None\n ) -> List[Tuple[str, str, float]]:\n params = []\n filter_clause = \"\"\n if filter is not None:\n conjuncts = []\n for key, val in filter.items():\n conjuncts.append(\"metadata->>%s=%s\")\n params.append(key)\n params.append(val)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-2", "text": "params.append(key)\n 
params.append(val)\n filter_clause = \"where \" + \" and \".join(conjuncts)\n sql = (\n f\"select document, metadata::text, \"\n f\"pm_approx_squared_euclidean_distance(array{json.dumps(embedding)}\"\n f\"::float4[], embedding) as distance from\"\n f\" {self.table_name} {filter_clause} order by distance asc limit {k};\"\n )\n self.cursor.execute(sql, tuple(params))\n self.conn.commit()\n return self.cursor.fetchall()\n[docs]class Hologres(VectorStore):\n \"\"\"VectorStore implementation using Hologres.\n - `connection_string` is a hologres connection string.\n - `embedding_function` any embedding function implementing\n `langchain.embeddings.base.Embeddings` interface.\n - `ndims` is the number of dimensions of the embedding output.\n - `table_name` is the name of the table to store embeddings and data.\n (default: langchain_pg_embedding)\n - NOTE: The table will be created when initializing the store (if not exists)\n So, make sure the user has the right permissions to create tables.\n - `pre_delete_table` if True, will delete the table if it exists.\n (default: False)\n - Useful for testing.\n \"\"\"\n def __init__(\n self,\n connection_string: str,\n embedding_function: Embeddings,\n ndims: int = ADA_TOKEN_COUNT,\n table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,\n pre_delete_table: bool = False,\n logger: Optional[logging.Logger] = None,\n ) -> None:\n self.connection_string = connection_string\n self.ndims = ndims", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-3", "text": "self.connection_string = connection_string\n self.ndims = ndims\n self.table_name = table_name\n self.embedding_function = embedding_function\n self.pre_delete_table = pre_delete_table\n self.logger = logger or logging.getLogger(__name__)\n self.__post_init__()\n def __post_init__(\n self,\n ) -> None:\n \"\"\"\n Initialize the store.\n \"\"\"\n self.storage = HologresWrapper(\n self.connection_string, self.ndims, 
self.table_name\n )\n self.create_vector_extension()\n self.create_table()\n[docs] def create_vector_extension(self) -> None:\n try:\n self.storage.create_vector_extension()\n except Exception as e:\n self.logger.exception(e)\n raise e\n[docs] def create_table(self) -> None:\n self.storage.create_table(self.pre_delete_table)\n @classmethod\n def __from(\n cls,\n texts: List[str],\n embeddings: List[List[float]],\n embedding_function: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n ndims: int = ADA_TOKEN_COUNT,\n table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,\n pre_delete_table: bool = False,\n **kwargs: Any,\n ) -> Hologres:\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n if not metadatas:\n metadatas = [{} for _ in texts]\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n embedding_function=embedding_function,\n ndims=ndims,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-4", "text": "embedding_function=embedding_function,\n ndims=ndims,\n table_name=table_name,\n pre_delete_table=pre_delete_table,\n )\n store.add_embeddings(\n texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs\n )\n return store\n[docs] def add_embeddings(\n self,\n texts: Iterable[str],\n embeddings: List[List[float]],\n metadatas: List[dict],\n ids: List[str],\n **kwargs: Any,\n ) -> None:\n \"\"\"Add embeddings to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n embeddings: List of list of embedding vectors.\n metadatas: List of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n \"\"\"\n try:\n for text, metadata, embedding, id in zip(texts, metadatas, embeddings, ids):\n self.storage.insert(embedding, metadata, text, id)\n except Exception as e:\n self.logger.exception(e)\n self.storage.conn.commit()\n[docs] 
def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the vectorstore.\n Args:\n texts: Iterable of strings to add to the vectorstore.\n metadatas: Optional list of metadatas associated with the texts.\n kwargs: vectorstore specific parameters\n Returns:\n List of ids from adding the texts into the vectorstore.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-5", "text": "List of ids from adding the texts into the vectorstore.\n \"\"\"\n if ids is None:\n ids = [str(uuid.uuid1()) for _ in texts]\n embeddings = self.embedding_function.embed_documents(list(texts))\n if not metadatas:\n metadatas = [{} for _ in texts]\n self.add_embeddings(texts, embeddings, metadatas, ids, **kwargs)\n return ids\n[docs] def similarity_search(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Run similarity search with Hologres with distance.\n Args:\n query (str): Query text to search for.\n k (int): Number of results to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query.\n \"\"\"\n embedding = self.embedding_function.embed_query(text=query)\n return self.similarity_search_by_vector(\n embedding=embedding,\n k=k,\n filter=filter,\n )\n[docs] def similarity_search_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs most similar to embedding vector.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. 
Defaults to None.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-6", "text": "Returns:\n List of Documents most similar to the query vector.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_with_score(\n self,\n query: str,\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n embedding = self.embedding_function.embed_query(query)\n docs = self.similarity_search_with_score_by_vector(\n embedding=embedding, k=k, filter=filter\n )\n return docs\n[docs] def similarity_search_with_score_by_vector(\n self,\n embedding: List[float],\n k: int = 4,\n filter: Optional[dict] = None,\n ) -> List[Tuple[Document, float]]:\n results: List[Tuple[str, str, float]] = self.storage.query_nearest_neighbours(\n embedding, k, filter\n )\n docs = [\n (\n Document(\n page_content=result[0],\n metadata=json.loads(result[1]),\n ),\n result[2],\n )\n for result in results\n ]\n return docs\n[docs] @classmethod\n def from_texts(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-7", "text": "]\n return docs\n[docs] @classmethod\n def from_texts(\n cls: Type[Hologres],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ndims: int = ADA_TOKEN_COUNT,\n table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_table: bool = False,\n **kwargs: Any,\n ) -> Hologres:\n \"\"\"\n Return 
VectorStore initialized from texts and embeddings.\n Postgres connection string is required.\n Either pass it as a parameter\n or set the HOLOGRES_CONNECTION_STRING environment variable.\n \"\"\"\n embeddings = embedding.embed_documents(list(texts))\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n ndims=ndims,\n table_name=table_name,\n pre_delete_table=pre_delete_table,\n **kwargs,\n )\n[docs] @classmethod\n def from_embeddings(\n cls,\n text_embeddings: List[Tuple[str, List[float]]],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n ndims: int = ADA_TOKEN_COUNT,\n table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_table: bool = False,\n **kwargs: Any,\n ) -> Hologres:\n \"\"\"Construct Hologres wrapper from raw documents and pre-\n generated embeddings.\n Return VectorStore initialized from documents and embeddings.\n Postgres connection string is required", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-8", "text": "Return VectorStore initialized from documents and embeddings.\n Postgres connection string is required.\n Either pass it as a parameter\n or set the HOLOGRES_CONNECTION_STRING environment variable.\n Example:\n .. 
code-block:: python\n from langchain import Hologres\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n text_embeddings = embeddings.embed_documents(texts)\n text_embedding_pairs = list(zip(texts, text_embeddings))\n hologres = Hologres.from_embeddings(text_embedding_pairs, embeddings)\n \"\"\"\n texts = [t[0] for t in text_embeddings]\n embeddings = [t[1] for t in text_embeddings]\n return cls.__from(\n texts,\n embeddings,\n embedding,\n metadatas=metadatas,\n ids=ids,\n ndims=ndims,\n table_name=table_name,\n pre_delete_table=pre_delete_table,\n **kwargs,\n )\n[docs] @classmethod\n def from_existing_index(\n cls: Type[Hologres],\n embedding: Embeddings,\n ndims: int = ADA_TOKEN_COUNT,\n table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,\n pre_delete_table: bool = False,\n **kwargs: Any,\n ) -> Hologres:\n \"\"\"\n Get instance of an existing Hologres store. This method will\n return the instance of the store without inserting any new\n embeddings\n \"\"\"\n connection_string = cls.get_connection_string(kwargs)\n store = cls(\n connection_string=connection_string,\n ndims=ndims,\n table_name=table_name,\n embedding_function=embedding,\n pre_delete_table=pre_delete_table,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-9", "text": "embedding_function=embedding,\n pre_delete_table=pre_delete_table,\n )\n return store\n[docs] @classmethod\n def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:\n connection_string: str = get_from_dict_or_env(\n data=kwargs,\n key=\"connection_string\",\n env_key=\"HOLOGRES_CONNECTION_STRING\",\n )\n if not connection_string:\n raise ValueError(\n \"Postgres connection string is required\"\n \"Either pass it as a parameter\"\n \"or set the HOLOGRES_CONNECTION_STRING environment variable.\"\n )\n return connection_string\n[docs] @classmethod\n def from_documents(\n cls: Type[Hologres],\n documents: List[Document],\n 
embedding: Embeddings,\n ndims: int = ADA_TOKEN_COUNT,\n table_name: str = _LANGCHAIN_DEFAULT_TABLE_NAME,\n ids: Optional[List[str]] = None,\n pre_delete_collection: bool = False,\n **kwargs: Any,\n ) -> Hologres:\n \"\"\"\n Return VectorStore initialized from documents and embeddings.\n Postgres connection string is required.\n Either pass it as a parameter\n or set the HOLOGRES_CONNECTION_STRING environment variable.\n \"\"\"\n texts = [d.page_content for d in documents]\n metadatas = [d.metadata for d in documents]\n connection_string = cls.get_connection_string(kwargs)\n kwargs[\"connection_string\"] = connection_string\n return cls.from_texts(\n texts=texts,\n pre_delete_collection=pre_delete_collection,\n embedding=embedding,\n metadatas=metadatas,\n ids=ids,\n ndims=ndims,\n table_name=table_name,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "2c2730649d55-10", "text": "ndims=ndims,\n table_name=table_name,\n **kwargs,\n )\n[docs] @classmethod\n def connection_string_from_db_params(\n cls,\n host: str,\n port: int,\n database: str,\n user: str,\n password: str,\n ) -> str:\n \"\"\"Return connection string from database parameters.\"\"\"\n return (\n f\"dbname={database} user={user} password={password} host={host} port={port}\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/hologres.html"} +{"id": "780aeb929fc5-0", "text": "Source code for langchain.vectorstores.redis\n\"\"\"Wrapper around Redis vector database.\"\"\"\nfrom __future__ import annotations\nimport json\nimport logging\nimport uuid\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n Iterable,\n List,\n Literal,\n Mapping,\n Optional,\n Tuple,\n Type,\n)\nimport numpy as np\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import 
get_from_dict_or_env\nfrom langchain.vectorstores.base import VectorStore, VectorStoreRetriever\nlogger = logging.getLogger(__name__)\nif TYPE_CHECKING:\n from redis.client import Redis as RedisType\n from redis.commands.search.query import Query\n# required modules\nREDIS_REQUIRED_MODULES = [\n {\"name\": \"search\", \"ver\": 20400},\n {\"name\": \"searchlight\", \"ver\": 20400},\n]\n# distance metrics\nREDIS_DISTANCE_METRICS = Literal[\"COSINE\", \"IP\", \"L2\"]\ndef _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:\n \"\"\"Check if the correct Redis modules are installed.\"\"\"\n installed_modules = client.module_list()\n installed_modules = {\n module[b\"name\"].decode(\"utf-8\"): module for module in installed_modules\n }\n for module in required_modules:\n if module[\"name\"] in installed_modules and int(\n installed_modules[module[\"name\"]][b\"ver\"]\n ) >= int(module[\"ver\"]):\n return\n # otherwise raise error\n error_message = (\n \"Redis cannot be used as a vector database without RediSearch >=2.4\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-1", "text": "\"Redis cannot be used as a vector database without RediSearch >=2.4\"\n \"Please head to https://redis.io/docs/stack/search/quick_start/\"\n \"to know more about installing the RediSearch module within Redis Stack.\"\n )\n logging.error(error_message)\n raise ValueError(error_message)\ndef _check_index_exists(client: RedisType, index_name: str) -> bool:\n \"\"\"Check if Redis index exists.\"\"\"\n try:\n client.ft(index_name).info()\n except: # noqa: E722\n logger.info(\"Index does not exist\")\n return False\n logger.info(\"Index already exists\")\n return True\ndef _redis_key(prefix: str) -> str:\n \"\"\"Redis key schema for a given prefix.\"\"\"\n return f\"{prefix}:{uuid.uuid4().hex}\"\ndef _redis_prefix(index_name: str) -> str:\n \"\"\"Redis key prefix for a given index.\"\"\"\n 
return f\"doc:{index_name}\"\ndef _default_relevance_score(val: float) -> float:\n return 1 - val\n[docs]class Redis(VectorStore):\n \"\"\"Wrapper around Redis vector database.\n To use, you should have the ``redis`` python package installed.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n vectorstore = Redis(\n redis_url=\"redis://username:password@localhost:6379\"\n index_name=\"my-index\",\n embedding_function=embeddings.embed_query,\n )\n \"\"\"\n def __init__(\n self,\n redis_url: str,\n index_name: str,\n embedding_function: Callable,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-2", "text": "index_name: str,\n embedding_function: Callable,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n relevance_score_fn: Optional[\n Callable[[float], float]\n ] = _default_relevance_score,\n **kwargs: Any,\n ):\n \"\"\"Initialize with necessary components.\"\"\"\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis>=4.1.0`.\"\n )\n self.embedding_function = embedding_function\n self.index_name = index_name\n try:\n # connect to redis from url\n redis_client = redis.from_url(redis_url, **kwargs)\n # check if redis has redisearch module installed\n _check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)\n except ValueError as e:\n raise ValueError(f\"Redis failed to connect: {e}\")\n self.client = redis_client\n self.content_key = content_key\n self.metadata_key = metadata_key\n self.vector_key = vector_key\n self.relevance_score_fn = relevance_score_fn\n def _create_index(\n self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = \"COSINE\"\n ) -> None:\n try:\n from redis.commands.search.field import TextField, VectorField\n from redis.commands.search.indexDefinition import IndexDefinition, IndexType\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n # Check if index exists\n if not _check_index_exists(self.client, self.index_name):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-3", "text": "if not _check_index_exists(self.client, self.index_name):\n # Define schema\n schema = (\n TextField(name=self.content_key),\n TextField(name=self.metadata_key),\n VectorField(\n self.vector_key,\n \"FLAT\",\n {\n \"TYPE\": \"FLOAT32\",\n \"DIM\": dim,\n \"DISTANCE_METRIC\": distance_metric,\n },\n ),\n )\n prefix = _redis_prefix(self.index_name)\n # Create Redis Index\n self.client.ft(self.index_name).create_index(\n fields=schema,\n definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),\n )\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict]] = None,\n embeddings: Optional[List[List[float]]] = None,\n batch_size: int = 1000,\n **kwargs: Any,\n ) -> List[str]:\n \"\"\"Add more texts to the vectorstore.\n 
Args:\n texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.\n metadatas (Optional[List[dict]], optional): Optional list of metadatas.\n Defaults to None.\n embeddings (Optional[List[List[float]]], optional): Optional pre-generated\n embeddings. Defaults to None.\n keys (List[str]) or ids (List[str]): Identifiers of entries.\n Defaults to None.\n batch_size (int, optional): Batch size to use for writes. Defaults to 1000.\n Returns:\n List[str]: List of ids added to the vectorstore\n \"\"\"\n ids = []\n prefix = _redis_prefix(self.index_name)\n # Get keys or ids from kwargs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-4", "text": "prefix = _redis_prefix(self.index_name)\n # Get keys or ids from kwargs\n # Other vectorstores use ids\n keys_or_ids = kwargs.get(\"keys\", kwargs.get(\"ids\"))\n # Write data to redis\n pipeline = self.client.pipeline(transaction=False)\n for i, text in enumerate(texts):\n # Use provided values by default or fallback\n key = keys_or_ids[i] if keys_or_ids else _redis_key(prefix)\n metadata = metadatas[i] if metadatas else {}\n embedding = embeddings[i] if embeddings else self.embedding_function(text)\n pipeline.hset(\n key,\n mapping={\n self.content_key: text,\n self.vector_key: np.array(embedding, dtype=np.float32).tobytes(),\n self.metadata_key: json.dumps(metadata),\n },\n )\n ids.append(key)\n # Write batch\n if i % batch_size == 0:\n pipeline.execute()\n # Cleanup final batch\n pipeline.execute()\n return ids\n[docs] def similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. 
Default is 4.\n Returns:\n List[Document]: A list of documents that are most similar to the query text.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [doc for doc, _ in docs_and_scores]\n[docs] def similarity_search_limit_score(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-5", "text": "[docs] def similarity_search_limit_score(\n self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any\n ) -> List[Document]:\n \"\"\"\n Returns the most similar indexed documents to the query text within the\n score_threshold range.\n Args:\n query (str): The query text for which to find similar documents.\n k (int): The number of documents to return. Default is 4.\n score_threshold (float): The minimum matching score required for a document\n to be considered a match. Defaults to 0.2.\n Because the similarity calculation algorithm is based on cosine similarity,\n the smaller the angle, the higher the similarity.\n Returns:\n List[Document]: A list of documents that are most similar to the query text,\n including the match score for each document.\n Note:\n If there are no documents that satisfy the score_threshold value,\n an empty list is returned.\n \"\"\"\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [doc for doc, score in docs_and_scores if score < score_threshold]\n def _prepare_query(self, k: int) -> Query:\n try:\n from redis.commands.search.query import Query\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )\n # Prepare the Query\n hybrid_fields = \"*\"\n base_query = (\n f\"{hybrid_fields}=>[KNN {k} @{self.vector_key} $vector AS vector_score]\"\n )\n return_fields = [self.metadata_key, self.content_key, \"vector_score\"]\n return (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-6", "text": "return (\n Query(base_query)\n .return_fields(*return_fields)\n .sort_by(\"vector_score\")\n .paging(0, k)\n .dialect(2)\n )\n[docs] def similarity_search_with_score(\n self, query: str, k: int = 4\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs most similar to query.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n Returns:\n List of Documents most similar to the query and score for each\n \"\"\"\n # Creates embedding vector from user query\n embedding = self.embedding_function(query)\n # Creates Redis query\n redis_query = self._prepare_query(k)\n params_dict: Mapping[str, str] = {\n \"vector\": np.array(embedding) # type: ignore\n .astype(dtype=np.float32)\n .tobytes()\n }\n # Perform vector search\n results = self.client.ft(self.index_name).search(redis_query, params_dict)\n # Prepare document results\n docs = [\n (\n Document(\n page_content=result.content, metadata=json.loads(result.metadata)\n ),\n float(result.vector_score),\n )\n for result in results.docs\n ]\n return docs\n def _similarity_search_with_relevance_scores(\n self,\n query: str,\n k: int = 4,\n **kwargs: Any,\n ) -> List[Tuple[Document, float]]:\n \"\"\"Return docs and relevance scores, normalized on a scale from 0 to 1.\n 0 is dissimilar, 1 is most similar.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-7", "text": "0 is dissimilar, 1 is most similar.\n \"\"\"\n if self.relevance_score_fn is None:\n raise ValueError(\n 
\"relevance_score_fn must be provided to\"\n \" Redis constructor to normalize scores\"\n )\n docs_and_scores = self.similarity_search_with_score(query, k=k)\n return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]\n[docs] @classmethod\n def from_texts_return_keys(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: Optional[str] = None,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n distance_metric: REDIS_DISTANCE_METRICS = \"COSINE\",\n **kwargs: Any,\n ) -> Tuple[Redis, List[str]]:\n \"\"\"Create a Redis vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in Redis.\n 3. Adds the documents to the newly created Redis index.\n 4. Returns the keys of the newly created documents.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n redisearch, keys = RediSearch.from_texts_return_keys(\n texts,\n embeddings,\n redis_url=\"redis://username:password@localhost:6379\"\n )\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-8", "text": ")\n \"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n # Name of the search index if not given\n if not index_name:\n index_name = uuid.uuid4().hex\n # Create instance\n instance = cls(\n redis_url,\n index_name,\n embedding.embed_query,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )\n # Create embeddings over documents\n embeddings = embedding.embed_documents(texts)\n # Create the search index\n 
instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric)\n # Add data to Redis\n keys = instance.add_texts(texts, metadatas, embeddings)\n return instance, keys\n[docs] @classmethod\n def from_texts(\n cls: Type[Redis],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n index_name: Optional[str] = None,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n **kwargs: Any,\n ) -> Redis:\n \"\"\"Create a Redis vectorstore from raw documents.\n This is a user-friendly interface that:\n 1. Embeds documents.\n 2. Creates a new index for the embeddings in Redis.\n 3. Adds the documents to the newly created Redis index.\n This is intended to be a quick way to get started.\n Example:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-9", "text": "Example:\n .. code-block:: python\n from langchain.vectorstores import Redis\n from langchain.embeddings import OpenAIEmbeddings\n embeddings = OpenAIEmbeddings()\n redisearch = Redis.from_texts(\n texts,\n embeddings,\n redis_url=\"redis://username:password@localhost:6379\"\n )\n \"\"\"\n instance, _ = cls.from_texts_return_keys(\n texts,\n embedding,\n metadatas=metadatas,\n index_name=index_name,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )\n return instance\n[docs] @staticmethod\n def delete(\n ids: List[str],\n **kwargs: Any,\n ) -> bool:\n \"\"\"\n Delete a Redis entry.\n Args:\n ids: List of ids (keys) to delete.\n Returns:\n bool: Whether or not the deletions were successful.\n \"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n if ids is None:\n raise ValueError(\"'ids' (keys) were not provided.\")\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )\n try:\n # We need to first remove redis_url from kwargs,\n # otherwise passing it to Redis will result in an error.\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n client = redis.from_url(url=redis_url, **kwargs)\n except ValueError as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-10", "text": "except ValueError as e:\n raise ValueError(f\"Your redis connected error: {e}\")\n # Check if index exists\n try:\n client.delete(*ids)\n logger.info(\"Entries deleted\")\n return True\n except: # noqa: E722\n # ids does not exist\n return False\n[docs] @staticmethod\n def drop_index(\n index_name: str,\n delete_documents: bool,\n **kwargs: Any,\n ) -> bool:\n \"\"\"\n Drop a Redis search index.\n Args:\n index_name (str): Name of the index to drop.\n delete_documents (bool): Whether to drop the associated documents.\n Returns:\n bool: Whether or not the drop was successful.\n \"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )\n try:\n # We need to first remove redis_url from kwargs,\n # otherwise passing it to Redis will result in an error.\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n client = redis.from_url(url=redis_url, **kwargs)\n except ValueError as e:\n raise ValueError(f\"Your redis connected error: {e}\")\n # Check if index exists\n try:\n client.ft(index_name).dropindex(delete_documents)\n logger.info(\"Drop index\")\n return True\n except: # noqa: E722\n # Index not exist\n return False\n[docs] @classmethod\n def from_existing_index(\n cls,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-11", "text": "[docs] @classmethod\n def from_existing_index(\n cls,\n embedding: Embeddings,\n index_name: str,\n content_key: str = \"content\",\n metadata_key: str = \"metadata\",\n vector_key: str = \"content_vector\",\n **kwargs: Any,\n ) -> Redis:\n \"\"\"Connect to an existing Redis index.\"\"\"\n redis_url = get_from_dict_or_env(kwargs, \"redis_url\", \"REDIS_URL\")\n try:\n import redis\n except ImportError:\n raise ValueError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )\n try:\n # We need to first remove redis_url from kwargs,\n # otherwise passing it to Redis will result in an error.\n if \"redis_url\" in kwargs:\n kwargs.pop(\"redis_url\")\n client = redis.from_url(url=redis_url, **kwargs)\n # check if redis has redisearch module installed\n _check_redis_module_exist(client, REDIS_REQUIRED_MODULES)\n # ensure that the index already exists\n assert _check_index_exists(\n client, index_name\n ), f\"Index {index_name} does not exist\"\n except Exception as e:\n raise ValueError(f\"Redis failed to connect: {e}\")\n return cls(\n redis_url,\n index_name,\n embedding.embed_query,\n content_key=content_key,\n metadata_key=metadata_key,\n vector_key=vector_key,\n **kwargs,\n )\n[docs] def as_retriever(self, **kwargs: Any) -> RedisVectorStoreRetriever:\n return RedisVectorStoreRetriever(vectorstore=self, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-12", "text": "return RedisVectorStoreRetriever(vectorstore=self, **kwargs)\nclass RedisVectorStoreRetriever(VectorStoreRetriever, BaseModel):\n vectorstore: Redis\n search_type: str = \"similarity\"\n k: int = 4\n score_threshold: float = 0.4\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n if \"search_type\" in values:\n search_type = values[\"search_type\"]\n if search_type not in (\"similarity\", \"similarity_limit\"):\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return values\n def get_relevant_documents(self, query: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(query, k=self.k)\n elif self.search_type == \"similarity_limit\":\n docs = self.vectorstore.similarity_search_limit_score(\n query, 
k=self.k, score_threshold=self.score_threshold\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\"RedisVectorStoreRetriever does not support async\")\n def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return self.vectorstore.add_documents(documents, **kwargs)\n async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "780aeb929fc5-13", "text": ") -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n return await self.vectorstore.aadd_documents(documents, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/redis.html"} +{"id": "c50bc8e67d91-0", "text": "Source code for langchain.vectorstores.zilliz\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, List, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.milvus import Milvus\nlogger = logging.getLogger(__name__)\n[docs]class Zilliz(Milvus):\n def _create_index(self) -> None:\n \"\"\"Create a index on the collection\"\"\"\n from pymilvus import Collection, MilvusException\n if isinstance(self.col, Collection) and self._get_index() is None:\n try:\n # If no index params, use a default AutoIndex based one\n if self.index_params is None:\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"AUTOINDEX\",\n \"params\": {},\n }\n try:\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n # If default did not work, most likely Milvus self-hosted\n except MilvusException:\n # Use HNSW based index\n self.index_params = {\n \"metric_type\": \"L2\",\n \"index_type\": \"HNSW\",\n 
\"params\": {\"M\": 8, \"efConstruction\": 64},\n }\n self.col.create_index(\n self._vector_field,\n index_params=self.index_params,\n using=self.alias,\n )\n logger.debug(\n \"Successfully created an index on collection: %s\",\n self.collection_name,\n )\n except MilvusException as e:\n logger.error(\n \"Failed to create an index on collection: %s\", self.collection_name", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"} +{"id": "c50bc8e67d91-1", "text": "\"Failed to create an index on collection: %s\", self.collection_name\n )\n raise e\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n collection_name: str = \"LangChainCollection\",\n connection_args: dict[str, Any] = {},\n consistency_level: str = \"Session\",\n index_params: Optional[dict] = None,\n search_params: Optional[dict] = None,\n drop_old: bool = False,\n **kwargs: Any,\n ) -> Zilliz:\n \"\"\"Create a Zilliz collection, indexes it with HNSW, and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n collection_name (str, optional): Collection name to use. Defaults to\n \"LangChainCollection\".\n connection_args (dict[str, Any], optional): Connection args to use. Defaults\n to DEFAULT_MILVUS_CONNECTION.\n consistency_level (str, optional): Which consistency level to use. Defaults\n to \"Session\".\n index_params (Optional[dict], optional): Which index_params to use.\n Defaults to None.\n search_params (Optional[dict], optional): Which search params to use.\n Defaults to None.\n drop_old (Optional[bool], optional): Whether to drop the collection with\n that name if it exists. 
Defaults to False.\n Returns:\n Zilliz: Zilliz Vector Store\n \"\"\"\n vector_db = cls(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"} +{"id": "c50bc8e67d91-2", "text": "\"\"\"\n vector_db = cls(\n embedding_function=embedding,\n collection_name=collection_name,\n connection_args=connection_args,\n consistency_level=consistency_level,\n index_params=index_params,\n search_params=search_params,\n drop_old=drop_old,\n **kwargs,\n )\n vector_db.add_texts(texts=texts, metadatas=metadatas)\n return vector_db", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/zilliz.html"} +{"id": "81787403ca09-0", "text": "Source code for langchain.vectorstores.supabase\nfrom __future__ import annotations\nimport uuid\nfrom itertools import repeat\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Iterable,\n List,\n Optional,\n Tuple,\n Type,\n Union,\n)\nimport numpy as np\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.base import VectorStore\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nif TYPE_CHECKING:\n import supabase\n[docs]class SupabaseVectorStore(VectorStore):\n \"\"\"VectorStore for a Supabase postgres database. Assumes you have the `pgvector`\n extension installed and a `match_documents` (or similar) function. 
For more details:\n https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase\n You can implement your own `match_documents` function in order to limit the search\n space to a subset of documents based on your own authorization or business logic.\n Note that the Supabase Python client does not yet support async operations.\n If you'd like to use `max_marginal_relevance_search`, please review the instructions\n below on modifying the `match_documents` function to return matched embeddings.\n \"\"\"\n _client: supabase.client.Client\n # This is the embedding function. Don't confuse with the embedding vectors.\n # We should perhaps rename the underlying Embedding base class to EmbeddingFunction\n # or something\n _embedding: Embeddings\n table_name: str\n query_name: str\n def __init__(\n self,\n client: supabase.client.Client,\n embedding: Embeddings,\n table_name: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} +{"id": "81787403ca09-1", "text": "embedding: Embeddings,\n table_name: str,\n query_name: Union[str, None] = None,\n ) -> None:\n \"\"\"Initialize with supabase client.\"\"\"\n try:\n import supabase # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import supabase python package. 
\"\n \"Please install it with `pip install supabase`.\"\n )\n self._client = client\n self._embedding: Embeddings = embedding\n self.table_name = table_name or \"documents\"\n self.query_name = query_name or \"match_documents\"\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n metadatas: Optional[List[dict[Any, Any]]] = None,\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> List[str]:\n ids = ids or [str(uuid.uuid4()) for _ in texts]\n docs = self._texts_to_documents(texts, metadatas)\n vectors = self._embedding.embed_documents(list(texts))\n return self.add_vectors(vectors, docs, ids)\n[docs] @classmethod\n def from_texts(\n cls: Type[\"SupabaseVectorStore\"],\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n client: Optional[supabase.client.Client] = None,\n table_name: Optional[str] = \"documents\",\n query_name: Union[str, None] = \"match_documents\",\n ids: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> \"SupabaseVectorStore\":\n \"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} +{"id": "81787403ca09-2", "text": "\"\"\"Return VectorStore initialized from texts and embeddings.\"\"\"\n if not client:\n raise ValueError(\"Supabase client is required.\")\n if not table_name:\n raise ValueError(\"Supabase document table_name is required.\")\n embeddings = embedding.embed_documents(texts)\n ids = [str(uuid.uuid4()) for _ in texts]\n docs = cls._texts_to_documents(texts, metadatas)\n _ids = cls._add_vectors(client, table_name, embeddings, docs, ids)\n return cls(\n client=client,\n embedding=embedding,\n table_name=table_name,\n query_name=query_name,\n )\n[docs] def add_vectors(\n self,\n vectors: List[List[float]],\n documents: List[Document],\n ids: List[str],\n ) -> List[str]:\n return self._add_vectors(self._client, self.table_name, vectors, documents, ids)\n[docs] def 
similarity_search(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Document]:\n vectors = self._embedding.embed_documents([query])\n return self.similarity_search_by_vector(vectors[0], k)\n[docs] def similarity_search_by_vector(\n self, embedding: List[float], k: int = 4, **kwargs: Any\n ) -> List[Document]:\n result = self.similarity_search_by_vector_with_relevance_scores(embedding, k)\n documents = [doc for doc, _ in result]\n return documents\n[docs] def similarity_search_with_relevance_scores(\n self, query: str, k: int = 4, **kwargs: Any\n ) -> List[Tuple[Document, float]]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} +{"id": "81787403ca09-3", "text": ") -> List[Tuple[Document, float]]:\n vectors = self._embedding.embed_documents([query])\n return self.similarity_search_by_vector_with_relevance_scores(vectors[0], k)\n[docs] def similarity_search_by_vector_with_relevance_scores(\n self, query: List[float], k: int\n ) -> List[Tuple[Document, float]]:\n match_documents_params = dict(query_embedding=query, match_count=k)\n res = self._client.rpc(self.query_name, match_documents_params).execute()\n match_result = [\n (\n Document(\n metadata=search.get(\"metadata\", {}), # type: ignore\n page_content=search.get(\"content\", \"\"),\n ),\n search.get(\"similarity\", 0.0),\n )\n for search in res.data\n if search.get(\"content\")\n ]\n return match_result\n[docs] def similarity_search_by_vector_returning_embeddings(\n self, query: List[float], k: int\n ) -> List[Tuple[Document, float, np.ndarray[np.float32, Any]]]:\n match_documents_params = dict(query_embedding=query, match_count=k)\n res = self._client.rpc(self.query_name, match_documents_params).execute()\n match_result = [\n (\n Document(\n metadata=search.get(\"metadata\", {}), # type: ignore\n page_content=search.get(\"content\", \"\"),\n ),\n search.get(\"similarity\", 0.0),\n # Supabase returns a vector type as its string represation (!).\n # 
This is a hack to convert the string to numpy array.\n np.fromstring(\n search.get(\"embedding\", \"\").strip(\"[]\"), np.float32, sep=\",\"\n ),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} +{"id": "81787403ca09-4", "text": "),\n )\n for search in res.data\n if search.get(\"content\")\n ]\n return match_result\n @staticmethod\n def _texts_to_documents(\n texts: Iterable[str],\n metadatas: Optional[Iterable[dict[Any, Any]]] = None,\n ) -> List[Document]:\n \"\"\"Return list of Documents from list of texts and metadatas.\"\"\"\n if metadatas is None:\n metadatas = repeat({})\n docs = [\n Document(page_content=text, metadata=metadata)\n for text, metadata in zip(texts, metadatas)\n ]\n return docs\n @staticmethod\n def _add_vectors(\n client: supabase.client.Client,\n table_name: str,\n vectors: List[List[float]],\n documents: List[Document],\n ids: List[str],\n ) -> List[str]:\n \"\"\"Add vectors to Supabase table.\"\"\"\n rows: List[dict[str, Any]] = [\n {\n \"id\": ids[idx],\n \"content\": documents[idx].page_content,\n \"embedding\": embedding,\n \"metadata\": documents[idx].metadata, # type: ignore\n }\n for idx, embedding in enumerate(vectors)\n ]\n # According to the SupabaseVectorStore JS implementation, the best chunk size\n # is 500\n chunk_size = 500\n id_list: List[str] = []\n for i in range(0, len(rows), chunk_size):\n chunk = rows[i : i + chunk_size]\n result = client.from_(table_name).upsert(chunk).execute() # type: ignore\n if len(result.data) == 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} +{"id": "81787403ca09-5", "text": "if len(result.data) == 0:\n raise Exception(\"Error inserting: No rows added\")\n # VectorStore.add_vectors returns ids as strings\n ids = [str(i.get(\"id\")) for i in result.data if i.get(\"id\")]\n id_list.extend(ids)\n return id_list\n[docs] def max_marginal_relevance_search_by_vector(\n self,\n 
embedding: List[float],\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n embedding: Embedding to look up documents similar to.\n k: Number of Documents to return. Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n \"\"\"\n result = self.similarity_search_by_vector_returning_embeddings(\n embedding, fetch_k\n )\n matched_documents = [doc_tuple[0] for doc_tuple in result]\n matched_embeddings = [doc_tuple[2] for doc_tuple in result]\n mmr_selected = maximal_marginal_relevance(\n np.array([embedding], dtype=np.float32),\n matched_embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} +{"id": "81787403ca09-6", "text": "matched_embeddings,\n k=k,\n lambda_mult=lambda_mult,\n )\n filtered_documents = [matched_documents[i] for i in mmr_selected]\n return filtered_documents\n[docs] def max_marginal_relevance_search(\n self,\n query: str,\n k: int = 4,\n fetch_k: int = 20,\n lambda_mult: float = 0.5,\n **kwargs: Any,\n ) -> List[Document]:\n \"\"\"Return docs selected using the maximal marginal relevance.\n Maximal marginal relevance optimizes for similarity to query AND diversity\n among selected documents.\n Args:\n query: Text to look up documents similar to.\n k: Number of Documents to return. 
Defaults to 4.\n fetch_k: Number of Documents to fetch to pass to MMR algorithm.\n lambda_mult: Number between 0 and 1 that determines the degree\n of diversity among the results with 0 corresponding\n to maximum diversity and 1 to minimum diversity.\n Defaults to 0.5.\n Returns:\n List of Documents selected by maximal marginal relevance.\n `max_marginal_relevance_search` requires that `query_name` returns matched\n embeddings alongside the match documents. The following function\n demonstrates how to do this:\n ```sql\n CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),\n match_count int)\n RETURNS TABLE(\n id bigint,\n content text,\n metadata jsonb,\n embedding vector(1536),\n similarity float)\n LANGUAGE plpgsql\n AS $$\n # variable_conflict use_column\n BEGIN\n RETURN query\n SELECT\n id,\n content,\n metadata,\n embedding,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} +{"id": "81787403ca09-7", "text": "SELECT\n id,\n content,\n metadata,\n embedding,\n 1 -(docstore.embedding <=> query_embedding) AS similarity\n FROM\n docstore\n ORDER BY\n docstore.embedding <=> query_embedding\n LIMIT match_count;\n END;\n $$;\n ```\n \"\"\"\n embedding = self._embedding.embed_documents([query])\n docs = self.max_marginal_relevance_search_by_vector(\n embedding[0], k, fetch_k, lambda_mult=lambda_mult\n )\n return docs\n[docs] def delete(self, ids: List[str]) -> None:\n \"\"\"Delete by vector IDs.\n Args:\n ids: List of ids to delete.\n \"\"\"\n rows: List[dict[str, Any]] = [\n {\n \"id\": id,\n }\n for id in ids\n ]\n # TODO: Check if this can be done in bulk\n for row in rows:\n self._client.from_(self.table_name).delete().eq(\"id\", row[\"id\"]).execute()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/supabase.html"} +{"id": "1a0d391f3655-0", "text": "Source code for langchain.vectorstores.docarray.in_memory\n\"\"\"Wrapper around in-memory 
storage.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Literal, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.docarray.base import (\n DocArrayIndex,\n _check_docarray_import,\n)\n[docs]class DocArrayInMemorySearch(DocArrayIndex):\n \"\"\"Wrapper around in-memory storage for exact search.\n To use it, you should have the ``docarray`` package with version >=0.32.0 installed.\n You can install it with `pip install \"langchain[docarray]\"`.\n \"\"\"\n[docs] @classmethod\n def from_params(\n cls,\n embedding: Embeddings,\n metric: Literal[\n \"cosine_sim\", \"euclidian_dist\", \"sgeuclidean_dist\"\n ] = \"cosine_sim\",\n **kwargs: Any,\n ) -> DocArrayInMemorySearch:\n \"\"\"Initialize DocArrayInMemorySearch store.\n Args:\n embedding (Embeddings): Embedding function.\n metric (str): metric for exact nearest-neighbor search.\n Can be one of: \"cosine_sim\", \"euclidean_dist\" and \"sqeuclidean_dist\".\n Defaults to \"cosine_sim\".\n **kwargs: Other keyword arguments to be passed to the get_doc_cls method.\n \"\"\"\n _check_docarray_import()\n from docarray.index import InMemoryExactNNIndex\n doc_cls = cls._get_doc_cls(space=metric, **kwargs)\n doc_index = InMemoryExactNNIndex[doc_cls]() # type: ignore\n return cls(doc_index, embedding)\n[docs] @classmethod\n def from_texts(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/in_memory.html"} +{"id": "1a0d391f3655-1", "text": "[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[Dict[Any, Any]]] = None,\n **kwargs: Any,\n ) -> DocArrayInMemorySearch:\n \"\"\"Create an DocArrayInMemorySearch store and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[Dict[Any, Any]]]): Metadata for each text\n if it exists. 
Defaults to None.\n metric (str): metric for exact nearest-neighbor search.\n Can be one of: \"cosine_sim\", \"euclidean_dist\" and \"sqeuclidean_dist\".\n Defaults to \"cosine_sim\".\n Returns:\n DocArrayInMemorySearch Vector Store\n \"\"\"\n store = cls.from_params(embedding, **kwargs)\n store.add_texts(texts=texts, metadatas=metadatas)\n return store", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/in_memory.html"} +{"id": "ab7be1d45063-0", "text": "Source code for langchain.vectorstores.docarray.hnsw\n\"\"\"Wrapper around Hnswlib store.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Literal, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.vectorstores.docarray.base import (\n DocArrayIndex,\n _check_docarray_import,\n)\n[docs]class DocArrayHnswSearch(DocArrayIndex):\n \"\"\"Wrapper around HnswLib storage.\n To use it, you should have the ``docarray`` package with version >=0.32.0 installed.\n You can install it with `pip install \"langchain[docarray]\"`.\n \"\"\"\n[docs] @classmethod\n def from_params(\n cls,\n embedding: Embeddings,\n work_dir: str,\n n_dim: int,\n dist_metric: Literal[\"cosine\", \"ip\", \"l2\"] = \"cosine\",\n max_elements: int = 1024,\n index: bool = True,\n ef_construction: int = 200,\n ef: int = 10,\n M: int = 16,\n allow_replace_deleted: bool = True,\n num_threads: int = 1,\n **kwargs: Any,\n ) -> DocArrayHnswSearch:\n \"\"\"Initialize DocArrayHnswSearch store.\n Args:\n embedding (Embeddings): Embedding function.\n work_dir (str): path to the location where all the data will be stored.\n n_dim (int): dimension of an embedding.\n dist_metric (str): Distance metric for DocArrayHnswSearch can be one of:\n \"cosine\", \"ip\", and \"l2\". Defaults to \"cosine\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"} +{"id": "ab7be1d45063-1", "text": "\"cosine\", \"ip\", and \"l2\". 
Defaults to \"cosine\".\n max_elements (int): Maximum number of vectors that can be stored.\n Defaults to 1024.\n index (bool): Whether an index should be built for this field.\n Defaults to True.\n ef_construction (int): defines a construction time/accuracy trade-off.\n Defaults to 200.\n ef (int): parameter controlling query time/accuracy trade-off.\n Defaults to 10.\n M (int): parameter that defines the maximum number of outgoing\n connections in the graph. Defaults to 16.\n allow_replace_deleted (bool): Enables replacing of deleted elements\n with new added ones. Defaults to True.\n num_threads (int): Sets the number of cpu threads to use. Defaults to 1.\n **kwargs: Other keyword arguments to be passed to the get_doc_cls method.\n \"\"\"\n _check_docarray_import()\n from docarray.index import HnswDocumentIndex\n doc_cls = cls._get_doc_cls(\n dim=n_dim,\n space=dist_metric,\n max_elements=max_elements,\n index=index,\n ef_construction=ef_construction,\n ef=ef,\n M=M,\n allow_replace_deleted=allow_replace_deleted,\n num_threads=num_threads,\n **kwargs,\n )\n doc_index = HnswDocumentIndex[doc_cls](work_dir=work_dir) # type: ignore\n return cls(doc_index, embedding)\n[docs] @classmethod\n def from_texts(\n cls,\n texts: List[str],\n embedding: Embeddings,\n metadatas: Optional[List[dict]] = None,\n work_dir: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"} +{"id": "ab7be1d45063-2", "text": "work_dir: Optional[str] = None,\n n_dim: Optional[int] = None,\n **kwargs: Any,\n ) -> DocArrayHnswSearch:\n \"\"\"Create an DocArrayHnswSearch store and insert data.\n Args:\n texts (List[str]): Text data.\n embedding (Embeddings): Embedding function.\n metadatas (Optional[List[dict]]): Metadata for each text if it exists.\n Defaults to None.\n work_dir (str): path to the location where all the data will be stored.\n n_dim (int): dimension of an embedding.\n **kwargs: Other keyword arguments to be 
passed to the __init__ method.\n Returns:\n DocArrayHnswSearch Vector Store\n \"\"\"\n if work_dir is None:\n raise ValueError(\"`work_dir` parameter has not been set.\")\n if n_dim is None:\n raise ValueError(\"`n_dim` parameter has not been set.\")\n store = cls.from_params(embedding, work_dir, n_dim, **kwargs)\n store.add_texts(texts=texts, metadatas=metadatas)\n return store", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/docarray/hnsw.html"} +{"id": "6f3ec2414715-0", "text": "Source code for langchain.utilities.powerbi\n\"\"\"Wrapper around a Power BI endpoint.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport logging\nimport os\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Union\nimport aiohttp\nimport requests\nfrom aiohttp import ServerTimeoutError\nfrom pydantic import BaseModel, Field, root_validator, validator\nfrom requests.exceptions import Timeout\n_LOGGER = logging.getLogger(__name__)\nBASE_URL = os.getenv(\"POWERBI_BASE_URL\", \"https://api.powerbi.com/v1.0/myorg\")\nif TYPE_CHECKING:\n from azure.core.credentials import TokenCredential\n[docs]class PowerBIDataset(BaseModel):\n \"\"\"Create PowerBI engine from dataset ID and credential or token.\n Use either the credential or a supplied token to authenticate.\n If both are supplied the credential is used to generate a token.\n The impersonated_user_name is the UPN of a user to be impersonated.\n If the model is not RLS enabled, this will be ignored.\n \"\"\"\n dataset_id: str\n table_names: List[str]\n group_id: Optional[str] = None\n credential: Optional[TokenCredential] = None\n token: Optional[str] = None\n impersonated_user_name: Optional[str] = None\n sample_rows_in_table_info: int = Field(default=1, gt=0, le=10)\n schemas: Dict[str, str] = Field(default_factory=dict)\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed 
= True\n @validator(\"table_names\", allow_reuse=True)\n def fix_table_names(cls, table_names: List[str]) -> List[str]:\n \"\"\"Fix the table names.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} +{"id": "6f3ec2414715-1", "text": "\"\"\"Fix the table names.\"\"\"\n return [fix_table_name(table) for table in table_names]\n @root_validator(pre=True, allow_reuse=True)\n def token_or_credential_present(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate that at least one of token and credentials is present.\"\"\"\n if \"token\" in values or \"credential\" in values:\n return values\n raise ValueError(\"Please provide either a credential or a token.\")\n @property\n def request_url(self) -> str:\n \"\"\"Get the request url.\"\"\"\n if self.group_id:\n return f\"{BASE_URL}/groups/{self.group_id}/datasets/{self.dataset_id}/executeQueries\" # noqa: E501 # pylint: disable=C0301\n return f\"{BASE_URL}/datasets/{self.dataset_id}/executeQueries\" # noqa: E501 # pylint: disable=C0301\n @property\n def headers(self) -> Dict[str, str]:\n \"\"\"Get the token.\"\"\"\n if self.token:\n return {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer \" + self.token,\n }\n from azure.core.exceptions import (\n ClientAuthenticationError, # pylint: disable=import-outside-toplevel\n )\n if self.credential:\n try:\n token = self.credential.get_token(\n \"https://analysis.windows.net/powerbi/api/.default\"\n ).token\n return {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer \" + token,\n }\n except Exception as exc: # pylint: disable=broad-exception-caught\n raise ClientAuthenticationError(\n \"Could not get a token from the supplied credentials.\"\n ) from exc", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} +{"id": "6f3ec2414715-2", "text": "\"Could not get a token from the supplied credentials.\"\n ) from exc\n raise 
ClientAuthenticationError(\"No credential or token supplied.\")\n[docs] def get_table_names(self) -> Iterable[str]:\n \"\"\"Get names of tables available.\"\"\"\n return self.table_names\n[docs] def get_schemas(self) -> str:\n \"\"\"Get the available schema's.\"\"\"\n if self.schemas:\n return \", \".join([f\"{key}: {value}\" for key, value in self.schemas.items()])\n return \"No known schema's yet. Use the schema_powerbi tool first.\"\n @property\n def table_info(self) -> str:\n \"\"\"Information about all tables in the database.\"\"\"\n return self.get_table_info()\n def _get_tables_to_query(\n self, table_names: Optional[Union[List[str], str]] = None\n ) -> Optional[List[str]]:\n \"\"\"Get the tables names that need to be queried, after checking they exist.\"\"\"\n if table_names is not None:\n if (\n isinstance(table_names, list)\n and len(table_names) > 0\n and table_names[0] != \"\"\n ):\n fixed_tables = [fix_table_name(table) for table in table_names]\n non_existing_tables = [\n table for table in fixed_tables if table not in self.table_names\n ]\n if non_existing_tables:\n _LOGGER.warning(\n \"Table(s) %s not found in dataset.\",\n \", \".join(non_existing_tables),\n )\n tables = [\n table for table in fixed_tables if table not in non_existing_tables\n ]\n return tables if tables else None\n if isinstance(table_names, str) and table_names != \"\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} +{"id": "6f3ec2414715-3", "text": "if isinstance(table_names, str) and table_names != \"\":\n if table_names not in self.table_names:\n _LOGGER.warning(\"Table %s not found in dataset.\", table_names)\n return None\n return [fix_table_name(table_names)]\n return self.table_names\n def _get_tables_todo(self, tables_todo: List[str]) -> List[str]:\n \"\"\"Get the tables that still need to be queried.\"\"\"\n return [table for table in tables_todo if table not in self.schemas]\n def _get_schema_for_tables(self, 
table_names: List[str]) -> str:\n \"\"\"Create a string of the table schemas for the supplied tables.\"\"\"\n schemas = [\n schema for table, schema in self.schemas.items() if table in table_names\n ]\n return \", \".join(schemas)\n[docs] def get_table_info(\n self, table_names: Optional[Union[List[str], str]] = None\n ) -> str:\n \"\"\"Get information about specified tables.\"\"\"\n tables_requested = self._get_tables_to_query(table_names)\n if tables_requested is None:\n return \"No (valid) tables requested.\"\n tables_todo = self._get_tables_todo(tables_requested)\n for table in tables_todo:\n self._get_schema(table)\n return self._get_schema_for_tables(tables_requested)\n[docs] async def aget_table_info(\n self, table_names: Optional[Union[List[str], str]] = None\n ) -> str:\n \"\"\"Get information about specified tables.\"\"\"\n tables_requested = self._get_tables_to_query(table_names)\n if tables_requested is None:\n return \"No (valid) tables requested.\"\n tables_todo = self._get_tables_todo(tables_requested)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} +{"id": "6f3ec2414715-4", "text": "tables_todo = self._get_tables_todo(tables_requested)\n await asyncio.gather(*[self._aget_schema(table) for table in tables_todo])\n return self._get_schema_for_tables(tables_requested)\n def _get_schema(self, table: str) -> None:\n \"\"\"Get the schema for a table.\"\"\"\n try:\n result = self.run(\n f\"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})\"\n )\n self.schemas[table] = json_to_md(result[\"results\"][0][\"tables\"][0][\"rows\"])\n except Timeout:\n _LOGGER.warning(\"Timeout while getting table info for %s\", table)\n self.schemas[table] = \"unknown\"\n except Exception as exc: # pylint: disable=broad-exception-caught\n _LOGGER.warning(\"Error while getting table info for %s: %s\", table, exc)\n self.schemas[table] = \"unknown\"\n async def _aget_schema(self, table: str) -> None:\n \"\"\"Get the 
schema for a table.\"\"\"\n try:\n result = await self.arun(\n f\"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})\"\n )\n self.schemas[table] = json_to_md(result[\"results\"][0][\"tables\"][0][\"rows\"])\n except ServerTimeoutError:\n _LOGGER.warning(\"Timeout while getting table info for %s\", table)\n self.schemas[table] = \"unknown\"\n except Exception as exc: # pylint: disable=broad-exception-caught\n _LOGGER.warning(\"Error while getting table info for %s: %s\", table, exc)\n self.schemas[table] = \"unknown\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} +{"id": "6f3ec2414715-5", "text": "self.schemas[table] = \"unknown\"\n def _create_json_content(self, command: str) -> dict[str, Any]:\n \"\"\"Create the json content for the request.\"\"\"\n return {\n \"queries\": [{\"query\": rf\"{command}\"}],\n \"impersonatedUserName\": self.impersonated_user_name,\n \"serializerSettings\": {\"includeNulls\": True},\n }\n[docs] def run(self, command: str) -> Any:\n \"\"\"Execute a DAX command and return a json representing the results.\"\"\"\n _LOGGER.debug(\"Running command: %s\", command)\n result = requests.post(\n self.request_url,\n json=self._create_json_content(command),\n headers=self.headers,\n timeout=10,\n )\n return result.json()\n[docs] async def arun(self, command: str) -> Any:\n \"\"\"Execute a DAX command and return the result asynchronously.\"\"\"\n _LOGGER.debug(\"Running command: %s\", command)\n if self.aiosession:\n async with self.aiosession.post(\n self.request_url,\n headers=self.headers,\n json=self._create_json_content(command),\n timeout=10,\n ) as response:\n response_json = await response.json(content_type=response.content_type)\n return response_json\n async with aiohttp.ClientSession() as session:\n async with session.post(\n self.request_url,\n headers=self.headers,\n json=self._create_json_content(command),\n timeout=10,\n ) as response:\n response_json = await 
response.json(content_type=response.content_type)\n return response_json\ndef json_to_md(\n json_contents: List[Dict[str, Union[str, int, float]]],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} +{"id": "6f3ec2414715-6", "text": "json_contents: List[Dict[str, Union[str, int, float]]],\n table_name: Optional[str] = None,\n) -> str:\n \"\"\"Converts a JSON object to a markdown table.\"\"\"\n output_md = \"\"\n headers = json_contents[0].keys()\n for header in headers:\n header.replace(\"[\", \".\").replace(\"]\", \"\")\n if table_name:\n header.replace(f\"{table_name}.\", \"\")\n output_md += f\"| {header} \"\n output_md += \"|\\n\"\n for row in json_contents:\n for value in row.values():\n output_md += f\"| {value} \"\n output_md += \"|\\n\"\n return output_md\ndef fix_table_name(table: str) -> str:\n \"\"\"Add single quotes around table names that contain spaces.\"\"\"\n if \" \" in table and not table.startswith(\"'\") and not table.endswith(\"'\"):\n return f\"'{table}'\"\n return table", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/powerbi.html"} +{"id": "dd9db3efc645-0", "text": "Source code for langchain.utilities.bing_search\n\"\"\"Util that calls Bing Search.\nIn order to set this up, follow instructions at:\nhttps://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\n\"\"\"\nfrom typing import Dict, List\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class BingSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for Bing Search API.\n In order to set this up, follow instructions at:\n https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e\n \"\"\"\n bing_subscription_key: str\n bing_search_url: str\n k: int = 10\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def 
_bing_search_results(self, search_term: str, count: int) -> List[dict]:\n headers = {\"Ocp-Apim-Subscription-Key\": self.bing_subscription_key}\n params = {\n \"q\": search_term,\n \"count\": count,\n \"textDecorations\": True,\n \"textFormat\": \"HTML\",\n }\n response = requests.get(\n self.bing_search_url, headers=headers, params=params # type: ignore\n )\n response.raise_for_status()\n search_results = response.json()\n return search_results[\"webPages\"][\"value\"]\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n bing_subscription_key = get_from_dict_or_env(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"} +{"id": "dd9db3efc645-1", "text": "bing_subscription_key = get_from_dict_or_env(\n values, \"bing_subscription_key\", \"BING_SUBSCRIPTION_KEY\"\n )\n values[\"bing_subscription_key\"] = bing_subscription_key\n bing_search_url = get_from_dict_or_env(\n values,\n \"bing_search_url\",\n \"BING_SEARCH_URL\",\n # default=\"https://api.bing.microsoft.com/v7.0/search\",\n )\n values[\"bing_search_url\"] = bing_search_url\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through BingSearch and parse result.\"\"\"\n snippets = []\n results = self._bing_search_results(query, count=self.k)\n if len(results) == 0:\n return \"No good Bing Search Result was found\"\n for result in results:\n snippets.append(result[\"snippet\"])\n return \" \".join(snippets)\n[docs] def results(self, query: str, num_results: int) -> List[Dict]:\n \"\"\"Run query through BingSearch and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n metadata_results = []\n results = 
self._bing_search_results(query, count=num_results)\n if len(results) == 0:\n return [{\"Result\": \"No good Bing Search Result was found\"}]\n for result in results:\n metadata_result = {\n \"snippet\": result[\"snippet\"],\n \"title\": result[\"name\"],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"} +{"id": "dd9db3efc645-2", "text": "\"snippet\": result[\"snippet\"],\n \"title\": result[\"name\"],\n \"link\": result[\"url\"],\n }\n metadata_results.append(metadata_result)\n return metadata_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bing_search.html"} +{"id": "f9af61f9ffb6-0", "text": "Source code for langchain.utilities.serpapi\n\"\"\"Chain that calls SerpAPI.\nHeavily borrowed from https://github.com/ofirpress/self-ask\n\"\"\"\nimport os\nimport sys\nfrom typing import Any, Dict, Optional, Tuple\nimport aiohttp\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.utils import get_from_dict_or_env\nclass HiddenPrints:\n \"\"\"Context manager to hide prints.\"\"\"\n def __enter__(self) -> None:\n \"\"\"Open file to pipe stdout to.\"\"\"\n self._original_stdout = sys.stdout\n sys.stdout = open(os.devnull, \"w\")\n def __exit__(self, *_: Any) -> None:\n \"\"\"Close file that stdout was piped to.\"\"\"\n sys.stdout.close()\n sys.stdout = self._original_stdout\n[docs]class SerpAPIWrapper(BaseModel):\n \"\"\"Wrapper around SerpAPI.\n To use, you should have the ``google-search-results`` python package installed,\n and the environment variable ``SERPAPI_API_KEY`` set with your API key, or pass\n `serpapi_api_key` as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n from langchain import SerpAPIWrapper\n serpapi = SerpAPIWrapper()\n \"\"\"\n search_engine: Any #: :meta private:\n params: dict = Field(\n default={\n \"engine\": \"google\",\n \"google_domain\": \"google.com\",\n \"gl\": \"us\",\n \"hl\": \"en\",\n }\n )\n serpapi_api_key: Optional[str] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"} +{"id": "f9af61f9ffb6-1", "text": "aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n serpapi_api_key = get_from_dict_or_env(\n values, \"serpapi_api_key\", \"SERPAPI_API_KEY\"\n )\n values[\"serpapi_api_key\"] = serpapi_api_key\n try:\n from serpapi import GoogleSearch\n values[\"search_engine\"] = GoogleSearch\n except ImportError:\n raise ValueError(\n \"Could not import serpapi python package. 
\"\n \"Please install it with `pip install google-search-results`.\"\n )\n return values\n[docs] async def arun(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through SerpAPI and parse result async.\"\"\"\n return self._process_response(await self.aresults(query))\n[docs] def run(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through SerpAPI and parse result.\"\"\"\n return self._process_response(self.results(query))\n[docs] def results(self, query: str) -> dict:\n \"\"\"Run query through SerpAPI and return the raw result.\"\"\"\n params = self.get_params(query)\n with HiddenPrints():\n search = self.search_engine(params)\n res = search.get_dict()\n return res\n[docs] async def aresults(self, query: str) -> dict:\n \"\"\"Use aiohttp to run query through SerpAPI and return the results async.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"} +{"id": "f9af61f9ffb6-2", "text": "\"\"\"Use aiohttp to run query through SerpAPI and return the results async.\"\"\"\n def construct_url_and_params() -> Tuple[str, Dict[str, str]]:\n params = self.get_params(query)\n params[\"source\"] = \"python\"\n if self.serpapi_api_key:\n params[\"serp_api_key\"] = self.serpapi_api_key\n params[\"output\"] = \"json\"\n url = \"https://serpapi.com/search\"\n return url, params\n url, params = construct_url_and_params()\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.get(url, params=params) as response:\n res = await response.json()\n else:\n async with self.aiosession.get(url, params=params) as response:\n res = await response.json()\n return res\n[docs] def get_params(self, query: str) -> Dict[str, str]:\n \"\"\"Get parameters for SerpAPI.\"\"\"\n _params = {\n \"api_key\": self.serpapi_api_key,\n \"q\": query,\n }\n params = {**self.params, **_params}\n return params\n @staticmethod\n def _process_response(res: dict) -> str:\n \"\"\"Process response from 
SerpAPI.\"\"\"\n if \"error\" in res.keys():\n raise ValueError(f\"Got error from SerpAPI: {res['error']}\")\n if \"answer_box\" in res.keys() and type(res[\"answer_box\"]) == list:\n res[\"answer_box\"] = res[\"answer_box\"][0]\n if \"answer_box\" in res.keys() and \"answer\" in res[\"answer_box\"].keys():", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"} +{"id": "f9af61f9ffb6-3", "text": "toret = res[\"answer_box\"][\"answer\"]\n elif \"answer_box\" in res.keys() and \"snippet\" in res[\"answer_box\"].keys():\n toret = res[\"answer_box\"][\"snippet\"]\n elif (\n \"answer_box\" in res.keys()\n and \"snippet_highlighted_words\" in res[\"answer_box\"].keys()\n ):\n toret = res[\"answer_box\"][\"snippet_highlighted_words\"][0]\n elif (\n \"sports_results\" in res.keys()\n and \"game_spotlight\" in res[\"sports_results\"].keys()\n ):\n toret = res[\"sports_results\"][\"game_spotlight\"]\n elif (\n \"shopping_results\" in res.keys()\n and \"title\" in res[\"shopping_results\"][0].keys()\n ):\n toret = res[\"shopping_results\"][:3]\n elif (\n \"knowledge_graph\" in res.keys()\n and \"description\" in res[\"knowledge_graph\"].keys()\n ):\n toret = res[\"knowledge_graph\"][\"description\"]\n elif \"snippet\" in res[\"organic_results\"][0].keys():\n toret = res[\"organic_results\"][0][\"snippet\"]\n elif \"link\" in res[\"organic_results\"][0].keys():\n toret = res[\"organic_results\"][0][\"link\"]\n else:\n toret = \"No good search result found\"\n return toret", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/serpapi.html"} +{"id": "0a1d2b3a0427-0", "text": "Source code for langchain.utilities.awslambda\n\"\"\"Util that calls Lambda.\"\"\"\nimport json\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\n[docs]class LambdaWrapper(BaseModel):\n \"\"\"Wrapper for AWS Lambda SDK.\n Docs for using:\n 1. pip install boto3\n 2. 
Create a lambda function using the AWS Console or CLI\n 3. Run `aws configure` and enter your AWS credentials\n \"\"\"\n lambda_client: Any #: :meta private:\n function_name: Optional[str] = None\n awslambda_tool_name: Optional[str] = None\n awslambda_tool_description: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n import boto3\n except ImportError:\n raise ImportError(\n \"boto3 is not installed. Please install it with `pip install boto3`\"\n )\n values[\"lambda_client\"] = boto3.client(\"lambda\")\n values[\"function_name\"] = values[\"function_name\"]\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Invoke Lambda function and parse result.\"\"\"\n res = self.lambda_client.invoke(\n FunctionName=self.function_name,\n InvocationType=\"RequestResponse\",\n Payload=json.dumps({\"body\": query}),\n )\n try:\n payload_stream = res[\"Payload\"]\n payload_string = payload_stream.read().decode(\"utf-8\")\n answer = json.loads(payload_string)[\"body\"]\n except StopIteration:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/awslambda.html"} +{"id": "0a1d2b3a0427-1", "text": "answer = json.loads(payload_string)[\"body\"]\n except StopIteration:\n return \"Failed to parse response from Lambda\"\n if answer is None or answer == \"\":\n # We don't want to return the assumption alone if answer is empty\n return \"Request failed.\"\n else:\n return f\"Result: {answer}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/awslambda.html"} +{"id": "b0922af55c9f-0", "text": "Source code for langchain.utilities.bash\n\"\"\"Wrapper around subprocess to run commands.\"\"\"\nfrom __future__ import annotations\nimport platform\nimport re\nimport subprocess\nfrom typing import 
TYPE_CHECKING, List, Union\nfrom uuid import uuid4\nif TYPE_CHECKING:\n import pexpect\ndef _lazy_import_pexpect() -> pexpect:\n \"\"\"Import pexpect only when needed.\"\"\"\n if platform.system() == \"Windows\":\n raise ValueError(\"Persistent bash processes are not yet supported on Windows.\")\n try:\n import pexpect\n except ImportError:\n raise ImportError(\n \"pexpect required for persistent bash processes.\"\n \" To install, run `pip install pexpect`.\"\n )\n return pexpect\n[docs]class BashProcess:\n \"\"\"Executes bash commands and returns the output.\"\"\"\n def __init__(\n self,\n strip_newlines: bool = False,\n return_err_output: bool = False,\n persistent: bool = False,\n ):\n \"\"\"Initialize with stripping newlines.\"\"\"\n self.strip_newlines = strip_newlines\n self.return_err_output = return_err_output\n self.prompt = \"\"\n self.process = None\n if persistent:\n self.prompt = str(uuid4())\n self.process = self._initialize_persistent_process(self.prompt)\n @staticmethod\n def _initialize_persistent_process(prompt: str) -> pexpect.spawn:\n # Start bash in a clean environment\n # Doesn't work on windows\n pexpect = _lazy_import_pexpect()\n process = pexpect.spawn(\n \"env\", [\"-i\", \"bash\", \"--norc\", \"--noprofile\"], encoding=\"utf-8\"\n )\n # Set the custom prompt\n process.sendline(\"PS1=\" + prompt)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bash.html"} +{"id": "b0922af55c9f-1", "text": "# Set the custom prompt\n process.sendline(\"PS1=\" + prompt)\n process.expect_exact(prompt, timeout=10)\n return process\n[docs] def run(self, commands: Union[str, List[str]]) -> str:\n \"\"\"Run commands and return final output.\"\"\"\n if isinstance(commands, str):\n commands = [commands]\n commands = \";\".join(commands)\n if self.process is not None:\n return self._run_persistent(\n commands,\n )\n else:\n return self._run(commands)\n def _run(self, command: str) -> str:\n \"\"\"Run commands and return final 
output.\"\"\"\n try:\n output = subprocess.run(\n command,\n shell=True,\n check=True,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n ).stdout.decode()\n except subprocess.CalledProcessError as error:\n if self.return_err_output:\n return error.stdout.decode()\n return str(error)\n if self.strip_newlines:\n output = output.strip()\n return output\n[docs] def process_output(self, output: str, command: str) -> str:\n # Remove the command from the output using a regular expression\n pattern = re.escape(command) + r\"\\s*\\n\"\n output = re.sub(pattern, \"\", output, count=1)\n return output.strip()\n def _run_persistent(self, command: str) -> str:\n \"\"\"Run commands and return final output.\"\"\"\n pexpect = _lazy_import_pexpect()\n if self.process is None:\n raise ValueError(\"Process not initialized\")\n self.process.sendline(command)\n # Clear the output with an empty string\n self.process.expect(self.prompt, timeout=10)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bash.html"} +{"id": "b0922af55c9f-2", "text": "self.process.expect(self.prompt, timeout=10)\n self.process.sendline(\"\")\n try:\n self.process.expect([self.prompt, pexpect.EOF], timeout=10)\n except pexpect.TIMEOUT:\n return f\"Timeout error while executing command {command}\"\n if self.process.after == pexpect.EOF:\n return f\"Exited with error status: {self.process.exitstatus}\"\n output = self.process.before\n output = self.process_output(output, command)\n if self.strip_newlines:\n return output.strip()\n return output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bash.html"} +{"id": "f6b80a4923bc-0", "text": "Source code for langchain.utilities.google_search\n\"\"\"Util that calls Google Search.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GoogleSearchAPIWrapper(BaseModel):\n 
\"\"\"Wrapper for Google Search API.\n Adapted from: Instructions adapted from https://stackoverflow.com/questions/\n 37083058/\n programmatically-searching-google-in-python-using-custom-search\n TODO: DOCS for using it\n 1. Install google-api-python-client\n - If you don't already have a Google account, sign up.\n - If you have never created a Google APIs Console project,\n read the Managing Projects page and create a project in the Google API Console.\n - Install the library using pip install google-api-python-client\n The current version of the library is 2.70.0 at this time\n 2. To create an API key:\n - Navigate to the APIs & Services\u2192Credentials panel in Cloud Console.\n - Select Create credentials, then select API key from the drop-down menu.\n - The API key created dialog box displays your newly created key.\n - You now have an API_KEY\n 3. Setup Custom Search Engine so you can search the entire web\n - Create a custom search engine in this link.\n - In Sites to search, add any valid URL (i.e. www.stackoverflow.com).\n - That\u2019s all you have to fill up, the rest doesn\u2019t matter.\n In the left-side menu, click Edit search engine \u2192 {your search engine name}\n \u2192 Setup Set Search the entire web to ON. Remove the URL you added from\n the list of Sites to search.\n - Under Search engine ID you\u2019ll find the search-engine-ID.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"} +{"id": "f6b80a4923bc-1", "text": "- Under Search engine ID you\u2019ll find the search-engine-ID.\n 4. 
Enable the Custom Search API\n - Navigate to the APIs & Services\u2192Dashboard panel in Cloud Console.\n - Click Enable APIs and Services.\n - Search for Custom Search API and click on it.\n - Click Enable.\n URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis\n .com\n \"\"\"\n search_engine: Any #: :meta private:\n google_api_key: Optional[str] = None\n google_cse_id: Optional[str] = None\n k: int = 10\n siterestrict: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _google_search_results(self, search_term: str, **kwargs: Any) -> List[dict]:\n cse = self.search_engine.cse()\n if self.siterestrict:\n cse = cse.siterestrict()\n res = cse.list(q=search_term, cx=self.google_cse_id, **kwargs).execute()\n return res.get(\"items\", [])\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n values[\"google_api_key\"] = google_api_key\n google_cse_id = get_from_dict_or_env(values, \"google_cse_id\", \"GOOGLE_CSE_ID\")\n values[\"google_cse_id\"] = google_cse_id\n try:\n from googleapiclient.discovery import build\n except ImportError:\n raise ImportError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"} +{"id": "f6b80a4923bc-2", "text": "except ImportError:\n raise ImportError(\n \"google-api-python-client is not installed. 
\"\n \"Please install it with `pip install google-api-python-client`\"\n )\n service = build(\"customsearch\", \"v1\", developerKey=google_api_key)\n values[\"search_engine\"] = service\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through GoogleSearch and parse result.\"\"\"\n snippets = []\n results = self._google_search_results(query, num=self.k)\n if len(results) == 0:\n return \"No good Google Search Result was found\"\n for result in results:\n if \"snippet\" in result:\n snippets.append(result[\"snippet\"])\n return \" \".join(snippets)\n[docs] def results(self, query: str, num_results: int) -> List[Dict]:\n \"\"\"Run query through GoogleSearch and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n metadata_results = []\n results = self._google_search_results(query, num=num_results)\n if len(results) == 0:\n return [{\"Result\": \"No good Google Search Result was found\"}]\n for result in results:\n metadata_result = {\n \"title\": result[\"title\"],\n \"link\": result[\"link\"],\n }\n if \"snippet\" in result:\n metadata_result[\"snippet\"] = result[\"snippet\"]\n metadata_results.append(metadata_result)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"} +{"id": "f6b80a4923bc-3", "text": "metadata_result[\"snippet\"] = result[\"snippet\"]\n metadata_results.append(metadata_result)\n return metadata_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_search.html"} +{"id": "4dd023cda896-0", "text": "Source code for langchain.utilities.max_compute\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Iterator, List, Optional\nfrom langchain.utils import get_from_env\nif 
TYPE_CHECKING:\n from odps import ODPS\n[docs]class MaxComputeAPIWrapper:\n \"\"\"Interface for querying Alibaba Cloud MaxCompute tables.\"\"\"\n def __init__(self, client: ODPS):\n \"\"\"Initialize MaxCompute document loader.\n Args:\n client: odps.ODPS MaxCompute client object.\n \"\"\"\n self.client = client\n[docs] @classmethod\n def from_params(\n cls,\n endpoint: str,\n project: str,\n *,\n access_id: Optional[str] = None,\n secret_access_key: Optional[str] = None,\n ) -> MaxComputeAPIWrapper:\n \"\"\"Convenience constructor that builds the odsp.ODPS MaxCompute client from\n given parameters.\n Args:\n endpoint: MaxCompute endpoint.\n project: A project is a basic organizational unit of MaxCompute, which is\n similar to a database.\n access_id: MaxCompute access ID. Should be passed in directly or set as the\n environment variable `MAX_COMPUTE_ACCESS_ID`.\n secret_access_key: MaxCompute secret access key. Should be passed in\n directly or set as the environment variable\n `MAX_COMPUTE_SECRET_ACCESS_KEY`.\n \"\"\"\n try:\n from odps import ODPS\n except ImportError as ex:\n raise ImportError(\n \"Could not import pyodps python package. 
\"\n \"Please install it with `pip install pyodps` or refer to \"\n \"https://pyodps.readthedocs.io/.\"\n ) from ex", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/max_compute.html"} +{"id": "4dd023cda896-1", "text": "\"https://pyodps.readthedocs.io/.\"\n ) from ex\n access_id = access_id or get_from_env(\"access_id\", \"MAX_COMPUTE_ACCESS_ID\")\n secret_access_key = secret_access_key or get_from_env(\n \"secret_access_key\", \"MAX_COMPUTE_SECRET_ACCESS_KEY\"\n )\n client = ODPS(\n access_id=access_id,\n secret_access_key=secret_access_key,\n project=project,\n endpoint=endpoint,\n )\n if not client.exist_project(project):\n raise ValueError(f'The project \"{project}\" does not exist.')\n return cls(client)\n[docs] def lazy_query(self, query: str) -> Iterator[dict]:\n # Execute SQL query.\n with self.client.execute_sql(query).open_reader() as reader:\n if reader.count == 0:\n raise ValueError(\"Table contains no data.\")\n for record in reader:\n yield {k: v for k, v in record}\n[docs] def query(self, query: str) -> List[dict]:\n return list(self.lazy_query(query))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/max_compute.html"} +{"id": "00878a97fba7-0", "text": "Source code for langchain.utilities.arxiv\n\"\"\"Util that calls Arxiv.\"\"\"\nimport logging\nimport os\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\n[docs]class ArxivAPIWrapper(BaseModel):\n \"\"\"Wrapper around ArxivAPI.\n To use, you should have the ``arxiv`` python package installed.\n https://lukasschwab.me/arxiv.py/index.html\n This wrapper will use the Arxiv API to conduct searches and\n fetch document summaries. 
By default, it will return the document summaries\n of the top-k results.\n It limits the Document content by doc_content_chars_max.\n Set doc_content_chars_max=None if you don't want to limit the content size.\n Parameters:\n top_k_results: number of the top-scored document used for the arxiv tool\n ARXIV_MAX_QUERY_LENGTH: the cut limit on the query used for the arxiv tool.\n load_max_docs: a limit to the number of loaded documents\n load_all_available_meta:\n if True: the `metadata` of the loaded Documents gets all available meta info\n (see https://lukasschwab.me/arxiv.py/index.html#Result),\n if False: the `metadata` gets only the most informative fields.\n \"\"\"\n arxiv_search: Any #: :meta private:\n arxiv_exceptions: Any # :meta private:\n top_k_results: int = 3\n ARXIV_MAX_QUERY_LENGTH = 300\n load_max_docs: int = 100\n load_all_available_meta: bool = False\n doc_content_chars_max: Optional[int] = 4000\n class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html"} +{"id": "00878a97fba7-1", "text": "doc_content_chars_max: Optional[int] = 4000\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n try:\n import arxiv\n values[\"arxiv_search\"] = arxiv.Search\n values[\"arxiv_exceptions\"] = (\n arxiv.ArxivError,\n arxiv.UnexpectedEmptyPageError,\n arxiv.HTTPError,\n )\n values[\"arxiv_result\"] = arxiv.Result\n except ImportError:\n raise ImportError(\n \"Could not import arxiv python package. 
\"\n \"Please install it with `pip install arxiv`.\"\n )\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"\n Run Arxiv search and get the article meta information.\n See https://lukasschwab.me/arxiv.py/index.html#Search\n See https://lukasschwab.me/arxiv.py/index.html#Result\n It uses only the most informative fields of article meta information.\n \"\"\"\n try:\n results = self.arxiv_search( # type: ignore\n query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.top_k_results\n ).results()\n except self.arxiv_exceptions as ex:\n return f\"Arxiv exception: {ex}\"\n docs = [\n f\"Published: {result.updated.date()}\\nTitle: {result.title}\\n\"\n f\"Authors: {', '.join(a.name for a in result.authors)}\\n\"\n f\"Summary: {result.summary}\"\n for result in results\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html"} +{"id": "00878a97fba7-2", "text": "f\"Summary: {result.summary}\"\n for result in results\n ]\n if docs:\n return \"\\n\\n\".join(docs)[: self.doc_content_chars_max]\n else:\n return \"No good Arxiv Result was found\"\n[docs] def load(self, query: str) -> List[Document]:\n \"\"\"\n Run Arxiv search and get the article texts plus the article meta information.\n See https://lukasschwab.me/arxiv.py/index.html#Search\n Returns: a list of documents with the document.page_content in text format\n \"\"\"\n try:\n import fitz\n except ImportError:\n raise ImportError(\n \"PyMuPDF package not found, please install it with \"\n \"`pip install pymupdf`\"\n )\n try:\n results = self.arxiv_search( # type: ignore\n query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.load_max_docs\n ).results()\n except self.arxiv_exceptions as ex:\n logger.debug(\"Error on arxiv: %s\", ex)\n return []\n docs: List[Document] = []\n for result in results:\n try:\n doc_file_name: str = result.download_pdf()\n with fitz.open(doc_file_name) as doc_file:\n text: str = \"\".join(page.get_text() for page in doc_file)\n except 
FileNotFoundError as f_ex:\n logger.debug(f_ex)\n continue\n if self.load_all_available_meta:\n extra_metadata = {\n \"entry_id\": result.entry_id,\n \"published_first_time\": str(result.published.date()),\n \"comment\": result.comment,\n \"journal_ref\": result.journal_ref,\n \"doi\": result.doi,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html"} +{"id": "00878a97fba7-3", "text": "\"journal_ref\": result.journal_ref,\n \"doi\": result.doi,\n \"primary_category\": result.primary_category,\n \"categories\": result.categories,\n \"links\": [link.href for link in result.links],\n }\n else:\n extra_metadata = {}\n metadata = {\n \"Published\": str(result.updated.date()),\n \"Title\": result.title,\n \"Authors\": \", \".join(a.name for a in result.authors),\n \"Summary\": result.summary,\n **extra_metadata,\n }\n doc = Document(\n page_content=text[: self.doc_content_chars_max], metadata=metadata\n )\n docs.append(doc)\n os.remove(doc_file_name)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/arxiv.html"} +{"id": "56f2f573cb80-0", "text": "Source code for langchain.utilities.python\nimport sys\nfrom io import StringIO\nfrom typing import Dict, Optional\nfrom pydantic import BaseModel, Field\n[docs]class PythonREPL(BaseModel):\n \"\"\"Simulates a standalone Python REPL.\"\"\"\n globals: Optional[Dict] = Field(default_factory=dict, alias=\"_globals\")\n locals: Optional[Dict] = Field(default_factory=dict, alias=\"_locals\")\n[docs] def run(self, command: str) -> str:\n \"\"\"Run command with own globals/locals and returns anything printed.\"\"\"\n old_stdout = sys.stdout\n sys.stdout = mystdout = StringIO()\n try:\n exec(command, self.globals, self.locals)\n sys.stdout = old_stdout\n output = mystdout.getvalue()\n except Exception as e:\n sys.stdout = old_stdout\n output = repr(e)\n return output", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/utilities/python.html"} +{"id": "034d4562ab7f-0", "text": "Source code for langchain.utilities.openweathermap\n\"\"\"Util that calls OpenWeatherMap using PyOWM.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.tools.base import BaseModel\nfrom langchain.utils import get_from_dict_or_env\n[docs]class OpenWeatherMapAPIWrapper(BaseModel):\n \"\"\"Wrapper for OpenWeatherMap API using PyOWM.\n Docs for using:\n 1. Go to OpenWeatherMap and sign up for an API key\n 2. Save your API KEY into OPENWEATHERMAP_API_KEY env variable\n 3. pip install pyowm\n \"\"\"\n owm: Any\n openweathermap_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n openweathermap_api_key = get_from_dict_or_env(\n values, \"openweathermap_api_key\", \"OPENWEATHERMAP_API_KEY\"\n )\n try:\n import pyowm\n except ImportError:\n raise ImportError(\n \"pyowm is not installed. 
Please install it with `pip install pyowm`\"\n )\n owm = pyowm.OWM(openweathermap_api_key)\n values[\"owm\"] = owm\n return values\n def _format_weather_info(self, location: str, w: Any) -> str:\n detailed_status = w.detailed_status\n wind = w.wind()\n humidity = w.humidity\n temperature = w.temperature(\"celsius\")\n rain = w.rain\n heat_index = w.heat_index\n clouds = w.clouds", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html"} +{"id": "034d4562ab7f-1", "text": "heat_index = w.heat_index\n clouds = w.clouds\n return (\n f\"In {location}, the current weather is as follows:\\n\"\n f\"Detailed status: {detailed_status}\\n\"\n f\"Wind speed: {wind['speed']} m/s, direction: {wind['deg']}\u00b0\\n\"\n f\"Humidity: {humidity}%\\n\"\n f\"Temperature: \\n\"\n f\" - Current: {temperature['temp']}\u00b0C\\n\"\n f\" - High: {temperature['temp_max']}\u00b0C\\n\"\n f\" - Low: {temperature['temp_min']}\u00b0C\\n\"\n f\" - Feels like: {temperature['feels_like']}\u00b0C\\n\"\n f\"Rain: {rain}\\n\"\n f\"Heat index: {heat_index}\\n\"\n f\"Cloud cover: {clouds}%\"\n )\n[docs] def run(self, location: str) -> str:\n \"\"\"Get the current weather information for a specified location.\"\"\"\n mgr = self.owm.weather_manager()\n observation = mgr.weather_at_place(location)\n w = observation.weather\n return self._format_weather_info(location, w)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openweathermap.html"} +{"id": "7e4e8c1b22e4-0", "text": "Source code for langchain.utilities.metaphor_search\n\"\"\"Util that calls Metaphor Search API.\nIn order to set this up, follow instructions at:\n\"\"\"\nimport json\nfrom typing import Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\nMETAPHOR_API_URL = \"https://api.metaphor.systems\"\n[docs]class MetaphorSearchAPIWrapper(BaseModel):\n 
\"\"\"Wrapper for Metaphor Search API.\"\"\"\n metaphor_api_key: str\n k: int = 10\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _metaphor_search_results(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n ) -> List[dict]:\n headers = {\"X-Api-Key\": self.metaphor_api_key}\n params = {\n \"numResults\": num_results,\n \"query\": query,\n \"includeDomains\": include_domains,\n \"excludeDomains\": exclude_domains,\n \"startCrawlDate\": start_crawl_date,\n \"endCrawlDate\": end_crawl_date,\n \"startPublishedDate\": start_published_date,\n \"endPublishedDate\": end_published_date,\n }\n response = requests.post(\n # type: ignore\n f\"{METAPHOR_API_URL}/search\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"} +{"id": "7e4e8c1b22e4-1", "text": "# type: ignore\n f\"{METAPHOR_API_URL}/search\",\n headers=headers,\n json=params,\n )\n response.raise_for_status()\n search_results = response.json()\n print(search_results)\n return search_results[\"results\"]\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n metaphor_api_key = get_from_dict_or_env(\n values, \"metaphor_api_key\", \"METAPHOR_API_KEY\"\n )\n values[\"metaphor_api_key\"] = metaphor_api_key\n return values\n[docs] def results(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n ) -> 
List[Dict]:\n \"\"\"Run query through Metaphor Search and return metadata.\n Args:\n query: The query to search for.\n num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n title - The title of the\n url - The url\n author - Author of the content, if applicable. Otherwise, None.\n published_date - Estimated date published\n in YYYY-MM-DD format. Otherwise, None.\n \"\"\"\n raw_search_results = self._metaphor_search_results(\n query,\n num_results=num_results,\n include_domains=include_domains,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"} +{"id": "7e4e8c1b22e4-2", "text": "query,\n num_results=num_results,\n include_domains=include_domains,\n exclude_domains=exclude_domains,\n start_crawl_date=start_crawl_date,\n end_crawl_date=end_crawl_date,\n start_published_date=start_published_date,\n end_published_date=end_published_date,\n )\n return self._clean_results(raw_search_results)\n[docs] async def results_async(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n ) -> List[Dict]:\n \"\"\"Get results from the Metaphor Search API asynchronously.\"\"\"\n # Function to perform the API call\n async def fetch() -> str:\n headers = {\"X-Api-Key\": self.metaphor_api_key}\n params = {\n \"numResults\": num_results,\n \"query\": query,\n \"includeDomains\": include_domains,\n \"excludeDomains\": exclude_domains,\n \"startCrawlDate\": start_crawl_date,\n \"endCrawlDate\": end_crawl_date,\n \"startPublishedDate\": start_published_date,\n \"endPublishedDate\": end_published_date,\n }\n async with aiohttp.ClientSession() as session:\n async with session.post(\n f\"{METAPHOR_API_URL}/search\", json=params, 
headers=headers\n ) as res:\n if res.status == 200:\n data = await res.text()\n return data\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"} +{"id": "7e4e8c1b22e4-3", "text": "data = await res.text()\n return data\n else:\n raise Exception(f\"Error {res.status}: {res.reason}\")\n results_json_str = await fetch()\n results_json = json.loads(results_json_str)\n return self._clean_results(results_json[\"results\"])\n def _clean_results(self, raw_search_results: List[Dict]) -> List[Dict]:\n cleaned_results = []\n for result in raw_search_results:\n cleaned_results.append(\n {\n \"title\": result[\"title\"],\n \"url\": result[\"url\"],\n \"author\": result[\"author\"],\n \"published_date\": result[\"publishedDate\"],\n }\n )\n return cleaned_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/metaphor_search.html"} +{"id": "83b26135df2a-0", "text": "Source code for langchain.utilities.bibtex\n\"\"\"Util that calls bibtexparser.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping\nfrom pydantic import BaseModel, Extra, root_validator\nlogger = logging.getLogger(__name__)\nOPTIONAL_FIELDS = [\n \"annotate\",\n \"booktitle\",\n \"editor\",\n \"howpublished\",\n \"journal\",\n \"keywords\",\n \"note\",\n \"organization\",\n \"publisher\",\n \"school\",\n \"series\",\n \"type\",\n \"doi\",\n \"issn\",\n \"isbn\",\n]\n[docs]class BibtexparserWrapper(BaseModel):\n \"\"\"Wrapper around bibtexparser.\n To use, you should have the ``bibtexparser`` python package installed.\n https://bibtexparser.readthedocs.io/en/master/\n This wrapper will use bibtexparser to load a collection of references from\n a bibtex file and fetch document summaries.\n \"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package 
exists in environment.\"\"\"\n try:\n import bibtexparser # noqa\n except ImportError:\n raise ImportError(\n \"Could not import bibtexparser python package. \"\n \"Please install it with `pip install bibtexparser`.\"\n )\n return values\n[docs] def load_bibtex_entries(self, path: str) -> List[Dict[str, Any]]:\n \"\"\"Load bibtex entries from the bibtex file at the given path.\"\"\"\n import bibtexparser", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bibtex.html"} +{"id": "83b26135df2a-1", "text": "import bibtexparser\n with open(path) as file:\n entries = bibtexparser.load(file).entries\n return entries\n[docs] def get_metadata(\n self, entry: Mapping[str, Any], load_extra: bool = False\n ) -> Dict[str, Any]:\n \"\"\"Get metadata for the given entry.\"\"\"\n publication = entry.get(\"journal\") or entry.get(\"booktitle\")\n if \"url\" in entry:\n url = entry[\"url\"]\n elif \"doi\" in entry:\n url = f'https://doi.org/{entry[\"doi\"]}'\n else:\n url = None\n meta = {\n \"id\": entry.get(\"ID\"),\n \"published_year\": entry.get(\"year\"),\n \"title\": entry.get(\"title\"),\n \"publication\": publication,\n \"authors\": entry.get(\"author\"),\n \"abstract\": entry.get(\"abstract\"),\n \"url\": url,\n }\n if load_extra:\n for field in OPTIONAL_FIELDS:\n meta[field] = entry.get(field)\n return {k: v for k, v in meta.items() if v is not None}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/bibtex.html"} +{"id": "4683d2608f43-0", "text": "Source code for langchain.utilities.searx_search\n\"\"\"Utility for using SearxNG meta search API.\nSearxNG is a privacy-friendly free metasearch engine that aggregates results from\n`multiple search engines\n`_ and databases and\nsupports the `OpenSearch\n`_\nspecification.\nMore details on the installation instructions `here. 
<../../integrations/searx.html>`_\nFor the search API refer to https://docs.searxng.org/dev/search_api.html\nQuick Start\n-----------\nIn order to use this utility you need to provide the searx host. This can be done\nby passing the named parameter :attr:`searx_host `\nor exporting the environment variable SEARX_HOST.\nNote: this is the only required parameter.\nThen create a searx search instance like this:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n # when the host starts with `http` SSL is disabled and the connection\n # is assumed to be on a private network\n searx_host='http://self.hosted'\n search = SearxSearchWrapper(searx_host=searx_host)\nYou can now use the ``search`` instance to query the searx API.\nSearching\n---------\nUse the :meth:`run() ` and\n:meth:`results() ` methods to query the searx API.\nOther methods are available for convenience.\n:class:`SearxResults` is a convenience wrapper around the raw json result.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-1", "text": ":class:`SearxResults` is a convenience wrapper around the raw json result.\nExample usage of the ``run`` method to make a search:\n .. code-block:: python\n s.run(query=\"what is the best search engine?\")\nEngine Parameters\n-----------------\nYou can pass any `accepted searx search API\n`_ parameters to the\n:py:class:`SearxSearchWrapper` instance.\nIn the following example we are using the\n:attr:`engines ` and the ``language`` parameters:\n .. code-block:: python\n # assuming the searx host is set as above or exported as an env variable\n s = SearxSearchWrapper(engines=['google', 'bing'],\n language='es')\nSearch Tips\n-----------\nSearx offers a special\n`search syntax `_\nthat can also be used instead of passing engine parameters.\nFor example the following query:\n .. 
code-block:: python\n s = SearxSearchWrapper(\"langchain library\", engines=['github'])\n # can also be written as:\n s = SearxSearchWrapper(\"langchain library !github\")\n # or even:\n s = SearxSearchWrapper(\"langchain library !gh\")\nIn some situations you might want to pass an extra string to the search query.\nFor example when the `run()` method is called by an agent. The search suffix can\nalso be used as a way to pass extra parameters to searx or the underlying search\nengines.\n .. code-block:: python\n # select the github engine and pass the search suffix", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-2", "text": ".. code-block:: python\n # select the github engine and pass the search suffix\n s = SearchWrapper(\"langchain library\", query_suffix=\"!gh\")\n s = SearchWrapper(\"langchain library\")\n # select github the conventional google search syntax\n s.run(\"large language models\", query_suffix=\"site:github.com\")\n*NOTE*: A search suffix can be defined on both the instance and the method level.\nThe resulting query will be the concatenation of the two with the former taking\nprecedence.\nSee `SearxNG Configured Engines\n`_ and\n`SearxNG Search Syntax `_\nfor more details.\nNotes\n-----\nThis wrapper is based on the SearxNG fork https://github.com/searxng/searxng which is\nbetter maintained than the original Searx project and offers more features.\nPublic searxNG instances often use a rate limiter for API usage, so you might want to\nuse a self hosted instance and disable the rate limiter.\nIf you are self-hosting an instance you can customize the rate limiter for your\nown network as described\n`here `_.\nFor a list of public SearxNG instances see https://searx.space/\n\"\"\"\nimport json\nfrom typing import Any, Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator\nfrom 
langchain.utils import get_from_dict_or_env\ndef _get_default_params() -> dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-3", "text": "def _get_default_params() -> dict:\n return {\"language\": \"en\", \"format\": \"json\"}\nclass SearxResults(dict):\n \"\"\"Dict like wrapper around search api results.\"\"\"\n _data = \"\"\n def __init__(self, data: str):\n \"\"\"Take a raw result from Searx and make it into a dict like object.\"\"\"\n json_data = json.loads(data)\n super().__init__(json_data)\n self.__dict__ = self\n def __str__(self) -> str:\n \"\"\"Text representation of searx result.\"\"\"\n return self._data\n @property\n def results(self) -> Any:\n \"\"\"Silence mypy for accessing this field.\n :meta private:\n \"\"\"\n return self.get(\"results\")\n @property\n def answers(self) -> Any:\n \"\"\"Helper accessor on the json result.\"\"\"\n return self.get(\"answers\")\n[docs]class SearxSearchWrapper(BaseModel):\n \"\"\"Wrapper for Searx API.\n To use you need to provide the searx host by passing the named parameter\n ``searx_host`` or exporting the environment variable ``SEARX_HOST``.\n In some situations you might want to disable SSL verification, for example\n if you are running searx locally. You can do this by passing the named parameter\n ``unsecure``. You can also pass the host url scheme as ``http`` to disable SSL.\n Example:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n searx = SearxSearchWrapper(searx_host=\"http://localhost:8888\")\n Example with SSL disabled:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-4", "text": ".. 
code-block:: python\n from langchain.utilities import SearxSearchWrapper\n # note the unsecure parameter is not needed if you pass the url scheme as\n # http\n searx = SearxSearchWrapper(searx_host=\"http://localhost:8888\",\n unsecure=True)\n \"\"\"\n _result: SearxResults = PrivateAttr()\n searx_host: str = \"\"\n unsecure: bool = False\n params: dict = Field(default_factory=_get_default_params)\n headers: Optional[dict] = None\n engines: Optional[List[str]] = []\n categories: Optional[List[str]] = []\n query_suffix: Optional[str] = \"\"\n k: int = 10\n aiosession: Optional[Any] = None\n @validator(\"unsecure\")\n def disable_ssl_warnings(cls, v: bool) -> bool:\n \"\"\"Disable SSL warnings.\"\"\"\n if v:\n # requests.urllib3.disable_warnings()\n try:\n import urllib3\n urllib3.disable_warnings()\n except ImportError as e:\n print(e)\n return v\n @root_validator()\n def validate_params(cls, values: Dict) -> Dict:\n \"\"\"Validate that custom searx params are merged with default ones.\"\"\"\n user_params = values[\"params\"]\n default = _get_default_params()\n values[\"params\"] = {**default, **user_params}\n engines = values.get(\"engines\")\n if engines:\n values[\"params\"][\"engines\"] = \",\".join(engines)\n categories = values.get(\"categories\")\n if categories:\n values[\"params\"][\"categories\"] = \",\".join(categories)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-5", "text": "if categories:\n values[\"params\"][\"categories\"] = \",\".join(categories)\n searx_host = get_from_dict_or_env(values, \"searx_host\", \"SEARX_HOST\")\n if not searx_host.startswith(\"http\"):\n print(\n f\"Warning: missing the url scheme on host \\\n ! 
assuming secure https://{searx_host} \"\n )\n searx_host = \"https://\" + searx_host\n elif searx_host.startswith(\"http://\"):\n values[\"unsecure\"] = True\n cls.disable_ssl_warnings(True)\n values[\"searx_host\"] = searx_host\n return values\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _searx_api_query(self, params: dict) -> SearxResults:\n \"\"\"Actual request to searx API.\"\"\"\n raw_result = requests.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n verify=not self.unsecure,\n )\n # test if http result is ok\n if not raw_result.ok:\n raise ValueError(\"Searx API returned an error: \", raw_result.text)\n res = SearxResults(raw_result.text)\n self._result = res\n return res\n async def _asearx_api_query(self, params: dict) -> SearxResults:\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n ssl=(lambda: False if self.unsecure else None)(),\n ) as response:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-6", "text": ") as response:\n if not response.ok:\n raise ValueError(\"Searx API returned an error: \", response.text)\n result = SearxResults(await response.text())\n self._result = result\n else:\n async with self.aiosession.get(\n self.searx_host,\n headers=self.headers,\n params=params,\n verify=not self.unsecure,\n ) as response:\n if not response.ok:\n raise ValueError(\"Searx API returned an error: \", response.text)\n result = SearxResults(await response.text())\n self._result = result\n return result\n[docs] def run(\n self,\n query: str,\n engines: Optional[List[str]] = None,\n categories: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> str:\n \"\"\"Run query through Searx API and parse results.\n You can pass any other params to the searx query API.\n 
Args:\n query: The query to search for.\n query_suffix: Extra suffix appended to the query.\n engines: List of engines to use for the query.\n categories: List of categories to use for the query.\n **kwargs: extra parameters to pass to the searx API.\n Returns:\n str: The result of the query.\n Raises:\n ValueError: If an error occurred with the query.\n Example:\n This will make a query to the qwant engine:\n .. code-block:: python\n from langchain.utilities import SearxSearchWrapper\n searx = SearxSearchWrapper(searx_host=\"http://my.searx.host\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-7", "text": "searx.run(\"what is the weather in France ?\", engines=[\"qwant\"])\n # the same result can be achieved using the `!` syntax of searx\n # to select the engine using `query_suffix`\n searx.run(\"what is the weather in France ?\", query_suffix=\"!qwant\")\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n if isinstance(categories, list) and len(categories) > 0:\n params[\"categories\"] = \",\".join(categories)\n res = self._searx_api_query(params)\n if len(res.answers) > 0:\n toret = res.answers[0]\n # only return the content of the results list\n elif len(res.results) > 0:\n toret = \"\\n\\n\".join([r.get(\"content\", \"\") for r in res.results[: self.k]])\n else:\n toret = \"No good search result found\"\n return toret\n[docs] async def arun(\n self,\n query: str,\n engines: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> str:\n \"\"\"Asynchronous version of `run`.\"\"\"", "source":
"https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-8", "text": ") -> str:\n \"\"\"Asynchronously version of `run`.\"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n res = await self._asearx_api_query(params)\n if len(res.answers) > 0:\n toret = res.answers[0]\n # only return the content of the results list\n elif len(res.results) > 0:\n toret = \"\\n\\n\".join([r.get(\"content\", \"\") for r in res.results[: self.k]])\n else:\n toret = \"No good search result found\"\n return toret\n[docs] def results(\n self,\n query: str,\n num_results: int,\n engines: Optional[List[str]] = None,\n categories: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Dict]:\n \"\"\"Run query through Searx API and returns the results with metadata.\n Args:\n query: The query to search for.\n query_suffix: Extra suffix appended to the query.\n num_results: Limit the number of results to return.\n engines: List of engines to use for the query.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-9", "text": "engines: List of engines to use for the query.\n categories: List of categories to use for the query.\n **kwargs: extra parameters to pass to the searx API.\n Returns:\n Dict with the following keys:\n {\n snippet: The description of the result.\n title: The title of the result.\n link: The link to the result.\n engines: The engines used for the result.\n category: Searx category of the result.\n }\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, 
**_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n if isinstance(categories, list) and len(categories) > 0:\n params[\"categories\"] = \",\".join(categories)\n results = self._searx_api_query(params).results[:num_results]\n if len(results) == 0:\n return [{\"Result\": \"No good Search Result was found\"}]\n return [\n {\n \"snippet\": result.get(\"content\", \"\"),\n \"title\": result[\"title\"],\n \"link\": result[\"url\"],\n \"engines\": result[\"engines\"],\n \"category\": result[\"category\"],\n }\n for result in results\n ]\n[docs] async def aresults(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "4683d2608f43-10", "text": "]\n[docs] async def aresults(\n self,\n query: str,\n num_results: int,\n engines: Optional[List[str]] = None,\n query_suffix: Optional[str] = \"\",\n **kwargs: Any,\n ) -> List[Dict]:\n \"\"\"Asynchronously query with json results.\n Uses aiohttp. 
See `results` for more info.\n \"\"\"\n _params = {\n \"q\": query,\n }\n params = {**self.params, **_params, **kwargs}\n if self.query_suffix and len(self.query_suffix) > 0:\n params[\"q\"] += \" \" + self.query_suffix\n if isinstance(query_suffix, str) and len(query_suffix) > 0:\n params[\"q\"] += \" \" + query_suffix\n if isinstance(engines, list) and len(engines) > 0:\n params[\"engines\"] = \",\".join(engines)\n results = (await self._asearx_api_query(params)).results[:num_results]\n if len(results) == 0:\n return [{\"Result\": \"No good Search Result was found\"}]\n return [\n {\n \"snippet\": result.get(\"content\", \"\"),\n \"title\": result[\"title\"],\n \"link\": result[\"url\"],\n \"engines\": result[\"engines\"],\n \"category\": result[\"category\"],\n }\n for result in results\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/searx_search.html"} +{"id": "eafa7d302d30-0", "text": "Source code for langchain.utilities.graphql\nimport json\nfrom typing import Any, Callable, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\n[docs]class GraphQLAPIWrapper(BaseModel):\n \"\"\"Wrapper around GraphQL API.\n To use, you should have the ``gql`` python package installed.\n This wrapper will use the GraphQL API to conduct queries.\n \"\"\"\n custom_headers: Optional[Dict[str, str]] = None\n graphql_endpoint: str\n gql_client: Any #: :meta private:\n gql_function: Callable[[str], Any] #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n from gql import Client, gql\n from gql.transport.requests import RequestsHTTPTransport\n except ImportError as e:\n raise ImportError(\n \"Could not import gql python package. \"\n f\"Try installing it with `pip install gql`. 
Received error: {e}\"\n )\n headers = values.get(\"custom_headers\")\n transport = RequestsHTTPTransport(\n url=values[\"graphql_endpoint\"],\n headers=headers,\n )\n client = Client(transport=transport, fetch_schema_from_transport=True)\n values[\"gql_client\"] = client\n values[\"gql_function\"] = gql\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run a GraphQL query and get the results.\"\"\"\n result = self._execute_query(query)\n return json.dumps(result, indent=2)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/graphql.html"} +{"id": "eafa7d302d30-1", "text": "return json.dumps(result, indent=2)\n def _execute_query(self, query: str) -> Dict[str, Any]:\n \"\"\"Execute a GraphQL query and return the results.\"\"\"\n document_node = self.gql_function(query)\n result = self.gql_client.execute(document_node)\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/graphql.html"} +{"id": "b2e3b83db447-0", "text": "Source code for langchain.utilities.brave_search\nimport json\nimport requests\nfrom pydantic import BaseModel, Field\n[docs]class BraveSearchWrapper(BaseModel):\n api_key: str\n search_kwargs: dict = Field(default_factory=dict)\n[docs] def run(self, query: str) -> str:\n headers = {\n \"X-Subscription-Token\": self.api_key,\n \"Accept\": \"application/json\",\n }\n base_url = \"https://api.search.brave.com/res/v1/web/search\"\n req = requests.PreparedRequest()\n params = {**self.search_kwargs, **{\"q\": query}}\n req.prepare_url(base_url, params)\n if req.url is None:\n raise ValueError(\"prepared url is None, this should not happen\")\n response = requests.get(req.url, headers=headers)\n if not response.ok:\n raise Exception(f\"HTTP error {response.status_code}\")\n parsed_response = response.json()\n web_search_results = parsed_response.get(\"web\", {}).get(\"results\", [])\n final_results = []\n if isinstance(web_search_results, list):\n for item in 
web_search_results:\n final_results.append(\n {\n \"title\": item.get(\"title\"),\n \"link\": item.get(\"url\"),\n \"snippet\": item.get(\"description\"),\n }\n )\n return json.dumps(final_results)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/brave_search.html"} +{"id": "f7681c4934fb-0", "text": "Source code for langchain.utilities.jira\n\"\"\"Util that calls Jira.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.tools.jira.prompt import (\n JIRA_CATCH_ALL_PROMPT,\n JIRA_CONFLUENCE_PAGE_CREATE_PROMPT,\n JIRA_GET_ALL_PROJECTS_PROMPT,\n JIRA_ISSUE_CREATE_PROMPT,\n JIRA_JQL_PROMPT,\n)\nfrom langchain.utils import get_from_dict_or_env\n# TODO: think about error handling, more specific api specs, and jql/project limits\n[docs]class JiraAPIWrapper(BaseModel):\n \"\"\"Wrapper for Jira API.\"\"\"\n jira: Any #: :meta private:\n confluence: Any\n jira_username: Optional[str] = None\n jira_api_token: Optional[str] = None\n jira_instance_url: Optional[str] = None\n operations: List[Dict] = [\n {\n \"mode\": \"jql\",\n \"name\": \"JQL Query\",\n \"description\": JIRA_JQL_PROMPT,\n },\n {\n \"mode\": \"get_projects\",\n \"name\": \"Get Projects\",\n \"description\": JIRA_GET_ALL_PROJECTS_PROMPT,\n },\n {\n \"mode\": \"create_issue\",\n \"name\": \"Create Issue\",\n \"description\": JIRA_ISSUE_CREATE_PROMPT,\n },\n {\n \"mode\": \"other\",\n \"name\": \"Catch all Jira API call\",\n \"description\": JIRA_CATCH_ALL_PROMPT,\n },\n {\n \"mode\": \"create_page\",\n \"name\": \"Create confluence page\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} +{"id": "f7681c4934fb-1", "text": "\"mode\": \"create_page\",\n \"name\": \"Create confluence page\",\n \"description\": JIRA_CONFLUENCE_PAGE_CREATE_PROMPT,\n },\n ]\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def 
list(self) -> List[Dict]:\n return self.operations\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n jira_username = get_from_dict_or_env(values, \"jira_username\", \"JIRA_USERNAME\")\n values[\"jira_username\"] = jira_username\n jira_api_token = get_from_dict_or_env(\n values, \"jira_api_token\", \"JIRA_API_TOKEN\"\n )\n values[\"jira_api_token\"] = jira_api_token\n jira_instance_url = get_from_dict_or_env(\n values, \"jira_instance_url\", \"JIRA_INSTANCE_URL\"\n )\n values[\"jira_instance_url\"] = jira_instance_url\n try:\n from atlassian import Confluence, Jira\n except ImportError:\n raise ImportError(\n \"atlassian-python-api is not installed. \"\n \"Please install it with `pip install atlassian-python-api`\"\n )\n jira = Jira(\n url=jira_instance_url,\n username=jira_username,\n password=jira_api_token,\n cloud=True,\n )\n confluence = Confluence(\n url=jira_instance_url,\n username=jira_username,\n password=jira_api_token,\n cloud=True,\n )\n values[\"jira\"] = jira\n values[\"confluence\"] = confluence", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} +{"id": "f7681c4934fb-2", "text": "values[\"jira\"] = jira\n values[\"confluence\"] = confluence\n return values\n[docs] def parse_issues(self, issues: Dict) -> List[dict]:\n parsed = []\n for issue in issues[\"issues\"]:\n key = issue[\"key\"]\n summary = issue[\"fields\"][\"summary\"]\n created = issue[\"fields\"][\"created\"][0:10]\n priority = issue[\"fields\"][\"priority\"][\"name\"]\n status = issue[\"fields\"][\"status\"][\"name\"]\n try:\n assignee = issue[\"fields\"][\"assignee\"][\"displayName\"]\n except Exception:\n assignee = \"None\"\n rel_issues = {}\n for related_issue in issue[\"fields\"][\"issuelinks\"]:\n if \"inwardIssue\" in related_issue.keys():\n rel_type = related_issue[\"type\"][\"inward\"]\n rel_key = 
related_issue[\"inwardIssue\"][\"key\"]\n rel_summary = related_issue[\"inwardIssue\"][\"fields\"][\"summary\"]\n if \"outwardIssue\" in related_issue.keys():\n rel_type = related_issue[\"type\"][\"outward\"]\n rel_key = related_issue[\"outwardIssue\"][\"key\"]\n rel_summary = related_issue[\"outwardIssue\"][\"fields\"][\"summary\"]\n rel_issues = {\"type\": rel_type, \"key\": rel_key, \"summary\": rel_summary}\n parsed.append(\n {\n \"key\": key,\n \"summary\": summary,\n \"created\": created,\n \"assignee\": assignee,\n \"priority\": priority,\n \"status\": status,\n \"related_issues\": rel_issues,\n }\n )\n return parsed\n[docs] def parse_projects(self, projects: List[dict]) -> List[dict]:\n parsed = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} +{"id": "f7681c4934fb-3", "text": "parsed = []\n for project in projects:\n id = project[\"id\"]\n key = project[\"key\"]\n name = project[\"name\"]\n type = project[\"projectTypeKey\"]\n style = project[\"style\"]\n parsed.append(\n {\"id\": id, \"key\": key, \"name\": name, \"type\": type, \"style\": style}\n )\n return parsed\n[docs] def search(self, query: str) -> str:\n issues = self.jira.jql(query)\n parsed_issues = self.parse_issues(issues)\n parsed_issues_str = (\n \"Found \" + str(len(parsed_issues)) + \" issues:\\n\" + str(parsed_issues)\n )\n return parsed_issues_str\n[docs] def project(self) -> str:\n projects = self.jira.projects()\n parsed_projects = self.parse_projects(projects)\n parsed_projects_str = (\n \"Found \" + str(len(parsed_projects)) + \" projects:\\n\" + str(parsed_projects)\n )\n return parsed_projects_str\n[docs] def issue_create(self, query: str) -> str:\n try:\n import json\n except ImportError:\n raise ImportError(\n \"json is not installed. 
Please install it with `pip install json`\"\n )\n params = json.loads(query)\n return self.jira.issue_create(fields=dict(params))\n[docs] def page_create(self, query: str) -> str:\n try:\n import json\n except ImportError:\n raise ImportError(\n \"json is not installed. Please install it with `pip install json`\"\n )\n params = json.loads(query)\n return self.confluence.create_page(**dict(params))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} +{"id": "f7681c4934fb-4", "text": "params = json.loads(query)\n return self.confluence.create_page(**dict(params))\n[docs] def other(self, query: str) -> str:\n context = {\"self\": self}\n exec(f\"result = {query}\", context)\n result = context[\"result\"]\n return str(result)\n[docs] def run(self, mode: str, query: str) -> str:\n if mode == \"jql\":\n return self.search(query)\n elif mode == \"get_projects\":\n return self.project()\n elif mode == \"create_issue\":\n return self.issue_create(query)\n elif mode == \"other\":\n return self.other(query)\n elif mode == \"create_page\":\n return self.page_create(query)\n else:\n raise ValueError(f\"Got unexpected mode {mode}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/jira.html"} +{"id": "b229a1d2f9df-0", "text": "Source code for langchain.utilities.spark_sql\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, Iterable, List, Optional\nif TYPE_CHECKING:\n from pyspark.sql import DataFrame, Row, SparkSession\n[docs]class SparkSQL:\n def __init__(\n self,\n spark_session: Optional[SparkSession] = None,\n catalog: Optional[str] = None,\n schema: Optional[str] = None,\n ignore_tables: Optional[List[str]] = None,\n include_tables: Optional[List[str]] = None,\n sample_rows_in_table_info: int = 3,\n ):\n try:\n from pyspark.sql import SparkSession\n except ImportError:\n raise ValueError(\n \"pyspark is not installed. 
Please install it with `pip install pyspark`\"\n )\n self._spark = (\n spark_session if spark_session else SparkSession.builder.getOrCreate()\n )\n if catalog is not None:\n self._spark.catalog.setCurrentCatalog(catalog)\n if schema is not None:\n self._spark.catalog.setCurrentDatabase(schema)\n self._all_tables = set(self._get_all_table_names())\n self._include_tables = set(include_tables) if include_tables else set()\n if self._include_tables:\n missing_tables = self._include_tables - self._all_tables\n if missing_tables:\n raise ValueError(\n f\"include_tables {missing_tables} not found in database\"\n )\n self._ignore_tables = set(ignore_tables) if ignore_tables else set()\n if self._ignore_tables:\n missing_tables = self._ignore_tables - self._all_tables\n if missing_tables:\n raise ValueError(\n f\"ignore_tables {missing_tables} not found in database\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"} +{"id": "b229a1d2f9df-1", "text": "f\"ignore_tables {missing_tables} not found in database\"\n )\n usable_tables = self.get_usable_table_names()\n self._usable_tables = set(usable_tables) if usable_tables else self._all_tables\n if not isinstance(sample_rows_in_table_info, int):\n raise TypeError(\"sample_rows_in_table_info must be an integer\")\n self._sample_rows_in_table_info = sample_rows_in_table_info\n[docs] @classmethod\n def from_uri(\n cls, database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any\n ) -> SparkSQL:\n \"\"\"Creating a remote Spark Session via Spark connect.\n For example: SparkSQL.from_uri(\"sc://localhost:15002\")\n \"\"\"\n try:\n from pyspark.sql import SparkSession\n except ImportError:\n raise ValueError(\n \"pyspark is not installed. 
Please install it with `pip install pyspark`\"\n )\n spark = SparkSession.builder.remote(database_uri).getOrCreate()\n return cls(spark, **kwargs)\n[docs] def get_usable_table_names(self) -> Iterable[str]:\n \"\"\"Get names of tables available.\"\"\"\n if self._include_tables:\n return self._include_tables\n # sorting the result can help LLM understanding it.\n return sorted(self._all_tables - self._ignore_tables)\n def _get_all_table_names(self) -> Iterable[str]:\n rows = self._spark.sql(\"SHOW TABLES\").select(\"tableName\").collect()\n return list(map(lambda row: row.tableName, rows))\n def _get_create_table_stmt(self, table: str) -> str:\n statement = (\n self._spark.sql(f\"SHOW CREATE TABLE {table}\").collect()[0].createtab_stmt", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"} +{"id": "b229a1d2f9df-2", "text": ")\n # Ignore the data source provider and options to reduce the number of tokens.\n using_clause_index = statement.find(\"USING\")\n return statement[:using_clause_index] + \";\"\n[docs] def get_table_info(self, table_names: Optional[List[str]] = None) -> str:\n all_table_names = self.get_usable_table_names()\n if table_names is not None:\n missing_tables = set(table_names).difference(all_table_names)\n if missing_tables:\n raise ValueError(f\"table_names {missing_tables} not found in database\")\n all_table_names = table_names\n tables = []\n for table_name in all_table_names:\n table_info = self._get_create_table_stmt(table_name)\n if self._sample_rows_in_table_info:\n table_info += \"\\n\\n/*\"\n table_info += f\"\\n{self._get_sample_spark_rows(table_name)}\\n\"\n table_info += \"*/\"\n tables.append(table_info)\n final_str = \"\\n\\n\".join(tables)\n return final_str\n def _get_sample_spark_rows(self, table: str) -> str:\n query = f\"SELECT * FROM {table} LIMIT {self._sample_rows_in_table_info}\"\n df = self._spark.sql(query)\n columns_str = \"\\t\".join(list(map(lambda f: f.name, 
df.schema.fields)))\n try:\n sample_rows = self._get_dataframe_results(df)\n # save the sample rows in string format\n sample_rows_str = \"\\n\".join([\"\\t\".join(row) for row in sample_rows])\n except Exception:\n sample_rows_str = \"\"\n return (\n f\"{self._sample_rows_in_table_info} rows from {table} table:\\n\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"} +{"id": "b229a1d2f9df-3", "text": "f\"{columns_str}\\n\"\n f\"{sample_rows_str}\"\n )\n def _convert_row_as_tuple(self, row: Row) -> tuple:\n return tuple(map(str, row.asDict().values()))\n def _get_dataframe_results(self, df: DataFrame) -> list:\n return list(map(self._convert_row_as_tuple, df.collect()))\n[docs] def run(self, command: str, fetch: str = \"all\") -> str:\n df = self._spark.sql(command)\n if fetch == \"one\":\n df = df.limit(1)\n return str(self._get_dataframe_results(df))\n[docs] def get_table_info_no_throw(self, table_names: Optional[List[str]] = None) -> str:\n \"\"\"Get information about specified tables.\n Follows best practices as specified in: Rajkumar et al, 2022\n (https://arxiv.org/abs/2204.00498)\n If `sample_rows_in_table_info`, the specified number of sample rows will be\n appended to each table description. 
This can increase performance as\n demonstrated in the paper.\n \"\"\"\n try:\n return self.get_table_info(table_names)\n except ValueError as e:\n \"\"\"Format the error message\"\"\"\n return f\"Error: {e}\"\n[docs] def run_no_throw(self, command: str, fetch: str = \"all\") -> str:\n \"\"\"Execute a SQL command and return a string representing the results.\n If the statement returns rows, a string of the results is returned.\n If the statement returns no rows, an empty string is returned.\n If the statement throws an error, the error message is returned.\n \"\"\"\n try:\n from pyspark.errors import PySparkException", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"} +{"id": "b229a1d2f9df-4", "text": "\"\"\"\n try:\n from pyspark.errors import PySparkException\n except ImportError:\n raise ValueError(\n \"pyspark is not installed. Please install it with `pip install pyspark`\"\n )\n try:\n return self.run(command, fetch)\n except PySparkException as e:\n \"\"\"Format the error message\"\"\"\n return f\"Error: {e}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/spark_sql.html"} +{"id": "27b5539df1bf-0", "text": "Source code for langchain.utilities.wikipedia\n\"\"\"Util that calls Wikipedia.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\nWIKIPEDIA_MAX_QUERY_LENGTH = 300\n[docs]class WikipediaAPIWrapper(BaseModel):\n \"\"\"Wrapper around WikipediaAPI.\n To use, you should have the ``wikipedia`` python package installed.\n This wrapper will use the Wikipedia API to conduct searches and\n fetch page summaries. 
By default, it will return the page summaries\n of the top-k results.\n It limits the Document content by doc_content_chars_max.\n \"\"\"\n wiki_client: Any #: :meta private:\n top_k_results: int = 3\n lang: str = \"en\"\n load_all_available_meta: bool = False\n doc_content_chars_max: int = 4000\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n try:\n import wikipedia\n wikipedia.set_lang(values[\"lang\"])\n values[\"wiki_client\"] = wikipedia\n except ImportError:\n raise ImportError(\n \"Could not import wikipedia python package. \"\n \"Please install it with `pip install wikipedia`.\"\n )\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run Wikipedia search and get page summaries.\"\"\"\n page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])\n summaries = []\n for page_title in page_titles[: self.top_k_results]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html"} +{"id": "27b5539df1bf-1", "text": "summaries = []\n for page_title in page_titles[: self.top_k_results]:\n if wiki_page := self._fetch_page(page_title):\n if summary := self._formatted_page_summary(page_title, wiki_page):\n summaries.append(summary)\n if not summaries:\n return \"No good Wikipedia Search Result was found\"\n return \"\\n\\n\".join(summaries)[: self.doc_content_chars_max]\n @staticmethod\n def _formatted_page_summary(page_title: str, wiki_page: Any) -> Optional[str]:\n return f\"Page: {page_title}\\nSummary: {wiki_page.summary}\"\n def _page_to_document(self, page_title: str, wiki_page: Any) -> Document:\n main_meta = {\n \"title\": page_title,\n \"summary\": wiki_page.summary,\n \"source\": wiki_page.url,\n }\n add_meta = (\n {\n \"categories\": wiki_page.categories,\n \"page_url\": wiki_page.url,\n \"image_urls\": 
wiki_page.images,\n \"related_titles\": wiki_page.links,\n \"parent_id\": wiki_page.parent_id,\n \"references\": wiki_page.references,\n \"revision_id\": wiki_page.revision_id,\n \"sections\": wiki_page.sections,\n }\n if self.load_all_available_meta\n else {}\n )\n doc = Document(\n page_content=wiki_page.content[: self.doc_content_chars_max],\n metadata={\n **main_meta,\n **add_meta,\n },\n )\n return doc\n def _fetch_page(self, page: str) -> Optional[str]:\n try:\n return self.wiki_client.page(title=page, auto_suggest=False)\n except (\n self.wiki_client.exceptions.PageError,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html"} +{"id": "27b5539df1bf-2", "text": "except (\n self.wiki_client.exceptions.PageError,\n self.wiki_client.exceptions.DisambiguationError,\n ):\n return None\n[docs] def load(self, query: str) -> List[Document]:\n \"\"\"\n Run Wikipedia search and get the article text plus the meta information.\n See\n Returns: a list of documents.\n \"\"\"\n page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])\n docs = []\n for page_title in page_titles[: self.top_k_results]:\n if wiki_page := self._fetch_page(page_title):\n if doc := self._page_to_document(page_title, wiki_page):\n docs.append(doc)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wikipedia.html"} +{"id": "57e635f45d05-0", "text": "Source code for langchain.utilities.apify\nfrom typing import Any, Callable, Dict, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.document_loaders import ApifyDatasetLoader\nfrom langchain.document_loaders.base import Document\nfrom langchain.utils import get_from_dict_or_env\n[docs]class ApifyWrapper(BaseModel):\n \"\"\"Wrapper around Apify.\n To use, you should have the ``apify-client`` python package installed,\n and the environment variable ``APIFY_API_TOKEN`` set with your API key, or pass\n `apify_api_token` as a 
named parameter to the constructor.\n \"\"\"\n apify_client: Any\n apify_client_async: Any\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate environment.\n Validate that an Apify API token is set and the apify-client\n Python package exists in the current environment.\n \"\"\"\n apify_api_token = get_from_dict_or_env(\n values, \"apify_api_token\", \"APIFY_API_TOKEN\"\n )\n try:\n from apify_client import ApifyClient, ApifyClientAsync\n values[\"apify_client\"] = ApifyClient(apify_api_token)\n values[\"apify_client_async\"] = ApifyClientAsync(apify_api_token)\n except ImportError:\n raise ValueError(\n \"Could not import apify-client Python package. \"\n \"Please install it with `pip install apify-client`.\"\n )\n return values\n[docs] def call_actor(\n self,\n actor_id: str,\n run_input: Dict,\n dataset_mapping_function: Callable[[Dict], Document],\n *,\n build: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"} +{"id": "57e635f45d05-1", "text": "*,\n build: Optional[str] = None,\n memory_mbytes: Optional[int] = None,\n timeout_secs: Optional[int] = None,\n ) -> ApifyDatasetLoader:\n \"\"\"Run an Actor on the Apify platform and wait for results to be ready.\n Args:\n actor_id (str): The ID or name of the Actor on the Apify platform.\n run_input (Dict): The input object of the Actor that you're trying to run.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to an\n instance of the Document class.\n build (str, optional): Optionally specifies the actor build to run.\n It can be either a build tag or build number.\n memory_mbytes (int, optional): Optional memory limit for the run,\n in megabytes.\n timeout_secs (int, optional): Optional timeout for the run, in seconds.\n Returns:\n ApifyDatasetLoader: A loader that will fetch the records from the\n Actor run's default dataset.\n 
\"\"\"\n actor_call = self.apify_client.actor(actor_id).call(\n run_input=run_input,\n build=build,\n memory_mbytes=memory_mbytes,\n timeout_secs=timeout_secs,\n )\n return ApifyDatasetLoader(\n dataset_id=actor_call[\"defaultDatasetId\"],\n dataset_mapping_function=dataset_mapping_function,\n )\n[docs] async def acall_actor(\n self,\n actor_id: str,\n run_input: Dict,\n dataset_mapping_function: Callable[[Dict], Document],\n *,\n build: Optional[str] = None,\n memory_mbytes: Optional[int] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"} +{"id": "57e635f45d05-2", "text": "memory_mbytes: Optional[int] = None,\n timeout_secs: Optional[int] = None,\n ) -> ApifyDatasetLoader:\n \"\"\"Run an Actor on the Apify platform and wait for results to be ready.\n Args:\n actor_id (str): The ID or name of the Actor on the Apify platform.\n run_input (Dict): The input object of the Actor that you're trying to run.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to\n an instance of the Document class.\n build (str, optional): Optionally specifies the actor build to run.\n It can be either a build tag or build number.\n memory_mbytes (int, optional): Optional memory limit for the run,\n in megabytes.\n timeout_secs (int, optional): Optional timeout for the run, in seconds.\n Returns:\n ApifyDatasetLoader: A loader that will fetch the records from the\n Actor run's default dataset.\n \"\"\"\n actor_call = await self.apify_client_async.actor(actor_id).call(\n run_input=run_input,\n build=build,\n memory_mbytes=memory_mbytes,\n timeout_secs=timeout_secs,\n )\n return ApifyDatasetLoader(\n dataset_id=actor_call[\"defaultDatasetId\"],\n dataset_mapping_function=dataset_mapping_function,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/apify.html"} +{"id": "36e824259f0a-0", "text": "Source code for 
langchain.utilities.pupmed\nimport json\nimport logging\nimport time\nimport urllib.error\nimport urllib.request\nfrom typing import List\nfrom pydantic import BaseModel, Extra\nfrom langchain.schema import Document\nlogger = logging.getLogger(__name__)\n[docs]class PubMedAPIWrapper(BaseModel):\n \"\"\"\n Wrapper around PubMed API.\n This wrapper will use the PubMed API to conduct searches and fetch\n document summaries. By default, it will return the document summaries\n of the top-k results of an input search.\n Parameters:\n top_k_results: number of the top-scored document used for the PubMed tool\n load_max_docs: a limit to the number of loaded documents\n load_all_available_meta:\n if True: the `metadata` of the loaded Documents gets all available meta info\n (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)\n if False: the `metadata` gets only the most informative fields.\n \"\"\"\n base_url_esearch = \"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?\"\n base_url_efetch = \"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?\"\n max_retry = 5\n sleep_time = 0.2\n # Default values for the parameters\n top_k_results: int = 3\n load_max_docs: int = 25\n ARXIV_MAX_QUERY_LENGTH = 300\n doc_content_chars_max: int = 2000\n load_all_available_meta: bool = False\n email: str = \"your_email@example.com\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def run(self, query: str) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"} +{"id": "36e824259f0a-1", "text": "[docs] def run(self, query: str) -> str:\n \"\"\"\n Run PubMed search and get the article meta information.\n See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch\n It uses only the most informative fields of article meta information.\n \"\"\"\n try:\n # Retrieve the top-k results for the query\n docs = [\n f\"Published: {result['pub_date']}\\nTitle: 
{result['title']}\\n\"\n f\"Summary: {result['summary']}\"\n for result in self.load(query[: self.ARXIV_MAX_QUERY_LENGTH])\n ]\n # Join the results and limit the character count\n return (\n \"\\n\\n\".join(docs)[: self.doc_content_chars_max]\n if docs\n else \"No good PubMed Result was found\"\n )\n except Exception as ex:\n return f\"PubMed exception: {ex}\"\n[docs] def load(self, query: str) -> List[dict]:\n \"\"\"\n Search PubMed for documents matching the query.\n Return a list of dictionaries containing the document metadata.\n \"\"\"\n url = (\n self.base_url_esearch\n + \"db=pubmed&term=\"\n + str({urllib.parse.quote(query)})\n + f\"&retmode=json&retmax={self.top_k_results}&usehistory=y\"\n )\n result = urllib.request.urlopen(url)\n text = result.read().decode(\"utf-8\")\n json_text = json.loads(text)\n articles = []\n webenv = json_text[\"esearchresult\"][\"webenv\"]\n for uid in json_text[\"esearchresult\"][\"idlist\"]:\n article = self.retrieve_article(uid, webenv)\n articles.append(article)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"} +{"id": "36e824259f0a-2", "text": "article = self.retrieve_article(uid, webenv)\n articles.append(article)\n # Convert the list of articles to a JSON string\n return articles\n def _transform_doc(self, doc: dict) -> Document:\n summary = doc.pop(\"summary\")\n return Document(page_content=summary, metadata=doc)\n[docs] def load_docs(self, query: str) -> List[Document]:\n document_dicts = self.load(query=query)\n return [self._transform_doc(d) for d in document_dicts]\n[docs] def retrieve_article(self, uid: str, webenv: str) -> dict:\n url = (\n self.base_url_efetch\n + \"db=pubmed&retmode=xml&id=\"\n + uid\n + \"&webenv=\"\n + webenv\n )\n retry = 0\n while True:\n try:\n result = urllib.request.urlopen(url)\n break\n except urllib.error.HTTPError as e:\n if e.code == 429 and retry < self.max_retry:\n # Too Many Requests error\n # wait for an exponentially increasing 
amount of time\n print(\n f\"Too Many Requests, \"\n f\"waiting for {self.sleep_time:.2f} seconds...\"\n )\n time.sleep(self.sleep_time)\n self.sleep_time *= 2\n retry += 1\n else:\n raise e\n xml_text = result.read().decode(\"utf-8\")\n # Get title\n title = \"\"\n if \"<ArticleTitle>\" in xml_text and \"</ArticleTitle>\" in xml_text:\n start_tag = \"<ArticleTitle>\"\n end_tag = \"</ArticleTitle>\"\n title = xml_text[", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"} +{"id": "36e824259f0a-3", "text": "end_tag = \"</ArticleTitle>\"\n title = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Get abstract\n abstract = \"\"\n if \"<AbstractText>\" in xml_text and \"</AbstractText>\" in xml_text:\n start_tag = \"<AbstractText>\"\n end_tag = \"</AbstractText>\"\n abstract = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Get publication date\n pub_date = \"\"\n if \"<PubDate>\" in xml_text and \"</PubDate>\" in xml_text:\n start_tag = \"<PubDate>\"\n end_tag = \"</PubDate>\"\n pub_date = xml_text[\n xml_text.index(start_tag) + len(start_tag) : xml_text.index(end_tag)\n ]\n # Return article as dictionary\n article = {\n \"uid\": uid,\n \"title\": title,\n \"summary\": abstract,\n \"pub_date\": pub_date,\n }\n return article", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/pupmed.html"} +{"id": "4d5cc5ca11d3-0", "text": "Source code for langchain.utilities.duckduckgo_search\n\"\"\"Util that calls DuckDuckGo Search.\nNo setup required. 
Free.\nhttps://pypi.org/project/duckduckgo-search/\n\"\"\"\nfrom typing import Dict, List, Optional\nfrom pydantic import BaseModel, Extra\nfrom pydantic.class_validators import root_validator\n[docs]class DuckDuckGoSearchAPIWrapper(BaseModel):\n \"\"\"Wrapper for DuckDuckGo Search API.\n Free and does not require any setup\n \"\"\"\n k: int = 10\n region: Optional[str] = \"wt-wt\"\n safesearch: str = \"moderate\"\n time: Optional[str] = \"y\"\n max_results: int = 5\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n from duckduckgo_search import DDGS # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import duckduckgo-search python package. \"\n \"Please install it with `pip install duckduckgo-search`.\"\n )\n return values\n[docs] def get_snippets(self, query: str) -> List[str]:\n \"\"\"Run query through DuckDuckGo and return concatenated results.\"\"\"\n from duckduckgo_search import DDGS\n with DDGS() as ddgs:\n results = ddgs.text(\n query,\n region=self.region,\n safesearch=self.safesearch,\n timelimit=self.time,\n )\n if results is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html"} +{"id": "4d5cc5ca11d3-1", "text": "timelimit=self.time,\n )\n if results is None:\n return [\"No good DuckDuckGo Search Result was found\"]\n snippets = []\n for i, res in enumerate(results, 1):\n if res is not None:\n snippets.append(res[\"body\"])\n if len(snippets) == self.max_results:\n break\n return snippets\n[docs] def run(self, query: str) -> str:\n snippets = self.get_snippets(query)\n return \" \".join(snippets)\n[docs] def results(self, query: str, num_results: int) -> List[Dict[str, str]]:\n \"\"\"Run query through DuckDuckGo and return metadata.\n Args:\n query: The query to search for.\n 
num_results: The number of results to return.\n Returns:\n A list of dictionaries with the following keys:\n snippet - The description of the result.\n title - The title of the result.\n link - The link to the result.\n \"\"\"\n from duckduckgo_search import DDGS\n with DDGS() as ddgs:\n results = ddgs.text(\n query,\n region=self.region,\n safesearch=self.safesearch,\n timelimit=self.time,\n )\n if results is None:\n return [{\"Result\": \"No good DuckDuckGo Search Result was found\"}]\n def to_metadata(result: Dict) -> Dict[str, str]:\n return {\n \"snippet\": result[\"body\"],\n \"title\": result[\"title\"],\n \"link\": result[\"href\"],\n }\n formatted_results = []\n for i, res in enumerate(results, 1):\n if res is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html"} +{"id": "4d5cc5ca11d3-2", "text": "if res is not None:\n formatted_results.append(to_metadata(res))\n if len(formatted_results) == num_results:\n break\n return formatted_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/duckduckgo_search.html"} +{"id": "b6c78d96de66-0", "text": "Source code for langchain.utilities.openapi\n\"\"\"Utility functions for parsing an OpenAPI spec.\"\"\"\nimport copy\nimport json\nimport logging\nimport re\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Union\nimport requests\nimport yaml\nfrom openapi_schema_pydantic import (\n Components,\n OpenAPI,\n Operation,\n Parameter,\n PathItem,\n Paths,\n Reference,\n RequestBody,\n Schema,\n)\nfrom pydantic import ValidationError\nlogger = logging.getLogger(__name__)\nclass HTTPVerb(str, Enum):\n \"\"\"HTTP verbs.\"\"\"\n GET = \"get\"\n PUT = \"put\"\n POST = \"post\"\n DELETE = \"delete\"\n OPTIONS = \"options\"\n HEAD = \"head\"\n PATCH = \"patch\"\n TRACE = \"trace\"\n @classmethod\n def from_str(cls, verb: str) -> \"HTTPVerb\":\n \"\"\"Parse an HTTP verb.\"\"\"\n try:\n 
return cls(verb)\n except ValueError:\n raise ValueError(f\"Invalid HTTP verb. Valid values are {cls.__members__}\")\n[docs]class OpenAPISpec(OpenAPI):\n \"\"\"OpenAPI Model that removes misformatted parts of the spec.\"\"\"\n @property\n def _paths_strict(self) -> Paths:\n if not self.paths:\n raise ValueError(\"No paths found in spec\")\n return self.paths\n def _get_path_strict(self, path: str) -> PathItem:\n path_item = self._paths_strict.get(path)\n if not path_item:\n raise ValueError(f\"No path found for {path}\")\n return path_item\n @property\n def _components_strict(self) -> Components:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} +{"id": "b6c78d96de66-1", "text": "@property\n def _components_strict(self) -> Components:\n \"\"\"Get components or err.\"\"\"\n if self.components is None:\n raise ValueError(\"No components found in spec. \")\n return self.components\n @property\n def _parameters_strict(self) -> Dict[str, Union[Parameter, Reference]]:\n \"\"\"Get parameters or err.\"\"\"\n parameters = self._components_strict.parameters\n if parameters is None:\n raise ValueError(\"No parameters found in spec. \")\n return parameters\n @property\n def _schemas_strict(self) -> Dict[str, Schema]:\n \"\"\"Get the dictionary of schemas or err.\"\"\"\n schemas = self._components_strict.schemas\n if schemas is None:\n raise ValueError(\"No schemas found in spec. \")\n return schemas\n @property\n def _request_bodies_strict(self) -> Dict[str, Union[RequestBody, Reference]]:\n \"\"\"Get the request body or err.\"\"\"\n request_bodies = self._components_strict.requestBodies\n if request_bodies is None:\n raise ValueError(\"No request body found in spec. 
\")\n return request_bodies\n def _get_referenced_parameter(self, ref: Reference) -> Union[Parameter, Reference]:\n \"\"\"Get a parameter (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n parameters = self._parameters_strict\n if ref_name not in parameters:\n raise ValueError(f\"No parameter found for {ref_name}\")\n return parameters[ref_name]\n def _get_root_referenced_parameter(self, ref: Reference) -> Parameter:\n \"\"\"Get the root reference or err.\"\"\"\n parameter = self._get_referenced_parameter(ref)\n while isinstance(parameter, Reference):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} +{"id": "b6c78d96de66-2", "text": "parameter = self._get_referenced_parameter(ref)\n while isinstance(parameter, Reference):\n parameter = self._get_referenced_parameter(parameter)\n return parameter\n[docs] def get_referenced_schema(self, ref: Reference) -> Schema:\n \"\"\"Get a schema (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n schemas = self._schemas_strict\n if ref_name not in schemas:\n raise ValueError(f\"No schema found for {ref_name}\")\n return schemas[ref_name]\n[docs] def get_schema(self, schema: Union[Reference, Schema]) -> Schema:\n if isinstance(schema, Reference):\n return self.get_referenced_schema(schema)\n return schema\n def _get_root_referenced_schema(self, ref: Reference) -> Schema:\n \"\"\"Get the root reference or err.\"\"\"\n schema = self.get_referenced_schema(ref)\n while isinstance(schema, Reference):\n schema = self.get_referenced_schema(schema)\n return schema\n def _get_referenced_request_body(\n self, ref: Reference\n ) -> Optional[Union[Reference, RequestBody]]:\n \"\"\"Get a request body (or nested reference) or err.\"\"\"\n ref_name = ref.ref.split(\"/\")[-1]\n request_bodies = self._request_bodies_strict\n if ref_name not in request_bodies:\n raise ValueError(f\"No request body found for {ref_name}\")\n return 
request_bodies[ref_name]\n def _get_root_referenced_request_body(\n self, ref: Reference\n ) -> Optional[RequestBody]:\n \"\"\"Get the root request Body or err.\"\"\"\n request_body = self._get_referenced_request_body(ref)\n while isinstance(request_body, Reference):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} +{"id": "b6c78d96de66-3", "text": "while isinstance(request_body, Reference):\n request_body = self._get_referenced_request_body(request_body)\n return request_body\n @staticmethod\n def _alert_unsupported_spec(obj: dict) -> None:\n \"\"\"Alert if the spec is not supported.\"\"\"\n warning_message = (\n \" This may result in degraded performance.\"\n + \" Convert your OpenAPI spec to 3.1.* spec\"\n + \" for better support.\"\n )\n swagger_version = obj.get(\"swagger\")\n openapi_version = obj.get(\"openapi\")\n if isinstance(openapi_version, str):\n if openapi_version != \"3.1.0\":\n logger.warning(\n f\"Attempting to load an OpenAPI {openapi_version}\"\n f\" spec. {warning_message}\"\n )\n else:\n pass\n elif isinstance(swagger_version, str):\n logger.warning(\n f\"Attempting to load a Swagger {swagger_version}\"\n f\" spec. 
{warning_message}\"\n )\n else:\n raise ValueError(\n \"Attempting to load an unsupported spec:\"\n f\"\\n\\n{obj}\\n{warning_message}\"\n )\n[docs] @classmethod\n def parse_obj(cls, obj: dict) -> \"OpenAPISpec\":\n try:\n cls._alert_unsupported_spec(obj)\n return super().parse_obj(obj)\n except ValidationError as e:\n # We are handling possibly misconfigured specs and want to do a best-effort\n # job to get a reasonable interface out of it.\n new_obj = copy.deepcopy(obj)\n for error in e.errors():\n keys = error[\"loc\"]\n item = new_obj\n for key in keys[:-1]:\n item = item[key]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} +{"id": "b6c78d96de66-4", "text": "for key in keys[:-1]:\n item = item[key]\n item.pop(keys[-1], None)\n return cls.parse_obj(new_obj)\n[docs] @classmethod\n def from_spec_dict(cls, spec_dict: dict) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a dict.\"\"\"\n return cls.parse_obj(spec_dict)\n[docs] @classmethod\n def from_text(cls, text: str) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a text.\"\"\"\n try:\n spec_dict = json.loads(text)\n except json.JSONDecodeError:\n spec_dict = yaml.safe_load(text)\n return cls.from_spec_dict(spec_dict)\n[docs] @classmethod\n def from_file(cls, path: Union[str, Path]) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a file path.\"\"\"\n path_ = path if isinstance(path, Path) else Path(path)\n if not path_.exists():\n raise FileNotFoundError(f\"{path} does not exist\")\n with path_.open(\"r\") as f:\n return cls.from_text(f.read())\n[docs] @classmethod\n def from_url(cls, url: str) -> \"OpenAPISpec\":\n \"\"\"Get an OpenAPI spec from a URL.\"\"\"\n response = requests.get(url)\n return cls.from_text(response.text)\n @property\n def base_url(self) -> str:\n \"\"\"Get the base url.\"\"\"\n return self.servers[0].url\n[docs] def get_methods_for_path(self, path: str) -> List[str]:\n \"\"\"Return a list of valid methods for the specified 
path.\"\"\"\n path_item = self._get_path_strict(path)\n results = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} +{"id": "b6c78d96de66-5", "text": "path_item = self._get_path_strict(path)\n results = []\n for method in HTTPVerb:\n operation = getattr(path_item, method.value, None)\n if isinstance(operation, Operation):\n results.append(method.value)\n return results\n[docs] def get_parameters_for_path(self, path: str) -> List[Parameter]:\n path_item = self._get_path_strict(path)\n parameters = []\n if not path_item.parameters:\n return []\n for parameter in path_item.parameters:\n if isinstance(parameter, Reference):\n parameter = self._get_root_referenced_parameter(parameter)\n parameters.append(parameter)\n return parameters\n[docs] def get_operation(self, path: str, method: str) -> Operation:\n \"\"\"Get the operation object for a given path and HTTP method.\"\"\"\n path_item = self._get_path_strict(path)\n operation_obj = getattr(path_item, method, None)\n if not isinstance(operation_obj, Operation):\n raise ValueError(f\"No {method} method found for {path}\")\n return operation_obj\n[docs] def get_parameters_for_operation(self, operation: Operation) -> List[Parameter]:\n \"\"\"Get the components for a given operation.\"\"\"\n parameters = []\n if operation.parameters:\n for parameter in operation.parameters:\n if isinstance(parameter, Reference):\n parameter = self._get_root_referenced_parameter(parameter)\n parameters.append(parameter)\n return parameters\n[docs] def get_request_body_for_operation(\n self, operation: Operation\n ) -> Optional[RequestBody]:\n \"\"\"Get the request body for a given operation.\"\"\"\n request_body = operation.requestBody\n if isinstance(request_body, Reference):\n request_body = self._get_root_referenced_request_body(request_body)\n return request_body", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} +{"id": 
"b6c78d96de66-6", "text": "return request_body\n[docs] @staticmethod\n def get_cleaned_operation_id(operation: Operation, path: str, method: str) -> str:\n \"\"\"Get a cleaned operation id from an operation id.\"\"\"\n operation_id = operation.operationId\n if operation_id is None:\n # Replace all punctuation of any kind with underscore\n path = re.sub(r\"[^a-zA-Z0-9]\", \"_\", path.lstrip(\"/\"))\n operation_id = f\"{path}_{method}\"\n return operation_id.replace(\"-\", \"_\").replace(\".\", \"_\").replace(\"/\", \"_\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/openapi.html"} +{"id": "c9f8540f095c-0", "text": "Source code for langchain.utilities.zapier\n\"\"\"Util that can interact with Zapier NLA.\nFull docs here: https://nla.zapier.com/api/v1/docs\nNote: this wrapper currently only implements the `api_key` auth method for testing\nand server-side production use cases (using the developer's connected accounts on\nZapier.com)\nFor use-cases where LangChain + Zapier NLA is powering a user-facing application, and\nLangChain needs access to the end-user's connected accounts on Zapier.com, you'll need\nto use oauth. Review the full docs above and reach out to nla@zapier.com for\ndeveloper support.\n\"\"\"\nimport json\nfrom typing import Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom requests import Request, Session\nfrom langchain.utils import get_from_dict_or_env\n[docs]class ZapierNLAWrapper(BaseModel):\n \"\"\"Wrapper for Zapier NLA.\n Full docs here: https://nla.zapier.com/api/v1/docs\n Note: this wrapper currently only implements the `api_key` auth method for\n testing and server-side production use cases (using the developer's connected\n accounts on Zapier.com)\n For use-cases where LangChain + Zapier NLA is powering a user-facing application,\n and LangChain needs access to the end-user's connected accounts on Zapier.com,\n you'll need to use oauth. 
Review the full docs above and reach out to\n nla@zapier.com for developer support.\n \"\"\"\n zapier_nla_api_key: str\n zapier_nla_oauth_access_token: str\n zapier_nla_api_base: str = \"https://nla.zapier.com/api/v1/\"\n class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/zapier.html"} +{"id": "c9f8540f095c-1", "text": "class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def _get_session(self) -> Session:\n session = requests.Session()\n session.headers.update(\n {\n \"Accept\": \"application/json\",\n \"Content-Type\": \"application/json\",\n }\n )\n if self.zapier_nla_oauth_access_token:\n session.headers.update(\n {\"Authorization\": f\"Bearer {self.zapier_nla_oauth_access_token}\"}\n )\n else:\n session.params = {\"api_key\": self.zapier_nla_api_key}\n return session\n def _get_action_request(\n self, action_id: str, instructions: str, params: Optional[Dict] = None\n ) -> Request:\n data = params if params else {}\n data.update(\n {\n \"instructions\": instructions,\n }\n )\n return Request(\n \"POST\",\n self.zapier_nla_api_base + f\"exposed/{action_id}/execute/\",\n json=data,\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n zapier_nla_api_key_default = None\n # If there is a oauth_access_key passed in the values\n # we don't need a nla_api_key it can be blank\n if \"zapier_nla_oauth_access_token\" in values:\n zapier_nla_api_key_default = \"\"\n else:\n values[\"zapier_nla_oauth_access_token\"] = \"\"\n # we require at least one API Key\n zapier_nla_api_key = get_from_dict_or_env(\n values,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/zapier.html"} +{"id": "c9f8540f095c-2", "text": "zapier_nla_api_key = get_from_dict_or_env(\n values,\n \"zapier_nla_api_key\",\n \"ZAPIER_NLA_API_KEY\",\n zapier_nla_api_key_default,\n )\n 
values[\"zapier_nla_api_key\"] = zapier_nla_api_key\n return values\n[docs] def list(self) -> List[Dict]:\n \"\"\"Returns a list of all exposed (enabled) actions associated with\n current user (associated with the set api_key). Change your exposed\n actions here: https://nla.zapier.com/demo/start/\n The return list can be empty if no actions exposed. Else will contain\n a list of action objects:\n [{\n \"id\": str,\n \"description\": str,\n \"params\": Dict[str, str]\n }]\n `params` will always contain an `instructions` key, the only required\n param. All others optional and if provided will override any AI guesses\n (see \"understanding the AI guessing flow\" here:\n https://nla.zapier.com/api/v1/docs)\n \"\"\"\n session = self._get_session()\n response = session.get(self.zapier_nla_api_base + \"exposed/\")\n response.raise_for_status()\n return response.json()[\"results\"]\n[docs] def run(\n self, action_id: str, instructions: str, params: Optional[Dict] = None\n ) -> Dict:\n \"\"\"Executes an action that is identified by action_id, must be exposed\n (enabled) by the current user (associated with the set api_key). 
Change\n your exposed actions here: https://nla.zapier.com/demo/start/\n The return JSON is guaranteed to be less than ~500 words (350", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/zapier.html"} +{"id": "c9f8540f095c-3", "text": "The return JSON is guaranteed to be less than ~500 words (350\n tokens) making it safe to inject into the prompt of another LLM\n call.\n \"\"\"\n session = self._get_session()\n request = self._get_action_request(action_id, instructions, params)\n response = session.send(session.prepare_request(request))\n response.raise_for_status()\n return response.json()[\"result\"]\n[docs] def preview(\n self, action_id: str, instructions: str, params: Optional[Dict] = None\n ) -> Dict:\n \"\"\"Same as run, but instead of actually executing the action, will\n instead return a preview of params that have been guessed by the AI in\n case you need to explicitly review before executing.\"\"\"\n session = self._get_session()\n params = params if params else {}\n params.update({\"preview_only\": True})\n request = self._get_action_request(action_id, instructions, params)\n response = session.send(session.prepare_request(request))\n response.raise_for_status()\n return response.json()[\"input_params\"]\n[docs] def run_as_str(self, *args, **kwargs) -> str: # type: ignore[no-untyped-def]\n \"\"\"Same as run, but returns a stringified version of the JSON for\n inserting back into an LLM.\"\"\"\n data = self.run(*args, **kwargs)\n return json.dumps(data)\n[docs] def preview_as_str(self, *args, **kwargs) -> str: # type: ignore[no-untyped-def]\n \"\"\"Same as preview, but returns a stringified version of the JSON for\n inserting back into an LLM.\"\"\"\n data = self.preview(*args, **kwargs)\n return json.dumps(data)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/zapier.html"} +{"id": "c9f8540f095c-4", "text": "data = self.preview(*args, **kwargs)\n return json.dumps(data)\n[docs] def 
list_as_str(self) -> str: # type: ignore[no-untyped-def]\n \"\"\"Same as list, but returns a stringified version of the JSON for\n inserting back into an LLM.\"\"\"\n actions = self.list()\n return json.dumps(actions)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/zapier.html"} +{"id": "563046bb2716-0", "text": "Source code for langchain.utilities.scenexplain\n\"\"\"Util that calls SceneXplain.\nIn order to set this up, you need an API key for the SceneXplain API.\nYou can obtain a key by following the steps below.\n- Sign up for a free account at https://scenex.jina.ai/.\n- Navigate to the API Access page (https://scenex.jina.ai/api) and create a new API key.\n\"\"\"\nfrom typing import Dict\nimport requests\nfrom pydantic import BaseModel, BaseSettings, Field, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class SceneXplainAPIWrapper(BaseSettings, BaseModel):\n \"\"\"Wrapper for SceneXplain API.\n In order to set this up, you need an API key for the SceneXplain API.\n You can obtain a key by following the steps below.\n - Sign up for a free account at https://scenex.jina.ai/.\n - Navigate to the API Access page (https://scenex.jina.ai/api)\n and create a new API key.\n \"\"\"\n scenex_api_key: str = Field(..., env=\"SCENEX_API_KEY\")\n scenex_api_url: str = (\n \"https://us-central1-causal-diffusion.cloudfunctions.net/describe\"\n )\n def _describe_image(self, image: str) -> str:\n headers = {\n \"x-api-key\": f\"token {self.scenex_api_key}\",\n \"content-type\": \"application/json\",\n }\n payload = {\n \"data\": [\n {\n \"image\": image,\n \"algorithm\": \"Ember\",\n \"languages\": [\"en\"],\n }\n ]\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/scenexplain.html"} +{"id": "563046bb2716-1", "text": "\"languages\": [\"en\"],\n }\n ]\n }\n response = requests.post(self.scenex_api_url, headers=headers, json=payload)\n response.raise_for_status()\n result = 
response.json().get(\"result\", [])\n img = result[0] if result else {}\n return img.get(\"text\", \"\")\n[docs] @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n scenex_api_key = get_from_dict_or_env(\n values, \"scenex_api_key\", \"SCENEX_API_KEY\"\n )\n values[\"scenex_api_key\"] = scenex_api_key\n return values\n[docs] def run(self, image: str) -> str:\n \"\"\"Run SceneXplain image explainer.\"\"\"\n description = self._describe_image(image)\n if not description:\n return \"No description found.\"\n return description", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/scenexplain.html"} +{"id": "00d7fcb01817-0", "text": "Source code for langchain.utilities.wolfram_alpha\n\"\"\"Util that calls WolframAlpha.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class WolframAlphaAPIWrapper(BaseModel):\n \"\"\"Wrapper for Wolfram Alpha.\n Docs for using:\n 1. Go to wolfram alpha and sign up for a developer account\n 2. Create an app and get your APP ID\n 3. Save your APP ID into WOLFRAM_ALPHA_APPID env variable\n 4. pip install wolframalpha\n \"\"\"\n wolfram_client: Any #: :meta private:\n wolfram_alpha_appid: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n wolfram_alpha_appid = get_from_dict_or_env(\n values, \"wolfram_alpha_appid\", \"WOLFRAM_ALPHA_APPID\"\n )\n values[\"wolfram_alpha_appid\"] = wolfram_alpha_appid\n try:\n import wolframalpha\n except ImportError:\n raise ImportError(\n \"wolframalpha is not installed. 
\"\n \"Please install it with `pip install wolframalpha`\"\n )\n client = wolframalpha.Client(wolfram_alpha_appid)\n values[\"wolfram_client\"] = client\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run query through WolframAlpha and parse result.\"\"\"\n res = self.wolfram_client.query(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html"} +{"id": "00d7fcb01817-1", "text": "res = self.wolfram_client.query(query)\n try:\n assumption = next(res.pods).text\n answer = next(res.results).text\n except StopIteration:\n return \"Wolfram Alpha wasn't able to answer it\"\n if answer is None or answer == \"\":\n # We don't want to return the assumption alone if answer is empty\n return \"No good Wolfram Alpha Result was found\"\n else:\n return f\"Assumption: {assumption} \\nAnswer: {answer}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/wolfram_alpha.html"} +{"id": "033080b16b17-0", "text": "Source code for langchain.utilities.google_serper\n\"\"\"Util that calls Google Search using the Serper.dev API.\"\"\"\nfrom typing import Any, Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic.class_validators import root_validator\nfrom pydantic.main import BaseModel\nfrom typing_extensions import Literal\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GoogleSerperAPIWrapper(BaseModel):\n \"\"\"Wrapper around the Serper.dev Google Search API.\n You can create a free API key at https://serper.dev.\n To use, you should have the environment variable ``SERPER_API_KEY``\n set with your API key, or pass `serper_api_key` as a named parameter\n to the constructor.\n Example:\n .. 
code-block:: python\n from langchain import GoogleSerperAPIWrapper\n google_serper = GoogleSerperAPIWrapper()\n \"\"\"\n k: int = 10\n gl: str = \"us\"\n hl: str = \"en\"\n # \"places\" and \"images\" is available from Serper but not implemented in the\n # parser of run(). They can be used in results()\n type: Literal[\"news\", \"search\", \"places\", \"images\"] = \"search\"\n result_key_for_type = {\n \"news\": \"news\",\n \"places\": \"places\",\n \"images\": \"images\",\n \"search\": \"organic\",\n }\n tbs: Optional[str] = None\n serper_api_key: Optional[str] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} +{"id": "033080b16b17-1", "text": "arbitrary_types_allowed = True\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n serper_api_key = get_from_dict_or_env(\n values, \"serper_api_key\", \"SERPER_API_KEY\"\n )\n values[\"serper_api_key\"] = serper_api_key\n return values\n[docs] def results(self, query: str, **kwargs: Any) -> Dict:\n \"\"\"Run query through GoogleSearch.\"\"\"\n return self._google_serper_api_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n tbs=self.tbs,\n search_type=self.type,\n **kwargs,\n )\n[docs] def run(self, query: str, **kwargs: Any) -> str:\n \"\"\"Run query through GoogleSearch and parse result.\"\"\"\n results = self._google_serper_api_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n tbs=self.tbs,\n search_type=self.type,\n **kwargs,\n )\n return self._parse_results(results)\n[docs] async def aresults(self, query: str, **kwargs: Any) -> Dict:\n \"\"\"Run query through GoogleSearch.\"\"\"\n results = await self._async_google_serper_search_results(\n query,\n gl=self.gl,\n hl=self.hl,\n 
num=self.k,\n search_type=self.type,\n tbs=self.tbs,\n **kwargs,\n )\n return results\n[docs] async def arun(self, query: str, **kwargs: Any) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} +{"id": "033080b16b17-2", "text": "\"\"\"Run query through GoogleSearch and parse result async.\"\"\"\n results = await self._async_google_serper_search_results(\n query,\n gl=self.gl,\n hl=self.hl,\n num=self.k,\n search_type=self.type,\n tbs=self.tbs,\n **kwargs,\n )\n return self._parse_results(results)\n def _parse_snippets(self, results: dict) -> List[str]:\n snippets = []\n if results.get(\"answerBox\"):\n answer_box = results.get(\"answerBox\", {})\n if answer_box.get(\"answer\"):\n return [answer_box.get(\"answer\")]\n elif answer_box.get(\"snippet\"):\n return [answer_box.get(\"snippet\").replace(\"\\n\", \" \")]\n elif answer_box.get(\"snippetHighlighted\"):\n return answer_box.get(\"snippetHighlighted\")\n if results.get(\"knowledgeGraph\"):\n kg = results.get(\"knowledgeGraph\", {})\n title = kg.get(\"title\")\n entity_type = kg.get(\"type\")\n if entity_type:\n snippets.append(f\"{title}: {entity_type}.\")\n description = kg.get(\"description\")\n if description:\n snippets.append(description)\n for attribute, value in kg.get(\"attributes\", {}).items():\n snippets.append(f\"{title} {attribute}: {value}.\")\n for result in results[self.result_key_for_type[self.type]][: self.k]:\n if \"snippet\" in result:\n snippets.append(result[\"snippet\"])\n for attribute, value in result.get(\"attributes\", {}).items():\n snippets.append(f\"{attribute}: {value}.\")\n if len(snippets) == 0:\n return [\"No good Google Search Result was found\"]\n return snippets", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} +{"id": "033080b16b17-3", "text": "return [\"No good Google Search Result was found\"]\n return snippets\n def _parse_results(self, results: dict) -> 
str:\n return \" \".join(self._parse_snippets(results))\n def _google_serper_api_results(\n self, search_term: str, search_type: str = \"search\", **kwargs: Any\n ) -> dict:\n headers = {\n \"X-API-KEY\": self.serper_api_key or \"\",\n \"Content-Type\": \"application/json\",\n }\n params = {\n \"q\": search_term,\n **{key: value for key, value in kwargs.items() if value is not None},\n }\n response = requests.post(\n f\"https://google.serper.dev/{search_type}\", headers=headers, params=params\n )\n response.raise_for_status()\n search_results = response.json()\n return search_results\n async def _async_google_serper_search_results(\n self, search_term: str, search_type: str = \"search\", **kwargs: Any\n ) -> dict:\n headers = {\n \"X-API-KEY\": self.serper_api_key or \"\",\n \"Content-Type\": \"application/json\",\n }\n url = f\"https://google.serper.dev/{search_type}\"\n params = {\n \"q\": search_term,\n **{key: value for key, value in kwargs.items() if value is not None},\n }\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.post(\n url, params=params, headers=headers, raise_for_status=False\n ) as response:\n search_results = await response.json()\n else:\n async with self.aiosession.post(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} +{"id": "033080b16b17-4", "text": "else:\n async with self.aiosession.post(\n url, params=params, headers=headers, raise_for_status=True\n ) as response:\n search_results = await response.json()\n return search_results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_serper.html"} +{"id": "3bfcfe8508d9-0", "text": "Source code for langchain.utilities.twilio\n\"\"\"Util that calls Twilio.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class TwilioAPIWrapper(BaseModel):\n 
\"\"\"Messaging Client using Twilio.\n To use, you should have the ``twilio`` python package installed,\n and the environment variables ``TWILIO_ACCOUNT_SID``, ``TWILIO_AUTH_TOKEN``, and\n ``TWILIO_FROM_NUMBER``, or pass `account_sid`, `auth_token`, and `from_number` as\n named parameters to the constructor.\n Example:\n .. code-block:: python\n from langchain.utilities.twilio import TwilioAPIWrapper\n twilio = TwilioAPIWrapper(\n account_sid=\"ACxxx\",\n auth_token=\"xxx\",\n from_number=\"+10123456789\"\n )\n twilio.run('test', '+12484345508')\n \"\"\"\n client: Any #: :meta private:\n account_sid: Optional[str] = None\n \"\"\"Twilio account string identifier.\"\"\"\n auth_token: Optional[str] = None\n \"\"\"Twilio auth token.\"\"\"\n from_number: Optional[str] = None\n \"\"\"A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164) \n format, an \n [alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id), \n or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses) \n that is enabled for the type of message you want to send. Phone numbers or", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"} +{"id": "3bfcfe8508d9-1", "text": "that is enabled for the type of message you want to send. Phone numbers or \n [short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from \n Twilio also work here. You cannot, for example, spoof messages from a private \n cell phone number. 
If you are using `messaging_service_sid`, this parameter \n must be empty.\n \"\"\" # noqa: E501\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = False\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from twilio.rest import Client\n except ImportError:\n raise ImportError(\n \"Could not import twilio python package. \"\n \"Please install it with `pip install twilio`.\"\n )\n account_sid = get_from_dict_or_env(values, \"account_sid\", \"TWILIO_ACCOUNT_SID\")\n auth_token = get_from_dict_or_env(values, \"auth_token\", \"TWILIO_AUTH_TOKEN\")\n values[\"from_number\"] = get_from_dict_or_env(\n values, \"from_number\", \"TWILIO_FROM_NUMBER\"\n )\n values[\"client\"] = Client(account_sid, auth_token)\n return values\n[docs] def run(self, body: str, to: str) -> str:\n \"\"\"Run body through Twilio and respond with message sid.\n Args:\n body: The text of the message you want to send. 
Can be up to 1,600\n characters in length.\n to: The destination phone number in", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"} +{"id": "3bfcfe8508d9-2", "text": "characters in length.\n to: The destination phone number in\n [E.164](https://www.twilio.com/docs/glossary/what-e164) format for\n SMS/MMS or\n [Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)\n for other 3rd-party channels.\n \"\"\" # noqa: E501\n message = self.client.messages.create(to, from_=self.from_number, body=body)\n return message.sid", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/twilio.html"} +{"id": "fc2fa914bae9-0", "text": "Source code for langchain.utilities.google_places_api\n\"\"\"Chain that calls Google Places API.\n\"\"\"\nimport logging\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.utils import get_from_dict_or_env\n[docs]class GooglePlacesAPIWrapper(BaseModel):\n \"\"\"Wrapper around Google Places API.\n To use, you should have the ``googlemaps`` python package installed,\n **an API key for the google maps platform**,\n and the environment variable ''GPLACES_API_KEY''\n set with your API key, or pass 'gplaces_api_key'\n as a named parameter to the constructor.\n By default, this will return all the results on the input query.\n You can use the top_k_results argument to limit the number of results.\n Example:\n .. 
code-block:: python\n from langchain import GooglePlacesAPIWrapper\n gplaceapi = GooglePlacesAPIWrapper()\n \"\"\"\n gplaces_api_key: Optional[str] = None\n google_map_client: Any #: :meta private:\n top_k_results: Optional[int] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key is in your environment variable.\"\"\"\n gplaces_api_key = get_from_dict_or_env(\n values, \"gplaces_api_key\", \"GPLACES_API_KEY\"\n )\n values[\"gplaces_api_key\"] = gplaces_api_key\n try:\n import googlemaps\n values[\"google_map_client\"] = googlemaps.Client(gplaces_api_key)\n except ImportError:\n raise ImportError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"} +{"id": "fc2fa914bae9-1", "text": "except ImportError:\n raise ImportError(\n \"Could not import googlemaps python package. \"\n \"Please install it with `pip install googlemaps`.\"\n )\n return values\n[docs] def run(self, query: str) -> str:\n \"\"\"Run Places search and get k number of places that exists that match.\"\"\"\n search_results = self.google_map_client.places(query)[\"results\"]\n num_to_return = len(search_results)\n places = []\n if num_to_return == 0:\n return \"Google Places did not find any places that match the description\"\n num_to_return = (\n num_to_return\n if self.top_k_results is None\n else min(num_to_return, self.top_k_results)\n )\n for i in range(num_to_return):\n result = search_results[i]\n details = self.fetch_place_details(result[\"place_id\"])\n if details is not None:\n places.append(details)\n return \"\\n\".join([f\"{i+1}. 
{item}\" for i, item in enumerate(places)])\n[docs] def fetch_place_details(self, place_id: str) -> Optional[str]:\n try:\n place_details = self.google_map_client.place(place_id)\n formatted_details = self.format_place_details(place_details)\n return formatted_details\n except Exception as e:\n logging.error(f\"An error occurred while fetching place details: {e}\")\n return None\n[docs] def format_place_details(self, place_details: Dict[str, Any]) -> Optional[str]:\n try:\n name = place_details.get(\"result\", {}).get(\"name\", \"Unknown\")\n address = place_details.get(\"result\", {}).get(\n \"formatted_address\", \"Unknown\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"} +{"id": "fc2fa914bae9-2", "text": "\"formatted_address\", \"Unknown\"\n )\n phone_number = place_details.get(\"result\", {}).get(\n \"formatted_phone_number\", \"Unknown\"\n )\n website = place_details.get(\"result\", {}).get(\"website\", \"Unknown\")\n formatted_details = (\n f\"{name}\\nAddress: {address}\\n\"\n f\"Phone: {phone_number}\\nWebsite: {website}\\n\\n\"\n )\n return formatted_details\n except Exception as e:\n logging.error(f\"An error occurred while formatting place details: {e}\")\n return None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/utilities/google_places_api.html"} +{"id": "5d67ee155530-0", "text": "Source code for langchain.document_loaders.tomarkdown\n\"\"\"Loader that loads HTML to markdown using 2markdown.\"\"\"\nfrom __future__ import annotations\nfrom typing import Iterator, List\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ToMarkdownLoader(BaseLoader):\n \"\"\"Loader that loads HTML to markdown using 2markdown.\"\"\"\n def __init__(self, url: str, api_key: str):\n \"\"\"Initialize with url and api key.\"\"\"\n self.url = url\n self.api_key = api_key\n[docs] def lazy_load(\n self,\n 
) -> Iterator[Document]:\n \"\"\"Lazily load the file.\"\"\"\n response = requests.post(\n \"https://2markdown.com/api/2md\",\n headers={\"X-Api-Key\": self.api_key},\n json={\"url\": self.url},\n )\n text = response.json()[\"article\"]\n metadata = {\"source\": self.url}\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/tomarkdown.html"} +{"id": "747698d8bf3f-0", "text": "Source code for langchain.document_loaders.conllu\n\"\"\"Load CoNLL-U files.\"\"\"\nimport csv\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class CoNLLULoader(BaseLoader):\n \"\"\"Load CoNLL-U files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from file path.\"\"\"\n with open(self.file_path, encoding=\"utf8\") as f:\n tsv = list(csv.reader(f, delimiter=\"\\t\"))\n # If len(line) > 1, the line is not a comment\n lines = [line for line in tsv if len(line) > 1]\n text = \"\"\n for i, line in enumerate(lines):\n # Do not add a space after a punctuation mark or at the end of the sentence\n if line[9] == \"SpaceAfter=No\" or i == len(lines) - 1:\n text += line[1]\n else:\n text += line[1] + \" \"\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/conllu.html"} +{"id": "4e7bd4e6a764-0", "text": "Source code for langchain.document_loaders.toml\nimport json\nfrom pathlib import Path\nfrom typing import Iterator, List, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class 
TomlLoader(BaseLoader):\n \"\"\"\n A TOML document loader that inherits from the BaseLoader class.\n This class can be initialized with either a single source file or a source\n directory containing TOML files.\n \"\"\"\n def __init__(self, source: Union[str, Path]):\n \"\"\"Initialize the TomlLoader with a source file or directory.\"\"\"\n self.source = Path(source)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load and return all documents.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazily load the TOML documents from the source file or directory.\"\"\"\n import tomli\n if self.source.is_file() and self.source.suffix == \".toml\":\n files = [self.source]\n elif self.source.is_dir():\n files = list(self.source.glob(\"**/*.toml\"))\n else:\n raise ValueError(\"Invalid source path or file type\")\n for file_path in files:\n with file_path.open(\"r\", encoding=\"utf-8\") as file:\n content = file.read()\n try:\n data = tomli.loads(content)\n doc = Document(\n page_content=json.dumps(data),\n metadata={\"source\": str(file_path)},\n )\n yield doc\n except tomli.TOMLDecodeError as e:\n print(f\"Error parsing TOML file {file_path}: {e}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/toml.html"} +{"id": "a93913bb9282-0", "text": "Source code for langchain.document_loaders.url_playwright\n\"\"\"Loader that uses Playwright to load a page, then uses unstructured to load the html.\n\"\"\"\nimport logging\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class PlaywrightURLLoader(BaseLoader):\n \"\"\"Loader that uses Playwright to load a page and unstructured to load the html.\n This is useful for loading pages that require javascript to render.\n Attributes:\n urls (List[str]): List of URLs to load.\n continue_on_failure (bool): If 
True, continue loading other URLs on failure.\n headless (bool): If True, the browser will run in headless mode.\n \"\"\"\n def __init__(\n self,\n urls: List[str],\n continue_on_failure: bool = True,\n headless: bool = True,\n remove_selectors: Optional[List[str]] = None,\n ):\n \"\"\"Load a list of URLs using Playwright and unstructured.\"\"\"\n try:\n import playwright # noqa:F401\n except ImportError:\n raise ImportError(\n \"playwright package not found, please install it with \"\n \"`pip install playwright`\"\n )\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ValueError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n self.urls = urls\n self.continue_on_failure = continue_on_failure\n self.headless = headless\n self.remove_selectors = remove_selectors\n[docs] def load(self) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url_playwright.html"} +{"id": "a93913bb9282-1", "text": "[docs] def load(self) -> List[Document]:\n \"\"\"Load the specified URLs using Playwright and create Document instances.\n Returns:\n List[Document]: A list of Document instances with loaded content.\n \"\"\"\n from playwright.sync_api import sync_playwright\n from unstructured.partition.html import partition_html\n docs: List[Document] = list()\n with sync_playwright() as p:\n browser = p.chromium.launch(headless=self.headless)\n for url in self.urls:\n try:\n page = browser.new_page()\n page.goto(url)\n for selector in self.remove_selectors or []:\n elements = page.locator(selector).all()\n for element in elements:\n if element.is_visible():\n element.evaluate(\"element => element.remove()\")\n page_source = page.content()\n elements = partition_html(text=page_source)\n text = \"\\n\\n\".join([str(el) for el in elements])\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n except Exception as e:\n if 
self.continue_on_failure:\n logger.error(\n f\"Error fetching or processing {url}, exception: {e}\"\n )\n else:\n raise e\n browser.close()\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url_playwright.html"} +{"id": "17d6c837a9de-0", "text": "Source code for langchain.document_loaders.gcs_directory\n\"\"\"Loading logic for loading documents from an GCS directory.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.gcs_file import GCSFileLoader\n[docs]class GCSDirectoryLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from GCS.\"\"\"\n def __init__(self, project_name: str, bucket: str, prefix: str = \"\"):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.project_name = project_name\n self.bucket = bucket\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from google.cloud import storage\n except ImportError:\n raise ValueError(\n \"Could not import google-cloud-storage python package. 
\"\n \"Please install it with `pip install google-cloud-storage`.\"\n )\n client = storage.Client(project=self.project_name)\n docs = []\n for blob in client.list_blobs(self.bucket, prefix=self.prefix):\n # we shall just skip directories since GCSFileLoader creates\n # intermediate directories on the fly\n if blob.name.endswith(\"/\"):\n continue\n loader = GCSFileLoader(self.project_name, self.bucket, blob.name)\n docs.extend(loader.load())\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gcs_directory.html"} +{"id": "eb2aea6e6db9-0", "text": "Source code for langchain.document_loaders.joplin\nimport json\nimport urllib\nfrom datetime import datetime\nfrom typing import Iterator, List, Optional\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.schema import Document\nfrom langchain.utils import get_from_env\nLINK_NOTE_TEMPLATE = \"joplin://x-callback-url/openNote?id={id}\"\n[docs]class JoplinLoader(BaseLoader):\n \"\"\"\n Loader that fetches notes from Joplin.\n In order to use this loader, you need to have Joplin running with the\n Web Clipper enabled (look for \"Web Clipper\" in the app settings).\n To get the access token, you need to go to the Web Clipper options and\n under \"Advanced Options\" you will find the access token.\n You can find more information about the Web Clipper service here:\n https://joplinapp.org/clipper/\n \"\"\"\n def __init__(\n self,\n access_token: Optional[str] = None,\n port: int = 41184,\n host: str = \"localhost\",\n ) -> None:\n access_token = access_token or get_from_env(\n \"access_token\", \"JOPLIN_ACCESS_TOKEN\"\n )\n base_url = f\"http://{host}:{port}\"\n self._get_note_url = (\n f\"{base_url}/notes?token={access_token}\"\n f\"&fields=id,parent_id,title,body,created_time,updated_time&page={{page}}\"\n )\n self._get_folder_url = (\n f\"{base_url}/folders/{{id}}?token={access_token}&fields=title\"\n )\n self._get_tag_url = (", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"} +{"id": "eb2aea6e6db9-1", "text": ")\n self._get_tag_url = (\n f\"{base_url}/notes/{{id}}/tags?token={access_token}&fields=title\"\n )\n def _get_notes(self) -> Iterator[Document]:\n has_more = True\n page = 1\n while has_more:\n req_note = urllib.request.Request(self._get_note_url.format(page=page))\n with urllib.request.urlopen(req_note) as response:\n json_data = json.loads(response.read().decode())\n for note in json_data[\"items\"]:\n metadata = {\n \"source\": LINK_NOTE_TEMPLATE.format(id=note[\"id\"]),\n \"folder\": self._get_folder(note[\"parent_id\"]),\n \"tags\": self._get_tags(note[\"id\"]),\n \"title\": note[\"title\"],\n \"created_time\": self._convert_date(note[\"created_time\"]),\n \"updated_time\": self._convert_date(note[\"updated_time\"]),\n }\n yield Document(page_content=note[\"body\"], metadata=metadata)\n has_more = json_data[\"has_more\"]\n page += 1\n def _get_folder(self, folder_id: str) -> str:\n req_folder = urllib.request.Request(self._get_folder_url.format(id=folder_id))\n with urllib.request.urlopen(req_folder) as response:\n json_data = json.loads(response.read().decode())\n return json_data[\"title\"]\n def _get_tags(self, note_id: str) -> List[str]:\n req_tag = urllib.request.Request(self._get_tag_url.format(id=note_id))\n with urllib.request.urlopen(req_tag) as response:\n json_data = json.loads(response.read().decode())\n return [tag[\"title\"] for tag in json_data[\"items\"]]\n def _convert_date(self, date: int) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"} +{"id": "eb2aea6e6db9-2", "text": "def _convert_date(self, date: int) -> str:\n return datetime.fromtimestamp(date / 1000).strftime(\"%Y-%m-%d %H:%M:%S\")\n[docs] def lazy_load(self) -> Iterator[Document]:\n yield from self._get_notes()\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/joplin.html"} +{"id": "3880a5628382-0", "text": "Source code for langchain.document_loaders.powerpoint\n\"\"\"Loader that loads powerpoint files.\"\"\"\nimport os\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredPowerPointLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load powerpoint files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.__version__ import __version__ as __unstructured_version__\n from unstructured.file_utils.filetype import FileType, detect_filetype\n unstructured_version = tuple(\n [int(x) for x in __unstructured_version__.split(\".\")]\n )\n # NOTE(MthwRobinson) - magic will raise an import error if the libmagic\n # system dependency isn't installed. If it's not installed, we'll just\n # check the file extension\n try:\n import magic # noqa: F401\n is_ppt = detect_filetype(self.file_path) == FileType.PPT\n except ImportError:\n _, extension = os.path.splitext(str(self.file_path))\n is_ppt = extension == \".ppt\"\n if is_ppt and unstructured_version < (0, 4, 11):\n raise ValueError(\n f\"You are on unstructured version {__unstructured_version__}. \"\n \"Partitioning .ppt files is only supported in unstructured>=0.4.11. 
\"\n \"Please upgrade the unstructured package and try again.\"\n )\n if is_ppt:\n from unstructured.partition.ppt import partition_ppt\n return partition_ppt(filename=self.file_path, **self.unstructured_kwargs)\n else:\n from unstructured.partition.pptx import partition_pptx\n return partition_pptx(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/powerpoint.html"} +{"id": "22748eb0f6f5-0", "text": "Source code for langchain.document_loaders.snowflake_loader\nfrom __future__ import annotations\nfrom typing import Any, Dict, Iterator, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SnowflakeLoader(BaseLoader):\n \"\"\"Loads a query result from Snowflake into a list of documents.\n Each document represents one row of the result. The `page_content_columns`\n are written into the `page_content` of the document. The `metadata_columns`\n are written into the `metadata` of the document. By default, all columns\n are written into the `page_content` and none into the `metadata`.\n \"\"\"\n def __init__(\n self,\n query: str,\n user: str,\n password: str,\n account: str,\n warehouse: str,\n role: str,\n database: str,\n schema: str,\n parameters: Optional[Dict[str, Any]] = None,\n page_content_columns: Optional[List[str]] = None,\n metadata_columns: Optional[List[str]] = None,\n ):\n \"\"\"Initialize Snowflake document loader.\n Args:\n query: The query to run in Snowflake.\n user: Snowflake user.\n password: Snowflake password.\n account: Snowflake account.\n warehouse: Snowflake warehouse.\n role: Snowflake role.\n database: Snowflake database\n schema: Snowflake schema\n page_content_columns: Optional. Columns written to Document `page_content`.\n metadata_columns: Optional. 
Columns written to Document `metadata`.\n \"\"\"\n self.query = query\n self.user = user\n self.password = password\n self.account = account\n self.warehouse = warehouse", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"} +{"id": "22748eb0f6f5-1", "text": "self.password = password\n self.account = account\n self.warehouse = warehouse\n self.role = role\n self.database = database\n self.schema = schema\n self.parameters = parameters\n self.page_content_columns = (\n page_content_columns if page_content_columns is not None else [\"*\"]\n )\n self.metadata_columns = metadata_columns if metadata_columns is not None else []\n def _execute_query(self) -> List[Dict[str, Any]]:\n try:\n import snowflake.connector\n except ImportError as ex:\n raise ValueError(\n \"Could not import snowflake-connector-python package. \"\n \"Please install it with `pip install snowflake-connector-python`.\"\n ) from ex\n conn = snowflake.connector.connect(\n user=self.user,\n password=self.password,\n account=self.account,\n warehouse=self.warehouse,\n role=self.role,\n database=self.database,\n schema=self.schema,\n parameters=self.parameters,\n )\n try:\n cur = conn.cursor()\n cur.execute(\"USE DATABASE \" + self.database)\n cur.execute(\"USE SCHEMA \" + self.schema)\n cur.execute(self.query, self.parameters)\n query_result = cur.fetchall()\n column_names = [column[0] for column in cur.description]\n query_result = [dict(zip(column_names, row)) for row in query_result]\n except Exception as e:\n print(f\"An error occurred: {e}\")\n query_result = []\n finally:\n cur.close()\n return query_result\n def _get_columns(\n self, query_result: List[Dict[str, Any]]\n ) -> Tuple[List[str], List[str]]:\n page_content_columns = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"} +{"id": "22748eb0f6f5-2", "text": ") -> Tuple[List[str], List[str]]:\n page_content_columns = 
(\n self.page_content_columns if self.page_content_columns else []\n )\n metadata_columns = self.metadata_columns if self.metadata_columns else []\n if page_content_columns is None and query_result:\n page_content_columns = list(query_result[0].keys())\n if metadata_columns is None:\n metadata_columns = []\n return page_content_columns or [], metadata_columns\n[docs] def lazy_load(self) -> Iterator[Document]:\n query_result = self._execute_query()\n if isinstance(query_result, Exception):\n print(f\"An error occurred during the query: {query_result}\")\n return []\n page_content_columns, metadata_columns = self._get_columns(query_result)\n if \"*\" in page_content_columns:\n page_content_columns = list(query_result[0].keys())\n for row in query_result:\n page_content = \"\\n\".join(\n f\"{k}: {v}\" for k, v in row.items() if k in page_content_columns\n )\n metadata = {k: v for k, v in row.items() if k in metadata_columns}\n doc = Document(page_content=page_content, metadata=metadata)\n yield doc\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/snowflake_loader.html"} +{"id": "f6fdefc65945-0", "text": "Source code for langchain.document_loaders.rtf\n\"\"\"Loader that loads rich text files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n satisfies_min_unstructured_version,\n)\n[docs]class UnstructuredRTFLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load rtf files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n min_unstructured_version = \"0.5.12\"\n if not satisfies_min_unstructured_version(min_unstructured_version):\n raise ValueError(\n \"Partitioning rtf files is only supported in \"\n f\"unstructured>={min_unstructured_version}.\"\n )\n 
super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.rtf import partition_rtf\n return partition_rtf(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/rtf.html"} +{"id": "444f45e2bac4-0", "text": "Source code for langchain.document_loaders.notebook\n\"\"\"Loader that loads .ipynb notebook files.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef concatenate_cells(\n cell: dict, include_outputs: bool, max_output_length: int, traceback: bool\n) -> str:\n \"\"\"Combine cells information in a readable format ready to be used.\"\"\"\n cell_type = cell[\"cell_type\"]\n source = cell[\"source\"]\n output = cell[\"outputs\"]\n if include_outputs and cell_type == \"code\" and output:\n if \"ename\" in output[0].keys():\n error_name = output[0][\"ename\"]\n error_value = output[0][\"evalue\"]\n if traceback:\n traceback = output[0][\"traceback\"]\n return (\n f\"'{cell_type}' cell: '{source}'\\n, gives error '{error_name}',\"\n f\" with description '{error_value}'\\n\"\n f\"and traceback '{traceback}'\\n\\n\"\n )\n else:\n return (\n f\"'{cell_type}' cell: '{source}'\\n, gives error '{error_name}',\"\n f\"with description '{error_value}'\\n\\n\"\n )\n elif output[0][\"output_type\"] == \"stream\":\n output = output[0][\"text\"]\n min_output = min(max_output_length, len(output))\n return (\n f\"'{cell_type}' cell: '{source}'\\n with \"\n f\"output: '{output[:min_output]}'\\n\\n\"\n )\n else:\n return f\"'{cell_type}' cell: '{source}'\\n\\n\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"} +{"id": "444f45e2bac4-1", "text": "return f\"'{cell_type}' cell: '{source}'\\n\\n\"\n return \"\"\ndef remove_newlines(x: 
Any) -> Any:\n \"\"\"Recursively remove newlines, no matter the data structure they are stored in.\"\"\"\n import pandas as pd\n if isinstance(x, str):\n return x.replace(\"\\n\", \"\")\n elif isinstance(x, list):\n return [remove_newlines(elem) for elem in x]\n elif isinstance(x, pd.DataFrame):\n return x.applymap(remove_newlines)\n else:\n return x\n[docs]class NotebookLoader(BaseLoader):\n \"\"\"Loader that loads .ipynb notebook files.\"\"\"\n def __init__(\n self,\n path: str,\n include_outputs: bool = False,\n max_output_length: int = 10,\n remove_newline: bool = False,\n traceback: bool = False,\n ):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n self.include_outputs = include_outputs\n self.max_output_length = max_output_length\n self.remove_newline = remove_newline\n self.traceback = traceback\n[docs] def load(\n self,\n ) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import pandas as pd\n except ImportError:\n raise ImportError(\n \"pandas is needed for Notebook Loader, \"\n \"please install with `pip install pandas`\"\n )\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n data = pd.json_normalize(d[\"cells\"])\n filtered_data = data[[\"cell_type\", \"source\", \"outputs\"]]\n if self.remove_newline:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"} +{"id": "444f45e2bac4-2", "text": "if self.remove_newline:\n filtered_data = filtered_data.applymap(remove_newlines)\n text = filtered_data.apply(\n lambda x: concatenate_cells(\n x, self.include_outputs, self.max_output_length, self.traceback\n ),\n axis=1,\n ).str.cat(sep=\" \")\n metadata = {\"source\": str(p)}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notebook.html"} +{"id": "87189c6eb31d-0", "text": "Source code for langchain.document_loaders.gitbook\n\"\"\"Loader that loads 
GitBook.\"\"\"\nfrom typing import Any, List, Optional\nfrom urllib.parse import urljoin, urlparse\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class GitbookLoader(WebBaseLoader):\n \"\"\"Load GitBook data.\n 1. load from either a single page, or\n 2. load all (relative) paths in the navbar.\n \"\"\"\n def __init__(\n self,\n web_page: str,\n load_all_paths: bool = False,\n base_url: Optional[str] = None,\n content_selector: str = \"main\",\n ):\n \"\"\"Initialize with web page and whether to load all paths.\n Args:\n web_page: The web page to load or the starting point from where\n relative paths are discovered.\n load_all_paths: If set to True, all relative paths in the navbar\n are loaded instead of only `web_page`.\n base_url: If `load_all_paths` is True, the relative paths are\n appended to this base url. Defaults to `web_page` if not set.\n \"\"\"\n self.base_url = base_url or web_page\n if self.base_url.endswith(\"/\"):\n self.base_url = self.base_url[:-1]\n if load_all_paths:\n # set web_path to the sitemap if we want to crawl all paths\n web_paths = f\"{self.base_url}/sitemap.xml\"\n else:\n web_paths = web_page\n super().__init__(web_paths)\n self.load_all_paths = load_all_paths\n self.content_selector = content_selector\n[docs] def load(self) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gitbook.html"} +{"id": "87189c6eb31d-1", "text": "[docs] def load(self) -> List[Document]:\n \"\"\"Fetch text from one single GitBook page.\"\"\"\n if self.load_all_paths:\n soup_info = self.scrape()\n relative_paths = self._get_paths(soup_info)\n documents = []\n for path in relative_paths:\n url = urljoin(self.base_url, path)\n print(f\"Fetching text from {url}\")\n soup_info = self._scrape(url)\n documents.append(self._get_document(soup_info, url))\n return [d for d in documents if d]\n else:\n soup_info = self.scrape()\n 
documents = [self._get_document(soup_info, self.web_path)]\n return [d for d in documents if d]\n def _get_document(\n self, soup: Any, custom_url: Optional[str] = None\n ) -> Optional[Document]:\n \"\"\"Fetch content from page and return Document.\"\"\"\n page_content_raw = soup.find(self.content_selector)\n if not page_content_raw:\n return None\n content = page_content_raw.get_text(separator=\"\\n\").strip()\n title_if_exists = page_content_raw.find(\"h1\")\n title = title_if_exists.text if title_if_exists else \"\"\n metadata = {\"source\": custom_url or self.web_path, \"title\": title}\n return Document(page_content=content, metadata=metadata)\n def _get_paths(self, soup: Any) -> List[str]:\n \"\"\"Fetch all relative paths in the navbar.\"\"\"\n return [urlparse(loc.text).path for loc in soup.find_all(\"loc\")]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gitbook.html"} +{"id": "f492ec639bad-0", "text": "Source code for langchain.document_loaders.web_base\n\"\"\"Web base loader class.\"\"\"\nimport asyncio\nimport logging\nimport warnings\nfrom typing import Any, Dict, Iterator, List, Optional, Union\nimport aiohttp\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\ndefault_header_template = {\n \"User-Agent\": \"\",\n \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*\"\n \";q=0.8\",\n \"Accept-Language\": \"en-US,en;q=0.5\",\n \"Referer\": \"https://www.google.com/\",\n \"DNT\": \"1\",\n \"Connection\": \"keep-alive\",\n \"Upgrade-Insecure-Requests\": \"1\",\n}\ndef _build_metadata(soup: Any, url: str) -> dict:\n \"\"\"Build metadata from BeautifulSoup output.\"\"\"\n metadata = {\"source\": url}\n if title := soup.find(\"title\"):\n metadata[\"title\"] = title.get_text()\n if description := soup.find(\"meta\", attrs={\"name\": \"description\"}):\n 
metadata[\"description\"] = description.get(\"content\", None)\n if html := soup.find(\"html\"):\n metadata[\"language\"] = html.get(\"lang\", None)\n return metadata\n[docs]class WebBaseLoader(BaseLoader):\n \"\"\"Loader that uses urllib and beautiful soup to load webpages.\"\"\"\n web_paths: List[str]\n requests_per_second: int = 2\n \"\"\"Max number of concurrent requests to make.\"\"\"\n default_parser: str = \"html.parser\"\n \"\"\"Default parser to use for BeautifulSoup.\"\"\"\n requests_kwargs: Dict[str, Any] = {}\n \"\"\"kwargs for requests\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} +{"id": "f492ec639bad-1", "text": "requests_kwargs: Dict[str, Any] = {}\n \"\"\"kwargs for requests\"\"\"\n bs_get_text_kwargs: Dict[str, Any] = {}\n \"\"\"kwargs for beatifulsoup4 get_text\"\"\"\n def __init__(\n self,\n web_path: Union[str, List[str]],\n header_template: Optional[dict] = None,\n verify: Optional[bool] = True,\n ):\n \"\"\"Initialize with webpage path.\"\"\"\n # TODO: Deprecate web_path in favor of web_paths, and remove this\n # left like this because there are a number of loaders that expect single\n # urls\n if isinstance(web_path, str):\n self.web_paths = [web_path]\n elif isinstance(web_path, List):\n self.web_paths = web_path\n self.session = requests.Session()\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ValueError(\n \"bs4 package not found, please install it with \" \"`pip install bs4`\"\n )\n # Choose to verify\n self.verify = verify\n headers = header_template or default_header_template\n if not headers.get(\"User-Agent\"):\n try:\n from fake_useragent import UserAgent\n headers[\"User-Agent\"] = UserAgent().random\n except ImportError:\n logger.info(\n \"fake_useragent not found, using default user agent.\"\n \"To get a realistic header for requests, \"\n \"`pip install fake_useragent`.\"\n )\n self.session.headers = dict(headers)\n @property\n def web_path(self) 
-> str:\n if len(self.web_paths) > 1:\n raise ValueError(\"Multiple webpaths found.\")\n return self.web_paths[0]\n async def _fetch(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} +{"id": "f492ec639bad-2", "text": "return self.web_paths[0]\n async def _fetch(\n self, url: str, retries: int = 3, cooldown: int = 2, backoff: float = 1.5\n ) -> str:\n # For SiteMap SSL verification\n if not self.requests_kwargs.get(\"verify\", True):\n connector = aiohttp.TCPConnector(ssl=False)\n else:\n connector = None\n async with aiohttp.ClientSession(connector=connector) as session:\n for i in range(retries):\n try:\n async with session.get(\n url, headers=self.session.headers, verify=self.verify\n ) as response:\n return await response.text()\n except aiohttp.ClientConnectionError as e:\n if i == retries - 1:\n raise\n else:\n logger.warning(\n f\"Error fetching {url} with attempt \"\n f\"{i + 1}/{retries}: {e}. Retrying...\"\n )\n await asyncio.sleep(cooldown * backoff**i)\n raise ValueError(\"retry count exceeded\")\n async def _fetch_with_rate_limit(\n self, url: str, semaphore: asyncio.Semaphore\n ) -> str:\n async with semaphore:\n return await self._fetch(url)\n[docs] async def fetch_all(self, urls: List[str]) -> Any:\n \"\"\"Fetch all urls concurrently with rate limiting.\"\"\"\n semaphore = asyncio.Semaphore(self.requests_per_second)\n tasks = []\n for url in urls:\n task = asyncio.ensure_future(self._fetch_with_rate_limit(url, semaphore))\n tasks.append(task)\n try:\n from tqdm.asyncio import tqdm_asyncio\n return await tqdm_asyncio.gather(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} +{"id": "f492ec639bad-3", "text": "from tqdm.asyncio import tqdm_asyncio\n return await tqdm_asyncio.gather(\n *tasks, desc=\"Fetching pages\", ascii=True, mininterval=1\n )\n except ImportError:\n warnings.warn(\"For better logging of progress, `pip install 
tqdm`\")\n return await asyncio.gather(*tasks)\n @staticmethod\n def _check_parser(parser: str) -> None:\n \"\"\"Check that parser is valid for bs4.\"\"\"\n valid_parsers = [\"html.parser\", \"lxml\", \"xml\", \"lxml-xml\", \"html5lib\"]\n if parser not in valid_parsers:\n raise ValueError(\n \"`parser` must be one of \" + \", \".join(valid_parsers) + \".\"\n )\n[docs] def scrape_all(self, urls: List[str], parser: Union[str, None] = None) -> List[Any]:\n \"\"\"Fetch all urls, then return soups for all results.\"\"\"\n from bs4 import BeautifulSoup\n results = asyncio.run(self.fetch_all(urls))\n final_results = []\n for i, result in enumerate(results):\n url = urls[i]\n if parser is None:\n if url.endswith(\".xml\"):\n parser = \"xml\"\n else:\n parser = self.default_parser\n self._check_parser(parser)\n final_results.append(BeautifulSoup(result, parser))\n return final_results\n def _scrape(self, url: str, parser: Union[str, None] = None) -> Any:\n from bs4 import BeautifulSoup\n if parser is None:\n if url.endswith(\".xml\"):\n parser = \"xml\"\n else:\n parser = self.default_parser\n self._check_parser(parser)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} +{"id": "f492ec639bad-4", "text": "else:\n parser = self.default_parser\n self._check_parser(parser)\n html_doc = self.session.get(url, verify=self.verify, **self.requests_kwargs)\n html_doc.encoding = html_doc.apparent_encoding\n return BeautifulSoup(html_doc.text, parser)\n[docs] def scrape(self, parser: Union[str, None] = None) -> Any:\n \"\"\"Scrape data from webpage and return it in BeautifulSoup format.\"\"\"\n if parser is None:\n parser = self.default_parser\n return self._scrape(self.web_path, parser)\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load text from the url(s) in web_path.\"\"\"\n for path in self.web_paths:\n soup = self._scrape(path)\n text = soup.get_text(**self.bs_get_text_kwargs)\n metadata = 
_build_metadata(soup, path)\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load text from the url(s) in web_path.\"\"\"\n return list(self.lazy_load())\n[docs] def aload(self) -> List[Document]:\n \"\"\"Load text from the urls in web_path async into Documents.\"\"\"\n results = self.scrape_all(self.web_paths)\n docs = []\n for i in range(len(results)):\n soup = results[i]\n text = soup.get_text(**self.bs_get_text_kwargs)\n metadata = _build_metadata(soup, self.web_paths[i])\n docs.append(Document(page_content=text, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/web_base.html"} +{"id": "83ee2e20ef7e-0", "text": "Source code for langchain.document_loaders.bilibili\nimport json\nimport re\nimport warnings\nfrom typing import List, Tuple\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class BiliBiliLoader(BaseLoader):\n \"\"\"Loader that loads bilibili transcripts.\"\"\"\n def __init__(self, video_urls: List[str]):\n \"\"\"Initialize with bilibili url.\"\"\"\n self.video_urls = video_urls\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from bilibili url.\"\"\"\n results = []\n for url in self.video_urls:\n transcript, video_info = self._get_bilibili_subs_and_info(url)\n doc = Document(page_content=transcript, metadata=video_info)\n results.append(doc)\n return results\n def _get_bilibili_subs_and_info(self, url: str) -> Tuple[str, dict]:\n try:\n from bilibili_api import sync, video\n except ImportError:\n raise ValueError(\n \"requests package not found, please install it with \"\n \"`pip install bilibili-api-python`\"\n )\n bvid = re.search(r\"BV\\w+\", url)\n if bvid is not None:\n v = video.Video(bvid=bvid.group())\n else:\n aid = re.search(r\"av[0-9]+\", url)\n if aid is not None:\n try:\n v = video.Video(aid=int(aid.group()[2:]))\n except 
AttributeError:\n                    raise ValueError(f\"{url} is not a bilibili URL.\")\n            else:\n                raise ValueError(f\"{url} is not a bilibili URL.\")\n        video_info = sync(v.get_info())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bilibili.html"}
{"id": "83ee2e20ef7e-1", "text": "video_info = sync(v.get_info())\n        video_info.update({\"url\": url})\n        # Get subtitle url\n        subtitle = video_info.pop(\"subtitle\")\n        sub_list = subtitle[\"list\"]\n        if sub_list:\n            sub_url = sub_list[0][\"subtitle_url\"]\n            result = requests.get(sub_url)\n            raw_sub_titles = json.loads(result.content)[\"body\"]\n            raw_transcript = \" \".join([c[\"content\"] for c in raw_sub_titles])\n            raw_transcript_with_meta_info = (\n                f\"Video Title: {video_info['title']},\"\n                f\"description: {video_info['desc']}\\n\\n\"\n                f\"Transcript: {raw_transcript}\"\n            )\n            return raw_transcript_with_meta_info, video_info\n        else:\n            raw_transcript = \"\"\n            warnings.warn(\n                f\"\"\"\n                No subtitles found for video: {url}.\n                Returning an empty transcript.\n                \"\"\"\n            )\n            return raw_transcript, video_info", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bilibili.html"}
{"id": "85082a986da9-0", "text": "Source code for langchain.document_loaders.diffbot\n\"\"\"Loader that uses Diffbot to load webpages in text format.\"\"\"\nimport logging\nfrom typing import Any, List\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class DiffbotLoader(BaseLoader):\n    \"\"\"Loader that loads Diffbot file json.\"\"\"\n    def __init__(\n        self, api_token: str, urls: List[str], continue_on_failure: bool = True\n    ):\n        \"\"\"Initialize with API token and the URLs to load.\"\"\"\n        self.api_token = api_token\n        self.urls = urls\n        self.continue_on_failure = continue_on_failure\n    def _diffbot_api_url(self, diffbot_api: str) -> str:\n        return f\"https://api.diffbot.com/v3/{diffbot_api}\"\n 
def _get_diffbot_data(self, url: str) -> Any:\n \"\"\"Get Diffbot file from Diffbot REST API.\"\"\"\n # TODO: Add support for other Diffbot APIs\n diffbot_url = self._diffbot_api_url(\"article\")\n params = {\n \"token\": self.api_token,\n \"url\": url,\n }\n response = requests.get(diffbot_url, params=params, timeout=10)\n # TODO: handle non-ok errors\n return response.json() if response.ok else {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Extract text from Diffbot on all the URLs and return Document instances\"\"\"\n docs: List[Document] = list()\n for url in self.urls:\n try:\n data = self._get_diffbot_data(url)\n text = data[\"objects\"][0][\"text\"] if \"objects\" in data else \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/diffbot.html"} +{"id": "85082a986da9-1", "text": "text = data[\"objects\"][0][\"text\"] if \"objects\" in data else \"\"\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n except Exception as e:\n if self.continue_on_failure:\n logger.error(f\"Error fetching or processing {url}, exception: {e}\")\n else:\n raise e\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/diffbot.html"} +{"id": "1a286cdadc1f-0", "text": "Source code for langchain.document_loaders.csv_loader\nimport csv\nfrom typing import Any, Dict, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class CSVLoader(BaseLoader):\n \"\"\"Loads a CSV file into a list of documents.\n Each document represents one row of the CSV file. 
Every row is converted into a\n    key/value pair and outputted to a new line in the document's page_content.\n    The source for each document loaded from csv is set to the value of the\n    `file_path` argument for all documents by default.\n    You can override this by setting the `source_column` argument to the\n    name of a column in the CSV file.\n    The source of each document will then be set to the value of the column\n    with the name specified in `source_column`.\n    Output Example:\n        .. code-block:: txt\n            column1: value1\n            column2: value2\n            column3: value3\n    \"\"\"\n    def __init__(\n        self,\n        file_path: str,\n        source_column: Optional[str] = None,\n        csv_args: Optional[Dict] = None,\n        encoding: Optional[str] = None,\n    ):\n        self.file_path = file_path\n        self.source_column = source_column\n        self.encoding = encoding\n        self.csv_args = csv_args or {}\n[docs]    def load(self) -> List[Document]:\n        \"\"\"Load data into document objects.\"\"\"\n        docs = []\n        with open(self.file_path, newline=\"\", encoding=self.encoding) as csvfile:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/csv_loader.html"}
{"id": "1a286cdadc1f-1", "text": "with open(self.file_path, newline=\"\", encoding=self.encoding) as csvfile:\n            csv_reader = csv.DictReader(csvfile, **self.csv_args)  # type: ignore\n            for i, row in enumerate(csv_reader):\n                content = \"\\n\".join(f\"{k.strip()}: {v.strip()}\" for k, v in row.items())\n                try:\n                    source = (\n                        row[self.source_column]\n                        if self.source_column is not None\n                        else self.file_path\n                    )\n                except KeyError:\n                    raise ValueError(\n                        f\"Source column '{self.source_column}' not found in CSV file.\"\n                    )\n                metadata = {\"source\": source, \"row\": i}\n                doc = Document(page_content=content, metadata=metadata)\n                docs.append(doc)\n        return docs\n[docs]class UnstructuredCSVLoader(UnstructuredFileLoader):\n    \"\"\"Loader that uses unstructured to load CSV files.\"\"\"\n    def __init__(\n        self, file_path: str, mode: str = \"single\", **unstructured_kwargs: 
Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.8\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.csv import partition_csv\n return partition_csv(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/csv_loader.html"} +{"id": "29717bd3a3ef-0", "text": "Source code for langchain.document_loaders.dataframe\n\"\"\"Load from Dataframe object\"\"\"\nfrom typing import Any, Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class DataFrameLoader(BaseLoader):\n \"\"\"Load Pandas DataFrames.\"\"\"\n def __init__(self, data_frame: Any, page_content_column: str = \"text\"):\n \"\"\"Initialize with dataframe object.\"\"\"\n import pandas as pd\n if not isinstance(data_frame, pd.DataFrame):\n raise ValueError(\n f\"Expected data_frame to be a pd.DataFrame, got {type(data_frame)}\"\n )\n self.data_frame = data_frame\n self.page_content_column = page_content_column\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load records from dataframe.\"\"\"\n for _, row in self.data_frame.iterrows():\n text = row[self.page_content_column]\n metadata = row.to_dict()\n metadata.pop(self.page_content_column)\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load full dataframe.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/dataframe.html"} +{"id": "1e0b2795e38e-0", "text": "Source code for langchain.document_loaders.directory\n\"\"\"Loading logic for loading documents from a directory.\"\"\"\nimport concurrent\nimport logging\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Type, Union\nfrom langchain.docstore.document import Document\nfrom 
langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.html_bs import BSHTMLLoader\nfrom langchain.document_loaders.text import TextLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\nFILE_LOADER_TYPE = Union[\n Type[UnstructuredFileLoader], Type[TextLoader], Type[BSHTMLLoader]\n]\nlogger = logging.getLogger(__name__)\ndef _is_visible(p: Path) -> bool:\n parts = p.parts\n for _p in parts:\n if _p.startswith(\".\"):\n return False\n return True\n[docs]class DirectoryLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from a directory.\"\"\"\n def __init__(\n self,\n path: str,\n glob: str = \"**/[!.]*\",\n silent_errors: bool = False,\n load_hidden: bool = False,\n loader_cls: FILE_LOADER_TYPE = UnstructuredFileLoader,\n loader_kwargs: Union[dict, None] = None,\n recursive: bool = False,\n show_progress: bool = False,\n use_multithreading: bool = False,\n max_concurrency: int = 4,\n ):\n \"\"\"Initialize with path to directory and how to glob over it.\"\"\"\n if loader_kwargs is None:\n loader_kwargs = {}\n self.path = path\n self.glob = glob\n self.load_hidden = load_hidden\n self.loader_cls = loader_cls\n self.loader_kwargs = loader_kwargs\n self.silent_errors = silent_errors", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"} +{"id": "1e0b2795e38e-1", "text": "self.loader_kwargs = loader_kwargs\n self.silent_errors = silent_errors\n self.recursive = recursive\n self.show_progress = show_progress\n self.use_multithreading = use_multithreading\n self.max_concurrency = max_concurrency\n[docs] def load_file(\n self, item: Path, path: Path, docs: List[Document], pbar: Optional[Any]\n ) -> None:\n if item.is_file():\n if _is_visible(item.relative_to(path)) or self.load_hidden:\n try:\n sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()\n docs.extend(sub_docs)\n except Exception as e:\n if self.silent_errors:\n 
logger.warning(e)\n else:\n raise e\n finally:\n if pbar:\n pbar.update(1)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.path)\n if not p.exists():\n raise FileNotFoundError(f\"Directory not found: '{self.path}'\")\n if not p.is_dir():\n raise ValueError(f\"Expected directory, got file: '{self.path}'\")\n docs: List[Document] = []\n items = list(p.rglob(self.glob) if self.recursive else p.glob(self.glob))\n pbar = None\n if self.show_progress:\n try:\n from tqdm import tqdm\n pbar = tqdm(total=len(items))\n except ImportError as e:\n logger.warning(\n \"To log the progress of DirectoryLoader you need to install tqdm, \"\n \"`pip install tqdm`\"\n )\n if self.silent_errors:\n logger.warning(e)\n else:\n raise e", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"} +{"id": "1e0b2795e38e-2", "text": "logger.warning(e)\n else:\n raise e\n if self.use_multithreading:\n with concurrent.futures.ThreadPoolExecutor(\n max_workers=self.max_concurrency\n ) as executor:\n executor.map(lambda i: self.load_file(i, p, docs, pbar), items)\n else:\n for i in items:\n self.load_file(i, p, docs, pbar)\n if pbar:\n pbar.close()\n return docs\n#", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/directory.html"} +{"id": "fa40451b96ee-0", "text": "Source code for langchain.document_loaders.onedrive_file\nfrom __future__ import annotations\nimport tempfile\nfrom typing import TYPE_CHECKING, List\nfrom pydantic import BaseModel, Field\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\nif TYPE_CHECKING:\n from O365.drive import File\nCHUNK_SIZE = 1024 * 1024 * 5\n[docs]class OneDriveFileLoader(BaseLoader, BaseModel):\n file: File = Field(...)\n class Config:\n arbitrary_types_allowed = True\n[docs] def load(self) -> 
List[Document]:\n \"\"\"Load Documents\"\"\"\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.file.name}\"\n self.file.download(to_path=temp_dir, chunk_size=CHUNK_SIZE)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive_file.html"} +{"id": "dbf7075633e6-0", "text": "Source code for langchain.document_loaders.googledrive\n\"\"\"Loader that loads data from Google Drive.\"\"\"\n# Prerequisites:\n# 1. Create a Google Cloud project\n# 2. Enable the Google Drive API:\n# https://console.cloud.google.com/flows/enableapi?apiid=drive.googleapis.com\n# 3. Authorize credentials for desktop app:\n# https://developers.google.com/drive/api/quickstart/python#authorize_credentials_for_a_desktop_application # noqa: E501\n# 4. For service accounts visit\n# https://cloud.google.com/iam/docs/service-accounts-create\nimport os\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom pydantic import BaseModel, root_validator, validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nSCOPES = [\"https://www.googleapis.com/auth/drive.readonly\"]\n[docs]class GoogleDriveLoader(BaseLoader, BaseModel):\n \"\"\"Loader that loads Google Docs from Google Drive.\"\"\"\n service_account_key: Path = Path.home() / \".credentials\" / \"keys.json\"\n credentials_path: Path = Path.home() / \".credentials\" / \"credentials.json\"\n token_path: Path = Path.home() / \".credentials\" / \"token.json\"\n folder_id: Optional[str] = None\n document_ids: Optional[List[str]] = None\n file_ids: Optional[List[str]] = None\n recursive: bool = False\n file_types: Optional[Sequence[str]] = None\n load_trashed_files: bool = False\n # NOTE(MthwRobinson) - changing the file_loader_cls to type here currently\n # results in pydantic validation errors\n file_loader_cls: Any = 
None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "dbf7075633e6-1", "text": "# results in pydantic validation errors\n file_loader_cls: Any = None\n file_loader_kwargs: Dict[\"str\", Any] = {}\n @root_validator\n def validate_inputs(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate that either folder_id or document_ids is set, but not both.\"\"\"\n if values.get(\"folder_id\") and (\n values.get(\"document_ids\") or values.get(\"file_ids\")\n ):\n raise ValueError(\n \"Cannot specify both folder_id and document_ids nor \"\n \"folder_id and file_ids\"\n )\n if (\n not values.get(\"folder_id\")\n and not values.get(\"document_ids\")\n and not values.get(\"file_ids\")\n ):\n raise ValueError(\"Must specify either folder_id, document_ids, or file_ids\")\n file_types = values.get(\"file_types\")\n if file_types:\n if values.get(\"document_ids\") or values.get(\"file_ids\"):\n raise ValueError(\n \"file_types can only be given when folder_id is given,\"\n \" (not when document_ids or file_ids are given).\"\n )\n type_mapping = {\n \"document\": \"application/vnd.google-apps.document\",\n \"sheet\": \"application/vnd.google-apps.spreadsheet\",\n \"pdf\": \"application/pdf\",\n }\n allowed_types = list(type_mapping.keys()) + list(type_mapping.values())\n short_names = \", \".join([f\"'{x}'\" for x in type_mapping.keys()])\n full_names = \", \".join([f\"'{x}'\" for x in type_mapping.values()])\n for file_type in file_types:\n if file_type not in allowed_types:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "dbf7075633e6-2", "text": "if file_type not in allowed_types:\n raise ValueError(\n f\"Given file type {file_type} is not supported. 
\"\n f\"Supported values are: {short_names}; and \"\n f\"their full-form names: {full_names}\"\n )\n # replace short-form file types by full-form file types\n def full_form(x: str) -> str:\n return type_mapping[x] if x in type_mapping else x\n values[\"file_types\"] = [full_form(file_type) for file_type in file_types]\n return values\n @validator(\"credentials_path\")\n def validate_credentials_path(cls, v: Any, **kwargs: Any) -> Any:\n \"\"\"Validate that credentials_path exists.\"\"\"\n if not v.exists():\n raise ValueError(f\"credentials_path {v} does not exist\")\n return v\n def _load_credentials(self) -> Any:\n \"\"\"Load credentials.\"\"\"\n # Adapted from https://developers.google.com/drive/api/v3/quickstart/python\n try:\n from google.auth import default\n from google.auth.transport.requests import Request\n from google.oauth2 import service_account\n from google.oauth2.credentials import Credentials\n from google_auth_oauthlib.flow import InstalledAppFlow\n except ImportError:\n raise ImportError(\n \"You must run \"\n \"`pip install --upgrade \"\n \"google-api-python-client google-auth-httplib2 \"\n \"google-auth-oauthlib` \"\n \"to use the Google Drive loader.\"\n )\n creds = None\n if self.service_account_key.exists():\n return service_account.Credentials.from_service_account_file(\n str(self.service_account_key), scopes=SCOPES\n )\n if self.token_path.exists():", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "dbf7075633e6-3", "text": ")\n if self.token_path.exists():\n creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES)\n if not creds or not creds.valid:\n if creds and creds.expired and creds.refresh_token:\n creds.refresh(Request())\n elif \"GOOGLE_APPLICATION_CREDENTIALS\" not in os.environ:\n creds, project = default()\n creds = creds.with_scopes(SCOPES)\n # no need to write to file\n if creds:\n return creds\n else:\n flow = 
InstalledAppFlow.from_client_secrets_file(\n str(self.credentials_path), SCOPES\n )\n creds = flow.run_local_server(port=0)\n with open(self.token_path, \"w\") as token:\n token.write(creds.to_json())\n return creds\n def _load_sheet_from_id(self, id: str) -> List[Document]:\n \"\"\"Load a sheet and all tabs from an ID.\"\"\"\n from googleapiclient.discovery import build\n creds = self._load_credentials()\n sheets_service = build(\"sheets\", \"v4\", credentials=creds)\n spreadsheet = sheets_service.spreadsheets().get(spreadsheetId=id).execute()\n sheets = spreadsheet.get(\"sheets\", [])\n documents = []\n for sheet in sheets:\n sheet_name = sheet[\"properties\"][\"title\"]\n result = (\n sheets_service.spreadsheets()\n .values()\n .get(spreadsheetId=id, range=sheet_name)\n .execute()\n )\n values = result.get(\"values\", [])\n header = values[0]\n for i, row in enumerate(values[1:], start=1):\n metadata = {\n \"source\": (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "dbf7075633e6-4", "text": "metadata = {\n \"source\": (\n f\"https://docs.google.com/spreadsheets/d/{id}/\"\n f\"edit?gid={sheet['properties']['sheetId']}\"\n ),\n \"title\": f\"{spreadsheet['properties']['title']} - {sheet_name}\",\n \"row\": i,\n }\n content = []\n for j, v in enumerate(row):\n title = header[j].strip() if len(header) > j else \"\"\n content.append(f\"{title}: {v.strip()}\")\n page_content = \"\\n\".join(content)\n documents.append(Document(page_content=page_content, metadata=metadata))\n return documents\n def _load_document_from_id(self, id: str) -> Document:\n \"\"\"Load a document from an ID.\"\"\"\n from io import BytesIO\n from googleapiclient.discovery import build\n from googleapiclient.errors import HttpError\n from googleapiclient.http import MediaIoBaseDownload\n creds = self._load_credentials()\n service = build(\"drive\", \"v3\", credentials=creds)\n file = service.files().get(fileId=id, 
supportsAllDrives=True).execute()\n request = service.files().export_media(fileId=id, mimeType=\"text/plain\")\n fh = BytesIO()\n downloader = MediaIoBaseDownload(fh, request)\n done = False\n try:\n while done is False:\n status, done = downloader.next_chunk()\n except HttpError as e:\n if e.resp.status == 404:\n print(\"File not found: {}\".format(id))\n else:\n print(\"An error occurred: {}\".format(e))\n text = fh.getvalue().decode(\"utf-8\")\n metadata = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "dbf7075633e6-5", "text": "text = fh.getvalue().decode(\"utf-8\")\n metadata = {\n \"source\": f\"https://docs.google.com/document/d/{id}/edit\",\n \"title\": f\"{file.get('name')}\",\n }\n return Document(page_content=text, metadata=metadata)\n def _load_documents_from_folder(\n self, folder_id: str, *, file_types: Optional[Sequence[str]] = None\n ) -> List[Document]:\n \"\"\"Load documents from a folder.\"\"\"\n from googleapiclient.discovery import build\n creds = self._load_credentials()\n service = build(\"drive\", \"v3\", credentials=creds)\n files = self._fetch_files_recursive(service, folder_id)\n # If file types filter is provided, we'll filter by the file type.\n if file_types:\n _files = [f for f in files if f[\"mimeType\"] in file_types] # type: ignore\n else:\n _files = files\n returns = []\n for file in _files:\n if file[\"trashed\"] and not self.load_trashed_files:\n continue\n elif file[\"mimeType\"] == \"application/vnd.google-apps.document\":\n returns.append(self._load_document_from_id(file[\"id\"])) # type: ignore\n elif file[\"mimeType\"] == \"application/vnd.google-apps.spreadsheet\":\n returns.extend(self._load_sheet_from_id(file[\"id\"])) # type: ignore\n elif (\n file[\"mimeType\"] == \"application/pdf\"\n or self.file_loader_cls is not None\n ):\n returns.extend(self._load_file_from_id(file[\"id\"])) # type: ignore\n else:\n pass\n return returns\n def 
_fetch_files_recursive(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "dbf7075633e6-6", "text": "else:\n pass\n return returns\n def _fetch_files_recursive(\n self, service: Any, folder_id: str\n ) -> List[Dict[str, Union[str, List[str]]]]:\n \"\"\"Fetch all files and subfolders recursively.\"\"\"\n results = (\n service.files()\n .list(\n q=f\"'{folder_id}' in parents\",\n pageSize=1000,\n includeItemsFromAllDrives=True,\n supportsAllDrives=True,\n fields=\"nextPageToken, files(id, name, mimeType, parents, trashed)\",\n )\n .execute()\n )\n files = results.get(\"files\", [])\n returns = []\n for file in files:\n if file[\"mimeType\"] == \"application/vnd.google-apps.folder\":\n if self.recursive:\n returns.extend(self._fetch_files_recursive(service, file[\"id\"]))\n else:\n returns.append(file)\n return returns\n def _load_documents_from_ids(self) -> List[Document]:\n \"\"\"Load documents from a list of IDs.\"\"\"\n if not self.document_ids:\n raise ValueError(\"document_ids must be set\")\n return [self._load_document_from_id(doc_id) for doc_id in self.document_ids]\n def _load_file_from_id(self, id: str) -> List[Document]:\n \"\"\"Load a file from an ID.\"\"\"\n from io import BytesIO\n from googleapiclient.discovery import build\n from googleapiclient.http import MediaIoBaseDownload\n creds = self._load_credentials()\n service = build(\"drive\", \"v3\", credentials=creds)\n file = service.files().get(fileId=id, supportsAllDrives=True).execute()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "dbf7075633e6-7", "text": "file = service.files().get(fileId=id, supportsAllDrives=True).execute()\n request = service.files().get_media(fileId=id)\n fh = BytesIO()\n downloader = MediaIoBaseDownload(fh, request)\n done = False\n while done is False:\n status, done = downloader.next_chunk()\n if self.file_loader_cls is not None:\n 
fh.seek(0)\n loader = self.file_loader_cls(file=fh, **self.file_loader_kwargs)\n docs = loader.load()\n for doc in docs:\n doc.metadata[\"source\"] = f\"https://drive.google.com/file/d/{id}/view\"\n return docs\n else:\n from PyPDF2 import PdfReader\n content = fh.getvalue()\n pdf_reader = PdfReader(BytesIO(content))\n return [\n Document(\n page_content=page.extract_text(),\n metadata={\n \"source\": f\"https://drive.google.com/file/d/{id}/view\",\n \"title\": f\"{file.get('name')}\",\n \"page\": i,\n },\n )\n for i, page in enumerate(pdf_reader.pages)\n ]\n def _load_file_from_ids(self) -> List[Document]:\n \"\"\"Load files from a list of IDs.\"\"\"\n if not self.file_ids:\n raise ValueError(\"file_ids must be set\")\n docs = []\n for file_id in self.file_ids:\n docs.extend(self._load_file_from_id(file_id))\n return docs\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n if self.folder_id:\n return self._load_documents_from_folder(\n self.folder_id, file_types=self.file_types\n )\n elif self.document_ids:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "dbf7075633e6-8", "text": ")\n elif self.document_ids:\n return self._load_documents_from_ids()\n else:\n return self._load_file_from_ids()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/googledrive.html"} +{"id": "c5aecdbfd1a3-0", "text": "Source code for langchain.document_loaders.airbyte_json\n\"\"\"Loader that loads local airbyte json files.\"\"\"\nimport json\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import stringify_dict\n[docs]class AirbyteJSONLoader(BaseLoader):\n \"\"\"Loader that loads local airbyte json files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path. 
This should start with '/tmp/airbyte_local/'.\"\"\"\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n text = \"\"\n for line in open(self.file_path, \"r\"):\n data = json.loads(line)[\"_airbyte_data\"]\n text += stringify_dict(data)\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/airbyte_json.html"} +{"id": "f12bc11e9511-0", "text": "Source code for langchain.document_loaders.image_captions\n\"\"\"\nLoader that loads image captions\nBy default, the loader utilizes the pre-trained BLIP image captioning model.\nhttps://huggingface.co/Salesforce/blip-image-captioning-base\n\"\"\"\nfrom typing import Any, List, Tuple, Union\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ImageCaptionLoader(BaseLoader):\n \"\"\"Loader that loads the captions of an image\"\"\"\n def __init__(\n self,\n path_images: Union[str, List[str]],\n blip_processor: str = \"Salesforce/blip-image-captioning-base\",\n blip_model: str = \"Salesforce/blip-image-captioning-base\",\n ):\n \"\"\"\n Initialize with a list of image paths\n \"\"\"\n if isinstance(path_images, str):\n self.image_paths = [path_images]\n else:\n self.image_paths = path_images\n self.blip_processor = blip_processor\n self.blip_model = blip_model\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Load from a list of image files\n \"\"\"\n try:\n from transformers import BlipForConditionalGeneration, BlipProcessor\n except ImportError:\n raise ImportError(\n \"`transformers` package not found, please install with \"\n \"`pip install transformers`.\"\n )\n processor = BlipProcessor.from_pretrained(self.blip_processor)\n model = BlipForConditionalGeneration.from_pretrained(self.blip_model)\n results = []\n for path_image in self.image_paths:\n caption, 
metadata = self._get_captions_and_metadata(\n model=model, processor=processor, path_image=path_image\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/image_captions.html"} +{"id": "f12bc11e9511-1", "text": "model=model, processor=processor, path_image=path_image\n )\n doc = Document(page_content=caption, metadata=metadata)\n results.append(doc)\n return results\n def _get_captions_and_metadata(\n self, model: Any, processor: Any, path_image: str\n ) -> Tuple[str, dict]:\n \"\"\"\n Helper function for getting the captions and metadata of an image\n \"\"\"\n try:\n from PIL import Image\n except ImportError:\n raise ImportError(\n \"`PIL` package not found, please install with `pip install pillow`\"\n )\n try:\n if path_image.startswith(\"http://\") or path_image.startswith(\"https://\"):\n image = Image.open(requests.get(path_image, stream=True).raw).convert(\n \"RGB\"\n )\n else:\n image = Image.open(path_image).convert(\"RGB\")\n except Exception:\n raise ValueError(f\"Could not get image data for {path_image}\")\n inputs = processor(image, \"an image of\", return_tensors=\"pt\")\n output = model.generate(**inputs)\n caption: str = processor.decode(output[0])\n metadata: dict = {\"image_path\": path_image}\n return caption, metadata", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/image_captions.html"} +{"id": "10eb13830637-0", "text": "Source code for langchain.document_loaders.git\nimport os\nfrom typing import Callable, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class GitLoader(BaseLoader):\n \"\"\"Loads files from a Git repository into a list of documents.\n Repository can be local on disk available at `repo_path`,\n or remote at `clone_url` that will be cloned to `repo_path`.\n Currently supports only text files.\n Each document represents one file in the repository. 
The `repo_path` points to\n    the local Git repository, and the `branch` specifies the branch to load\n    files from. By default, it loads from the `main` branch.\n    \"\"\"\n    def __init__(\n        self,\n        repo_path: str,\n        clone_url: Optional[str] = None,\n        branch: Optional[str] = \"main\",\n        file_filter: Optional[Callable[[str], bool]] = None,\n    ):\n        self.repo_path = repo_path\n        self.clone_url = clone_url\n        self.branch = branch\n        self.file_filter = file_filter\n[docs]    def load(self) -> List[Document]:\n        try:\n            from git import Blob, Repo  # type: ignore\n        except ImportError as ex:\n            raise ImportError(\n                \"Could not import git python package. \"\n                \"Please install it with `pip install GitPython`.\"\n            ) from ex\n        if not os.path.exists(self.repo_path) and self.clone_url is None:\n            raise ValueError(f\"Path {self.repo_path} does not exist\")\n        elif self.clone_url:\n            repo = Repo.clone_from(self.clone_url, self.repo_path)\n            repo.git.checkout(self.branch)\n        else:\n            repo = Repo(self.repo_path)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/git.html"}
{"id": "10eb13830637-1", "text": "else:\n            repo = Repo(self.repo_path)\n            repo.git.checkout(self.branch)\n        docs: List[Document] = []\n        for item in repo.tree().traverse():\n            if not isinstance(item, Blob):\n                continue\n            file_path = os.path.join(self.repo_path, item.path)\n            ignored_files = repo.ignored([file_path])  # type: ignore\n            if len(ignored_files):\n                continue\n            # uses filter to skip files\n            if self.file_filter and not self.file_filter(file_path):\n                continue\n            rel_file_path = os.path.relpath(file_path, self.repo_path)\n            try:\n                with open(file_path, \"rb\") as f:\n                    content = f.read()\n                    file_type = os.path.splitext(item.name)[1]\n                    # loads only text files\n                    try:\n                        text_content = content.decode(\"utf-8\")\n                    except UnicodeDecodeError:\n                        continue\n                    metadata = {\n                        \"source\": rel_file_path,\n                        \"file_path\": rel_file_path,\n                        \"file_name\": item.name,\n                        \"file_type\": file_type,\n                    }\n                    doc = 
Document(page_content=text_content, metadata=metadata)\n                    docs.append(doc)\n            except Exception as e:\n                print(f\"Error reading file {file_path}: {e}\")\n        return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/git.html"}
{"id": "a34d55a22a53-0", "text": "Source code for langchain.document_loaders.url_selenium\n\"\"\"Loader that uses Selenium to load a page, then uses unstructured to load the html.\n\"\"\"\nimport logging\nfrom typing import TYPE_CHECKING, List, Literal, Optional, Union\nif TYPE_CHECKING:\n    from selenium.webdriver import Chrome, Firefox\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class SeleniumURLLoader(BaseLoader):\n    \"\"\"Loader that uses Selenium to load a page and unstructured to load the html.\n    This is useful for loading pages that require javascript to render.\n    Attributes:\n        urls (List[str]): List of URLs to load.\n        continue_on_failure (bool): If True, continue loading other URLs on failure.\n        browser (str): The browser to use, either 'chrome' or 'firefox'.\n        binary_location (Optional[str]): The location of the browser binary.\n        executable_path (Optional[str]): The path to the browser executable.\n        headless (bool): If True, the browser will run in headless mode.\n        arguments (List[str]): List of arguments to pass to the browser.\n    \"\"\"\n    def __init__(\n        self,\n        urls: List[str],\n        continue_on_failure: bool = True,\n        browser: Literal[\"chrome\", \"firefox\"] = \"chrome\",\n        binary_location: Optional[str] = None,\n        executable_path: Optional[str] = None,\n        headless: bool = True,\n        arguments: List[str] = [],\n    ):\n        \"\"\"Load a list of URLs using Selenium and unstructured.\"\"\"\n        try:\n            import selenium  # noqa:F401\n        except ImportError:\n            raise ImportError(\n                \"selenium package not found, please install it with \"", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url_selenium.html"} +{"id": "a34d55a22a53-1", "text": "raise ImportError(\n \"selenium package not found, please install it with \"\n \"`pip install selenium`\"\n )\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ImportError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n self.urls = urls\n self.continue_on_failure = continue_on_failure\n self.browser = browser\n self.binary_location = binary_location\n self.executable_path = executable_path\n self.headless = headless\n self.arguments = arguments\n def _get_driver(self) -> Union[\"Chrome\", \"Firefox\"]:\n \"\"\"Create and return a WebDriver instance based on the specified browser.\n Raises:\n ValueError: If an invalid browser is specified.\n Returns:\n Union[Chrome, Firefox]: A WebDriver instance for the specified browser.\n \"\"\"\n if self.browser.lower() == \"chrome\":\n from selenium.webdriver import Chrome\n from selenium.webdriver.chrome.options import Options as ChromeOptions\n chrome_options = ChromeOptions()\n for arg in self.arguments:\n chrome_options.add_argument(arg)\n if self.headless:\n chrome_options.add_argument(\"--headless\")\n chrome_options.add_argument(\"--no-sandbox\")\n if self.binary_location is not None:\n chrome_options.binary_location = self.binary_location\n if self.executable_path is None:\n return Chrome(options=chrome_options)\n return Chrome(executable_path=self.executable_path, options=chrome_options)\n elif self.browser.lower() == \"firefox\":\n from selenium.webdriver import Firefox\n from selenium.webdriver.firefox.options import Options as FirefoxOptions\n firefox_options = FirefoxOptions()\n for arg in self.arguments:\n firefox_options.add_argument(arg)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url_selenium.html"} +{"id": "a34d55a22a53-2", "text": "for arg in 
self.arguments:\n firefox_options.add_argument(arg)\n if self.headless:\n firefox_options.add_argument(\"--headless\")\n if self.binary_location is not None:\n firefox_options.binary_location = self.binary_location\n if self.executable_path is None:\n return Firefox(options=firefox_options)\n return Firefox(\n executable_path=self.executable_path, options=firefox_options\n )\n else:\n raise ValueError(\"Invalid browser specified. Use 'chrome' or 'firefox'.\")\n[docs] def load(self) -> List[Document]:\n \"\"\"Load the specified URLs using Selenium and create Document instances.\n Returns:\n List[Document]: A list of Document instances with loaded content.\n \"\"\"\n from unstructured.partition.html import partition_html\n docs: List[Document] = list()\n driver = self._get_driver()\n for url in self.urls:\n try:\n driver.get(url)\n page_content = driver.page_source\n elements = partition_html(text=page_content)\n text = \"\\n\\n\".join([str(el) for el in elements])\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n except Exception as e:\n if self.continue_on_failure:\n logger.error(f\"Error fetching or processing {url}, exception: {e}\")\n else:\n raise e\n driver.quit()\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url_selenium.html"} +{"id": "7267a68914fd-0", "text": "Source code for langchain.document_loaders.max_compute\nfrom __future__ import annotations\nfrom typing import Any, Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.max_compute import MaxComputeAPIWrapper\n[docs]class MaxComputeLoader(BaseLoader):\n \"\"\"Loads a query result from Alibaba Cloud MaxCompute table into documents.\"\"\"\n def __init__(\n self,\n query: str,\n api_wrapper: MaxComputeAPIWrapper,\n *,\n page_content_columns: Optional[Sequence[str]] = None,\n 
metadata_columns: Optional[Sequence[str]] = None,\n ):\n \"\"\"Initialize Alibaba Cloud MaxCompute document loader.\n Args:\n query: SQL query to execute.\n api_wrapper: MaxCompute API wrapper.\n page_content_columns: The columns to write into the `page_content` of the\n Document. If unspecified, all columns will be written to `page_content`.\n metadata_columns: The columns to write into the `metadata` of the Document.\n If unspecified, all columns not added to `page_content` will be written.\n \"\"\"\n self.query = query\n self.api_wrapper = api_wrapper\n self.page_content_columns = page_content_columns\n self.metadata_columns = metadata_columns\n[docs] @classmethod\n def from_params(\n cls,\n query: str,\n endpoint: str,\n project: str,\n *,\n access_id: Optional[str] = None,\n secret_access_key: Optional[str] = None,\n **kwargs: Any,\n ) -> MaxComputeLoader:\n \"\"\"Convenience constructor that builds the MaxCompute API wrapper from\n given parameters.\n Args:\n query: SQL query to execute.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/max_compute.html"} +{"id": "7267a68914fd-1", "text": "given parameters.\n Args:\n query: SQL query to execute.\n endpoint: MaxCompute endpoint.\n project: A project is a basic organizational unit of MaxCompute, which is\n similar to a database.\n access_id: MaxCompute access ID. Should be passed in directly or set as the\n environment variable `MAX_COMPUTE_ACCESS_ID`.\n secret_access_key: MaxCompute secret access key. 
Should be passed in\n directly or set as the environment variable\n `MAX_COMPUTE_SECRET_ACCESS_KEY`.\n \"\"\"\n api_wrapper = MaxComputeAPIWrapper.from_params(\n endpoint, project, access_id=access_id, secret_access_key=secret_access_key\n )\n return cls(query, api_wrapper, **kwargs)\n[docs] def lazy_load(self) -> Iterator[Document]:\n for row in self.api_wrapper.query(self.query):\n if self.page_content_columns:\n page_content_data = {\n k: v for k, v in row.items() if k in self.page_content_columns\n }\n else:\n page_content_data = row\n page_content = \"\\n\".join(f\"{k}: {v}\" for k, v in page_content_data.items())\n if self.metadata_columns:\n metadata = {k: v for k, v in row.items() if k in self.metadata_columns}\n else:\n metadata = {k: v for k, v in row.items() if k not in page_content_data}\n yield Document(page_content=page_content, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/max_compute.html"} +{"id": "e6f6f2428701-0", "text": "Source code for langchain.document_loaders.pyspark_dataframe\n\"\"\"Load from a Spark Dataframe object\"\"\"\nimport itertools\nimport logging\nimport sys\nfrom typing import TYPE_CHECKING, Any, Iterator, List, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__file__)\nif TYPE_CHECKING:\n from pyspark.sql import SparkSession\n[docs]class PySparkDataFrameLoader(BaseLoader):\n \"\"\"Load PySpark DataFrames\"\"\"\n def __init__(\n self,\n spark_session: Optional[\"SparkSession\"] = None,\n df: Optional[Any] = None,\n page_content_column: str = \"text\",\n fraction_of_memory: float = 0.1,\n ):\n \"\"\"Initialize with a Spark DataFrame object.\"\"\"\n try:\n from pyspark.sql import DataFrame, SparkSession\n except ImportError:\n raise ImportError(\n \"pyspark is not installed. 
\"\n \"Please install it with `pip install pyspark`\"\n )\n self.spark = (\n spark_session if spark_session else SparkSession.builder.getOrCreate()\n )\n if not isinstance(df, DataFrame):\n raise ValueError(\n f\"Expected data_frame to be a PySpark DataFrame, got {type(df)}\"\n )\n self.df = df\n self.page_content_column = page_content_column\n self.fraction_of_memory = fraction_of_memory\n self.num_rows, self.max_num_rows = self.get_num_rows()\n self.rdd_df = self.df.rdd.map(list)\n self.column_names = self.df.columns\n[docs] def get_num_rows(self) -> Tuple[int, int]:\n \"\"\"Gets the amount of \"feasible\" rows for the DataFrame\"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pyspark_dataframe.html"} +{"id": "e6f6f2428701-1", "text": "\"\"\"Gets the amount of \"feasible\" rows for the DataFrame\"\"\"\n try:\n import psutil\n except ImportError as e:\n raise ImportError(\n \"psutil not installed. Please install it with `pip install psutil`.\"\n ) from e\n row = self.df.limit(1).collect()[0]\n estimated_row_size = sys.getsizeof(row)\n mem_info = psutil.virtual_memory()\n available_memory = mem_info.available\n max_num_rows = int(\n (available_memory / estimated_row_size) * self.fraction_of_memory\n )\n return min(max_num_rows, self.df.count()), max_num_rows\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"A lazy loader for document content.\"\"\"\n for row in self.rdd_df.toLocalIterator():\n metadata = {self.column_names[i]: row[i] for i in range(len(row))}\n text = metadata[self.page_content_column]\n metadata.pop(self.page_content_column)\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from the dataframe.\"\"\"\n if self.df.count() > self.max_num_rows:\n logger.warning(\n f\"The number of DataFrame rows is {self.df.count()}, \"\n f\"but we will only include the amount \"\n f\"of rows that can reasonably fit in memory: 
{self.num_rows}.\"\n )\n lazy_load_iterator = self.lazy_load()\n return list(itertools.islice(lazy_load_iterator, self.num_rows))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pyspark_dataframe.html"} +{"id": "82ff5f80227b-0", "text": "Source code for langchain.document_loaders.docugami\n\"\"\"Loader that loads processed documents from Docugami.\"\"\"\nimport io\nimport logging\nimport os\nimport re\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Mapping, Optional, Sequence, Union\nimport requests\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nTD_NAME = \"{http://www.w3.org/1999/xhtml}td\"\nTABLE_NAME = \"{http://www.w3.org/1999/xhtml}table\"\nXPATH_KEY = \"xpath\"\nDOCUMENT_ID_KEY = \"id\"\nDOCUMENT_NAME_KEY = \"name\"\nSTRUCTURE_KEY = \"structure\"\nTAG_KEY = \"tag\"\nPROJECTS_KEY = \"projects\"\nDEFAULT_API_ENDPOINT = \"https://api.docugami.com/v1preview1\"\nlogger = logging.getLogger(__name__)\n[docs]class DocugamiLoader(BaseLoader, BaseModel):\n \"\"\"Loader that loads processed docs from Docugami.\n To use, you should have the ``lxml`` python package installed.\n \"\"\"\n api: str = DEFAULT_API_ENDPOINT\n access_token: Optional[str] = os.environ.get(\"DOCUGAMI_API_KEY\")\n docset_id: Optional[str]\n document_ids: Optional[Sequence[str]]\n file_paths: Optional[Sequence[Union[Path, str]]]\n min_chunk_size: int = 32 # appended to the next chunk to avoid over-chunking\n @root_validator\n def validate_local_or_remote(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate that either local file paths are given, or remote API docset ID.\"\"\"\n if values.get(\"file_paths\") and values.get(\"docset_id\"):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} +{"id": "82ff5f80227b-1", "text": "if values.get(\"file_paths\") and 
values.get(\"docset_id\"):\n raise ValueError(\"Cannot specify both file_paths and remote API docset_id\")\n if not values.get(\"file_paths\") and not values.get(\"docset_id\"):\n raise ValueError(\"Must specify either file_paths or remote API docset_id\")\n if values.get(\"docset_id\") and not values.get(\"access_token\"):\n raise ValueError(\"Must specify access token if using remote API docset_id\")\n return values\n def _parse_dgml(\n self, document: Mapping, content: bytes, doc_metadata: Optional[Mapping] = None\n ) -> List[Document]:\n \"\"\"Parse a single DGML document into a list of Documents.\"\"\"\n try:\n from lxml import etree\n except ImportError:\n raise ImportError(\n \"Could not import lxml python package. \"\n \"Please install it with `pip install lxml`.\"\n )\n # helpers\n def _xpath_qname_for_chunk(chunk: Any) -> str:\n \"\"\"Get the xpath qname for a chunk.\"\"\"\n qname = f\"{chunk.prefix}:{chunk.tag.split('}')[-1]}\"\n parent = chunk.getparent()\n if parent is not None:\n doppelgangers = [x for x in parent if x.tag == chunk.tag]\n if len(doppelgangers) > 1:\n idx_of_self = doppelgangers.index(chunk)\n qname = f\"{qname}[{idx_of_self + 1}]\"\n return qname\n def _xpath_for_chunk(chunk: Any) -> str:\n \"\"\"Get the xpath for a chunk.\"\"\"\n ancestor_chain = chunk.xpath(\"ancestor-or-self::*\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} +{"id": "82ff5f80227b-2", "text": "ancestor_chain = chunk.xpath(\"ancestor-or-self::*\")\n return \"/\" + \"/\".join(_xpath_qname_for_chunk(x) for x in ancestor_chain)\n def _structure_value(node: Any) -> str:\n \"\"\"Get the structure value for a node.\"\"\"\n structure = (\n \"table\"\n if node.tag == TABLE_NAME\n else node.attrib[\"structure\"]\n if \"structure\" in node.attrib\n else None\n )\n return structure\n def _is_structural(node: Any) -> bool:\n \"\"\"Check if a node is structural.\"\"\"\n return _structure_value(node) is not None\n def 
_is_heading(node: Any) -> bool:\n \"\"\"Check if a node is a heading.\"\"\"\n structure = _structure_value(node)\n return structure is not None and structure.lower().startswith(\"h\")\n def _get_text(node: Any) -> str:\n \"\"\"Get the text of a node.\"\"\"\n return \" \".join(node.itertext()).strip()\n def _has_structural_descendant(node: Any) -> bool:\n \"\"\"Check if a node has a structural descendant.\"\"\"\n for child in node:\n if _is_structural(child) or _has_structural_descendant(child):\n return True\n return False\n def _leaf_structural_nodes(node: Any) -> List:\n \"\"\"Get the leaf structural nodes of a node.\"\"\"\n if _is_structural(node) and not _has_structural_descendant(node):\n return [node]\n else:\n leaf_nodes = []\n for child in node:\n leaf_nodes.extend(_leaf_structural_nodes(child))\n return leaf_nodes\n def _create_doc(node: Any, text: str) -> Document:\n \"\"\"Create a Document from a node and text.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} +{"id": "82ff5f80227b-3", "text": "\"\"\"Create a Document from a node and text.\"\"\"\n metadata = {\n XPATH_KEY: _xpath_for_chunk(node),\n DOCUMENT_ID_KEY: document[\"id\"],\n DOCUMENT_NAME_KEY: document[\"name\"],\n STRUCTURE_KEY: node.attrib.get(\"structure\", \"\"),\n TAG_KEY: re.sub(r\"\\{.*\\}\", \"\", node.tag),\n }\n if doc_metadata:\n metadata.update(doc_metadata)\n return Document(\n page_content=text,\n metadata=metadata,\n )\n # parse the tree and return chunks\n tree = etree.parse(io.BytesIO(content))\n root = tree.getroot()\n chunks: List[Document] = []\n prev_small_chunk_text = None\n for node in _leaf_structural_nodes(root):\n text = _get_text(node)\n if prev_small_chunk_text:\n text = prev_small_chunk_text + \" \" + text\n prev_small_chunk_text = None\n if _is_heading(node) or len(text) < self.min_chunk_size:\n # Save headings or other small chunks to be appended to the next chunk\n prev_small_chunk_text = text\n 
else:\n chunks.append(_create_doc(node, text))\n if prev_small_chunk_text and len(chunks) > 0:\n # small chunk at the end left over, just append to last chunk\n chunks[-1].page_content += \" \" + prev_small_chunk_text\n return chunks\n def _document_details_for_docset_id(self, docset_id: str) -> List[Dict]:\n \"\"\"Gets all document details for the given docset ID\"\"\"\n url = f\"{self.api}/docsets/{docset_id}/documents\"\n all_documents = []\n while url:\n response = requests.get(\n url,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} +{"id": "82ff5f80227b-4", "text": "while url:\n response = requests.get(\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n )\n if response.ok:\n data = response.json()\n all_documents.extend(data[\"documents\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n return all_documents\n def _project_details_for_docset_id(self, docset_id: str) -> List[Dict]:\n \"\"\"Gets all project details for the given docset ID\"\"\"\n url = f\"{self.api}/projects?docset.id={docset_id}\"\n all_projects = []\n while url:\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n data = response.json()\n all_projects.extend(data[\"projects\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n return all_projects\n def _metadata_for_project(self, project: Dict) -> Dict:\n \"\"\"Gets project metadata for all files\"\"\"\n project_id = project.get(\"id\")\n url = f\"{self.api}/projects/{project_id}/artifacts/latest\"\n all_artifacts = []\n while url:\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n data = response.json()", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} +{"id": "82ff5f80227b-5", "text": "data={},\n )\n if response.ok:\n data = response.json()\n all_artifacts.extend(data[\"artifacts\"])\n url = data.get(\"next\", None)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n per_file_metadata = {}\n for artifact in all_artifacts:\n artifact_name = artifact.get(\"name\")\n artifact_url = artifact.get(\"url\")\n artifact_doc = artifact.get(\"document\")\n if artifact_name == \"report-values.xml\" and artifact_url and artifact_doc:\n doc_id = artifact_doc[\"id\"]\n metadata: Dict = {}\n # the evaluated XML for each document is named after the project\n response = requests.request(\n \"GET\",\n f\"{artifact_url}/content\",\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n try:\n from lxml import etree\n except ImportError:\n raise ImportError(\n \"Could not import lxml python package. 
\"\n \"Please install it with `pip install lxml`.\"\n )\n artifact_tree = etree.parse(io.BytesIO(response.content))\n artifact_root = artifact_tree.getroot()\n ns = artifact_root.nsmap\n entries = artifact_root.xpath(\"//pr:Entry\", namespaces=ns)\n for entry in entries:\n heading = entry.xpath(\"./pr:Heading\", namespaces=ns)[0].text\n value = \" \".join(\n entry.xpath(\"./pr:Value\", namespaces=ns)[0].itertext()\n ).strip()\n metadata[heading] = value\n per_file_metadata[doc_id] = metadata\n else:\n raise Exception(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} +{"id": "82ff5f80227b-6", "text": "per_file_metadata[doc_id] = metadata\n else:\n raise Exception(\n f\"Failed to download {artifact_url}/content \"\n + f\"(status: {response.status_code})\"\n )\n return per_file_metadata\n def _load_chunks_for_document(\n self, docset_id: str, document: Dict, doc_metadata: Optional[Dict] = None\n ) -> List[Document]:\n \"\"\"Load chunks for a document.\"\"\"\n document_id = document[\"id\"]\n url = f\"{self.api}/docsets/{docset_id}/documents/{document_id}/dgml\"\n response = requests.request(\n \"GET\",\n url,\n headers={\"Authorization\": f\"Bearer {self.access_token}\"},\n data={},\n )\n if response.ok:\n return self._parse_dgml(document, response.content, doc_metadata)\n else:\n raise Exception(\n f\"Failed to download {url} (status: {response.status_code})\"\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n chunks: List[Document] = []\n if self.access_token and self.docset_id:\n # remote mode\n _document_details = self._document_details_for_docset_id(self.docset_id)\n if self.document_ids:\n _document_details = [\n d for d in _document_details if d[\"id\"] in self.document_ids\n ]\n _project_details = self._project_details_for_docset_id(self.docset_id)\n combined_project_metadata = {}\n if _project_details:\n # if there are any projects for this docset, load project metadata\n for 
project in _project_details:\n metadata = self._metadata_for_project(project)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} +{"id": "82ff5f80227b-7", "text": "for project in _project_details:\n metadata = self._metadata_for_project(project)\n combined_project_metadata.update(metadata)\n for doc in _document_details:\n doc_metadata = combined_project_metadata.get(doc[\"id\"])\n chunks += self._load_chunks_for_document(\n self.docset_id, doc, doc_metadata\n )\n elif self.file_paths:\n # local mode (for integration testing, or pre-downloaded XML)\n for path in self.file_paths:\n path = Path(path)\n with open(path, \"rb\") as file:\n chunks += self._parse_dgml(\n {\n DOCUMENT_ID_KEY: path.name,\n DOCUMENT_NAME_KEY: path.name,\n },\n file.read(),\n )\n return chunks", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/docugami.html"} +{"id": "2e737ea03e05-0", "text": "Source code for langchain.document_loaders.gcs_file\n\"\"\"Loading logic for loading documents from a GCS file.\"\"\"\nimport os\nimport tempfile\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class GCSFileLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from GCS.\"\"\"\n def __init__(self, project_name: str, bucket: str, blob: str):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.bucket = bucket\n self.blob = blob\n self.project_name = project_name\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from google.cloud import storage\n except ImportError:\n raise ValueError(\n \"Could not import google-cloud-storage python package. 
\"\n \"Please install it with `pip install google-cloud-storage`.\"\n )\n # Initialise a client\n storage_client = storage.Client(self.project_name)\n # Create a bucket object for our bucket\n bucket = storage_client.get_bucket(self.bucket)\n # Create a blob object from the filepath\n blob = bucket.blob(self.blob)\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.blob}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n # Download the file to a destination\n blob.download_to_filename(file_path)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gcs_file.html"} +{"id": "cd544fe23019-0", "text": "Source code for langchain.document_loaders.facebook_chat\n\"\"\"Loader that loads Facebook chat json dump.\"\"\"\nimport datetime\nimport json\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef concatenate_rows(row: dict) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\"\"\"\n sender = row[\"sender_name\"]\n text = row[\"content\"]\n date = datetime.datetime.fromtimestamp(row[\"timestamp_ms\"] / 1000).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class FacebookChatLoader(BaseLoader):\n \"\"\"Loader that loads Facebook messages json directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n text = \"\".join(\n concatenate_rows(message)\n for message in d[\"messages\"]\n if message.get(\"content\") and isinstance(message[\"content\"], str)\n )\n metadata = {\"source\": str(p)}\n return [Document(page_content=text, 
metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/facebook_chat.html"} +{"id": "f334071fc0b4-0", "text": "Source code for langchain.document_loaders.modern_treasury\n\"\"\"Loader that fetches data from Modern Treasury\"\"\"\nimport json\nimport urllib.request\nfrom base64 import b64encode\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_value\nMODERN_TREASURY_ENDPOINTS = {\n \"payment_orders\": \"https://app.moderntreasury.com/api/payment_orders\",\n \"expected_payments\": \"https://app.moderntreasury.com/api/expected_payments\",\n \"returns\": \"https://app.moderntreasury.com/api/returns\",\n \"incoming_payment_details\": \"https://app.moderntreasury.com/api/\\\nincoming_payment_details\",\n \"counterparties\": \"https://app.moderntreasury.com/api/counterparties\",\n \"internal_accounts\": \"https://app.moderntreasury.com/api/internal_accounts\",\n \"external_accounts\": \"https://app.moderntreasury.com/api/external_accounts\",\n \"transactions\": \"https://app.moderntreasury.com/api/transactions\",\n \"ledgers\": \"https://app.moderntreasury.com/api/ledgers\",\n \"ledger_accounts\": \"https://app.moderntreasury.com/api/ledger_accounts\",\n \"ledger_transactions\": \"https://app.moderntreasury.com/api/ledger_transactions\",\n \"events\": \"https://app.moderntreasury.com/api/events\",\n \"invoices\": \"https://app.moderntreasury.com/api/invoices\",\n}\n[docs]class ModernTreasuryLoader(BaseLoader):\n \"\"\"Loader that fetches data from Modern Treasury.\"\"\"\n def __init__(\n self,\n resource: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/modern_treasury.html"} +{"id": "f334071fc0b4-1", "text": "def __init__(\n self,\n resource: str,\n organization_id: Optional[str] = None,\n api_key: Optional[str] = None,\n ) -> 
None:\n self.resource = resource\n organization_id = organization_id or get_from_env(\n \"organization_id\", \"MODERN_TREASURY_ORGANIZATION_ID\"\n )\n api_key = api_key or get_from_env(\"api_key\", \"MODERN_TREASURY_API_KEY\")\n credentials = f\"{organization_id}:{api_key}\".encode(\"utf-8\")\n basic_auth_token = b64encode(credentials).decode(\"utf-8\")\n self.headers = {\"Authorization\": f\"Basic {basic_auth_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_value(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = MODERN_TREASURY_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/modern_treasury.html"} +{"id": "a9e8ce7c334f-0", "text": "Source code for langchain.document_loaders.s3_file\n\"\"\"Loading logic for loading documents from an s3 file.\"\"\"\nimport os\nimport tempfile\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class S3FileLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from s3.\"\"\"\n def __init__(self, bucket: str, key: str):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.bucket = bucket\n self.key = key\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import boto3\n except ImportError:\n raise ImportError(\n \"Could not import `boto3` python package. 
\"\n \"Please install it with `pip install boto3`.\"\n )\n s3 = boto3.client(\"s3\")\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.key}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n s3.download_file(self.bucket, self.key, file_path)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/s3_file.html"} +{"id": "2e59c0de8151-0", "text": "Source code for langchain.document_loaders.github\nfrom abc import ABC\nfrom datetime import datetime\nfrom typing import Dict, Iterator, List, Literal, Optional, Union\nimport requests\nfrom pydantic import BaseModel, root_validator, validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_dict_or_env\nclass BaseGitHubLoader(BaseLoader, BaseModel, ABC):\n \"\"\"Load issues of a GitHub repository.\"\"\"\n repo: str\n \"\"\"Name of repository\"\"\"\n access_token: str\n \"\"\"Personal access token - see https://github.com/settings/tokens?type=beta\"\"\"\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that access token exists in environment.\"\"\"\n values[\"access_token\"] = get_from_dict_or_env(\n values, \"access_token\", \"GITHUB_PERSONAL_ACCESS_TOKEN\"\n )\n return values\n @property\n def headers(self) -> Dict[str, str]:\n return {\n \"Accept\": \"application/vnd.github+json\",\n \"Authorization\": f\"Bearer {self.access_token}\",\n }\n[docs]class GitHubIssuesLoader(BaseGitHubLoader):\n include_prs: bool = True\n \"\"\"If True include Pull Requests in results, otherwise ignore them.\"\"\"\n milestone: Union[int, Literal[\"*\", \"none\"], None] = None\n \"\"\"If integer is passed, it should be a milestone's number field.\n If the string '*' is passed, issues with any milestone are accepted.\n If the string 'none' is passed, 
issues without milestones are returned.\n \"\"\"\n state: Optional[Literal[\"open\", \"closed\", \"all\"]] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} +{"id": "2e59c0de8151-1", "text": "state: Optional[Literal[\"open\", \"closed\", \"all\"]] = None\n \"\"\"Filter on issue state. Can be one of: 'open', 'closed', 'all'.\"\"\"\n assignee: Optional[str] = None\n \"\"\"Filter on assigned user. Pass 'none' for no user and '*' for any user.\"\"\"\n creator: Optional[str] = None\n \"\"\"Filter on the user that created the issue.\"\"\"\n mentioned: Optional[str] = None\n \"\"\"Filter on a user that's mentioned in the issue.\"\"\"\n labels: Optional[List[str]] = None\n \"\"\"Label names to filter on. Example: bug,ui,@high.\"\"\"\n sort: Optional[Literal[\"created\", \"updated\", \"comments\"]] = None\n \"\"\"What to sort results by. Can be one of: 'created', 'updated', 'comments'.\n Default is 'created'.\"\"\"\n direction: Optional[Literal[\"asc\", \"desc\"]] = None\n \"\"\"The direction to sort the results by. Can be one of: 'asc', 'desc'.\"\"\"\n since: Optional[str] = None\n \"\"\"Only show notifications updated after the given time.\n This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\"\"\"\n @validator(\"since\")\n def validate_since(cls, v: Optional[str]) -> Optional[str]:\n if v:\n try:\n datetime.strptime(v, \"%Y-%m-%dT%H:%M:%SZ\")\n except ValueError:\n raise ValueError(\n \"Invalid value for 'since'. Expected a date string in \"\n f\"YYYY-MM-DDTHH:MM:SSZ format. 
Received: {v}\"\n )\n return v\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} +{"id": "2e59c0de8151-2", "text": "[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"\n Get issues of a GitHub repository.\n Returns:\n A list of Documents with attributes:\n - page_content\n - metadata\n - url\n - title\n - creator\n - created_at\n - last_update_time\n - closed_time\n - number of comments\n - state\n - labels\n - assignee\n - assignees\n - milestone\n - locked\n - number\n - is_pull_request\n \"\"\"\n url: Optional[str] = self.url\n while url:\n response = requests.get(url, headers=self.headers)\n response.raise_for_status()\n issues = response.json()\n for issue in issues:\n doc = self.parse_issue(issue)\n if not self.include_prs and doc.metadata[\"is_pull_request\"]:\n continue\n yield doc\n if response.links and response.links.get(\"next\"):\n url = response.links[\"next\"][\"url\"]\n else:\n url = None\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Get issues of a GitHub repository.\n Returns:\n A list of Documents with attributes:\n - page_content\n - metadata\n - url\n - title\n - creator\n - created_at\n - last_update_time\n - closed_time\n - number of comments\n - state\n - labels\n - assignee\n - assignees\n - milestone\n - locked\n - number\n - is_pull_request\n \"\"\"\n return list(self.lazy_load())\n[docs] def parse_issue(self, issue: dict) -> Document:\n \"\"\"Create Document objects from a list of GitHub issues.\"\"\"\n metadata = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} +{"id": "2e59c0de8151-3", "text": "\"\"\"Create Document objects from a list of GitHub issues.\"\"\"\n metadata = {\n \"url\": issue[\"html_url\"],\n \"title\": issue[\"title\"],\n \"creator\": issue[\"user\"][\"login\"],\n \"created_at\": issue[\"created_at\"],\n \"comments\": 
issue[\"comments\"],\n \"state\": issue[\"state\"],\n \"labels\": [label[\"name\"] for label in issue[\"labels\"]],\n \"assignee\": issue[\"assignee\"][\"login\"] if issue[\"assignee\"] else None,\n \"milestone\": issue[\"milestone\"][\"title\"] if issue[\"milestone\"] else None,\n \"locked\": issue[\"locked\"],\n \"number\": issue[\"number\"],\n \"is_pull_request\": \"pull_request\" in issue,\n }\n content = issue[\"body\"] if issue[\"body\"] is not None else \"\"\n return Document(page_content=content, metadata=metadata)\n @property\n def query_params(self) -> str:\n labels = \",\".join(self.labels) if self.labels else self.labels\n query_params_dict = {\n \"milestone\": self.milestone,\n \"state\": self.state,\n \"assignee\": self.assignee,\n \"creator\": self.creator,\n \"mentioned\": self.mentioned,\n \"labels\": labels,\n \"sort\": self.sort,\n \"direction\": self.direction,\n \"since\": self.since,\n }\n query_params_list = [\n f\"{k}={v}\" for k, v in query_params_dict.items() if v is not None\n ]\n query_params = \"&\".join(query_params_list)\n return query_params\n @property\n def url(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} +{"id": "2e59c0de8151-4", "text": "return query_params\n @property\n def url(self) -> str:\n return f\"https://api.github.com/repos/{self.repo}/issues?{self.query_params}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/github.html"} +{"id": "d51fcbd3e856-0", "text": "Source code for langchain.document_loaders.discord\n\"\"\"Load from Discord chat dump\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import pandas as pd\n[docs]class DiscordChatLoader(BaseLoader):\n \"\"\"Load Discord chat logs.\"\"\"\n def __init__(self, chat_log: pd.DataFrame, 
user_id_col: str = \"ID\"):\n \"\"\"Initialize with a Pandas DataFrame containing chat logs.\"\"\"\n if not isinstance(chat_log, pd.DataFrame):\n raise ValueError(\n f\"Expected chat_log to be a pd.DataFrame, got {type(chat_log)}\"\n )\n self.chat_log = chat_log\n self.user_id_col = user_id_col\n[docs] def load(self) -> List[Document]:\n \"\"\"Load all chat messages.\"\"\"\n result = []\n for _, row in self.chat_log.iterrows():\n user_id = row[self.user_id_col]\n metadata = row.to_dict()\n metadata.pop(self.user_id_col)\n result.append(Document(page_content=user_id, metadata=metadata))\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/discord.html"} +{"id": "d9d7d3588634-0", "text": "Source code for langchain.document_loaders.fauna\nfrom typing import Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class FaunaLoader(BaseLoader):\n \"\"\"FaunaDB Loader.\n Attributes:\n query (str): The FQL query string to execute.\n page_content_field (str): The field that contains the content of each page.\n secret (str): The secret key for authenticating to FaunaDB.\n metadata_fields (Optional[Sequence[str]]):\n Optional list of field names to include in metadata.\n \"\"\"\n def __init__(\n self,\n query: str,\n page_content_field: str,\n secret: str,\n metadata_fields: Optional[Sequence[str]] = None,\n ):\n self.query = query\n self.page_content_field = page_content_field\n self.secret = secret\n self.metadata_fields = metadata_fields\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())\n[docs] def lazy_load(self) -> Iterator[Document]:\n try:\n from fauna import Page, fql\n from fauna.client import Client\n from fauna.encoding import QuerySuccess\n except ImportError:\n raise ImportError(\n \"Could not import fauna python package. 
\"\n \"Please install it with `pip install fauna`.\"\n )\n # Create Fauna Client\n client = Client(secret=self.secret)\n # Run FQL Query\n response: QuerySuccess = client.query(fql(self.query))\n page: Page = response.data\n for result in page:\n if result is not None:\n document_dict = dict(result.items())\n page_content = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/fauna.html"} +{"id": "d9d7d3588634-1", "text": "document_dict = dict(result.items())\n page_content = \"\"\n for key, value in document_dict.items():\n if key == self.page_content_field:\n page_content = value\n document: Document = Document(\n page_content=page_content,\n metadata={\"id\": result.id, \"ts\": result.ts},\n )\n yield document\n if page.after is not None:\n yield Document(\n page_content=\"Next Page Exists\",\n metadata={\"after\": page.after},\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/fauna.html"} +{"id": "817cbeceb3b8-0", "text": "Source code for langchain.document_loaders.arxiv\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.arxiv import ArxivAPIWrapper\n[docs]class ArxivLoader(BaseLoader):\n \"\"\"Loads a query result from arxiv.org into a list of Documents.\n Each document represents one Document.\n The loader converts the original PDF format into the text.\n \"\"\"\n def __init__(\n self,\n query: str,\n load_max_docs: Optional[int] = 100,\n load_all_available_meta: Optional[bool] = False,\n ):\n self.query = query\n self.load_max_docs = load_max_docs\n self.load_all_available_meta = load_all_available_meta\n[docs] def load(self) -> List[Document]:\n arxiv_client = ArxivAPIWrapper(\n load_max_docs=self.load_max_docs,\n load_all_available_meta=self.load_all_available_meta,\n )\n docs = arxiv_client.load(self.query)\n return docs", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/arxiv.html"} +{"id": "afe4f4ed5581-0", "text": "Source code for langchain.document_loaders.python\nimport tokenize\nfrom langchain.document_loaders.text import TextLoader\n[docs]class PythonLoader(TextLoader):\n \"\"\"\n Load Python files, respecting any non-default encoding if specified.\n \"\"\"\n def __init__(self, file_path: str):\n with open(file_path, \"rb\") as f:\n encoding, _ = tokenize.detect_encoding(f.readline)\n super().__init__(file_path=file_path, encoding=encoding)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/python.html"} +{"id": "e32be1752e69-0", "text": "Source code for langchain.document_loaders.bigquery\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n from google.auth.credentials import Credentials\n[docs]class BigQueryLoader(BaseLoader):\n \"\"\"Loads a query result from BigQuery into a list of documents.\n Each document represents one row of the result. The `page_content_columns`\n are written into the `page_content` of the document. The `metadata_columns`\n are written into the `metadata` of the document. By default, all columns\n are written into the `page_content` and none into the `metadata`.\n \"\"\"\n def __init__(\n self,\n query: str,\n project: Optional[str] = None,\n page_content_columns: Optional[List[str]] = None,\n metadata_columns: Optional[List[str]] = None,\n credentials: Optional[Credentials] = None,\n ):\n \"\"\"Initialize BigQuery document loader.\n Args:\n query: The query to run in BigQuery.\n project: Optional. The project to run the query in.\n page_content_columns: Optional. The columns to write into the `page_content`\n of the document.\n metadata_columns: Optional. 
The columns to write into the `metadata` of the\n document.\n credentials : google.auth.credentials.Credentials, optional\n Credentials for accessing Google APIs. Use this parameter to override\n default credentials, such as to use Compute Engine\n (`google.auth.compute_engine.Credentials`) or Service Account\n (`google.oauth2.service_account.Credentials`) credentials directly.\n \"\"\"\n self.query = query\n self.project = project\n self.page_content_columns = page_content_columns", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bigquery.html"} +{"id": "e32be1752e69-1", "text": "self.project = project\n self.page_content_columns = page_content_columns\n self.metadata_columns = metadata_columns\n self.credentials = credentials\n[docs] def load(self) -> List[Document]:\n try:\n from google.cloud import bigquery\n except ImportError as ex:\n raise ValueError(\n \"Could not import google-cloud-bigquery python package. \"\n \"Please install it with `pip install google-cloud-bigquery`.\"\n ) from ex\n bq_client = bigquery.Client(credentials=self.credentials, project=self.project)\n query_result = bq_client.query(self.query).result()\n docs: List[Document] = []\n page_content_columns = self.page_content_columns\n metadata_columns = self.metadata_columns\n if page_content_columns is None:\n page_content_columns = [column.name for column in query_result.schema]\n if metadata_columns is None:\n metadata_columns = []\n for row in query_result:\n page_content = \"\\n\".join(\n f\"{k}: {v}\" for k, v in row.items() if k in page_content_columns\n )\n metadata = {k: v for k, v in row.items() if k in metadata_columns}\n doc = Document(page_content=page_content, metadata=metadata)\n docs.append(doc)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bigquery.html"} +{"id": "b88583d666e1-0", "text": "Source code for langchain.document_loaders.azure_blob_storage_file\n\"\"\"Loading 
logic for loading documents from an Azure Blob Storage file.\"\"\"\nimport os\nimport tempfile\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class AzureBlobStorageFileLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from Azure Blob Storage.\"\"\"\n def __init__(self, conn_str: str, container: str, blob_name: str):\n \"\"\"Initialize with connection string, container and blob name.\"\"\"\n self.conn_str = conn_str\n self.container = container\n self.blob = blob_name\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from azure.storage.blob import BlobClient\n except ImportError as exc:\n raise ValueError(\n \"Could not import azure storage blob python package. \"\n \"Please install it with `pip install azure-storage-blob`.\"\n ) from exc\n client = BlobClient.from_connection_string(\n conn_str=self.conn_str, container_name=self.container, blob_name=self.blob\n )\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}/{self.container}/{self.blob}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n with open(f\"{file_path}\", \"wb\") as file:\n blob_data = client.download_blob()\n blob_data.readinto(file)\n loader = UnstructuredFileLoader(file_path)\n return loader.load()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/azure_blob_storage_file.html"} +{"id": "0d1fa9938cde-0", "text": "Source code for langchain.document_loaders.duckdb_loader\nfrom typing import Dict, List, Optional, cast\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class DuckDBLoader(BaseLoader):\n \"\"\"Loads a query result from DuckDB into a list of documents.\n Each document represents one row of the result. 
The `page_content_columns`\n are written into the `page_content` of the document. The `metadata_columns`\n are written into the `metadata` of the document. By default, all columns\n are written into the `page_content` and none into the `metadata`.\n \"\"\"\n def __init__(\n self,\n query: str,\n database: str = \":memory:\",\n read_only: bool = False,\n config: Optional[Dict[str, str]] = None,\n page_content_columns: Optional[List[str]] = None,\n metadata_columns: Optional[List[str]] = None,\n ):\n self.query = query\n self.database = database\n self.read_only = read_only\n self.config = config or {}\n self.page_content_columns = page_content_columns\n self.metadata_columns = metadata_columns\n[docs] def load(self) -> List[Document]:\n try:\n import duckdb\n except ImportError:\n raise ImportError(\n \"Could not import duckdb python package. \"\n \"Please install it with `pip install duckdb`.\"\n )\n docs = []\n with duckdb.connect(\n database=self.database, read_only=self.read_only, config=self.config\n ) as con:\n query_result = con.execute(self.query)\n results = query_result.fetchall()\n description = cast(list, query_result.description)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/duckdb_loader.html"} +{"id": "0d1fa9938cde-1", "text": "results = query_result.fetchall()\n description = cast(list, query_result.description)\n field_names = [c[0] for c in description]\n if self.page_content_columns is None:\n page_content_columns = field_names\n else:\n page_content_columns = self.page_content_columns\n if self.metadata_columns is None:\n metadata_columns = []\n else:\n metadata_columns = self.metadata_columns\n for result in results:\n page_content = \"\\n\".join(\n f\"{column}: {result[field_names.index(column)]}\"\n for column in page_content_columns\n )\n metadata = {\n column: result[field_names.index(column)]\n for column in metadata_columns\n }\n doc = Document(page_content=page_content, metadata=metadata)\n 
docs.append(doc)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/duckdb_loader.html"} +{"id": "4505a4fd5c8e-0", "text": "Source code for langchain.document_loaders.notion\n\"\"\"Loader that loads Notion directory dump.\"\"\"\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class NotionDirectoryLoader(BaseLoader):\n \"\"\"Loader that loads Notion directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n ps = list(Path(self.file_path).glob(\"**/*.md\"))\n docs = []\n for p in ps:\n with open(p) as f:\n text = f.read()\n metadata = {\"source\": str(p)}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notion.html"} +{"id": "9c2d37c0fdfb-0", "text": "Source code for langchain.document_loaders.psychic\n\"\"\"Loader that loads documents from Psychic.dev.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class PsychicLoader(BaseLoader):\n \"\"\"Loader that loads documents from Psychic.dev.\"\"\"\n def __init__(self, api_key: str, connector_id: str, connection_id: str):\n \"\"\"Initialize with API key, connector id, and connection id.\"\"\"\n try:\n from psychicapi import ConnectorId, Psychic # noqa: F401\n except ImportError:\n raise ImportError(\n \"`psychicapi` package not found, please run `pip install psychicapi`\"\n )\n self.psychic = Psychic(secret_key=api_key)\n self.connector_id = ConnectorId(connector_id)\n self.connection_id = connection_id\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n psychic_docs = 
self.psychic.get_documents(self.connector_id, self.connection_id)\n return [\n Document(\n page_content=doc[\"content\"],\n metadata={\"title\": doc[\"title\"], \"source\": doc[\"uri\"]},\n )\n for doc in psychic_docs\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/psychic.html"} +{"id": "d046423717b9-0", "text": "Source code for langchain.document_loaders.apify_dataset\n\"\"\"Logic for loading documents from Apify datasets.\"\"\"\nfrom typing import Any, Callable, Dict, List\nfrom pydantic import BaseModel, root_validator\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ApifyDatasetLoader(BaseLoader, BaseModel):\n \"\"\"Logic for loading documents from Apify datasets.\"\"\"\n apify_client: Any\n dataset_id: str\n \"\"\"The ID of the dataset on the Apify platform.\"\"\"\n dataset_mapping_function: Callable[[Dict], Document]\n \"\"\"A custom function that takes a single dictionary (an Apify dataset item)\n and converts it to an instance of the Document class.\"\"\"\n def __init__(\n self, dataset_id: str, dataset_mapping_function: Callable[[Dict], Document]\n ):\n \"\"\"Initialize the loader with an Apify dataset ID and a mapping function.\n Args:\n dataset_id (str): The ID of the dataset on the Apify platform.\n dataset_mapping_function (Callable): A function that takes a single\n dictionary (an Apify dataset item) and converts it to an instance\n of the Document class.\n \"\"\"\n super().__init__(\n dataset_id=dataset_id, dataset_mapping_function=dataset_mapping_function\n )\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate environment.\"\"\"\n try:\n from apify_client import ApifyClient\n values[\"apify_client\"] = ApifyClient()\n except ImportError:\n raise ImportError(\n \"Could not import apify-client Python package. 
\"\n \"Please install it with `pip install apify-client`.\"\n )\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/apify_dataset.html"} +{"id": "d046423717b9-1", "text": ")\n return values\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n dataset_items = self.apify_client.dataset(self.dataset_id).list_items().items\n return list(map(self.dataset_mapping_function, dataset_items))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/apify_dataset.html"} +{"id": "fc8cee230c82-0", "text": "Source code for langchain.document_loaders.html\n\"\"\"Loader that uses unstructured to load HTML files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredHTMLLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load HTML files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.html import partition_html\n return partition_html(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/html.html"} +{"id": "10a17fd68edb-0", "text": "Source code for langchain.document_loaders.s3_directory\n\"\"\"Loading logic for loading documents from an s3 directory.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.s3_file import S3FileLoader\n[docs]class S3DirectoryLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from s3.\"\"\"\n def __init__(self, bucket: str, prefix: str = \"\"):\n \"\"\"Initialize with bucket and key name.\"\"\"\n self.bucket = bucket\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n import boto3\n except ImportError:\n raise ImportError(\n \"Could not import boto3 
python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n s3 = boto3.resource(\"s3\")\n bucket = s3.Bucket(self.bucket)\n docs = []\n for obj in bucket.objects.filter(Prefix=self.prefix):\n loader = S3FileLoader(self.bucket, obj.key)\n docs.extend(loader.load())\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/s3_directory.html"} +{"id": "6cdec0e774bd-0", "text": "Source code for langchain.document_loaders.url\n\"\"\"Loader that uses unstructured to load HTML files.\"\"\"\nimport logging\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class UnstructuredURLLoader(BaseLoader):\n \"\"\"Loader that uses unstructured to load HTML files.\"\"\"\n def __init__(\n self,\n urls: List[str],\n continue_on_failure: bool = True,\n mode: str = \"single\",\n show_progress_bar: bool = False,\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n try:\n import unstructured # noqa:F401\n from unstructured.__version__ import __version__ as __unstructured_version__\n self.__version = __unstructured_version__\n except ImportError:\n raise ValueError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n self._validate_mode(mode)\n self.mode = mode\n headers = unstructured_kwargs.pop(\"headers\", {})\n if len(headers.keys()) != 0:\n warn_about_headers = False\n if self.__is_non_html_available():\n warn_about_headers = not self.__is_headers_available_for_non_html()\n else:\n warn_about_headers = not self.__is_headers_available_for_html()\n if warn_about_headers:\n logger.warning(\n \"You are using an old version of unstructured. 
\"\n \"The headers parameter is ignored\"\n )\n self.urls = urls\n self.continue_on_failure = continue_on_failure\n self.headers = headers\n self.unstructured_kwargs = unstructured_kwargs\n self.show_progress_bar = show_progress_bar", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url.html"} +{"id": "6cdec0e774bd-1", "text": "self.unstructured_kwargs = unstructured_kwargs\n self.show_progress_bar = show_progress_bar\n def _validate_mode(self, mode: str) -> None:\n _valid_modes = {\"single\", \"elements\"}\n if mode not in _valid_modes:\n raise ValueError(\n f\"Got {mode} for `mode`, but should be one of `{_valid_modes}`\"\n )\n def __is_headers_available_for_html(self) -> bool:\n _unstructured_version = self.__version.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n return unstructured_version >= (0, 5, 7)\n def __is_headers_available_for_non_html(self) -> bool:\n _unstructured_version = self.__version.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n return unstructured_version >= (0, 5, 13)\n def __is_non_html_available(self) -> bool:\n _unstructured_version = self.__version.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n return unstructured_version >= (0, 5, 12)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n from unstructured.partition.auto import partition\n from unstructured.partition.html import partition_html\n docs: List[Document] = list()\n if self.show_progress_bar:\n try:\n from tqdm import tqdm\n except ImportError as e:\n raise ImportError(\n \"Package tqdm must be installed if show_progress_bar=True. 
\"\n \"Please install with 'pip install tqdm' or set \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url.html"} +{"id": "6cdec0e774bd-2", "text": "\"Please install with 'pip install tqdm' or set \"\n \"show_progress_bar=False.\"\n ) from e\n urls = tqdm(self.urls)\n else:\n urls = self.urls\n for url in urls:\n try:\n if self.__is_non_html_available():\n if self.__is_headers_available_for_non_html():\n elements = partition(\n url=url, headers=self.headers, **self.unstructured_kwargs\n )\n else:\n elements = partition(url=url, **self.unstructured_kwargs)\n else:\n if self.__is_headers_available_for_html():\n elements = partition_html(\n url=url, headers=self.headers, **self.unstructured_kwargs\n )\n else:\n elements = partition_html(url=url, **self.unstructured_kwargs)\n except Exception as e:\n if self.continue_on_failure:\n logger.error(f\"Error fetching or processing {url}, exception: {e}\")\n continue\n else:\n raise e\n if self.mode == \"single\":\n text = \"\\n\\n\".join([str(el) for el in elements])\n metadata = {\"source\": url}\n docs.append(Document(page_content=text, metadata=metadata))\n elif self.mode == \"elements\":\n for element in elements:\n metadata = element.metadata.to_dict()\n metadata[\"category\"] = element.category\n docs.append(Document(page_content=str(element), metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/url.html"} +{"id": "568ac3726a88-0", "text": "Source code for langchain.document_loaders.onedrive\n\"\"\"Loader that loads data from OneDrive\"\"\"\nfrom __future__ import annotations\nimport logging\nimport os\nimport tempfile\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Type, Union\nfrom pydantic import BaseModel, BaseSettings, Field, FilePath, SecretStr\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import 
BaseLoader\nfrom langchain.document_loaders.onedrive_file import OneDriveFileLoader\nif TYPE_CHECKING:\n from O365 import Account\n from O365.drive import Drive, Folder\nSCOPES = [\"offline_access\", \"Files.Read.All\"]\nlogger = logging.getLogger(__name__)\nclass _OneDriveSettings(BaseSettings):\n client_id: str = Field(..., env=\"O365_CLIENT_ID\")\n client_secret: SecretStr = Field(..., env=\"O365_CLIENT_SECRET\")\n class Config:\n env_prefix = \"\"\n case_sentive = False\n env_file = \".env\"\nclass _OneDriveTokenStorage(BaseSettings):\n token_path: FilePath = Field(Path.home() / \".credentials\" / \"o365_token.txt\")\nclass _FileType(str, Enum):\n DOC = \"doc\"\n DOCX = \"docx\"\n PDF = \"pdf\"\nclass _SupportedFileTypes(BaseModel):\n file_types: List[_FileType]\n def fetch_mime_types(self) -> Dict[str, str]:\n mime_types_mapping = {}\n for file_type in self.file_types:\n if file_type.value == \"doc\":\n mime_types_mapping[file_type.value] = \"application/msword\"\n elif file_type.value == \"docx\":\n mime_types_mapping[\n file_type.value", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"} +{"id": "568ac3726a88-1", "text": "mime_types_mapping[\n file_type.value\n ] = \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\" # noqa: E501\n elif file_type.value == \"pdf\":\n mime_types_mapping[file_type.value] = \"application/pdf\"\n return mime_types_mapping\n[docs]class OneDriveLoader(BaseLoader, BaseModel):\n settings: _OneDriveSettings = Field(default_factory=_OneDriveSettings)\n drive_id: str = Field(...)\n folder_path: Optional[str] = None\n object_ids: Optional[List[str]] = None\n auth_with_token: bool = False\n def _auth(self) -> Type[Account]:\n \"\"\"\n Authenticates the OneDrive API client using the specified\n authentication method and returns the Account object.\n Returns:\n Type[Account]: The authenticated Account object.\n \"\"\"\n try:\n from O365 import 
FileSystemTokenBackend\n except ImportError:\n raise ImportError(\n \"O365 package not found, please install it with `pip install o365`\"\n )\n if self.auth_with_token:\n token_storage = _OneDriveTokenStorage()\n token_path = token_storage.token_path\n token_backend = FileSystemTokenBackend(\n token_path=token_path.parent, token_filename=token_path.name\n )\n account = Account(\n credentials=(\n self.settings.client_id,\n self.settings.client_secret.get_secret_value(),\n ),\n scopes=SCOPES,\n token_backend=token_backend,\n **{\"raise_http_errors\": False},\n )\n else:\n token_backend = FileSystemTokenBackend(\n token_path=Path.home() / \".credentials\"\n )\n account = Account(\n credentials=(\n self.settings.client_id,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"} +{"id": "568ac3726a88-2", "text": ")\n account = Account(\n credentials=(\n self.settings.client_id,\n self.settings.client_secret.get_secret_value(),\n ),\n scopes=SCOPES,\n token_backend=token_backend,\n **{\"raise_http_errors\": False},\n )\n # make the auth\n account.authenticate()\n return account\n def _get_folder_from_path(self, drive: Type[Drive]) -> Union[Folder, Drive]:\n \"\"\"\n Returns the folder or drive object located at the\n specified path relative to the given drive.\n Args:\n drive (Type[Drive]): The root drive from which the folder path is relative.\n Returns:\n Union[Folder, Drive]: The folder or drive object\n located at the specified path.\n Raises:\n FileNotFoundError: If the path does not exist.\n \"\"\"\n subfolder_drive = drive\n if self.folder_path is None:\n return subfolder_drive\n subfolders = [f for f in self.folder_path.split(\"/\") if f != \"\"]\n if len(subfolders) == 0:\n return subfolder_drive\n items = subfolder_drive.get_items()\n for subfolder in subfolders:\n try:\n subfolder_drive = list(filter(lambda x: subfolder in x.name, items))[0]\n items = subfolder_drive.get_items()\n except (IndexError, 
AttributeError):\n raise FileNotFoundError(\"Path {} does not exist.\".format(self.folder_path))\n return subfolder_drive\n def _load_from_folder(self, folder: Type[Folder]) -> List[Document]:\n \"\"\"\n Loads all supported document files from the specified folder\n and returns a list of Document objects.\n Args:\n folder (Type[Folder]): The folder object to load the documents from.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"} +{"id": "568ac3726a88-3", "text": "folder (Type[Folder]): The folder object to load the documents from.\n Returns:\n List[Document]: A list of Document objects representing\n the loaded documents.\n \"\"\"\n docs = []\n file_types = _SupportedFileTypes(file_types=[\"doc\", \"docx\", \"pdf\"])\n file_mime_types = file_types.fetch_mime_types()\n items = folder.get_items()\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n for file in items:\n if file.is_file:\n if file.mime_type in list(file_mime_types.values()):\n loader = OneDriveFileLoader(file=file)\n docs.extend(loader.load())\n return docs\n def _load_from_object_ids(self, drive: Type[Drive]) -> List[Document]:\n \"\"\"\n Loads all supported document files from the specified OneDrive\n drive based on their object IDs and returns a list\n of Document objects.\n Args:\n drive (Type[Drive]): The OneDrive drive object\n to load the documents from.\n Returns:\n List[Document]: A list of Document objects representing\n the loaded documents.\n \"\"\"\n docs = []\n file_types = _SupportedFileTypes(file_types=[\"doc\", \"docx\", \"pdf\"])\n file_mime_types = file_types.fetch_mime_types()\n with tempfile.TemporaryDirectory() as temp_dir:\n file_path = f\"{temp_dir}\"\n os.makedirs(os.path.dirname(file_path), exist_ok=True)\n for object_id in self.object_ids if self.object_ids else [\"\"]:\n file = drive.get_item(object_id)\n if not file:\n 
logging.warning(\n \"There isn't a file with \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"} +{"id": "568ac3726a88-4", "text": "logging.warning(\n \"There isn't a file with \"\n f\"object_id {object_id} in drive {drive}.\"\n )\n continue\n if file.is_file:\n if file.mime_type in list(file_mime_types.values()):\n loader = OneDriveFileLoader(file=file)\n docs.extend(loader.load())\n return docs\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Loads all supported document files from the specified OneDrive drive\n and returns a list of Document objects.\n Returns:\n List[Document]: A list of Document objects\n representing the loaded documents.\n Raises:\n ValueError: If the specified drive ID\n does not correspond to a drive in the OneDrive storage.\n \"\"\"\n account = self._auth()\n storage = account.storage()\n drive = storage.get_drive(self.drive_id)\n docs: List[Document] = []\n if not drive:\n raise ValueError(f\"There isn't a drive with id {self.drive_id}.\")\n if self.folder_path:\n folder = self._get_folder_from_path(drive=drive)\n docs.extend(self._load_from_folder(folder=folder))\n elif self.object_ids:\n docs.extend(self._load_from_object_ids(drive=drive))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/onedrive.html"} +{"id": "c9c1e3e38c8b-0", "text": "Source code for langchain.document_loaders.rst\n\"\"\"Loader that loads RST files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredRSTLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load RST files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.7.5\")\n super().__init__(file_path=file_path, mode=mode, 
**unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.rst import partition_rst\n return partition_rst(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/rst.html"} +{"id": "a96c1c8796b3-0", "text": "Source code for langchain.document_loaders.open_city_data\nfrom typing import Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class OpenCityDataLoader(BaseLoader):\n \"\"\"Loader that loads Open city data.\"\"\"\n def __init__(self, city_id: str, dataset_id: str, limit: int):\n \"\"\"Initialize with dataset_id\"\"\"\n \"\"\" Example: https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6 \"\"\"\n \"\"\" e.g., city_id = data.sfgov.org \"\"\"\n \"\"\" e.g., dataset_id = vw6y-z8j6 \"\"\"\n self.city_id = city_id\n self.dataset_id = dataset_id\n self.limit = limit\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load records.\"\"\"\n from sodapy import Socrata\n client = Socrata(self.city_id, None)\n results = client.get(self.dataset_id, limit=self.limit)\n for record in results:\n yield Document(\n page_content=str(record),\n metadata={\n \"source\": self.city_id + \"_\" + self.dataset_id,\n },\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load records.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/open_city_data.html"} +{"id": "4833404ceccd-0", "text": "Source code for langchain.document_loaders.readthedocs\n\"\"\"Loader that loads ReadTheDocs documentation directory dump.\"\"\"\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Tuple, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ReadTheDocsLoader(BaseLoader):\n \"\"\"Loader that loads ReadTheDocs documentation 
directory dump.\"\"\"\n def __init__(\n self,\n path: Union[str, Path],\n encoding: Optional[str] = None,\n errors: Optional[str] = None,\n custom_html_tag: Optional[Tuple[str, dict]] = None,\n **kwargs: Optional[Any]\n ):\n \"\"\"\n Initialize ReadTheDocsLoader\n The loader loops over all files under `path` and extract the actual content of\n the files by retrieving main html tags. Default main html tags include\n `
<main id=\"main-content\">`, `<div role=\"main\">`, and `<article role=\"main\">
`. You\n can also define your own html tags by passing custom_html_tag, e.g.\n `(\"div\", \"class=main\")`. The loader iterates html tags with the order of\n custom html tags (if exists) and default html tags. If any of the tags is not\n empty, the loop will break and retrieve the content out of that tag.\n Args:\n path: The location of pulled readthedocs folder.\n encoding: The encoding with which to open the documents.\n errors: Specifies how encoding and decoding errors are to be handled\u2014this\n cannot be used in binary mode.\n custom_html_tag: Optional custom html tag to retrieve the content from\n files.\n \"\"\"\n try:\n from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"} +{"id": "4833404ceccd-1", "text": "from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(\n \"Could not import python packages. \"\n \"Please install it with `pip install beautifulsoup4`. 
\"\n )\n try:\n _ = BeautifulSoup(\n \"Parser builder library test.\", **kwargs\n )\n except Exception as e:\n raise ValueError(\"Parsing kwargs do not appear valid\") from e\n self.file_path = Path(path)\n self.encoding = encoding\n self.errors = errors\n self.custom_html_tag = custom_html_tag\n self.bs_kwargs = kwargs\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n docs = []\n for p in self.file_path.rglob(\"*\"):\n if p.is_dir():\n continue\n with open(p, encoding=self.encoding, errors=self.errors) as f:\n text = self._clean_data(f.read())\n metadata = {\"source\": str(p)}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\n def _clean_data(self, data: str) -> str:\n from bs4 import BeautifulSoup\n soup = BeautifulSoup(data, **self.bs_kwargs)\n # default tags\n html_tags = [\n (\"div\", {\"role\": \"main\"}),\n (\"main\", {\"id\": \"main-content\"}),\n ]\n if self.custom_html_tag is not None:\n html_tags.append(self.custom_html_tag)\n text = None\n # reversed order. 
check the custom one first\n for tag, attrs in html_tags[::-1]:\n text = soup.find(tag, attrs)\n # if found, break\n if text is not None:\n break\n if text is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"} +{"id": "4833404ceccd-2", "text": "if text is not None:\n break\n if text is not None:\n text = text.get_text()\n else:\n text = \"\"\n # trim empty lines\n return \"\\n\".join([t for t in text.split(\"\\n\") if t])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/readthedocs.html"} +{"id": "5a75d765fbba-0", "text": "Source code for langchain.document_loaders.twitter\n\"\"\"Twitter document loader.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import tweepy\n from tweepy import OAuth2BearerHandler, OAuthHandler\ndef _dependable_tweepy_import() -> tweepy:\n try:\n import tweepy\n except ImportError:\n raise ImportError(\n \"tweepy package not found, please install it with `pip install tweepy`\"\n )\n return tweepy\n[docs]class TwitterTweetLoader(BaseLoader):\n \"\"\"Twitter tweets loader.\n Read tweets of user twitter handle.\n First you need to go to\n `https://developer.twitter.com/en/docs/twitter-api\n /getting-started/getting-access-to-the-twitter-api`\n to get your token. 
And create a v2 version of the app.\n \"\"\"\n def __init__(\n self,\n auth_handler: Union[OAuthHandler, OAuth2BearerHandler],\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ):\n self.auth = auth_handler\n self.twitter_users = twitter_users\n self.number_tweets = number_tweets\n[docs] def load(self) -> List[Document]:\n \"\"\"Load tweets.\"\"\"\n tweepy = _dependable_tweepy_import()\n api = tweepy.API(self.auth, parser=tweepy.parsers.JSONParser())\n results: List[Document] = []\n for username in self.twitter_users:\n tweets = api.user_timeline(screen_name=username, count=self.number_tweets)\n user = api.get_user(screen_name=username)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"} +{"id": "5a75d765fbba-1", "text": "user = api.get_user(screen_name=username)\n docs = self._format_tweets(tweets, user)\n results.extend(docs)\n return results\n def _format_tweets(\n self, tweets: List[Dict[str, Any]], user_info: dict\n ) -> Iterable[Document]:\n \"\"\"Format tweets into a string.\"\"\"\n for tweet in tweets:\n metadata = {\n \"created_at\": tweet[\"created_at\"],\n \"user_info\": user_info,\n }\n yield Document(\n page_content=tweet[\"text\"],\n metadata=metadata,\n )\n[docs] @classmethod\n def from_bearer_token(\n cls,\n oauth2_bearer_token: str,\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ) -> TwitterTweetLoader:\n \"\"\"Create a TwitterTweetLoader from OAuth2 bearer token.\"\"\"\n tweepy = _dependable_tweepy_import()\n auth = tweepy.OAuth2BearerHandler(oauth2_bearer_token)\n return cls(\n auth_handler=auth,\n twitter_users=twitter_users,\n number_tweets=number_tweets,\n )\n[docs] @classmethod\n def from_secrets(\n cls,\n access_token: str,\n access_token_secret: str,\n consumer_key: str,\n consumer_secret: str,\n twitter_users: Sequence[str],\n number_tweets: Optional[int] = 100,\n ) -> TwitterTweetLoader:\n \"\"\"Create a TwitterTweetLoader from access 
tokens and secrets.\"\"\"\n tweepy = _dependable_tweepy_import()\n auth = tweepy.OAuthHandler(\n access_token=access_token,\n access_token_secret=access_token_secret,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"} +{"id": "5a75d765fbba-2", "text": "access_token=access_token,\n access_token_secret=access_token_secret,\n consumer_key=consumer_key,\n consumer_secret=consumer_secret,\n )\n return cls(\n auth_handler=auth,\n twitter_users=twitter_users,\n number_tweets=number_tweets,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/twitter.html"} +{"id": "537abde9f427-0", "text": "Source code for langchain.document_loaders.iugu\n\"\"\"Loader that fetches data from IUGU\"\"\"\nimport json\nimport urllib.request\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_dict\nIUGU_ENDPOINTS = {\n \"invoices\": \"https://api.iugu.com/v1/invoices\",\n \"customers\": \"https://api.iugu.com/v1/customers\",\n \"charges\": \"https://api.iugu.com/v1/charges\",\n \"subscriptions\": \"https://api.iugu.com/v1/subscriptions\",\n \"plans\": \"https://api.iugu.com/v1/plans\",\n}\n[docs]class IuguLoader(BaseLoader):\n \"\"\"Loader that fetches data from IUGU.\"\"\"\n def __init__(self, resource: str, api_token: Optional[str] = None) -> None:\n self.resource = resource\n api_token = api_token or get_from_env(\"api_token\", \"IUGU_API_TOKEN\")\n self.headers = {\"Authorization\": f\"Bearer {api_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_dict(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def 
_get_resource(self) -> List[Document]:\n endpoint = IUGU_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/iugu.html"} +{"id": "537abde9f427-1", "text": "[docs] def load(self) -> List[Document]:\n return self._get_resource()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/iugu.html"} +{"id": "e5f959ed2e0c-0", "text": "Source code for langchain.document_loaders.reddit\n\"\"\"Reddit document loader.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Iterable, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import praw\ndef _dependable_praw_import() -> praw:\n try:\n import praw\n except ImportError:\n raise ValueError(\n \"praw package not found, please install it with `pip install praw`\"\n )\n return praw\n[docs]class RedditPostsLoader(BaseLoader):\n \"\"\"Reddit posts loader.\n Read posts on a subreddit.\n First you need to go to\n https://www.reddit.com/prefs/apps/\n and create your application\n \"\"\"\n def __init__(\n self,\n client_id: str,\n client_secret: str,\n user_agent: str,\n search_queries: Sequence[str],\n mode: str,\n categories: Sequence[str] = [\"new\"],\n number_posts: Optional[int] = 10,\n ):\n self.client_id = client_id\n self.client_secret = client_secret\n self.user_agent = user_agent\n self.search_queries = search_queries\n self.mode = mode\n self.categories = categories\n self.number_posts = number_posts\n[docs] def load(self) -> List[Document]:\n \"\"\"Load reddits.\"\"\"\n praw = _dependable_praw_import()\n reddit = praw.Reddit(\n client_id=self.client_id,\n client_secret=self.client_secret,\n user_agent=self.user_agent,\n )\n results: List[Document] = []\n if self.mode == 
\"subreddit\":\n for search_query in self.search_queries:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"} +{"id": "e5f959ed2e0c-1", "text": "if self.mode == \"subreddit\":\n for search_query in self.search_queries:\n for category in self.categories:\n docs = self._subreddit_posts_loader(\n search_query=search_query, category=category, reddit=reddit\n )\n results.extend(docs)\n elif self.mode == \"username\":\n for search_query in self.search_queries:\n for category in self.categories:\n docs = self._user_posts_loader(\n search_query=search_query, category=category, reddit=reddit\n )\n results.extend(docs)\n else:\n raise ValueError(\n \"mode not correct, please enter 'username' or 'subreddit' as mode\"\n )\n return results\n def _subreddit_posts_loader(\n self, search_query: str, category: str, reddit: praw.reddit.Reddit\n ) -> Iterable[Document]:\n subreddit = reddit.subreddit(search_query)\n method = getattr(subreddit, category)\n cat_posts = method(limit=self.number_posts)\n \"\"\"Format reddit posts into a string.\"\"\"\n for post in cat_posts:\n metadata = {\n \"post_subreddit\": post.subreddit_name_prefixed,\n \"post_category\": category,\n \"post_title\": post.title,\n \"post_score\": post.score,\n \"post_id\": post.id,\n \"post_url\": post.url,\n \"post_author\": post.author,\n }\n yield Document(\n page_content=post.selftext,\n metadata=metadata,\n )\n def _user_posts_loader(\n self, search_query: str, category: str, reddit: praw.reddit.Reddit\n ) -> Iterable[Document]:\n user = reddit.redditor(search_query)\n method = getattr(user.submissions, category)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"} +{"id": "e5f959ed2e0c-2", "text": "method = getattr(user.submissions, category)\n cat_posts = method(limit=self.number_posts)\n \"\"\"Format reddit posts into a string.\"\"\"\n for post in cat_posts:\n metadata = {\n \"post_subreddit\": 
post.subreddit_name_prefixed,\n \"post_category\": category,\n \"post_title\": post.title,\n \"post_score\": post.score,\n \"post_id\": post.id,\n \"post_url\": post.url,\n \"post_author\": post.author,\n }\n yield Document(\n page_content=post.selftext,\n metadata=metadata,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/reddit.html"} +{"id": "86337f7d84e8-0", "text": "Source code for langchain.document_loaders.azure_blob_storage_container\n\"\"\"Loading logic for loading documents from an Azure Blob Storage container.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.azure_blob_storage_file import (\n AzureBlobStorageFileLoader,\n)\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class AzureBlobStorageContainerLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from Azure Blob Storage.\"\"\"\n def __init__(self, conn_str: str, container: str, prefix: str = \"\"):\n \"\"\"Initialize with connection string, container and blob prefix.\"\"\"\n self.conn_str = conn_str\n self.container = container\n self.prefix = prefix\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from azure.storage.blob import ContainerClient\n except ImportError as exc:\n raise ValueError(\n \"Could not import azure storage blob python package. 
\"\n \"Please install it with `pip install azure-storage-blob`.\"\n ) from exc\n container = ContainerClient.from_connection_string(\n conn_str=self.conn_str, container_name=self.container\n )\n docs = []\n blob_list = container.list_blobs(name_starts_with=self.prefix)\n for blob in blob_list:\n loader = AzureBlobStorageFileLoader(\n self.conn_str, self.container, blob.name # type: ignore\n )\n docs.extend(loader.load())\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/azure_blob_storage_container.html"} +{"id": "6440543c7b56-0", "text": "Source code for langchain.document_loaders.markdown\n\"\"\"Loader that loads Markdown files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredMarkdownLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load markdown files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.__version__ import __version__ as __unstructured_version__\n from unstructured.partition.md import partition_md\n # NOTE(MthwRobinson) - enables the loader to work when you're using pre-release\n # versions of unstructured like 0.4.17-dev1\n _unstructured_version = __unstructured_version__.split(\"-\")[0]\n unstructured_version = tuple([int(x) for x in _unstructured_version.split(\".\")])\n if unstructured_version < (0, 4, 16):\n raise ValueError(\n f\"You are on unstructured version {__unstructured_version__}. 
\"\n \"Partitioning markdown files is only supported in unstructured>=0.4.16.\"\n )\n return partition_md(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/markdown.html"} +{"id": "5ec96b707831-0", "text": "Source code for langchain.document_loaders.stripe\n\"\"\"Loader that fetches data from Stripe\"\"\"\nimport json\nimport urllib.request\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env, stringify_dict\nSTRIPE_ENDPOINTS = {\n \"balance_transactions\": \"https://api.stripe.com/v1/balance_transactions\",\n \"charges\": \"https://api.stripe.com/v1/charges\",\n \"customers\": \"https://api.stripe.com/v1/customers\",\n \"events\": \"https://api.stripe.com/v1/events\",\n \"refunds\": \"https://api.stripe.com/v1/refunds\",\n \"disputes\": \"https://api.stripe.com/v1/disputes\",\n}\n[docs]class StripeLoader(BaseLoader):\n \"\"\"Loader that fetches data from Stripe.\"\"\"\n def __init__(self, resource: str, access_token: Optional[str] = None) -> None:\n self.resource = resource\n access_token = access_token or get_from_env(\n \"access_token\", \"STRIPE_ACCESS_TOKEN\"\n )\n self.headers = {\"Authorization\": f\"Bearer {access_token}\"}\n def _make_request(self, url: str) -> List[Document]:\n request = urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_dict(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = STRIPE_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/stripe.html"} +{"id": "5ec96b707831-1", "text": "if 
endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/stripe.html"} +{"id": "fcfde8a0f140-0", "text": "Source code for langchain.document_loaders.ifixit\n\"\"\"Loader that loads iFixit data.\"\"\"\nfrom typing import List, Optional\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.web_base import WebBaseLoader\nIFIXIT_BASE_URL = \"https://www.ifixit.com/api/2.0\"\n[docs]class IFixitLoader(BaseLoader):\n \"\"\"Load iFixit repair guides, device wikis and answers.\n iFixit is the largest, open repair community on the web. The site contains nearly\n 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is\n licensed under CC-BY.\n This loader will allow you to download the text of a repair guide, text of Q&A's\n and wikis from devices on iFixit using their open APIs and web scraping.\n \"\"\"\n def __init__(self, web_path: str):\n \"\"\"Initialize with web path.\"\"\"\n if not web_path.startswith(\"https://www.ifixit.com\"):\n raise ValueError(\"web path must start with 'https://www.ifixit.com'\")\n path = web_path.replace(\"https://www.ifixit.com\", \"\")\n allowed_paths = [\"/Device\", \"/Guide\", \"/Answers\", \"/Teardown\"]\n \"\"\" TODO: Add /Wiki \"\"\"\n if not any(path.startswith(allowed_path) for allowed_path in allowed_paths):\n raise ValueError(\n \"web path must start with /Device, /Guide, /Teardown or /Answers\"\n )\n pieces = [x for x in path.split(\"/\") if x]\n \"\"\"Teardowns are just guides by a different name\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} +{"id": "fcfde8a0f140-1", "text": "\"\"\"Teardowns are just guides by a different name\"\"\"\n self.page_type = pieces[0] if 
pieces[0] != \"Teardown\" else \"Guide\"\n if self.page_type == \"Guide\" or self.page_type == \"Answers\":\n self.id = pieces[2]\n else:\n self.id = pieces[1]\n self.web_path = web_path\n[docs] def load(self) -> List[Document]:\n if self.page_type == \"Device\":\n return self.load_device()\n elif self.page_type == \"Guide\" or self.page_type == \"Teardown\":\n return self.load_guide()\n elif self.page_type == \"Answers\":\n return self.load_questions_and_answers()\n else:\n raise ValueError(\"Unknown page type: \" + self.page_type)\n[docs] @staticmethod\n def load_suggestions(query: str = \"\", doc_type: str = \"all\") -> List[Document]:\n res = requests.get(\n IFIXIT_BASE_URL + \"/suggest/\" + query + \"?doctypes=\" + doc_type\n )\n if res.status_code != 200:\n raise ValueError(\n 'Could not load suggestions for \"' + query + '\"\\n' + res.json()\n )\n data = res.json()\n results = data[\"results\"]\n output = []\n for result in results:\n try:\n loader = IFixitLoader(result[\"url\"])\n if loader.page_type == \"Device\":\n output += loader.load_device(include_guides=False)\n else:\n output += loader.load()\n except ValueError:\n continue\n return output\n[docs] def load_questions_and_answers(\n self, url_override: Optional[str] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} +{"id": "fcfde8a0f140-2", "text": "self, url_override: Optional[str] = None\n ) -> List[Document]:\n loader = WebBaseLoader(self.web_path if url_override is None else url_override)\n soup = loader.scrape()\n output = []\n title = soup.find(\"h1\", \"post-title\").text\n output.append(\"# \" + title)\n output.append(soup.select_one(\".post-content .post-text\").text.strip())\n answersHeader = soup.find(\"div\", \"post-answers-header\")\n if answersHeader:\n output.append(\"\\n## \" + answersHeader.text.strip())\n for answer in soup.select(\".js-answers-list .post.post-answer\"):\n if answer.has_attr(\"itemprop\") and 
\"acceptedAnswer\" in answer[\"itemprop\"]:\n output.append(\"\\n### Accepted Answer\")\n elif \"post-helpful\" in answer[\"class\"]:\n output.append(\"\\n### Most Helpful Answer\")\n else:\n output.append(\"\\n### Other Answer\")\n output += [\n a.text.strip() for a in answer.select(\".post-content .post-text\")\n ]\n output.append(\"\\n\")\n text = \"\\n\".join(output).strip()\n metadata = {\"source\": self.web_path, \"title\": title}\n return [Document(page_content=text, metadata=metadata)]\n[docs] def load_device(\n self, url_override: Optional[str] = None, include_guides: bool = True\n ) -> List[Document]:\n documents = []\n if url_override is None:\n url = IFIXIT_BASE_URL + \"/wikis/CATEGORY/\" + self.id\n else:\n url = url_override\n res = requests.get(url)\n data = res.json()\n text = \"\\n\".join(\n [", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} +{"id": "fcfde8a0f140-3", "text": "data = res.json()\n text = \"\\n\".join(\n [\n data[key]\n for key in [\"title\", \"description\", \"contents_raw\"]\n if key in data\n ]\n ).strip()\n metadata = {\"source\": self.web_path, \"title\": data[\"title\"]}\n documents.append(Document(page_content=text, metadata=metadata))\n if include_guides:\n \"\"\"Load and return documents for each guide linked to from the device\"\"\"\n guide_urls = [guide[\"url\"] for guide in data[\"guides\"]]\n for guide_url in guide_urls:\n documents.append(IFixitLoader(guide_url).load()[0])\n return documents\n[docs] def load_guide(self, url_override: Optional[str] = None) -> List[Document]:\n if url_override is None:\n url = IFIXIT_BASE_URL + \"/guides/\" + self.id\n else:\n url = url_override\n res = requests.get(url)\n if res.status_code != 200:\n raise ValueError(\n \"Could not load guide: \" + self.web_path + \"\\n\" + res.json()\n )\n data = res.json()\n doc_parts = [\"# \" + data[\"title\"], data[\"introduction_raw\"]]\n doc_parts.append(\"\\n\\n###Tools Required:\")\n if 
len(data[\"tools\"]) == 0:\n doc_parts.append(\"\\n - None\")\n else:\n for tool in data[\"tools\"]:\n doc_parts.append(\"\\n - \" + tool[\"text\"])\n doc_parts.append(\"\\n\\n###Parts Required:\")\n if len(data[\"parts\"]) == 0:\n doc_parts.append(\"\\n - None\")\n else:\n for part in data[\"parts\"]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} +{"id": "fcfde8a0f140-4", "text": "else:\n for part in data[\"parts\"]:\n doc_parts.append(\"\\n - \" + part[\"text\"])\n for row in data[\"steps\"]:\n doc_parts.append(\n \"\\n\\n## \"\n + (\n row[\"title\"]\n if row[\"title\"] != \"\"\n else \"Step {}\".format(row[\"orderby\"])\n )\n )\n for line in row[\"lines\"]:\n doc_parts.append(line[\"text_raw\"])\n doc_parts.append(data[\"conclusion_raw\"])\n text = \"\\n\".join(doc_parts)\n metadata = {\"source\": self.web_path, \"title\": data[\"title\"]}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/ifixit.html"} +{"id": "ae2eb36c2be0-0", "text": "Source code for langchain.document_loaders.notiondb\n\"\"\"Notion DB loader for langchain\"\"\"\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nNOTION_BASE_URL = \"https://api.notion.com/v1\"\nDATABASE_URL = NOTION_BASE_URL + \"/databases/{database_id}/query\"\nPAGE_URL = NOTION_BASE_URL + \"/pages/{page_id}\"\nBLOCK_URL = NOTION_BASE_URL + \"/blocks/{block_id}/children\"\n[docs]class NotionDBLoader(BaseLoader):\n \"\"\"Notion DB Loader.\n Reads content from pages within a Noton Database.\n Args:\n integration_token (str): Notion integration token.\n database_id (str): Notion database id.\n request_timeout_sec (int): Timeout for Notion requests in seconds.\n \"\"\"\n def __init__(\n self,\n integration_token: str,\n database_id: str,\n 
request_timeout_sec: Optional[int] = 10,\n ) -> None:\n \"\"\"Initialize with parameters.\"\"\"\n if not integration_token:\n raise ValueError(\"integration_token must be provided\")\n if not database_id:\n raise ValueError(\"database_id must be provided\")\n self.token = integration_token\n self.database_id = database_id\n self.headers = {\n \"Authorization\": \"Bearer \" + self.token,\n \"Content-Type\": \"application/json\",\n \"Notion-Version\": \"2022-06-28\",\n }\n self.request_timeout_sec = request_timeout_sec\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents from the Notion database.\n Returns:\n List[Document]: List of documents.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notiondb.html"} +{"id": "ae2eb36c2be0-1", "text": "Returns:\n List[Document]: List of documents.\n \"\"\"\n page_summaries = self._retrieve_page_summaries()\n return list(self.load_page(page_summary) for page_summary in page_summaries)\n def _retrieve_page_summaries(\n self, query_dict: Dict[str, Any] = {\"page_size\": 100}\n ) -> List[Dict[str, Any]]:\n \"\"\"Get all the pages from a Notion database.\"\"\"\n pages: List[Dict[str, Any]] = []\n while True:\n data = self._request(\n DATABASE_URL.format(database_id=self.database_id),\n method=\"POST\",\n query_dict=query_dict,\n )\n pages.extend(data.get(\"results\"))\n if not data.get(\"has_more\"):\n break\n query_dict[\"start_cursor\"] = data.get(\"next_cursor\")\n return pages\n[docs] def load_page(self, page_summary: Dict[str, Any]) -> Document:\n \"\"\"Read a page.\"\"\"\n page_id = page_summary[\"id\"]\n # load properties as metadata\n metadata: Dict[str, Any] = {}\n for prop_name, prop_data in page_summary[\"properties\"].items():\n prop_type = prop_data[\"type\"]\n if prop_type == \"rich_text\":\n value = (\n prop_data[\"rich_text\"][0][\"plain_text\"]\n if prop_data[\"rich_text\"]\n else None\n )\n elif prop_type == \"title\":\n value = (\n 
prop_data[\"title\"][0][\"plain_text\"] if prop_data[\"title\"] else None\n )\n elif prop_type == \"multi_select\":\n value = (\n [item[\"name\"] for item in prop_data[\"multi_select\"]]\n if prop_data[\"multi_select\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notiondb.html"} +{"id": "ae2eb36c2be0-2", "text": "if prop_data[\"multi_select\"]\n else []\n )\n elif prop_type == \"url\":\n value = prop_data[\"url\"]\n else:\n value = None\n metadata[prop_name.lower()] = value\n metadata[\"id\"] = page_id\n return Document(page_content=self._load_blocks(page_id), metadata=metadata)\n def _load_blocks(self, block_id: str, num_tabs: int = 0) -> str:\n \"\"\"Read a block and its children.\"\"\"\n result_lines_arr: List[str] = []\n cur_block_id: str = block_id\n while cur_block_id:\n data = self._request(BLOCK_URL.format(block_id=cur_block_id))\n for result in data[\"results\"]:\n result_obj = result[result[\"type\"]]\n if \"rich_text\" not in result_obj:\n continue\n cur_result_text_arr: List[str] = []\n for rich_text in result_obj[\"rich_text\"]:\n if \"text\" in rich_text:\n cur_result_text_arr.append(\n \"\\t\" * num_tabs + rich_text[\"text\"][\"content\"]\n )\n if result[\"has_children\"]:\n children_text = self._load_blocks(\n result[\"id\"], num_tabs=num_tabs + 1\n )\n cur_result_text_arr.append(children_text)\n result_lines_arr.append(\"\\n\".join(cur_result_text_arr))\n cur_block_id = data.get(\"next_cursor\")\n return \"\\n\".join(result_lines_arr)\n def _request(\n self, url: str, method: str = \"GET\", query_dict: Dict[str, Any] = {}\n ) -> Any:\n res = requests.request(\n method,\n url,\n headers=self.headers,\n json=query_dict,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notiondb.html"} +{"id": "ae2eb36c2be0-3", "text": "method,\n url,\n headers=self.headers,\n json=query_dict,\n timeout=self.request_timeout_sec,\n )\n res.raise_for_status()\n return res.json()", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/notiondb.html"} +{"id": "c8f7da67fc6d-0", "text": "Source code for langchain.document_loaders.bibtex\nimport logging\nimport re\nfrom pathlib import Path\nfrom typing import Any, Iterator, List, Mapping, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.bibtex import BibtexparserWrapper\nlogger = logging.getLogger(__name__)\n[docs]class BibtexLoader(BaseLoader):\n \"\"\"Loads a bibtex file into a list of Documents.\n Each document represents one entry from the bibtex file.\n If a PDF file is present in the `file` bibtex field, the original PDF\n is loaded into the document text. If no such file entry is present,\n the `abstract` field is used instead.\n \"\"\"\n def __init__(\n self,\n file_path: str,\n *,\n parser: Optional[BibtexparserWrapper] = None,\n max_docs: Optional[int] = None,\n max_content_chars: Optional[int] = 4_000,\n load_extra_metadata: bool = False,\n file_pattern: str = r\"[^:]+\\.pdf\",\n ):\n \"\"\"Initialize the BibtexLoader.\n Args:\n file_path: Path to the bibtex file.\n max_docs: Max number of associated documents to load. 
Use -1 for\n no limit.\n \"\"\"\n self.file_path = file_path\n self.parser = parser or BibtexparserWrapper()\n self.max_docs = max_docs\n self.max_content_chars = max_content_chars\n self.load_extra_metadata = load_extra_metadata\n self.file_regex = re.compile(file_pattern)\n def _load_entry(self, entry: Mapping[str, Any]) -> Optional[Document]:\n import fitz", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"} +{"id": "c8f7da67fc6d-1", "text": "import fitz\n parent_dir = Path(self.file_path).parent\n # regex is useful for Zotero flavor bibtex files\n file_names = self.file_regex.findall(entry.get(\"file\", \"\"))\n if not file_names:\n return None\n texts: List[str] = []\n for file_name in file_names:\n try:\n with fitz.open(parent_dir / file_name) as f:\n texts.extend(page.get_text() for page in f)\n except FileNotFoundError as e:\n logger.debug(e)\n content = \"\\n\".join(texts) or entry.get(\"abstract\", \"\")\n if self.max_content_chars:\n content = content[: self.max_content_chars]\n metadata = self.parser.get_metadata(entry, load_extra=self.load_extra_metadata)\n return Document(\n page_content=content,\n metadata=metadata,\n )\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Load bibtex file using bibtexparser and get the article texts plus the\n article metadata.\n See https://bibtexparser.readthedocs.io/en/master/\n Returns:\n a list of documents with the document.page_content in text format\n \"\"\"\n try:\n import fitz # noqa: F401\n except ImportError:\n raise ImportError(\n \"PyMuPDF package not found, please install it with \"\n \"`pip install pymupdf`\"\n )\n entries = self.parser.load_bibtex_entries(self.file_path)\n if self.max_docs:\n entries = entries[: self.max_docs]\n for entry in entries:\n doc = self._load_entry(entry)\n if doc:\n yield doc\n[docs] def load(self) -> List[Document]:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"} +{"id": "c8f7da67fc6d-2", "text": "yield doc\n[docs] def load(self) -> List[Document]:\n \"\"\"Load bibtex file documents from the given bibtex file path.\n See https://bibtexparser.readthedocs.io/en/master/\n Args:\n file_path: the path to the bibtex file\n Returns:\n a list of documents with the document.page_content in text format\n \"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/bibtex.html"} +{"id": "b6925eca9356-0", "text": "Source code for langchain.document_loaders.slack_directory\n\"\"\"Loader for documents from a Slack export.\"\"\"\nimport json\nimport zipfile\nfrom pathlib import Path\nfrom typing import Dict, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SlackDirectoryLoader(BaseLoader):\n \"\"\"Loader for loading documents from a Slack directory dump.\"\"\"\n def __init__(self, zip_path: str, workspace_url: Optional[str] = None):\n \"\"\"Initialize the SlackDirectoryLoader.\n Args:\n zip_path (str): The path to the Slack directory dump zip file.\n workspace_url (Optional[str]): The Slack workspace URL.\n Including the URL will turn\n sources into links. 
Defaults to None.\n \"\"\"\n self.zip_path = Path(zip_path)\n self.workspace_url = workspace_url\n self.channel_id_map = self._get_channel_id_map(self.zip_path)\n @staticmethod\n def _get_channel_id_map(zip_path: Path) -> Dict[str, str]:\n \"\"\"Get a dictionary mapping channel names to their respective IDs.\"\"\"\n with zipfile.ZipFile(zip_path, \"r\") as zip_file:\n try:\n with zip_file.open(\"channels.json\", \"r\") as f:\n channels = json.load(f)\n return {channel[\"name\"]: channel[\"id\"] for channel in channels}\n except KeyError:\n return {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Load and return documents from the Slack directory dump.\"\"\"\n docs = []\n with zipfile.ZipFile(self.zip_path, \"r\") as zip_file:\n for channel_path in zip_file.namelist():\n channel_name = Path(channel_path).parent.name\n if not channel_name:\n continue", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"} +{"id": "b6925eca9356-1", "text": "if not channel_name:\n continue\n if channel_path.endswith(\".json\"):\n messages = self._read_json(zip_file, channel_path)\n for message in messages:\n document = self._convert_message_to_document(\n message, channel_name\n )\n docs.append(document)\n return docs\n def _read_json(self, zip_file: zipfile.ZipFile, file_path: str) -> List[dict]:\n \"\"\"Read JSON data from a zip subfile.\"\"\"\n with zip_file.open(file_path, \"r\") as f:\n data = json.load(f)\n return data\n def _convert_message_to_document(\n self, message: dict, channel_name: str\n ) -> Document:\n \"\"\"\n Convert a message to a Document object.\n Args:\n message (dict): A message in the form of a dictionary.\n channel_name (str): The name of the channel the message belongs to.\n Returns:\n Document: A Document object representing the message.\n \"\"\"\n text = message.get(\"text\", \"\")\n metadata = self._get_message_metadata(message, channel_name)\n return Document(\n page_content=text,\n 
metadata=metadata,\n )\n def _get_message_metadata(self, message: dict, channel_name: str) -> dict:\n \"\"\"Create and return metadata for a given message and channel.\"\"\"\n timestamp = message.get(\"ts\", \"\")\n user = message.get(\"user\", \"\")\n source = self._get_message_source(channel_name, user, timestamp)\n return {\n \"source\": source,\n \"channel\": channel_name,\n \"timestamp\": timestamp,\n \"user\": user,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"} +{"id": "b6925eca9356-2", "text": "\"timestamp\": timestamp,\n \"user\": user,\n }\n def _get_message_source(self, channel_name: str, user: str, timestamp: str) -> str:\n \"\"\"\n Get the message source as a string.\n Args:\n channel_name (str): The name of the channel the message belongs to.\n user (str): The user ID who sent the message.\n timestamp (str): The timestamp of the message.\n Returns:\n str: The message source.\n \"\"\"\n if self.workspace_url:\n channel_id = self.channel_id_map.get(channel_name, \"\")\n return (\n f\"{self.workspace_url}/archives/{channel_id}\"\n + f\"/p{timestamp.replace('.', '')}\"\n )\n else:\n return f\"{channel_name} - {user} - {timestamp}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/slack_directory.html"} +{"id": "0acdd6746fe6-0", "text": "Source code for langchain.document_loaders.merge\nfrom typing import Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class MergedDataLoader(BaseLoader):\n \"\"\"Merge documents from a list of loaders\"\"\"\n def __init__(self, loaders: List):\n \"\"\"Initialize with a list of loaders\"\"\"\n self.loaders = loaders\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load docs from each individual loader.\"\"\"\n for loader in self.loaders:\n # Check if lazy_load is implemented\n try:\n data = loader.lazy_load()\n except 
NotImplementedError:\n data = loader.load()\n for document in data:\n yield document\n[docs] def load(self) -> List[Document]:\n \"\"\"Load docs.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/merge.html"} +{"id": "7a924bfd1512-0", "text": "Source code for langchain.document_loaders.hn\n\"\"\"Loader that loads HN.\"\"\"\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class HNLoader(WebBaseLoader):\n \"\"\"Load Hacker News data from either main page results or the comments page.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Get important HN webpage information.\n Components are:\n - title\n - content\n - source url,\n - time of post\n - author of the post\n - number of comments\n - rank of the post\n \"\"\"\n soup_info = self.scrape()\n if \"item\" in self.web_path:\n return self.load_comments(soup_info)\n else:\n return self.load_results(soup_info)\n[docs] def load_comments(self, soup_info: Any) -> List[Document]:\n \"\"\"Load comments from a HN post.\"\"\"\n comments = soup_info.select(\"tr[class='athing comtr']\")\n title = soup_info.select_one(\"tr[id='pagespace']\").get(\"title\")\n return [\n Document(\n page_content=comment.text.strip(),\n metadata={\"source\": self.web_path, \"title\": title},\n )\n for comment in comments\n ]\n[docs] def load_results(self, soup: Any) -> List[Document]:\n \"\"\"Load items from an HN page.\"\"\"\n items = soup.select(\"tr[class='athing']\")\n documents = []\n for lineItem in items:\n ranking = lineItem.select_one(\"span[class='rank']\").text\n link = lineItem.find(\"span\", {\"class\": \"titleline\"}).find(\"a\").get(\"href\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/hn.html"} +{"id": "7a924bfd1512-1", "text": "title = lineItem.find(\"span\", {\"class\": \"titleline\"}).text.strip()\n 
metadata = {\n \"source\": self.web_path,\n \"title\": title,\n \"link\": link,\n \"ranking\": ranking,\n }\n documents.append(\n Document(\n page_content=title, metadata=metadata\n )\n )\n return documents", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/hn.html"} +{"id": "5ca661859303-0", "text": "Source code for langchain.document_loaders.figma\n\"\"\"Loader that loads Figma files json dump.\"\"\"\nimport json\nimport urllib.request\nfrom typing import Any, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import stringify_dict\n[docs]class FigmaFileLoader(BaseLoader):\n \"\"\"Loader that loads Figma file json.\"\"\"\n def __init__(self, access_token: str, ids: str, key: str):\n \"\"\"Initialize with access token, ids, and key.\"\"\"\n self.access_token = access_token\n self.ids = ids\n self.key = key\n def _construct_figma_api_url(self) -> str:\n api_url = \"https://api.figma.com/v1/files/%s/nodes?ids=%s\" % (\n self.key,\n self.ids,\n )\n return api_url\n def _get_figma_file(self) -> Any:\n \"\"\"Get Figma file from Figma REST API.\"\"\"\n headers = {\"X-Figma-Token\": self.access_token}\n request = urllib.request.Request(\n self._construct_figma_api_url(), headers=headers\n )\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n return json_data\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file\"\"\"\n data = self._get_figma_file()\n text = stringify_dict(data)\n metadata = {\"source\": self._construct_figma_api_url()}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/figma.html"} +{"id": "0e143cc51f04-0", "text": "Source code for langchain.document_loaders.roam\n\"\"\"Loader that loads Roam directory dump.\"\"\"\nfrom pathlib import Path\nfrom 
typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class RoamLoader(BaseLoader):\n \"\"\"Loader that loads Roam files from disk.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n ps = list(Path(self.file_path).glob(\"**/*.md\"))\n docs = []\n for p in ps:\n with open(p) as f:\n text = f.read()\n metadata = {\"source\": str(p)}\n docs.append(Document(page_content=text, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/roam.html"} +{"id": "df9a292c4399-0", "text": "Source code for langchain.document_loaders.mhtml\n\"\"\"Loader to load MHTML files, enriching metadata with page title.\"\"\"\nimport email\nimport logging\nfrom typing import Dict, List, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class MHTMLLoader(BaseLoader):\n \"\"\"Loader that uses beautiful soup to parse HTML files.\"\"\"\n def __init__(\n self,\n file_path: str,\n open_encoding: Union[str, None] = None,\n bs_kwargs: Union[dict, None] = None,\n get_text_separator: str = \"\",\n ) -> None:\n \"\"\"Initialise with path, and optionally, file encoding to use, and any kwargs\n to pass to the BeautifulSoup object.\"\"\"\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ValueError(\n \"beautifulsoup4 package not found, please install it with \"\n \"`pip install beautifulsoup4`\"\n )\n self.file_path = file_path\n self.open_encoding = open_encoding\n if bs_kwargs is None:\n bs_kwargs = {\"features\": \"lxml\"}\n self.bs_kwargs = bs_kwargs\n self.get_text_separator = get_text_separator\n[docs] def load(self) -> List[Document]:\n from bs4 import BeautifulSoup\n \"\"\"Load MHTML document into document 
objects.\"\"\"\n with open(self.file_path, \"r\", encoding=self.open_encoding) as f:\n message = email.message_from_string(f.read())\n parts = message.get_payload()\n if type(parts) is not list:\n parts = [message]\n for part in parts:\n if part.get_content_type() == \"text/html\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mhtml.html"} +{"id": "df9a292c4399-1", "text": "for part in parts:\n if part.get_content_type() == \"text/html\":\n html = part.get_payload(decode=True).decode()\n soup = BeautifulSoup(html, **self.bs_kwargs)\n text = soup.get_text(self.get_text_separator)\n if soup.title:\n title = str(soup.title.string)\n else:\n title = \"\"\n metadata: Dict[str, Union[str, None]] = {\n \"source\": self.file_path,\n \"title\": title,\n }\n return [Document(page_content=text, metadata=metadata)]\n return []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mhtml.html"} +{"id": "c2d5ef2d9ec7-0", "text": "Source code for langchain.document_loaders.xml\n\"\"\"Loader that loads XML files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredXMLLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load XML files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.7\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.xml import partition_xml\n return partition_xml(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/xml.html"} +{"id": "92717962b784-0", "text": "Source code for langchain.document_loaders.obsidian\n\"\"\"Loader that 
loads Obsidian directory dump.\"\"\"\nimport re\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class ObsidianLoader(BaseLoader):\n \"\"\"Loader that loads Obsidian files from disk.\"\"\"\n FRONT_MATTER_REGEX = re.compile(r\"^---\\n(.*?)\\n---\\n\", re.MULTILINE | re.DOTALL)\n def __init__(\n self, path: str, encoding: str = \"UTF-8\", collect_metadata: bool = True\n ):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n self.encoding = encoding\n self.collect_metadata = collect_metadata\n def _parse_front_matter(self, content: str) -> dict:\n \"\"\"Parse front matter metadata from the content and return it as a dict.\"\"\"\n if not self.collect_metadata:\n return {}\n match = self.FRONT_MATTER_REGEX.search(content)\n front_matter = {}\n if match:\n lines = match.group(1).split(\"\\n\")\n for line in lines:\n if \":\" in line:\n key, value = line.split(\":\", 1)\n front_matter[key.strip()] = value.strip()\n else:\n # Skip lines without a colon\n continue\n return front_matter\n def _remove_front_matter(self, content: str) -> str:\n \"\"\"Remove front matter metadata from the given content.\"\"\"\n if not self.collect_metadata:\n return content\n return self.FRONT_MATTER_REGEX.sub(\"\", content)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n ps = list(Path(self.file_path).glob(\"**/*.md\"))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/obsidian.html"} +{"id": "92717962b784-1", "text": "ps = list(Path(self.file_path).glob(\"**/*.md\"))\n docs = []\n for p in ps:\n with open(p, encoding=self.encoding) as f:\n text = f.read()\n front_matter = self._parse_front_matter(text)\n text = self._remove_front_matter(text)\n metadata = {\n \"source\": str(p.name),\n \"path\": str(p),\n \"created\": p.stat().st_ctime,\n \"last_modified\": p.stat().st_mtime,\n \"last_accessed\": 
p.stat().st_atime,\n **front_matter,\n }\n docs.append(Document(page_content=text, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/obsidian.html"} +{"id": "440541b06ee7-0", "text": "Source code for langchain.document_loaders.mediawikidump\n\"\"\"Load Data from a MediaWiki dump xml.\"\"\"\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class MWDumpLoader(BaseLoader):\n \"\"\"\n Load MediaWiki dump from XML file\n Example:\n .. code-block:: python\n from langchain.document_loaders import MWDumpLoader\n loader = MWDumpLoader(\n file_path=\"myWiki.xml\",\n encoding=\"utf8\"\n )\n docs = loader.load()\n from langchain.text_splitter import RecursiveCharacterTextSplitter\n text_splitter = RecursiveCharacterTextSplitter(\n chunk_size=1000, chunk_overlap=0\n )\n texts = text_splitter.split_documents(docs)\n :param file_path: XML local file path\n :type file_path: str\n :param encoding: Charset encoding, defaults to \"utf8\"\n :type encoding: str, optional\n \"\"\"\n def __init__(self, file_path: str, encoding: Optional[str] = \"utf8\"):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.encoding = encoding\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from file path.\"\"\"\n import mwparserfromhell\n import mwxml\n dump = mwxml.Dump.from_file(open(self.file_path, encoding=self.encoding))\n docs = []\n for page in dump.pages:\n for revision in page:\n code = mwparserfromhell.parse(revision.text)\n text = code.strip_code(\n normalize=True, collapse=True, keep_template_params=False\n )\n metadata = {\"source\": page.title}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mediawikidump.html"} +{"id": "440541b06ee7-1", "text": ")\n metadata = {\"source\": page.title}\n docs.append(Document(page_content=text, 
metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mediawikidump.html"} +{"id": "dad6210c516d-0", "text": "Source code for langchain.document_loaders.recursive_url_loader\nfrom typing import Iterator, List, Optional, Set\nfrom urllib.parse import urlparse\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class RecursiveUrlLoader(BaseLoader):\n \"\"\"Loader that loads all child links from a given url.\"\"\"\n def __init__(self, url: str, exclude_dirs: Optional[str] = None) -> None:\n \"\"\"Initialize with URL to crawl and any sub-directories to exclude.\"\"\"\n self.url = url\n self.exclude_dirs = exclude_dirs\n[docs] def get_child_links_recursive(\n self, url: str, visited: Optional[Set[str]] = None\n ) -> Set[str]:\n \"\"\"Recursively get all child links starting with the path of the input URL.\"\"\"\n try:\n from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(\n \"The BeautifulSoup package is required for the RecursiveUrlLoader.\"\n )\n # Construct the base and parent URLs\n parsed_url = urlparse(url)\n base_url = f\"{parsed_url.scheme}://{parsed_url.netloc}\"\n parent_url = \"/\".join(parsed_url.path.split(\"/\")[:-1])\n current_path = parsed_url.path\n # Add a trailing slash if not present\n if not base_url.endswith(\"/\"):\n base_url += \"/\"\n if not parent_url.endswith(\"/\"):\n parent_url += \"/\"\n # Exclude the root and parent from list\n visited = set() if visited is None else visited\n # Exclude the links that start with any of the excluded directories\n if self.exclude_dirs and any(\n url.startswith(exclude_dir) for exclude_dir in self.exclude_dirs\n ):\n return visited", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/recursive_url_loader.html"} +{"id": "dad6210c516d-1", "text": "):\n return visited\n # Get all links that are relative to 
the root of the website\n response = requests.get(url)\n soup = BeautifulSoup(response.text, \"html.parser\")\n all_links = [link.get(\"href\") for link in soup.find_all(\"a\")]\n # Extract only the links that are children of the current URL\n child_links = list(\n {\n link\n for link in all_links\n if link and link.startswith(current_path) and link != current_path\n }\n )\n # Get absolute path for all root relative links listed\n absolute_paths = [\n f\"{urlparse(base_url).scheme}://{urlparse(base_url).netloc}{link}\"\n for link in child_links\n ]\n # Store the visited links and recursively visit the children\n for link in absolute_paths:\n # Check all unvisited links\n if link not in visited:\n visited.add(link)\n # If the link is a directory (w/ children) then visit it\n if link.endswith(\"/\"):\n visited.update(self.get_child_links_recursive(link, visited))\n return visited\n[docs] def lazy_load(self) -> Iterator[Document]:\n from langchain.document_loaders import WebBaseLoader\n \"\"\"Lazy load web pages.\"\"\"\n child_links = self.get_child_links_recursive(self.url)\n loader = WebBaseLoader(list(child_links))\n return loader.lazy_load()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load web pages.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/recursive_url_loader.html"} +{"id": "625be12af986-0", "text": "Source code for langchain.document_loaders.json_loader\n\"\"\"Loader that loads data from JSON.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class JSONLoader(BaseLoader):\n \"\"\"Loads a JSON file and references a jq schema provided to load the text into\n documents.\n Example:\n [{\"text\": ...}, {\"text\": ...}, {\"text\": ...}] -> schema = .[].text\n {\"key\": [{\"text\": ...}, {\"text\": ...}, 
{\"text\": ...}]} -> schema = .key[].text\n [\"\", \"\", \"\"] -> schema = .[]\n \"\"\"\n def __init__(\n self,\n file_path: Union[str, Path],\n jq_schema: str,\n content_key: Optional[str] = None,\n metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None,\n text_content: bool = True,\n ):\n \"\"\"Initialize the JSONLoader.\n Args:\n file_path (Union[str, Path]): The path to the JSON file.\n jq_schema (str): The jq schema to use to extract the data or text from\n the JSON.\n content_key (str): The key to use to extract the content from the JSON if\n the jq_schema results in a list of objects (dict).\n metadata_func (Callable[Dict, Dict]): A function that takes in the JSON\n object extracted by the jq_schema and the default metadata and returns\n a dict of the updated metadata.\n text_content (bool): Boolean flag to indicate whether the content is in\n string format, defaults to True\n \"\"\"\n try:\n import jq # noqa:F401", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/json_loader.html"} +{"id": "625be12af986-1", "text": "\"\"\"\n try:\n import jq # noqa:F401\n except ImportError:\n raise ImportError(\n \"jq package not found, please install it with `pip install jq`\"\n )\n self.file_path = Path(file_path).resolve()\n self._jq_schema = jq.compile(jq_schema)\n self._content_key = content_key\n self._metadata_func = metadata_func\n self._text_content = text_content\n[docs] def load(self) -> List[Document]:\n \"\"\"Load and return documents from the JSON file.\"\"\"\n data = self._jq_schema.input(json.loads(self.file_path.read_text()))\n # Perform some validation\n # This is not a perfect validation, but it should catch most cases\n # and prevent the user from getting a cryptic error later on.\n if self._content_key is not None:\n self._validate_content_key(data)\n docs = []\n for i, sample in enumerate(data, 1):\n metadata = dict(\n source=str(self.file_path),\n seq_num=i,\n )\n text = self._get_text(sample=sample, 
metadata=metadata)\n docs.append(Document(page_content=text, metadata=metadata))\n return docs\n def _get_text(self, sample: Any, metadata: dict) -> str:\n \"\"\"Convert sample to string format\"\"\"\n if self._content_key is not None:\n content = sample.get(self._content_key)\n if self._metadata_func is not None:\n # We pass in the metadata dict to the metadata_func\n # so that the user can customize the default metadata\n # based on the content of the JSON object.\n metadata = self._metadata_func(sample, metadata)\n else:\n content = sample", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/json_loader.html"} +{"id": "625be12af986-2", "text": "else:\n content = sample\n if self._text_content and not isinstance(content, str):\n raise ValueError(\n f\"Expected page_content is string, got {type(content)} instead. \\\n Set `text_content=False` if the desired input for \\\n `page_content` is not a string\"\n )\n # In case the text is None, set it to an empty string\n elif isinstance(content, str):\n return content\n elif isinstance(content, dict):\n return json.dumps(content) if content else \"\"\n else:\n return str(content) if content is not None else \"\"\n def _validate_content_key(self, data: Any) -> None:\n \"\"\"Check if content key is valid\"\"\"\n sample = data.first()\n if not isinstance(sample, dict):\n raise ValueError(\n f\"Expected the jq schema to result in a list of objects (dict), \\\n so sample must be a dict but got `{type(sample)}`\"\n )\n if sample.get(self._content_key) is None:\n raise ValueError(\n f\"Expected the jq schema to result in a list of objects (dict) \\\n with the key `{self._content_key}`\"\n )\n if self._metadata_func is not None:\n sample_metadata = self._metadata_func(sample, {})\n if not isinstance(sample_metadata, dict):\n raise ValueError(\n f\"Expected the metadata_func to return a dict but got \\\n `{type(sample_metadata)}`\"\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/json_loader.html"} +{"id": "c547f90d1241-0", "text": "Source code for langchain.document_loaders.trello\n\"\"\"Loader that loads cards from Trello\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Any, List, Literal, Optional, Tuple\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n from trello import Board, Card, TrelloClient\n[docs]class TrelloLoader(BaseLoader):\n \"\"\"Trello loader. Reads all cards from a Trello board.\"\"\"\n def __init__(\n self,\n client: TrelloClient,\n board_name: str,\n *,\n include_card_name: bool = True,\n include_comments: bool = True,\n include_checklist: bool = True,\n card_filter: Literal[\"closed\", \"open\", \"all\"] = \"all\",\n extra_metadata: Tuple[str, ...] = (\"due_date\", \"labels\", \"list\", \"closed\"),\n ):\n \"\"\"Initialize Trello loader.\n Args:\n client: Trello API client.\n board_name: The name of the Trello board.\n include_card_name: Whether to include the name of the card in the document.\n include_comments: Whether to include the comments on the card in the\n document.\n include_checklist: Whether to include the checklist on the card in the\n document.\n card_filter: Filter on card status. 
Valid values are \"closed\", \"open\",\n \"all\".\n extra_metadata: List of additional metadata fields to include as document\n metadata. Valid values are \"due_date\", \"labels\", \"list\", \"closed\".\n \"\"\"\n self.client = client\n self.board_name = board_name\n self.include_card_name = include_card_name", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"} +{"id": "c547f90d1241-1", "text": "self.board_name = board_name\n self.include_card_name = include_card_name\n self.include_comments = include_comments\n self.include_checklist = include_checklist\n self.extra_metadata = extra_metadata\n self.card_filter = card_filter\n[docs] @classmethod\n def from_credentials(\n cls,\n board_name: str,\n *,\n api_key: Optional[str] = None,\n token: Optional[str] = None,\n **kwargs: Any,\n ) -> TrelloLoader:\n \"\"\"Convenience constructor that builds TrelloClient init param for you.\n Args:\n board_name: The name of the Trello board.\n api_key: Trello API key. Can also be specified as environment variable\n TRELLO_API_KEY.\n token: Trello token. Can also be specified as environment variable\n TRELLO_TOKEN.\n include_card_name: Whether to include the name of the card in the document.\n include_comments: Whether to include the comments on the card in the\n document.\n include_checklist: Whether to include the checklist on the card in the\n document.\n card_filter: Filter on card status. Valid values are \"closed\", \"open\",\n \"all\".\n extra_metadata: List of additional metadata fields to include as document\n metadata. Valid values are \"due_date\", \"labels\", \"list\", \"closed\".\n \"\"\"\n try:\n from trello import TrelloClient # type: ignore\n except ImportError as ex:\n raise ImportError(\n \"Could not import trello python package. 
\"\n \"Please install it with `pip install py-trello`.\"\n ) from ex\n api_key = api_key or get_from_env(\"api_key\", \"TRELLO_API_KEY\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"} +{"id": "c547f90d1241-2", "text": "token = token or get_from_env(\"token\", \"TRELLO_TOKEN\")\n client = TrelloClient(api_key=api_key, token=token)\n return cls(client, board_name, **kwargs)\n[docs] def load(self) -> List[Document]:\n \"\"\"Loads all cards from the specified Trello board.\n You can filter the cards, metadata and text included by using the optional\n parameters.\n Returns:\n A list of documents, one for each card in the board.\n \"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError as ex:\n raise ImportError(\n \"`beautifulsoup4` package not found, please run\"\n \" `pip install beautifulsoup4`\"\n ) from ex\n board = self._get_board()\n # Create a dictionary with the list IDs as keys and the list names as values\n list_dict = {list_item.id: list_item.name for list_item in board.list_lists()}\n # Get Cards on the board\n cards = board.get_cards(card_filter=self.card_filter)\n return [self._card_to_doc(card, list_dict) for card in cards]\n def _get_board(self) -> Board:\n # Find the first board with a matching name\n board = next(\n (b for b in self.client.list_boards() if b.name == self.board_name), None\n )\n if not board:\n raise ValueError(f\"Board `{self.board_name}` not found.\")\n return board\n def _card_to_doc(self, card: Card, list_dict: dict) -> Document:\n from bs4 import BeautifulSoup # type: ignore\n text_content = \"\"\n if self.include_card_name:\n text_content = card.name + \"\\n\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"} +{"id": "c547f90d1241-3", "text": "if self.include_card_name:\n text_content = card.name + \"\\n\"\n if card.description.strip():\n text_content += BeautifulSoup(card.description, 
\"lxml\").get_text()\n if self.include_checklist:\n # Get all the checklist items on the card\n for checklist in card.checklists:\n if checklist.items:\n items = [\n f\"{item['name']}:{item['state']}\" for item in checklist.items\n ]\n text_content += f\"\\n{checklist.name}\\n\" + \"\\n\".join(items)\n if self.include_comments:\n # Get all the comments on the card\n comments = [\n BeautifulSoup(comment[\"data\"][\"text\"], \"lxml\").get_text()\n for comment in card.comments\n ]\n text_content += \"Comments:\" + \"\\n\".join(comments)\n # Default metadata fields\n metadata = {\n \"title\": card.name,\n \"id\": card.id,\n \"url\": card.url,\n }\n # Extra metadata fields. Card object is not subscriptable.\n if \"labels\" in self.extra_metadata:\n metadata[\"labels\"] = [label.name for label in card.labels]\n if \"list\" in self.extra_metadata:\n if card.list_id in list_dict:\n metadata[\"list\"] = list_dict[card.list_id]\n if \"closed\" in self.extra_metadata:\n metadata[\"closed\"] = card.closed\n if \"due_date\" in self.extra_metadata:\n metadata[\"due_date\"] = card.due_date\n return Document(page_content=text_content, metadata=metadata)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/trello.html"} +{"id": "820c8299cc3e-0", "text": "Source code for langchain.document_loaders.acreom\n\"\"\"Loader that loads acreom vault from a directory.\"\"\"\nimport re\nfrom pathlib import Path\nfrom typing import Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class AcreomLoader(BaseLoader):\n FRONT_MATTER_REGEX = re.compile(r\"^---\\n(.*?)\\n---\\n\", re.MULTILINE | re.DOTALL)\n def __init__(\n self, path: str, encoding: str = \"UTF-8\", collect_metadata: bool = True\n ):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n self.encoding = encoding\n self.collect_metadata = collect_metadata\n def _parse_front_matter(self, content: str) -> dict:\n 
\"\"\"Parse front matter metadata from the content and return it as a dict.\"\"\"\n if not self.collect_metadata:\n return {}\n match = self.FRONT_MATTER_REGEX.search(content)\n front_matter = {}\n if match:\n lines = match.group(1).split(\"\\n\")\n for line in lines:\n if \":\" in line:\n key, value = line.split(\":\", 1)\n front_matter[key.strip()] = value.strip()\n else:\n # Skip lines without a colon\n continue\n return front_matter\n def _remove_front_matter(self, content: str) -> str:\n \"\"\"Remove front matter metadata from the given content.\"\"\"\n if not self.collect_metadata:\n return content\n return self.FRONT_MATTER_REGEX.sub(\"\", content)\n def _process_acreom_content(self, content: str) -> str:\n # remove acreom specific elements from content that\n # do not contribute to the context of current document", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/acreom.html"} +{"id": "820c8299cc3e-1", "text": "# do not contribute to the context of current document\n content = re.sub(\"\\s*-\\s\\[\\s\\]\\s.*|\\s*\\[\\s\\]\\s.*\", \"\", content) # rm tasks\n content = re.sub(\"#\", \"\", content) # rm hashtags\n content = re.sub(\"\\[\\[.*?\\]\\]\", \"\", content) # rm doclinks\n return content\n[docs] def lazy_load(self) -> Iterator[Document]:\n ps = list(Path(self.file_path).glob(\"**/*.md\"))\n for p in ps:\n with open(p, encoding=self.encoding) as f:\n text = f.read()\n front_matter = self._parse_front_matter(text)\n text = self._remove_front_matter(text)\n text = self._process_acreom_content(text)\n metadata = {\n \"source\": str(p.name),\n \"path\": str(p),\n **front_matter,\n }\n yield Document(page_content=text, metadata=metadata)\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/acreom.html"} +{"id": "0af6b49d5fe4-0", "text": "Source code for langchain.document_loaders.image\n\"\"\"Loader that 
loads image files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class UnstructuredImageLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load image files, such as PNGs and JPGs.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.image import partition_image\n return partition_image(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/image.html"} +{"id": "a1de7427e821-0", "text": "Source code for langchain.document_loaders.wikipedia\nfrom typing import List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaLoader(BaseLoader):\n \"\"\"Loads a query result from www.wikipedia.org into a list of Documents.\n The hard limit on the number of downloaded Documents is 300 for now.\n Each wiki page represents one Document.\n \"\"\"\n def __init__(\n self,\n query: str,\n lang: str = \"en\",\n load_max_docs: Optional[int] = 100,\n load_all_available_meta: Optional[bool] = False,\n doc_content_chars_max: Optional[int] = 4000,\n ):\n \"\"\"\n Initializes a new instance of the WikipediaLoader class.\n Args:\n query (str): The query string to search on Wikipedia.\n lang (str, optional): The language code for the Wikipedia language edition.\n Defaults to \"en\".\n load_max_docs (int, optional): The maximum number of documents to load.\n Defaults to 100.\n load_all_available_meta (bool, optional): Indicates whether to load all\n available metadata for each document. Defaults to False.\n doc_content_chars_max (int, optional): The maximum number of characters\n for the document content. 
Defaults to 4000.\n \"\"\"\n self.query = query\n self.lang = lang\n self.load_max_docs = load_max_docs\n self.load_all_available_meta = load_all_available_meta\n self.doc_content_chars_max = doc_content_chars_max\n[docs] def load(self) -> List[Document]:\n \"\"\"\n Loads the query result from Wikipedia into a list of Documents.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/wikipedia.html"} +{"id": "a1de7427e821-1", "text": "Loads the query result from Wikipedia into a list of Documents.\n Returns:\n List[Document]: A list of Document objects representing the loaded\n Wikipedia pages.\n \"\"\"\n client = WikipediaAPIWrapper(\n lang=self.lang,\n top_k_results=self.load_max_docs,\n load_all_available_meta=self.load_all_available_meta,\n doc_content_chars_max=self.doc_content_chars_max,\n )\n docs = client.load(self.query)\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/wikipedia.html"} +{"id": "1e093ab90d92-0", "text": "Source code for langchain.document_loaders.imsdb\n\"\"\"Loader that loads IMSDb.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class IMSDbLoader(WebBaseLoader):\n \"\"\"Loader that loads IMSDb webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpage.\"\"\"\n soup = self.scrape()\n text = soup.select_one(\"td[class='scrtext']\").text\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/imsdb.html"} +{"id": "9d81001d45da-0", "text": "Source code for langchain.document_loaders.excel\n\"\"\"Loader that loads Microsoft Excel files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n 
validate_unstructured_version,\n)\n[docs]class UnstructuredExcelLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load Microsoft Excel files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n validate_unstructured_version(min_unstructured_version=\"0.6.7\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.xlsx import partition_xlsx\n return partition_xlsx(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/excel.html"} +{"id": "69146a6772ce-0", "text": "Source code for langchain.document_loaders.unstructured\n\"\"\"Loader that uses unstructured to load files.\"\"\"\nimport collections\nfrom abc import ABC, abstractmethod\nfrom typing import IO, Any, Dict, List, Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef satisfies_min_unstructured_version(min_version: str) -> bool:\n \"\"\"Checks to see if the installed unstructured version exceeds the minimum version\n for the feature in question.\"\"\"\n from unstructured.__version__ import __version__ as __unstructured_version__\n min_version_tuple = tuple([int(x) for x in min_version.split(\".\")])\n # NOTE(MthwRobinson) - enables the loader to work when you're using pre-release\n # versions of unstructured like 0.4.17-dev1\n _unstructured_version = __unstructured_version__.split(\"-\")[0]\n unstructured_version_tuple = tuple(\n [int(x) for x in _unstructured_version.split(\".\")]\n )\n return unstructured_version_tuple >= min_version_tuple\ndef validate_unstructured_version(min_unstructured_version: str) -> None:\n \"\"\"Raises an error if the unstructured version does not exceed the\n specified minimum.\"\"\"\n if not satisfies_min_unstructured_version(min_unstructured_version):\n raise 
ValueError(\n f\"unstructured>={min_unstructured_version} is required in this loader.\"\n )\nclass UnstructuredBaseLoader(BaseLoader, ABC):\n \"\"\"Loader that uses unstructured to load files.\"\"\"\n def __init__(self, mode: str = \"single\", **unstructured_kwargs: Any):\n \"\"\"Initialize with file path.\"\"\"\n try:\n import unstructured # noqa:F401\n except ImportError:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} +{"id": "69146a6772ce-1", "text": "import unstructured # noqa:F401\n except ImportError:\n raise ValueError(\n \"unstructured package not found, please install it with \"\n \"`pip install unstructured`\"\n )\n _valid_modes = {\"single\", \"elements\", \"paged\"}\n if mode not in _valid_modes:\n raise ValueError(\n f\"Got {mode} for `mode`, but should be one of `{_valid_modes}`\"\n )\n self.mode = mode\n if not satisfies_min_unstructured_version(\"0.5.4\"):\n if \"strategy\" in unstructured_kwargs:\n unstructured_kwargs.pop(\"strategy\")\n self.unstructured_kwargs = unstructured_kwargs\n @abstractmethod\n def _get_elements(self) -> List:\n \"\"\"Get elements.\"\"\"\n @abstractmethod\n def _get_metadata(self) -> dict:\n \"\"\"Get metadata.\"\"\"\n def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n elements = self._get_elements()\n if self.mode == \"elements\":\n docs: List[Document] = list()\n for element in elements:\n metadata = self._get_metadata()\n # NOTE(MthwRobinson) - the attribute check is for backward compatibility\n # with unstructured<0.4.9. 
The metadata attribute was added in 0.4.9.\n if hasattr(element, \"metadata\"):\n metadata.update(element.metadata.to_dict())\n if hasattr(element, \"category\"):\n metadata[\"category\"] = element.category\n docs.append(Document(page_content=str(element), metadata=metadata))\n elif self.mode == \"paged\":\n text_dict: Dict[int, str] = {}\n meta_dict: Dict[int, Dict] = {}\n for idx, element in enumerate(elements):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} +{"id": "69146a6772ce-2", "text": "for idx, element in enumerate(elements):\n metadata = self._get_metadata()\n if hasattr(element, \"metadata\"):\n metadata.update(element.metadata.to_dict())\n page_number = metadata.get(\"page_number\", 1)\n # Check if this page_number already exists in docs_dict\n if page_number not in text_dict:\n # If not, create new entry with initial text and metadata\n text_dict[page_number] = str(element) + \"\\n\\n\"\n meta_dict[page_number] = metadata\n else:\n # If exists, append to text and update the metadata\n text_dict[page_number] += str(element) + \"\\n\\n\"\n meta_dict[page_number].update(metadata)\n # Convert the dict to a list of Document objects\n docs = [\n Document(page_content=text_dict[key], metadata=meta_dict[key])\n for key in text_dict.keys()\n ]\n elif self.mode == \"single\":\n metadata = self._get_metadata()\n text = \"\\n\\n\".join([str(el) for el in elements])\n docs = [Document(page_content=text, metadata=metadata)]\n else:\n raise ValueError(f\"mode of {self.mode} not supported.\")\n return docs\n[docs]class UnstructuredFileLoader(UnstructuredBaseLoader):\n \"\"\"Loader that uses unstructured to load files.\"\"\"\n def __init__(\n self,\n file_path: Union[str, List[str]],\n mode: str = \"single\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n super().__init__(mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n 
from unstructured.partition.auto import partition", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} +{"id": "69146a6772ce-3", "text": "def _get_elements(self) -> List:\n from unstructured.partition.auto import partition\n return partition(filename=self.file_path, **self.unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {\"source\": self.file_path}\ndef get_elements_from_api(\n file_path: Union[str, List[str], None] = None,\n file: Union[IO, Sequence[IO], None] = None,\n api_url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n) -> List:\n \"\"\"Retrieves a list of elements from the Unstructured API.\"\"\"\n if isinstance(file, collections.abc.Sequence) or isinstance(file_path, list):\n from unstructured.partition.api import partition_multiple_via_api\n _doc_elements = partition_multiple_via_api(\n filenames=file_path,\n files=file,\n api_key=api_key,\n api_url=api_url,\n **unstructured_kwargs,\n )\n elements = []\n for _elements in _doc_elements:\n elements.extend(_elements)\n return elements\n else:\n from unstructured.partition.api import partition_via_api\n return partition_via_api(\n filename=file_path,\n file=file,\n api_key=api_key,\n api_url=api_url,\n **unstructured_kwargs,\n )\n[docs]class UnstructuredAPIFileLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses the unstructured web API to load files.\"\"\"\n def __init__(\n self,\n file_path: Union[str, List[str]] = \"\",\n mode: str = \"single\",\n url: str = \"https://api.unstructured.io/general/v0/general\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} +{"id": "69146a6772ce-4", "text": "url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n if isinstance(file_path, str):\n 
validate_unstructured_version(min_unstructured_version=\"0.6.2\")\n else:\n validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n self.url = url\n self.api_key = api_key\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {\"source\": self.file_path}\n def _get_elements(self) -> List:\n return get_elements_from_api(\n file_path=self.file_path,\n api_key=self.api_key,\n api_url=self.url,\n **self.unstructured_kwargs,\n )\n[docs]class UnstructuredFileIOLoader(UnstructuredBaseLoader):\n \"\"\"Loader that uses unstructured to load file IO objects.\"\"\"\n def __init__(\n self,\n file: Union[IO, Sequence[IO]],\n mode: str = \"single\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file = file\n super().__init__(mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.auto import partition\n return partition(file=self.file, **self.unstructured_kwargs)\n def _get_metadata(self) -> dict:\n return {}\n[docs]class UnstructuredAPIFileIOLoader(UnstructuredFileIOLoader):\n \"\"\"Loader that uses the unstructured web API to load file IO objects.\"\"\"\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} +{"id": "69146a6772ce-5", "text": "def __init__(\n self,\n file: Union[IO, Sequence[IO]],\n mode: str = \"single\",\n url: str = \"https://api.unstructured.io/general/v0/general\",\n api_key: str = \"\",\n **unstructured_kwargs: Any,\n ):\n \"\"\"Initialize with file path.\"\"\"\n if isinstance(file, collections.abc.Sequence):\n validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n if file:\n validate_unstructured_version(min_unstructured_version=\"0.6.2\")\n self.url = url\n self.api_key = api_key\n super().__init__(file=file, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n return get_elements_from_api(\n 
file=self.file,\n api_key=self.api_key,\n api_url=self.url,\n **self.unstructured_kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/unstructured.html"} +{"id": "6bc8c31c0d34-0", "text": "Source code for langchain.document_loaders.word_document\n\"\"\"Loader that loads word documents.\"\"\"\nimport os\nimport tempfile\nfrom abc import ABC\nfrom typing import List\nfrom urllib.parse import urlparse\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import UnstructuredFileLoader\n[docs]class Docx2txtLoader(BaseLoader, ABC):\n \"\"\"Loads a DOCX with docx2txt and chunks at character level.\n Defaults to check for local file, but if the file is a web path, it will download it\n to a temporary file, and use that, then clean up the temporary file after completion\n \"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n if \"~\" in self.file_path:\n self.file_path = os.path.expanduser(self.file_path)\n # If the file is a web path, download it to a temporary file, and use that\n if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):\n r = requests.get(self.file_path)\n if r.status_code != 200:\n raise ValueError(\n \"Check the url of your file; returned status code %s\"\n % r.status_code\n )\n self.web_path = self.file_path\n self.temp_file = tempfile.NamedTemporaryFile()\n self.temp_file.write(r.content)\n self.file_path = self.temp_file.name\n elif not os.path.isfile(self.file_path):\n raise ValueError(\"File path %s is not a valid file or url\" % self.file_path)\n def __del__(self) -> None:\n if hasattr(self, \"temp_file\"):\n self.temp_file.close()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"} +{"id": "6bc8c31c0d34-1", "text": "if hasattr(self, 
\"temp_file\"):\n self.temp_file.close()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as single page.\"\"\"\n import docx2txt\n return [\n Document(\n page_content=docx2txt.process(self.file_path),\n metadata={\"source\": self.file_path},\n )\n ]\n @staticmethod\n def _is_valid_url(url: str) -> bool:\n \"\"\"Check if the url is valid.\"\"\"\n parsed = urlparse(url)\n return bool(parsed.netloc) and bool(parsed.scheme)\n[docs]class UnstructuredWordDocumentLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load word documents.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.__version__ import __version__ as __unstructured_version__\n from unstructured.file_utils.filetype import FileType, detect_filetype\n unstructured_version = tuple(\n [int(x) for x in __unstructured_version__.split(\".\")]\n )\n # NOTE(MthwRobinson) - magic will raise an import error if the libmagic\n # system dependency isn't installed. If it's not installed, we'll just\n # check the file extension\n try:\n import magic # noqa: F401\n is_doc = detect_filetype(self.file_path) == FileType.DOC\n except ImportError:\n _, extension = os.path.splitext(str(self.file_path))\n is_doc = extension == \".doc\"\n if is_doc and unstructured_version < (0, 4, 11):\n raise ValueError(\n f\"You are on unstructured version {__unstructured_version__}. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"} +{"id": "6bc8c31c0d34-2", "text": "f\"You are on unstructured version {__unstructured_version__}. \"\n \"Partitioning .doc files is only supported in unstructured>=0.4.11. 
\"\n \"Please upgrade the unstructured package and try again.\"\n )\n if is_doc:\n from unstructured.partition.doc import partition_doc\n return partition_doc(filename=self.file_path, **self.unstructured_kwargs)\n else:\n from unstructured.partition.docx import partition_docx\n return partition_docx(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/word_document.html"} +{"id": "c0e9bf9fe826-0", "text": "Source code for langchain.document_loaders.blockchain\nimport os\nimport re\nimport time\nfrom enum import Enum\nfrom typing import List, Optional\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nclass BlockchainType(Enum):\n \"\"\"Enumerator of the supported blockchains.\"\"\"\n ETH_MAINNET = \"eth-mainnet\"\n ETH_GOERLI = \"eth-goerli\"\n POLYGON_MAINNET = \"polygon-mainnet\"\n POLYGON_MUMBAI = \"polygon-mumbai\"\n[docs]class BlockchainDocumentLoader(BaseLoader):\n \"\"\"Loads elements from a blockchain smart contract into Langchain documents.\n The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,\n Polygon mainnet, and Polygon Mumbai testnet.\n If no BlockchainType is specified, the default is Ethereum mainnet.\n The Loader uses the Alchemy API to interact with the blockchain.\n ALCHEMY_API_KEY environment variable must be set to use this loader.\n The API returns 100 NFTs per request and can be paginated using the\n startToken parameter.\n If get_all_tokens is set to True, the loader will get all tokens\n on the contract. Note that for contracts with a large number of tokens,\n this may take a long time (e.g. 10k tokens is 100 requests).\n Default value is false for this reason.\n The max_execution_time (sec) can be set to limit the execution time\n of the loader.\n Future versions of this loader can:\n - Support additional Alchemy APIs (e.g. 
getTransactions, etc.)\n - Support additional blockchain APIs (e.g. Infura, Opensea, etc.)\n \"\"\"\n def __init__(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"} +{"id": "c0e9bf9fe826-1", "text": "\"\"\"\n def __init__(\n self,\n contract_address: str,\n blockchainType: BlockchainType = BlockchainType.ETH_MAINNET,\n api_key: str = \"docs-demo\",\n startToken: str = \"\",\n get_all_tokens: bool = False,\n max_execution_time: Optional[int] = None,\n ):\n self.contract_address = contract_address\n self.blockchainType = blockchainType.value\n self.api_key = os.environ.get(\"ALCHEMY_API_KEY\") or api_key\n self.startToken = startToken\n self.get_all_tokens = get_all_tokens\n self.max_execution_time = max_execution_time\n if not self.api_key:\n raise ValueError(\"Alchemy API key not provided.\")\n if not re.match(r\"^0x[a-fA-F0-9]{40}$\", self.contract_address):\n raise ValueError(f\"Invalid contract address {self.contract_address}\")\n[docs] def load(self) -> List[Document]:\n result = []\n current_start_token = self.startToken\n start_time = time.time()\n while True:\n url = (\n f\"https://{self.blockchainType}.g.alchemy.com/nft/v2/\"\n f\"{self.api_key}/getNFTsForCollection?withMetadata=\"\n f\"True&contractAddress={self.contract_address}\"\n f\"&startToken={current_start_token}\"\n )\n response = requests.get(url)\n if response.status_code != 200:\n raise ValueError(\n f\"Request failed with status code {response.status_code}\"\n )\n items = response.json()[\"nfts\"]\n if not items:\n break\n for item in items:\n content = str(item)\n tokenId = item[\"id\"][\"tokenId\"]\n metadata = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"} +{"id": "c0e9bf9fe826-2", "text": "tokenId = item[\"id\"][\"tokenId\"]\n metadata = {\n \"source\": self.contract_address,\n \"blockchain\": self.blockchainType,\n \"tokenId\": tokenId,\n }\n 
result.append(Document(page_content=content, metadata=metadata))\n # exit after the first API call if get_all_tokens is False\n if not self.get_all_tokens:\n break\n # get the start token for the next API call from the last item in array\n current_start_token = self._get_next_tokenId(result[-1].metadata[\"tokenId\"])\n if (\n self.max_execution_time is not None\n and (time.time() - start_time) > self.max_execution_time\n ):\n raise RuntimeError(\"Execution time exceeded the allowed time limit.\")\n if not result:\n raise ValueError(\n f\"No NFTs found for contract address {self.contract_address}\"\n )\n return result\n # add one to the tokenId, ensuring the correct tokenId format is used\n def _get_next_tokenId(self, tokenId: str) -> str:\n value_type = self._detect_value_type(tokenId)\n if value_type == \"hex_0x\":\n value_int = int(tokenId, 16)\n elif value_type == \"hex_0xbf\":\n value_int = int(tokenId[2:], 16)\n else:\n value_int = int(tokenId)\n result = value_int + 1\n if value_type == \"hex_0x\":\n return \"0x\" + format(result, \"0\" + str(len(tokenId) - 2) + \"x\")\n elif value_type == \"hex_0xbf\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"} +{"id": "c0e9bf9fe826-3", "text": "elif value_type == \"hex_0xbf\":\n return \"0xbf\" + format(result, \"0\" + str(len(tokenId) - 4) + \"x\")\n else:\n return str(result)\n # A smart contract can use different formats for the tokenId\n @staticmethod\n def _detect_value_type(tokenId: str) -> str:\n if isinstance(tokenId, int):\n return \"int\"\n elif tokenId.startswith(\"0x\"):\n return \"hex_0x\"\n elif tokenId.startswith(\"0xbf\"):\n return \"hex_0xbf\"\n else:\n return \"hex_0xbf\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blockchain.html"} +{"id": "bf58acc39c58-0", "text": "Source code for langchain.document_loaders.evernote\n\"\"\"Load documents from 
Evernote.\nhttps://gist.github.com/foxmask/7b29c43a161e001ff04afdb2f181e31c\n\"\"\"\nimport hashlib\nimport logging\nfrom base64 import b64decode\nfrom time import strptime\nfrom typing import Any, Dict, Iterator, List, Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class EverNoteLoader(BaseLoader):\n \"\"\"EverNote Loader.\n Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.\n Instructions on producing this file can be found at\n https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML\n Currently only the plain text in the note is extracted and stored as the contents\n of the Document, any non content metadata (e.g. 'author', 'created', 'updated' etc.\n but not 'content-raw' or 'resource') tags on the note will be extracted and stored\n as metadata on the Document.\n Args:\n file_path (str): The path to the notebook export with a .enex extension\n load_single_document (bool): Whether or not to concatenate the content of all\n notes into a single long Document.\n If this is set to True (default) then the only metadata on the document will be\n the 'source' which contains the file name of the export.\n \"\"\" # noqa: E501\n def __init__(self, file_path: str, load_single_document: bool = True):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.load_single_document = load_single_document", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"} +{"id": "bf58acc39c58-1", "text": "self.file_path = file_path\n self.load_single_document = load_single_document\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents from EverNote export file.\"\"\"\n documents = [\n Document(\n page_content=note[\"content\"],\n metadata={\n **{\n key: value\n for key, value in note.items()\n if key not in [\"content\", \"content-raw\", \"resource\"]\n },\n 
**{\"source\": self.file_path},\n },\n )\n for note in self._parse_note_xml(self.file_path)\n if note.get(\"content\") is not None\n ]\n if not self.load_single_document:\n return documents\n return [\n Document(\n page_content=\"\".join([document.page_content for document in documents]),\n metadata={\"source\": self.file_path},\n )\n ]\n @staticmethod\n def _parse_content(content: str) -> str:\n try:\n import html2text\n return html2text.html2text(content).strip()\n except ImportError as e:\n logging.error(\n \"Could not import `html2text`. Although it is not a required package \"\n \"to use Langchain, using the EverNote loader requires `html2text`. \"\n \"Please install `html2text` via `pip install html2text` and try again.\"\n )\n raise e\n @staticmethod\n def _parse_resource(resource: list) -> dict:\n rsc_dict: Dict[str, Any] = {}\n for elem in resource:\n if elem.tag == \"data\":\n # Sometimes elem.text is None\n rsc_dict[elem.tag] = b64decode(elem.text) if elem.text else b\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"} +{"id": "bf58acc39c58-2", "text": "rsc_dict[\"hash\"] = hashlib.md5(rsc_dict[elem.tag]).hexdigest()\n else:\n rsc_dict[elem.tag] = elem.text\n return rsc_dict\n @staticmethod\n def _parse_note(note: List, prefix: Optional[str] = None) -> dict:\n note_dict: Dict[str, Any] = {}\n resources = []\n def add_prefix(element_tag: str) -> str:\n if prefix is None:\n return element_tag\n return f\"{prefix}.{element_tag}\"\n for elem in note:\n if elem.tag == \"content\":\n note_dict[elem.tag] = EverNoteLoader._parse_content(elem.text)\n # A copy of original content\n note_dict[\"content-raw\"] = elem.text\n elif elem.tag == \"resource\":\n resources.append(EverNoteLoader._parse_resource(elem))\n elif elem.tag == \"created\" or elem.tag == \"updated\":\n note_dict[elem.tag] = strptime(elem.text, \"%Y%m%dT%H%M%SZ\")\n elif elem.tag == \"note-attributes\":\n additional_attributes = 
EverNoteLoader._parse_note(\n elem, elem.tag\n ) # Recursively enter the note-attributes tag\n note_dict.update(additional_attributes)\n else:\n note_dict[elem.tag] = elem.text\n if len(resources) > 0:\n note_dict[\"resource\"] = resources\n return {add_prefix(key): value for key, value in note_dict.items()}\n @staticmethod\n def _parse_note_xml(xml_file: str) -> Iterator[Dict[str, Any]]:\n \"\"\"Parse Evernote xml.\"\"\"\n # Without huge_tree set to True, parser may complain about huge text node", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"} +{"id": "bf58acc39c58-3", "text": "# Without huge_tree set to True, parser may complain about huge text node\n # Try to recover, because there may be \" \", which will cause\n # \"XMLSyntaxError: Entity 'nbsp' not defined\"\n try:\n from lxml import etree\n except ImportError as e:\n logging.error(\n \"Could not import `lxml`. Although it is not a required package to use \"\n \"Langchain, using the EverNote loader requires `lxml`. 
Please install \"\n \"`lxml` via `pip install lxml` and try again.\"\n )\n raise e\n context = etree.iterparse(\n xml_file, encoding=\"utf-8\", strip_cdata=False, huge_tree=True, recover=True\n )\n for action, elem in context:\n if elem.tag == \"note\":\n yield EverNoteLoader._parse_note(elem)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/evernote.html"} +{"id": "917f8d9273ad-0", "text": "Source code for langchain.document_loaders.srt\n\"\"\"Loader for .srt (subtitle) files.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class SRTLoader(BaseLoader):\n \"\"\"Loader for .srt (subtitle) files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pysrt # noqa:F401\n except ImportError:\n raise ImportError(\n \"package `pysrt` not found, please install it with `pip install pysrt`\"\n )\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load using pysrt file.\"\"\"\n import pysrt\n parsed_info = pysrt.open(self.file_path)\n text = \" \".join([t.text for t in parsed_info])\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/srt.html"} +{"id": "30c7d8afd28c-0", "text": "Source code for langchain.document_loaders.gutenberg\n\"\"\"Loader that loads .txt web files.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class GutenbergLoader(BaseLoader):\n \"\"\"Loader that uses urllib to load .txt web files.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n if not file_path.startswith(\"https://www.gutenberg.org\"):\n raise ValueError(\"file path must start with 'https://www.gutenberg.org'\")\n if 
not file_path.endswith(\".txt\"):\n raise ValueError(\"file path must end with '.txt'\")\n self.file_path = file_path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n from urllib.request import urlopen\n elements = urlopen(self.file_path)\n text = \"\\n\\n\".join([str(el.decode(\"utf-8-sig\")) for el in elements])\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/gutenberg.html"} +{"id": "beb2987a23b1-0", "text": "Source code for langchain.document_loaders.sitemap\n\"\"\"Loader that fetches a sitemap and loads those URLs.\"\"\"\nimport itertools\nimport re\nfrom typing import Any, Callable, Generator, Iterable, List, Optional\nfrom langchain.document_loaders.web_base import WebBaseLoader\nfrom langchain.schema import Document\ndef _default_parsing_function(content: Any) -> str:\n return str(content.get_text())\ndef _default_meta_function(meta: dict, _content: Any) -> dict:\n return {\"source\": meta[\"loc\"], **meta}\ndef _batch_block(iterable: Iterable, size: int) -> Generator[List[dict], None, None]:\n it = iter(iterable)\n while item := list(itertools.islice(it, size)):\n yield item\n[docs]class SitemapLoader(WebBaseLoader):\n \"\"\"Loader that fetches a sitemap and loads those URLs.\"\"\"\n def __init__(\n self,\n web_path: str,\n filter_urls: Optional[List[str]] = None,\n parsing_function: Optional[Callable] = None,\n blocksize: Optional[int] = None,\n blocknum: int = 0,\n meta_function: Optional[Callable] = None,\n is_local: bool = False,\n ):\n \"\"\"Initialize with webpage path and optional filter URLs.\n Args:\n web_path: url of the sitemap. 
can also be a local path\n filter_urls: list of strings or regexes that will be applied to filter the\n urls that are parsed and loaded\n parsing_function: Function to parse bs4.Soup output\n blocksize: number of sitemap locations per block\n blocknum: the number of the block that should be loaded - zero indexed\n meta_function: Function to parse bs4.Soup output for metadata", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/sitemap.html"} +{"id": "beb2987a23b1-1", "text": "meta_function: Function to parse bs4.Soup output for metadata\n remember when setting this method to also copy metadata[\"loc\"]\n to metadata[\"source\"] if you are using this field\n is_local: whether the sitemap is a local file\n \"\"\"\n if blocksize is not None and blocksize < 1:\n raise ValueError(\"Sitemap blocksize should be at least 1\")\n if blocknum < 0:\n raise ValueError(\"Sitemap blocknum cannot be lower than 0\")\n try:\n import lxml # noqa:F401\n except ImportError:\n raise ImportError(\n \"lxml package not found, please install it with \" \"`pip install lxml`\"\n )\n super().__init__(web_path)\n self.filter_urls = filter_urls\n self.parsing_function = parsing_function or _default_parsing_function\n self.meta_function = meta_function or _default_meta_function\n self.blocksize = blocksize\n self.blocknum = blocknum\n self.is_local = is_local\n[docs] def parse_sitemap(self, soup: Any) -> List[dict]:\n \"\"\"Parse sitemap xml and load into a list of dicts.\"\"\"\n els = []\n for url in soup.find_all(\"url\"):\n loc = url.find(\"loc\")\n if not loc:\n continue\n # Strip leading and trailing whitespace and newlines\n loc_text = loc.text.strip()\n if self.filter_urls and not any(\n re.match(r, loc_text) for r in self.filter_urls\n ):\n continue\n els.append(\n {\n tag: prop.text\n for tag in [\"loc\", \"lastmod\", \"changefreq\", \"priority\"]\n if (prop := url.find(tag))\n }\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/sitemap.html"} +{"id": "beb2987a23b1-2", "text": "if (prop := url.find(tag))\n }\n )\n for sitemap in soup.find_all(\"sitemap\"):\n loc = sitemap.find(\"loc\")\n if not loc:\n continue\n soup_child = self.scrape_all([loc.text], \"xml\")[0]\n els.extend(self.parse_sitemap(soup_child))\n return els\n[docs] def load(self) -> List[Document]:\n \"\"\"Load sitemap.\"\"\"\n if self.is_local:\n try:\n import bs4\n except ImportError:\n raise ImportError(\n \"beautifulsoup4 package not found, please install it\"\n \" with `pip install beautifulsoup4`\"\n )\n fp = open(self.web_path)\n soup = bs4.BeautifulSoup(fp, \"xml\")\n else:\n soup = self.scrape(\"xml\")\n els = self.parse_sitemap(soup)\n if self.blocksize is not None:\n elblocks = list(_batch_block(els, self.blocksize))\n blockcount = len(elblocks)\n if blockcount - 1 < self.blocknum:\n raise ValueError(\n \"Selected sitemap does not contain enough blocks for given blocknum\"\n )\n else:\n els = elblocks[self.blocknum]\n results = self.scrape_all([el[\"loc\"].strip() for el in els if \"loc\" in el])\n return [\n Document(\n page_content=self.parsing_function(results[i]),\n metadata=self.meta_function(els[i], results[i]),\n )\n for i in range(len(results))\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/sitemap.html"} +{"id": "55434ff79ce7-0", "text": "Source code for langchain.document_loaders.confluence\n\"\"\"Load Data from a Confluence Space\"\"\"\nimport logging\nfrom enum import Enum\nfrom io import BytesIO\nfrom typing import Any, Callable, Dict, List, Optional, Union\nfrom tenacity import (\n before_sleep_log,\n retry,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\nclass ContentFormat(str, Enum):\n \"\"\"Enumerator of the content formats of 
Confluence page.\"\"\"\n STORAGE = \"body.storage\"\n VIEW = \"body.view\"\n def get_content(self, page: dict) -> str:\n if self == ContentFormat.STORAGE:\n return page[\"body\"][\"storage\"][\"value\"]\n elif self == ContentFormat.VIEW:\n return page[\"body\"][\"view\"][\"value\"]\n raise ValueError(\"unknown content format\")\n[docs]class ConfluenceLoader(BaseLoader):\n \"\"\"\n Load Confluence pages. Port of https://llamahub.ai/l/confluence\n This currently supports username/api_key, Oauth2 login or personal access token\n authentication.\n Specify a list of page_ids and/or a space_key to load the corresponding pages into\n Document objects; if both are specified, the union of both sets will be returned.\n You can also specify a boolean `include_attachments` to include attachments; this\n is set to False by default. If set to True, all attachments will be downloaded and\n ConfluenceReader will extract the text from the attachments and add it to the\n Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,\n SVG, Word and Excel.\n The Confluence API supports different formats of page content. The storage format is the", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-1", "text": "The Confluence API supports different formats of page content. The storage format is the\n raw XML representation for storage. The view format is the HTML representation for\n viewing, with macros rendered as they would be viewed by users. You can pass\n an enum `content_format` argument to `load()` to specify the content format; this is\n set to `ContentFormat.STORAGE` by default.\n Hint: space_key and page_id can both be found in the URL of a page in Confluence\n - https://yoursite.atlassian.com/wiki/spaces//pages/\n Example:\n .. 
code-block:: python\n from langchain.document_loaders import ConfluenceLoader\n loader = ConfluenceLoader(\n url=\"https://yoursite.atlassian.com/wiki\",\n username=\"me\",\n api_key=\"12345\"\n )\n documents = loader.load(space_key=\"SPACE\",limit=50)\n :param url: _description_\n :type url: str\n :param api_key: _description_, defaults to None\n :type api_key: str, optional\n :param username: _description_, defaults to None\n :type username: str, optional\n :param oauth2: _description_, defaults to {}\n :type oauth2: dict, optional\n :param token: _description_, defaults to None\n :type token: str, optional\n :param cloud: _description_, defaults to True\n :type cloud: bool, optional\n :param number_of_retries: How many times to retry, defaults to 3\n :type number_of_retries: Optional[int], optional\n :param min_retry_seconds: defaults to 2\n :type min_retry_seconds: Optional[int], optional", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-2", "text": ":type min_retry_seconds: Optional[int], optional\n :param max_retry_seconds: defaults to 10\n :type max_retry_seconds: Optional[int], optional\n :param confluence_kwargs: additional kwargs to initialize confluence with\n :type confluence_kwargs: dict, optional\n :raises ValueError: Errors while validating input\n :raises ImportError: Required dependencies not installed.\n \"\"\"\n def __init__(\n self,\n url: str,\n api_key: Optional[str] = None,\n username: Optional[str] = None,\n oauth2: Optional[dict] = None,\n token: Optional[str] = None,\n cloud: Optional[bool] = True,\n number_of_retries: Optional[int] = 3,\n min_retry_seconds: Optional[int] = 2,\n max_retry_seconds: Optional[int] = 10,\n confluence_kwargs: Optional[dict] = None,\n ):\n confluence_kwargs = confluence_kwargs or {}\n errors = ConfluenceLoader.validate_init_args(\n url, api_key, username, oauth2, token\n )\n if errors:\n raise ValueError(f\"Error(s) while 
validating input: {errors}\")\n self.base_url = url\n self.number_of_retries = number_of_retries\n self.min_retry_seconds = min_retry_seconds\n self.max_retry_seconds = max_retry_seconds\n try:\n from atlassian import Confluence # noqa: F401\n except ImportError:\n raise ImportError(\n \"`atlassian` package not found, please run \"\n \"`pip install atlassian-python-api`\"\n )\n if oauth2:\n self.confluence = Confluence(\n url=url, oauth2=oauth2, cloud=cloud, **confluence_kwargs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-3", "text": "url=url, oauth2=oauth2, cloud=cloud, **confluence_kwargs\n )\n elif token:\n self.confluence = Confluence(\n url=url, token=token, cloud=cloud, **confluence_kwargs\n )\n else:\n self.confluence = Confluence(\n url=url,\n username=username,\n password=api_key,\n cloud=cloud,\n **confluence_kwargs,\n )\n[docs] @staticmethod\n def validate_init_args(\n url: Optional[str] = None,\n api_key: Optional[str] = None,\n username: Optional[str] = None,\n oauth2: Optional[dict] = None,\n token: Optional[str] = None,\n ) -> Union[List, None]:\n \"\"\"Validates proper combinations of init arguments\"\"\"\n errors = []\n if url is None:\n errors.append(\"Must provide `base_url`\")\n if (api_key and not username) or (username and not api_key):\n errors.append(\n \"If one of `api_key` or `username` is provided, \"\n \"the other must be as well.\"\n )\n if (api_key or username) and oauth2:\n errors.append(\n \"Cannot provide a value for `api_key` and/or \"\n \"`username` and provide a value for `oauth2`\"\n )\n if oauth2 and oauth2.keys() != [\n \"access_token\",\n \"access_token_secret\",\n \"consumer_key\",\n \"key_cert\",\n ]:\n errors.append(\n \"You have either omitted required keys or added extra \"\n \"keys to the oauth2 dictionary. 
key values should be \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-4", "text": "\"keys to the oauth2 dictionary. key values should be \"\n \"`['access_token', 'access_token_secret', 'consumer_key', 'key_cert']`\"\n )\n if token and (api_key or username or oauth2):\n errors.append(\n \"Cannot provide a value for `token` and a value for `api_key`, \"\n \"`username` or `oauth2`\"\n )\n if errors:\n return errors\n return None\n[docs] def load(\n self,\n space_key: Optional[str] = None,\n page_ids: Optional[List[str]] = None,\n label: Optional[str] = None,\n cql: Optional[str] = None,\n include_restricted_content: bool = False,\n include_archived_content: bool = False,\n include_attachments: bool = False,\n include_comments: bool = False,\n content_format: ContentFormat = ContentFormat.STORAGE,\n limit: Optional[int] = 50,\n max_pages: Optional[int] = 1000,\n ocr_languages: Optional[str] = None,\n ) -> List[Document]:\n \"\"\"\n :param space_key: Space key retrieved from a confluence URL, defaults to None\n :type space_key: Optional[str], optional\n :param page_ids: List of specific page IDs to load, defaults to None\n :type page_ids: Optional[List[str]], optional\n :param label: Get all pages with this label, defaults to None\n :type label: Optional[str], optional\n :param cql: CQL Expression, defaults to None\n :type cql: Optional[str], optional\n :param include_restricted_content: defaults to False\n :type include_restricted_content: bool, optional", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-5", "text": ":type include_restricted_content: bool, optional\n :param include_archived_content: Whether to include archived content,\n defaults to False\n :type include_archived_content: bool, optional\n :param include_attachments: defaults to False\n :type include_attachments: bool, optional\n :param 
include_comments: defaults to False\n :type include_comments: bool, optional\n :param content_format: Specify content format, defaults to ContentFormat.STORAGE\n :type content_format: ContentFormat\n :param limit: Maximum number of pages to retrieve per request, defaults to 50\n :type limit: int, optional\n :param max_pages: Maximum number of pages to retrieve in total, defaults to 1000\n :type max_pages: int, optional\n :param ocr_languages: The languages to use for the Tesseract agent. To use a\n language, you'll first need to install the appropriate\n Tesseract language pack.\n :type ocr_languages: str, optional\n :raises ValueError: _description_\n :raises ImportError: _description_\n :return: _description_\n :rtype: List[Document]\n \"\"\"\n if not space_key and not page_ids and not label and not cql:\n raise ValueError(\n \"Must specify at least one among `space_key`, `page_ids`, \"\n \"`label`, `cql` parameters.\"\n )\n docs = []\n if space_key:\n pages = self.paginate_request(\n self.confluence.get_all_pages_from_space,\n space=space_key,\n limit=limit,\n max_pages=max_pages,\n status=\"any\" if include_archived_content else \"current\",\n expand=content_format.value,\n )\n docs += self.process_pages(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-6", "text": "expand=content_format.value,\n )\n docs += self.process_pages(\n pages,\n include_restricted_content,\n include_attachments,\n include_comments,\n content_format,\n ocr_languages,\n )\n if label:\n pages = self.paginate_request(\n self.confluence.get_all_pages_by_label,\n label=label,\n limit=limit,\n max_pages=max_pages,\n )\n ids_by_label = [page[\"id\"] for page in pages]\n if page_ids:\n page_ids = list(set(page_ids + ids_by_label))\n else:\n page_ids = list(set(ids_by_label))\n if cql:\n pages = self.paginate_request(\n self._search_content_by_cql,\n cql=cql,\n limit=limit,\n max_pages=max_pages,\n 
include_archived_spaces=include_archived_content,\n expand=content_format.value,\n )\n docs += self.process_pages(\n pages,\n include_restricted_content,\n include_attachments,\n include_comments,\n content_format,\n ocr_languages,\n )\n if page_ids:\n for page_id in page_ids:\n get_page = retry(\n reraise=True,\n stop=stop_after_attempt(\n self.number_of_retries # type: ignore[arg-type]\n ),\n wait=wait_exponential(\n multiplier=1, # type: ignore[arg-type]\n min=self.min_retry_seconds, # type: ignore[arg-type]\n max=self.max_retry_seconds, # type: ignore[arg-type]\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )(self.confluence.get_page_by_id)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-7", "text": ")(self.confluence.get_page_by_id)\n page = get_page(page_id=page_id, expand=content_format.value)\n if not include_restricted_content and not self.is_public_page(page):\n continue\n doc = self.process_page(\n page,\n include_attachments,\n include_comments,\n content_format,\n ocr_languages,\n )\n docs.append(doc)\n return docs\n def _search_content_by_cql(\n self, cql: str, include_archived_spaces: Optional[bool] = None, **kwargs: Any\n ) -> List[dict]:\n url = \"rest/api/content/search\"\n params: Dict[str, Any] = {\"cql\": cql}\n params.update(kwargs)\n if include_archived_spaces is not None:\n params[\"includeArchivedSpaces\"] = include_archived_spaces\n response = self.confluence.get(url, params=params)\n return response.get(\"results\", [])\n[docs] def paginate_request(self, retrieval_method: Callable, **kwargs: Any) -> List:\n \"\"\"Paginate the various methods to retrieve groups of pages.\n Unfortunately, due to page size, sometimes the Confluence API\n doesn't match the limit value. If `limit` is >100 confluence\n seems to cap the response to 100. 
Also, due to the Atlassian Python\n package, we don't get the \"next\" values from the \"_links\" key because\n they only return the value from the results key. So here, the pagination\n starts from 0 and goes until the max_pages, getting the `limit` number\n of pages with each request. We have to manually check if there\n are more docs based on the length of the returned list of pages, rather than", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-8", "text": "are more docs based on the length of the returned list of pages, rather than\n just checking for the presence of a `next` key in the response like this page\n would have you do:\n https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/\n :param retrieval_method: Function used to retrieve docs\n :type retrieval_method: callable\n :return: List of documents\n :rtype: List\n \"\"\"\n max_pages = kwargs.pop(\"max_pages\")\n docs: List[dict] = []\n while len(docs) < max_pages:\n get_pages = retry(\n reraise=True,\n stop=stop_after_attempt(\n self.number_of_retries # type: ignore[arg-type]\n ),\n wait=wait_exponential(\n multiplier=1,\n min=self.min_retry_seconds, # type: ignore[arg-type]\n max=self.max_retry_seconds, # type: ignore[arg-type]\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )(retrieval_method)\n batch = get_pages(**kwargs, start=len(docs))\n if not batch:\n break\n docs.extend(batch)\n return docs[:max_pages]\n[docs] def is_public_page(self, page: dict) -> bool:\n \"\"\"Check if a page is publicly accessible.\"\"\"\n restrictions = self.confluence.get_all_restrictions_for_content(page[\"id\"])\n return (\n page[\"status\"] == \"current\"\n and not restrictions[\"read\"][\"restrictions\"][\"user\"][\"results\"]\n and not restrictions[\"read\"][\"restrictions\"][\"group\"][\"results\"]\n )\n[docs] def process_pages(\n self,\n pages: List[dict],\n include_restricted_content: bool,", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-9", "text": "pages: List[dict],\n include_restricted_content: bool,\n include_attachments: bool,\n include_comments: bool,\n content_format: ContentFormat,\n ocr_languages: Optional[str] = None,\n ) -> List[Document]:\n \"\"\"Process a list of pages into a list of documents.\"\"\"\n docs = []\n for page in pages:\n if not include_restricted_content and not self.is_public_page(page):\n continue\n doc = self.process_page(\n page,\n include_attachments,\n include_comments,\n content_format,\n ocr_languages,\n )\n docs.append(doc)\n return docs\n[docs] def process_page(\n self,\n page: dict,\n include_attachments: bool,\n include_comments: bool,\n content_format: ContentFormat,\n ocr_languages: Optional[str] = None,\n ) -> Document:\n try:\n from bs4 import BeautifulSoup # type: ignore\n except ImportError:\n raise ImportError(\n \"`beautifulsoup4` package not found, please run \"\n \"`pip install beautifulsoup4`\"\n )\n if include_attachments:\n attachment_texts = self.process_attachment(page[\"id\"], ocr_languages)\n else:\n attachment_texts = []\n content = content_format.get_content(page)\n text = BeautifulSoup(content, \"lxml\").get_text(\" \", strip=True) + \"\".join(\n attachment_texts\n )\n if include_comments:\n comments = self.confluence.get_page_comments(\n page[\"id\"], expand=\"body.view.value\", depth=\"all\"\n )[\"results\"]\n comment_texts = [\n BeautifulSoup(comment[\"body\"][\"view\"][\"value\"], \"lxml\").get_text(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-10", "text": "BeautifulSoup(comment[\"body\"][\"view\"][\"value\"], \"lxml\").get_text(\n \" \", strip=True\n )\n for comment in comments\n ]\n text = text + \"\".join(comment_texts)\n return Document(\n page_content=text,\n metadata={\n \"title\": page[\"title\"],\n \"id\": 
page[\"id\"],\n \"source\": self.base_url.strip(\"/\") + page[\"_links\"][\"webui\"],\n },\n )\n[docs] def process_attachment(\n self,\n page_id: str,\n ocr_languages: Optional[str] = None,\n ) -> List[str]:\n try:\n from PIL import Image # noqa: F401\n except ImportError:\n raise ImportError(\n \"`Pillow` package not found, \" \"please run `pip install Pillow`\"\n )\n # depending on setup you may also need to set the correct path for\n # poppler and tesseract\n attachments = self.confluence.get_attachments_from_content(page_id)[\"results\"]\n texts = []\n for attachment in attachments:\n media_type = attachment[\"metadata\"][\"mediaType\"]\n absolute_url = self.base_url + attachment[\"_links\"][\"download\"]\n title = attachment[\"title\"]\n if media_type == \"application/pdf\":\n text = title + self.process_pdf(absolute_url, ocr_languages)\n elif (\n media_type == \"image/png\"\n or media_type == \"image/jpg\"\n or media_type == \"image/jpeg\"\n ):\n text = title + self.process_image(absolute_url, ocr_languages)\n elif (\n media_type == \"application/vnd.openxmlformats-officedocument\"\n \".wordprocessingml.document\"\n ):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-11", "text": "\".wordprocessingml.document\"\n ):\n text = title + self.process_doc(absolute_url)\n elif media_type == \"application/vnd.ms-excel\":\n text = title + self.process_xls(absolute_url)\n elif media_type == \"image/svg+xml\":\n text = title + self.process_svg(absolute_url, ocr_languages)\n else:\n continue\n texts.append(text)\n return texts\n[docs] def process_pdf(\n self,\n link: str,\n ocr_languages: Optional[str] = None,\n ) -> str:\n try:\n import pytesseract # noqa: F401\n from pdf2image import convert_from_bytes # noqa: F401\n except ImportError:\n raise ImportError(\n \"`pytesseract` or `pdf2image` package not found, \"\n \"please run `pip install pytesseract pdf2image`\"\n )\n response = 
self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n try:\n images = convert_from_bytes(response.content)\n except ValueError:\n return text\n for i, image in enumerate(images):\n image_text = pytesseract.image_to_string(image, lang=ocr_languages)\n text += f\"Page {i + 1}:\\n{image_text}\\n\\n\"\n return text\n[docs] def process_image(\n self,\n link: str,\n ocr_languages: Optional[str] = None,\n ) -> str:\n try:\n import pytesseract # noqa: F401", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-12", "text": "try:\n import pytesseract # noqa: F401\n from PIL import Image # noqa: F401\n except ImportError:\n raise ImportError(\n \"`pytesseract` or `Pillow` package not found, \"\n \"please run `pip install pytesseract Pillow`\"\n )\n response = self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n try:\n image = Image.open(BytesIO(response.content))\n except OSError:\n return text\n return pytesseract.image_to_string(image, lang=ocr_languages)\n[docs] def process_doc(self, link: str) -> str:\n try:\n import docx2txt # noqa: F401\n except ImportError:\n raise ImportError(\n \"`docx2txt` package not found, please run `pip install docx2txt`\"\n )\n response = self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n file_data = BytesIO(response.content)\n return docx2txt.process(file_data)\n[docs] def process_xls(self, link: str) -> str:\n try:\n import xlrd # noqa: F401\n except ImportError:\n raise ImportError(\"`xlrd` package not found, please run `pip install xlrd`\")\n response = 
self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-13", "text": "text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n workbook = xlrd.open_workbook(file_contents=response.content)\n for sheet in workbook.sheets():\n text += f\"{sheet.name}:\\n\"\n for row in range(sheet.nrows):\n for col in range(sheet.ncols):\n text += f\"{sheet.cell_value(row, col)}\\t\"\n text += \"\\n\"\n text += \"\\n\"\n return text\n[docs] def process_svg(\n self,\n link: str,\n ocr_languages: Optional[str] = None,\n ) -> str:\n try:\n import pytesseract # noqa: F401\n from PIL import Image # noqa: F401\n from reportlab.graphics import renderPM # noqa: F401\n from svglib.svglib import svg2rlg # noqa: F401\n except ImportError:\n raise ImportError(\n \"`pytesseract`, `Pillow`, `reportlab` or `svglib` package not found, \"\n \"please run `pip install pytesseract Pillow reportlab svglib`\"\n )\n response = self.confluence.request(path=link, absolute=True)\n text = \"\"\n if (\n response.status_code != 200\n or response.content == b\"\"\n or response.content is None\n ):\n return text\n drawing = svg2rlg(BytesIO(response.content))\n img_data = BytesIO()\n renderPM.drawToFile(drawing, img_data, fmt=\"PNG\")\n img_data.seek(0)\n image = Image.open(img_data)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "55434ff79ce7-14", "text": "img_data.seek(0)\n image = Image.open(img_data)\n return pytesseract.image_to_string(image, lang=ocr_languages)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/confluence.html"} +{"id": "54310919df4e-0", "text": "Source code for langchain.document_loaders.text\nimport logging\nfrom typing import List, 
Optional\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.helpers import detect_file_encodings\nlogger = logging.getLogger(__name__)\n[docs]class TextLoader(BaseLoader):\n \"\"\"Load text files.\n Args:\n file_path: Path to the file to load.\n encoding: File encoding to use. If `None`, the file will be loaded\n with the default system encoding.\n autodetect_encoding: Whether to try to autodetect the file encoding\n if the specified encoding fails.\n \"\"\"\n def __init__(\n self,\n file_path: str,\n encoding: Optional[str] = None,\n autodetect_encoding: bool = False,\n ):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.encoding = encoding\n self.autodetect_encoding = autodetect_encoding\n[docs] def load(self) -> List[Document]:\n \"\"\"Load from file path.\"\"\"\n text = \"\"\n try:\n with open(self.file_path, encoding=self.encoding) as f:\n text = f.read()\n except UnicodeDecodeError as e:\n if self.autodetect_encoding:\n detected_encodings = detect_file_encodings(self.file_path)\n for encoding in detected_encodings:\n logger.debug(\"Trying encoding: \", encoding.encoding)\n try:\n with open(self.file_path, encoding=encoding.encoding) as f:\n text = f.read()\n break\n except UnicodeDecodeError:\n continue\n else:\n raise RuntimeError(f\"Error loading {self.file_path}\") from e\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/text.html"} +{"id": "54310919df4e-1", "text": "except Exception as e:\n raise RuntimeError(f\"Error loading {self.file_path}\") from e\n metadata = {\"source\": self.file_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/text.html"} +{"id": "bd990968fac8-0", "text": "Source code for langchain.document_loaders.azlyrics\n\"\"\"Loader that loads 
AZLyrics.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class AZLyricsLoader(WebBaseLoader):\n \"\"\"Loader that loads AZLyrics webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpage.\"\"\"\n soup = self.scrape()\n title = soup.title.text\n lyrics = soup.find_all(\"div\", {\"class\": \"\"})[2].text\n text = title + lyrics\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/azlyrics.html"} +{"id": "adb4340c51cd-0", "text": "Source code for langchain.document_loaders.weather\n\"\"\"Simple reader that reads weather data from OpenWeatherMap API\"\"\"\nfrom __future__ import annotations\nfrom datetime import datetime\nfrom typing import Iterator, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper\n[docs]class WeatherDataLoader(BaseLoader):\n \"\"\"Weather Reader.\n Reads the forecast & current weather of any location using OpenWeatherMap's free\n API. 
Check out 'https://openweathermap.org/appid' for more on how to generate a free\n OpenWeatherMap API key.\n \"\"\"\n def __init__(\n self,\n client: OpenWeatherMapAPIWrapper,\n places: Sequence[str],\n ) -> None:\n \"\"\"Initialize with parameters.\"\"\"\n super().__init__()\n self.client = client\n self.places = places\n[docs] @classmethod\n def from_params(\n cls, places: Sequence[str], *, openweathermap_api_key: Optional[str] = None\n ) -> WeatherDataLoader:\n client = OpenWeatherMapAPIWrapper(openweathermap_api_key=openweathermap_api_key)\n return cls(client, places)\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazily load weather data for the given locations.\"\"\"\n for place in self.places:\n metadata = {\"queried_at\": datetime.now()}\n content = self.client.run(place)\n yield Document(page_content=content, metadata=metadata)\n[docs] def load(\n self,\n ) -> List[Document]:\n \"\"\"Load weather data for the given locations.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/weather.html"} +{"id": "cd5252feced9-0", "text": "Source code for langchain.document_loaders.email\n\"\"\"Loader that loads email files.\"\"\"\nimport os\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n satisfies_min_unstructured_version,\n)\n[docs]class UnstructuredEmailLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load email files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.file_utils.filetype import FileType, detect_filetype\n filetype = detect_filetype(self.file_path)\n if filetype == FileType.EML:\n from unstructured.partition.email import partition_email\n return partition_email(filename=self.file_path, **self.unstructured_kwargs)\n elif satisfies_min_unstructured_version(\"0.5.8\") 
and filetype == FileType.MSG:\n from unstructured.partition.msg import partition_msg\n return partition_msg(filename=self.file_path, **self.unstructured_kwargs)\n else:\n raise ValueError(\n f\"Filetype {filetype} is not supported in UnstructuredEmailLoader.\"\n )\n[docs]class OutlookMessageLoader(BaseLoader):\n \"\"\"\n Loader that loads Outlook Message files using extract_msg.\n https://github.com/TeamMsgExtractor/msg-extractor\n \"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n if not os.path.isfile(self.file_path):\n raise ValueError(\"File path %s is not a valid file\" % self.file_path)\n try:\n import extract_msg # noqa:F401\n except ImportError:\n raise ImportError(\n \"extract_msg is not installed. Please install it with \"\n \"`pip install extract_msg`\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/email.html"} +{"id": "cd5252feced9-1", "text": "\"`pip install extract_msg`\"\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\"\"\"\n import extract_msg\n msg = extract_msg.Message(self.file_path)\n return [\n Document(\n page_content=msg.body,\n metadata={\n \"subject\": msg.subject,\n \"sender\": msg.sender,\n \"date\": msg.date,\n },\n )\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/email.html"} +{"id": "86997a45354c-0", "text": "Source code for langchain.document_loaders.odt\n\"\"\"Loader that loads Open Office ODT files.\"\"\"\nfrom typing import Any, List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n validate_unstructured_version,\n)\n[docs]class UnstructuredODTLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load open office ODT files.\"\"\"\n def __init__(\n self, file_path: str, mode: str = \"single\", **unstructured_kwargs: Any\n ):\n 
validate_unstructured_version(min_unstructured_version=\"0.6.3\")\n super().__init__(file_path=file_path, mode=mode, **unstructured_kwargs)\n def _get_elements(self) -> List:\n from unstructured.partition.odt import partition_odt\n return partition_odt(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/odt.html"} +{"id": "b3e8401aa746-0", "text": "Source code for langchain.document_loaders.blackboard\n\"\"\"Loader that loads all documents from a blackboard course.\"\"\"\nimport contextlib\nimport re\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Tuple\nfrom urllib.parse import unquote\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.directory import DirectoryLoader\nfrom langchain.document_loaders.pdf import PyPDFLoader\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class BlackboardLoader(WebBaseLoader):\n \"\"\"Loader that loads all documents from a Blackboard course.\n This loader is not compatible with all Blackboard courses. It is only\n compatible with courses that use the new Blackboard interface.\n To use this loader, you must have the BbRouter cookie. You can get this\n cookie by logging into the course and then copying the value of the\n BbRouter cookie from the browser's developer tools.\n Example:\n .. 
code-block:: python\n from langchain.document_loaders import BlackboardLoader\n loader = BlackboardLoader(\n blackboard_course_url=\"https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1\",\n bbrouter=\"expires:12345...\",\n )\n documents = loader.load()\n \"\"\"\n base_url: str\n folder_path: str\n load_all_recursively: bool\n def __init__(\n self,\n blackboard_course_url: str,\n bbrouter: str,\n load_all_recursively: bool = True,\n basic_auth: Optional[Tuple[str, str]] = None,\n cookies: Optional[dict] = None,\n ):\n \"\"\"Initialize with blackboard course url.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} +{"id": "b3e8401aa746-1", "text": "):\n \"\"\"Initialize with blackboard course url.\n The BbRouter cookie is required for most blackboard courses.\n Args:\n blackboard_course_url: Blackboard course url.\n bbrouter: BbRouter cookie.\n load_all_recursively: If True, load all documents recursively.\n basic_auth: Basic auth credentials.\n cookies: Cookies.\n Raises:\n ValueError: If blackboard course url is invalid.\n \"\"\"\n super().__init__(blackboard_course_url)\n # Get base url\n try:\n self.base_url = blackboard_course_url.split(\"/webapps/blackboard\")[0]\n except IndexError:\n raise ValueError(\n \"Invalid blackboard course url. 
\"\n \"Please provide a url that starts with \"\n \"https:///webapps/blackboard\"\n )\n if basic_auth is not None:\n self.session.auth = basic_auth\n # Combine cookies\n if cookies is None:\n cookies = {}\n cookies.update({\"BbRouter\": bbrouter})\n self.session.cookies.update(cookies)\n self.load_all_recursively = load_all_recursively\n self.check_bs4()\n[docs] def check_bs4(self) -> None:\n \"\"\"Check if BeautifulSoup4 is installed.\n Raises:\n ImportError: If BeautifulSoup4 is not installed.\n \"\"\"\n try:\n import bs4 # noqa: F401\n except ImportError:\n raise ImportError(\n \"BeautifulSoup4 is required for BlackboardLoader. \"\n \"Please install it with `pip install beautifulsoup4`.\"\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load data into document objects.\n Returns:\n List of documents.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} +{"id": "b3e8401aa746-2", "text": "\"\"\"Load data into document objects.\n Returns:\n List of documents.\n \"\"\"\n if self.load_all_recursively:\n soup_info = self.scrape()\n self.folder_path = self._get_folder_path(soup_info)\n relative_paths = self._get_paths(soup_info)\n documents = []\n for path in relative_paths:\n url = self.base_url + path\n print(f\"Fetching documents from {url}\")\n soup_info = self._scrape(url)\n with contextlib.suppress(ValueError):\n documents.extend(self._get_documents(soup_info))\n return documents\n else:\n print(f\"Fetching documents from {self.web_path}\")\n soup_info = self.scrape()\n self.folder_path = self._get_folder_path(soup_info)\n return self._get_documents(soup_info)\n def _get_folder_path(self, soup: Any) -> str:\n \"\"\"Get the folder path to save the documents in.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n Folder path.\n \"\"\"\n # Get the course name\n course_name = soup.find(\"span\", {\"id\": \"crumb_1\"})\n if course_name is None:\n raise ValueError(\"No course name found.\")\n 
course_name = course_name.text.strip()\n # Prepare the folder path\n course_name_clean = (\n unquote(course_name)\n .replace(\" \", \"_\")\n .replace(\"/\", \"_\")\n .replace(\":\", \"_\")\n .replace(\",\", \"_\")\n .replace(\"?\", \"_\")\n .replace(\"'\", \"_\")\n .replace(\"!\", \"_\")\n .replace('\"', \"_\")\n )\n # Get the folder path\n folder_path = Path(\".\") / course_name_clean", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} +{"id": "b3e8401aa746-3", "text": "# Get the folder path\n folder_path = Path(\".\") / course_name_clean\n return str(folder_path)\n def _get_documents(self, soup: Any) -> List[Document]:\n \"\"\"Fetch content from page and return Documents.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n List of documents.\n \"\"\"\n attachments = self._get_attachments(soup)\n self._download_attachments(attachments)\n documents = self._load_documents()\n return documents\n def _get_attachments(self, soup: Any) -> List[str]:\n \"\"\"Get all attachments from a page.\n Args:\n soup: BeautifulSoup4 soup object.\n Returns:\n List of attachments.\n \"\"\"\n from bs4 import BeautifulSoup, Tag\n # Get content list\n content_list = soup.find(\"ul\", {\"class\": \"contentList\"})\n if content_list is None:\n raise ValueError(\"No content list found.\")\n content_list: BeautifulSoup # type: ignore\n # Get all attachments\n attachments = []\n for attachment in content_list.find_all(\"ul\", {\"class\": \"attachments\"}):\n attachment: Tag # type: ignore\n for link in attachment.find_all(\"a\"):\n link: Tag # type: ignore\n href = link.get(\"href\")\n # Only add if href is not None and does not start with #\n if href is not None and not href.startswith(\"#\"):\n attachments.append(href)\n return attachments\n def _download_attachments(self, attachments: List[str]) -> None:\n \"\"\"Download all attachments.\n Args:\n attachments: List of attachments.\n \"\"\"\n # Make sure the folder exists\n 
Path(self.folder_path).mkdir(parents=True, exist_ok=True)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} +{"id": "b3e8401aa746-4", "text": "Path(self.folder_path).mkdir(parents=True, exist_ok=True)\n # Download all attachments\n for attachment in attachments:\n self.download(attachment)\n def _load_documents(self) -> List[Document]:\n \"\"\"Load all documents in the folder.\n Returns:\n List of documents.\n \"\"\"\n # Create the document loader\n loader = DirectoryLoader(\n path=self.folder_path, glob=\"*.pdf\", loader_cls=PyPDFLoader # type: ignore\n )\n # Load the documents\n documents = loader.load()\n # Return all documents\n return documents\n def _get_paths(self, soup: Any) -> List[str]:\n \"\"\"Get all relative paths in the navbar.\"\"\"\n relative_paths = []\n course_menu = soup.find(\"ul\", {\"class\": \"courseMenu\"})\n if course_menu is None:\n raise ValueError(\"No course menu found.\")\n for link in course_menu.find_all(\"a\"):\n href = link.get(\"href\")\n if href is not None and href.startswith(\"/\"):\n relative_paths.append(href)\n return relative_paths\n[docs] def download(self, path: str) -> None:\n \"\"\"Download a file from a url.\n Args:\n path: Path to the file.\n \"\"\"\n # Get the file content\n response = self.session.get(self.base_url + path, allow_redirects=True)\n # Get the filename\n filename = self.parse_filename(response.url)\n # Write the file to disk\n with open(Path(self.folder_path) / filename, \"wb\") as f:\n f.write(response.content)\n[docs] def parse_filename(self, url: str) -> str:\n \"\"\"Parse the filename from a url.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} +{"id": "b3e8401aa746-5", "text": "\"\"\"Parse the filename from a url.\n Args:\n url: Url to parse the filename from.\n Returns:\n The filename.\n \"\"\"\n if (url_path := Path(url)) and url_path.suffix == \".pdf\":\n return 
url_path.name\n else:\n return self._parse_filename_from_url(url)\n def _parse_filename_from_url(self, url: str) -> str:\n \"\"\"Parse the filename from a url.\n Args:\n url: Url to parse the filename from.\n Returns:\n The filename.\n Raises:\n ValueError: If the filename could not be parsed.\n \"\"\"\n filename_matches = re.search(r\"filename%2A%3DUTF-8%27%27(.+)\", url)\n if filename_matches:\n filename = filename_matches.group(1)\n else:\n raise ValueError(f\"Could not parse filename from {url}\")\n if \".pdf\" not in filename:\n raise ValueError(f\"Incorrect file type: {filename}\")\n filename = filename.split(\".pdf\")[0] + \".pdf\"\n filename = unquote(filename)\n filename = filename.replace(\"%20\", \" \")\n return filename\nif __name__ == \"__main__\":\n loader = BlackboardLoader(\n \"https:///webapps/blackboard/content/listContent.jsp?course_id=__1&content_id=__1&mode=reset\",\n \"\",\n load_all_recursively=True,\n )\n documents = loader.load()\n print(f\"Loaded {len(documents)} pages of PDFs from {loader.web_path}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blackboard.html"} +{"id": "690e48bb411b-0", "text": "Source code for langchain.document_loaders.telegram\n\"\"\"Loader that loads Telegram chat json dump.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport json\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nif TYPE_CHECKING:\n import pandas as pd\n from telethon.hints import EntityLike\ndef concatenate_rows(row: dict) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\"\"\"\n date = row[\"date\"]\n sender = row[\"from\"]\n text = row[\"text\"]\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class TelegramChatFileLoader(BaseLoader):\n 
\"\"\"Loader that loads Telegram chat json directory dump.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n text = \"\".join(\n concatenate_rows(message)\n for message in d[\"messages\"]\n if message[\"type\"] == \"message\" and isinstance(message[\"text\"], str)\n )\n metadata = {\"source\": str(p)}\n return [Document(page_content=text, metadata=metadata)]\ndef text_to_docs(text: Union[str, List[str]]) -> List[Document]:\n \"\"\"Converts a string or list of strings to a list of Documents with metadata.\"\"\"\n if isinstance(text, str):\n # Take a single string as one page", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} +{"id": "690e48bb411b-1", "text": "if isinstance(text, str):\n # Take a single string as one page\n text = [text]\n page_docs = [Document(page_content=page) for page in text]\n # Add page numbers as metadata\n for i, doc in enumerate(page_docs):\n doc.metadata[\"page\"] = i + 1\n # Split pages into chunks\n doc_chunks = []\n for doc in page_docs:\n text_splitter = RecursiveCharacterTextSplitter(\n chunk_size=800,\n separators=[\"\\n\\n\", \"\\n\", \".\", \"!\", \"?\", \",\", \" \", \"\"],\n chunk_overlap=20,\n )\n chunks = text_splitter.split_text(doc.page_content)\n for i, chunk in enumerate(chunks):\n doc = Document(\n page_content=chunk, metadata={\"page\": doc.metadata[\"page\"], \"chunk\": i}\n )\n # Add sources a metadata\n doc.metadata[\"source\"] = f\"{doc.metadata['page']}-{doc.metadata['chunk']}\"\n doc_chunks.append(doc)\n return doc_chunks\n[docs]class TelegramChatApiLoader(BaseLoader):\n \"\"\"Loader that loads Telegram chat json directory dump.\"\"\"\n def __init__(\n self,\n chat_entity: Optional[EntityLike] = None,\n api_id: Optional[int] = None,\n api_hash: 
Optional[str] = None,\n username: Optional[str] = None,\n file_path: str = \"telegram_data.json\",\n ):\n \"\"\"Initialize with API parameters.\"\"\"\n self.chat_entity = chat_entity\n self.api_id = api_id\n self.api_hash = api_hash\n self.username = username\n self.file_path = file_path\n[docs] async def fetch_data_from_telegram(self) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} +{"id": "690e48bb411b-2", "text": "[docs] async def fetch_data_from_telegram(self) -> None:\n \"\"\"Fetch data from Telegram API and save it as a JSON file.\"\"\"\n from telethon.sync import TelegramClient\n data = []\n async with TelegramClient(self.username, self.api_id, self.api_hash) as client:\n async for message in client.iter_messages(self.chat_entity):\n is_reply = message.reply_to is not None\n reply_to_id = message.reply_to.reply_to_msg_id if is_reply else None\n data.append(\n {\n \"sender_id\": message.sender_id,\n \"text\": message.text,\n \"date\": message.date.isoformat(),\n \"message.id\": message.id,\n \"is_reply\": is_reply,\n \"reply_to_id\": reply_to_id,\n }\n )\n with open(self.file_path, \"w\", encoding=\"utf-8\") as f:\n json.dump(data, f, ensure_ascii=False, indent=4)\n def _get_message_threads(self, data: pd.DataFrame) -> dict:\n \"\"\"Create a dictionary of message threads from the given data.\n Args:\n data (pd.DataFrame): A DataFrame containing the conversation \\\n data with columns:\n - message.sender_id\n - text\n - date\n - message.id\n - is_reply\n - reply_to_id\n Returns:\n dict: A dictionary where the key is the parent message ID and \\\n the value is a list of message IDs in ascending order.\n \"\"\"\n def find_replies(parent_id: int, reply_data: pd.DataFrame) -> List[int]:\n \"\"\"\n Recursively find all replies to a given parent message ID.\n Args:\n parent_id (int): The parent message ID.", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} +{"id": "690e48bb411b-3", "text": "Args:\n parent_id (int): The parent message ID.\n reply_data (pd.DataFrame): A DataFrame containing reply messages.\n Returns:\n list: A list of message IDs that are replies to the parent message ID.\n \"\"\"\n # Find direct replies to the parent message ID\n direct_replies = reply_data[reply_data[\"reply_to_id\"] == parent_id][\n \"message.id\"\n ].tolist()\n # Recursively find replies to the direct replies\n all_replies = []\n for reply_id in direct_replies:\n all_replies += [reply_id] + find_replies(reply_id, reply_data)\n return all_replies\n # Filter out parent messages\n parent_messages = data[~data[\"is_reply\"]]\n # Filter out reply messages and drop rows with NaN in 'reply_to_id'\n reply_messages = data[data[\"is_reply\"]].dropna(subset=[\"reply_to_id\"])\n # Convert 'reply_to_id' to integer\n reply_messages[\"reply_to_id\"] = reply_messages[\"reply_to_id\"].astype(int)\n # Create a dictionary of message threads with parent message IDs as keys and \\\n # lists of reply message IDs as values\n message_threads = {\n parent_id: [parent_id] + find_replies(parent_id, reply_messages)\n for parent_id in parent_messages[\"message.id\"]\n }\n return message_threads\n def _combine_message_texts(\n self, message_threads: Dict[int, List[int]], data: pd.DataFrame\n ) -> str:\n \"\"\"\n Combine the message texts for each parent message ID based \\\n on the list of message threads.\n Args:\n message_threads (dict): A dictionary where the key is the parent message \\", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} +{"id": "690e48bb411b-4", "text": "message_threads (dict): A dictionary where the key is the parent message \\\n ID and the value is a list of message IDs in ascending order.\n data (pd.DataFrame): A DataFrame containing the conversation data:\n - message.sender_id\n - 
text\n - date\n - message.id\n - is_reply\n - reply_to_id\n Returns:\n str: A combined string of message texts sorted by date.\n \"\"\"\n combined_text = \"\"\n # Iterate through sorted parent message IDs\n for parent_id, message_ids in message_threads.items():\n # Get the message texts for the message IDs and sort them by date\n message_texts = (\n data[data[\"message.id\"].isin(message_ids)]\n .sort_values(by=\"date\")[\"text\"]\n .tolist()\n )\n message_texts = [str(elem) for elem in message_texts]\n # Combine the message texts\n combined_text += \" \".join(message_texts) + \".\\n\"\n return combined_text.strip()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n if self.chat_entity is not None:\n try:\n import nest_asyncio\n nest_asyncio.apply()\n asyncio.run(self.fetch_data_from_telegram())\n except ImportError:\n raise ImportError(\n \"\"\"`nest_asyncio` package not found.\n please install with `pip install nest_asyncio`\n \"\"\"\n )\n p = Path(self.file_path)\n with open(p, encoding=\"utf8\") as f:\n d = json.load(f)\n try:\n import pandas as pd\n except ImportError:\n raise ImportError(\n \"\"\"`pandas` package not found. 
\n please install with `pip install pandas`\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} +{"id": "690e48bb411b-5", "text": "please install with `pip install pandas`\n \"\"\"\n )\n normalized_messages = pd.json_normalize(d)\n df = pd.DataFrame(normalized_messages)\n message_threads = self._get_message_threads(df)\n combined_texts = self._combine_message_texts(message_threads, df)\n return text_to_docs(combined_texts)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html"} +{"id": "5c4d318f409e-0", "text": "Source code for langchain.document_loaders.embaas\nimport base64\nimport warnings\nfrom typing import Any, Dict, Iterator, List, Optional\nimport requests\nfrom pydantic import BaseModel, root_validator, validator\nfrom typing_extensions import NotRequired, TypedDict\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseBlobParser, BaseLoader\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.text_splitter import TextSplitter\nfrom langchain.utils import get_from_dict_or_env\nEMBAAS_DOC_API_URL = \"https://api.embaas.io/v1/document/extract-text/bytes/\"\nclass EmbaasDocumentExtractionParameters(TypedDict):\n \"\"\"Parameters for the embaas document extraction API.\"\"\"\n mime_type: NotRequired[str]\n \"\"\"The mime type of the document.\"\"\"\n file_extension: NotRequired[str]\n \"\"\"The file extension of the document.\"\"\"\n file_name: NotRequired[str]\n \"\"\"The file name of the document.\"\"\"\n should_chunk: NotRequired[bool]\n \"\"\"Whether to chunk the document into pages.\"\"\"\n chunk_size: NotRequired[int]\n \"\"\"The maximum size of the text chunks.\"\"\"\n chunk_overlap: NotRequired[int]\n \"\"\"The maximum overlap allowed between chunks.\"\"\"\n chunk_splitter: NotRequired[str]\n \"\"\"The text splitter class name for creating chunks.\"\"\"\n separators: 
NotRequired[List[str]]\n \"\"\"The separators for chunks.\"\"\"\n should_embed: NotRequired[bool]\n \"\"\"Whether to create embeddings for the document in the response.\"\"\"\n model: NotRequired[str]\n \"\"\"The model to pass to the Embaas document extraction API.\"\"\"\n instruction: NotRequired[str]\n \"\"\"The instruction to pass to the Embaas document extraction API.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} +{"id": "5c4d318f409e-1", "text": "\"\"\"The instruction to pass to the Embaas document extraction API.\"\"\"\nclass EmbaasDocumentExtractionPayload(EmbaasDocumentExtractionParameters):\n \"\"\"Payload for the Embaas document extraction API.\"\"\"\n bytes: str\n \"\"\"The base64 encoded bytes of the document to extract text from.\"\"\"\nclass BaseEmbaasLoader(BaseModel):\n embaas_api_key: Optional[str] = None\n api_url: str = EMBAAS_DOC_API_URL\n \"\"\"The URL of the embaas document extraction API.\"\"\"\n params: EmbaasDocumentExtractionParameters = EmbaasDocumentExtractionParameters()\n \"\"\"Additional parameters to pass to the embaas document extraction API.\"\"\"\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n embaas_api_key = get_from_dict_or_env(\n values, \"embaas_api_key\", \"EMBAAS_API_KEY\"\n )\n values[\"embaas_api_key\"] = embaas_api_key\n return values\n[docs]class EmbaasBlobLoader(BaseEmbaasLoader, BaseBlobParser):\n \"\"\"Wrapper around embaas's document byte loader service.\n To use, you should have the\n environment variable ``EMBAAS_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n # Default parsing\n from langchain.document_loaders.embaas import EmbaasBlobLoader\n loader = EmbaasBlobLoader()\n blob = Blob.from_path(path=\"example.mp3\")\n documents = loader.parse(blob=blob)\n # Custom api parameters (create embeddings automatically)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} +{"id": "5c4d318f409e-2", "text": "# Custom api parameters (create embeddings automatically)\n from langchain.document_loaders.embaas import EmbaasBlobLoader\n loader = EmbaasBlobLoader(\n params={\n \"should_embed\": True,\n \"model\": \"e5-large-v2\",\n \"chunk_size\": 256,\n \"chunk_splitter\": \"CharacterTextSplitter\"\n }\n )\n blob = Blob.from_path(path=\"example.pdf\")\n documents = loader.parse(blob=blob)\n \"\"\"\n[docs] def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n yield from self._get_documents(blob=blob)\n @staticmethod\n def _api_response_to_documents(chunks: List[Dict[str, Any]]) -> List[Document]:\n \"\"\"Convert the API response to a list of documents.\"\"\"\n docs = []\n for chunk in chunks:\n metadata = chunk[\"metadata\"]\n if chunk.get(\"embedding\", None) is not None:\n metadata[\"embedding\"] = chunk[\"embedding\"]\n doc = Document(page_content=chunk[\"text\"], metadata=metadata)\n docs.append(doc)\n return docs\n def _generate_payload(self, blob: Blob) -> EmbaasDocumentExtractionPayload:\n \"\"\"Generates payload for the API request.\"\"\"\n base64_byte_str = base64.b64encode(blob.as_bytes()).decode()\n payload: EmbaasDocumentExtractionPayload = EmbaasDocumentExtractionPayload(\n bytes=base64_byte_str,\n # Workaround for mypy issue: https://github.com/python/mypy/issues/9408\n # type: ignore\n **self.params,\n )\n if blob.mimetype is not None and payload.get(\"mime_type\", None) is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} +{"id": "5c4d318f409e-3", "text": "payload[\"mime_type\"] = 
blob.mimetype\n return payload\n def _handle_request(\n self, payload: EmbaasDocumentExtractionPayload\n ) -> List[Document]:\n \"\"\"Sends a request to the embaas API and handles the response.\"\"\"\n headers = {\n \"Authorization\": f\"Bearer {self.embaas_api_key}\",\n \"Content-Type\": \"application/json\",\n }\n response = requests.post(self.api_url, headers=headers, json=payload)\n response.raise_for_status()\n parsed_response = response.json()\n return EmbaasBlobLoader._api_response_to_documents(\n chunks=parsed_response[\"data\"][\"chunks\"]\n )\n def _get_documents(self, blob: Blob) -> Iterator[Document]:\n \"\"\"Get the documents from the blob.\"\"\"\n payload = self._generate_payload(blob=blob)\n try:\n documents = self._handle_request(payload=payload)\n except requests.exceptions.RequestException as e:\n if e.response is None or not e.response.text:\n raise ValueError(\n f\"Error raised by embaas document text extraction API: {e}\"\n )\n parsed_response = e.response.json()\n if \"message\" in parsed_response:\n raise ValueError(\n f\"Validation Error raised by embaas document text extraction API:\"\n f\" {parsed_response['message']}\"\n )\n raise\n yield from documents\n[docs]class EmbaasLoader(BaseEmbaasLoader, BaseLoader):\n \"\"\"Wrapper around embaas's document loader service.\n To use, you should have the\n environment variable ``EMBAAS_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} +{"id": "5c4d318f409e-4", "text": "it as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n # Default parsing\n from langchain.document_loaders.embaas import EmbaasLoader\n loader = EmbaasLoader(file_path=\"example.mp3\")\n documents = loader.load()\n # Custom api parameters (create embeddings automatically)\n from langchain.document_loaders.embaas import EmbaasLoader\n loader = EmbaasLoader(\n file_path=\"example.pdf\",\n params={\n \"should_embed\": True,\n \"model\": \"e5-large-v2\",\n \"chunk_size\": 256,\n \"chunk_splitter\": \"CharacterTextSplitter\"\n }\n )\n documents = loader.load()\n \"\"\"\n file_path: str\n \"\"\"The path to the file to load.\"\"\"\n blob_loader: Optional[EmbaasBlobLoader]\n \"\"\"The blob loader to use. If not provided, a default one will be created.\"\"\"\n @validator(\"blob_loader\", always=True)\n def validate_blob_loader(\n cls, v: EmbaasBlobLoader, values: Dict\n ) -> EmbaasBlobLoader:\n return v or EmbaasBlobLoader(\n embaas_api_key=values[\"embaas_api_key\"],\n api_url=values[\"api_url\"],\n params=values[\"params\"],\n )\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Load the documents from the file path lazily.\"\"\"\n blob = Blob.from_path(path=self.file_path)\n assert self.blob_loader is not None\n # Should never be None, but mypy doesn't know that.\n yield from self.blob_loader.lazy_parse(blob=blob)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} +{"id": "5c4d318f409e-5", "text": "yield from self.blob_loader.lazy_parse(blob=blob)\n[docs] def load(self) -> List[Document]:\n return list(self.lazy_load())\n[docs] def load_and_split(\n self, text_splitter: Optional[TextSplitter] = None\n ) -> List[Document]:\n if self.params.get(\"should_embed\", False):\n warnings.warn(\n \"Embeddings are not supported with load_and_split.\"\n \" Use the API splitter to properly generate embeddings.\"\n \" For more information see embaas.io docs.\"\n )\n return super().load_and_split(text_splitter=text_splitter)", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/embaas.html"} +{"id": "f38123c92a07-0", "text": "Source code for langchain.document_loaders.airtable\nfrom typing import Iterator, List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class AirtableLoader(BaseLoader):\n \"\"\"Loader for Airtable tables.\"\"\"\n def __init__(self, api_token: str, table_id: str, base_id: str):\n \"\"\"Initialize with API token and the IDs for table and base\"\"\"\n self.api_token = api_token\n self.table_id = table_id\n self.base_id = base_id\n[docs] def lazy_load(self) -> Iterator[Document]:\n \"\"\"Lazy load records from table.\"\"\"\n from pyairtable import Table\n table = Table(self.api_token, self.base_id, self.table_id)\n records = table.all()\n for record in records:\n # Need to convert record from dict to str\n yield Document(\n page_content=str(record),\n metadata={\n \"source\": self.base_id + \"_\" + self.table_id,\n \"base_id\": self.base_id,\n \"table_id\": self.table_id,\n },\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load Table.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/airtable.html"} +{"id": "1de94920bfbf-0", "text": "Source code for langchain.document_loaders.pdf\n\"\"\"Loader that loads PDF files.\"\"\"\nimport json\nimport logging\nimport os\nimport tempfile\nimport time\nfrom abc import ABC\nfrom io import StringIO\nfrom pathlib import Path\nfrom typing import Any, Iterator, List, Mapping, Optional\nfrom urllib.parse import urlparse\nimport requests\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.document_loaders.blob_loaders import Blob\nfrom langchain.document_loaders.parsers.pdf import (\n PDFMinerParser,\n PDFPlumberParser,\n PyMuPDFParser,\n PyPDFium2Parser,\n PyPDFParser,\n)\nfrom 
langchain.document_loaders.unstructured import UnstructuredFileLoader\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__file__)\n[docs]class UnstructuredPDFLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load PDF files.\"\"\"\n def _get_elements(self) -> List:\n from unstructured.partition.pdf import partition_pdf\n return partition_pdf(filename=self.file_path, **self.unstructured_kwargs)\nclass BasePDFLoader(BaseLoader, ABC):\n \"\"\"Base loader class for PDF files.\n Defaults to check for local file, but if the file is a web path, it will download it\n to a temporary file, and use that, then clean up the temporary file after completion\n \"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n self.file_path = file_path\n self.web_path = None\n if \"~\" in self.file_path:\n self.file_path = os.path.expanduser(self.file_path)\n # If the file is a web path, download it to a temporary file, and use that", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "1de94920bfbf-1", "text": "if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):\n r = requests.get(self.file_path)\n if r.status_code != 200:\n raise ValueError(\n \"Check the url of your file; returned status code %s\"\n % r.status_code\n )\n self.web_path = self.file_path\n self.temp_dir = tempfile.TemporaryDirectory()\n temp_pdf = Path(self.temp_dir.name) / \"tmp.pdf\"\n with open(temp_pdf, mode=\"wb\") as f:\n f.write(r.content)\n self.file_path = str(temp_pdf)\n elif not os.path.isfile(self.file_path):\n raise ValueError(\"File path %s is not a valid file or url\" % self.file_path)\n def __del__(self) -> None:\n if hasattr(self, \"temp_dir\"):\n self.temp_dir.cleanup()\n @staticmethod\n def _is_valid_url(url: str) -> bool:\n \"\"\"Check if the url is valid.\"\"\"\n parsed = urlparse(url)\n return bool(parsed.netloc) and bool(parsed.scheme)\n 
@property\n def source(self) -> str:\n return self.web_path if self.web_path is not None else self.file_path\n[docs]class OnlinePDFLoader(BasePDFLoader):\n \"\"\"Loader that loads online PDFs.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n loader = UnstructuredPDFLoader(str(self.file_path))\n return loader.load()\n[docs]class PyPDFLoader(BasePDFLoader):\n \"\"\"Loads a PDF with pypdf and chunks at character level.\n Loader also stores page numbers in metadatas.\n \"\"\"\n def __init__(self, file_path: str) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "1de94920bfbf-2", "text": "\"\"\"\n def __init__(self, file_path: str) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pypdf # noqa:F401\n except ImportError:\n raise ImportError(\n \"pypdf package not found, please install it with \" \"`pip install pypdf`\"\n )\n self.parser = PyPDFParser()\n super().__init__(file_path)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as pages.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazy load given path as pages.\"\"\"\n blob = Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PyPDFium2Loader(BasePDFLoader):\n \"\"\"Loads a PDF with pypdfium2 and chunks at character level.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n super().__init__(file_path)\n self.parser = PyPDFium2Parser()\n[docs] def load(self) -> List[Document]:\n \"\"\"Load given path as pages.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazy load given path as pages.\"\"\"\n blob = Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PyPDFDirectoryLoader(BaseLoader):\n \"\"\"Loads a directory with PDF files with pypdf and chunks at character level.\n Loader also 
stores page numbers in metadatas.\n \"\"\"\n def __init__(\n self,\n path: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "1de94920bfbf-3", "text": "\"\"\"\n def __init__(\n self,\n path: str,\n glob: str = \"**/[!.]*.pdf\",\n silent_errors: bool = False,\n load_hidden: bool = False,\n recursive: bool = False,\n ):\n self.path = path\n self.glob = glob\n self.load_hidden = load_hidden\n self.recursive = recursive\n self.silent_errors = silent_errors\n @staticmethod\n def _is_visible(path: Path) -> bool:\n return not any(part.startswith(\".\") for part in path.parts)\n[docs] def load(self) -> List[Document]:\n p = Path(self.path)\n docs = []\n items = p.rglob(self.glob) if self.recursive else p.glob(self.glob)\n for i in items:\n if i.is_file():\n if self._is_visible(i.relative_to(p)) or self.load_hidden:\n try:\n loader = PyPDFLoader(str(i))\n sub_docs = loader.load()\n for doc in sub_docs:\n doc.metadata[\"source\"] = str(i)\n docs.extend(sub_docs)\n except Exception as e:\n if self.silent_errors:\n logger.warning(e)\n else:\n raise e\n return docs\n[docs]class PDFMinerLoader(BasePDFLoader):\n \"\"\"Loader that uses PDFMiner to load PDF files.\"\"\"\n def __init__(self, file_path: str) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n from pdfminer.high_level import extract_text # noqa:F401\n except ImportError:\n raise ImportError(\n \"`pdfminer` package not found, please install it with \"\n \"`pip install pdfminer.six`\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "1de94920bfbf-4", "text": "\"`pip install pdfminer.six`\"\n )\n super().__init__(file_path)\n self.parser = PDFMinerParser()\n[docs] def load(self) -> List[Document]:\n \"\"\"Eagerly load the content.\"\"\"\n return list(self.lazy_load())\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Lazily load documents.\"\"\"\n blob = 
Blob.from_path(self.file_path)\n yield from self.parser.parse(blob)\n[docs]class PDFMinerPDFasHTMLLoader(BasePDFLoader):\n \"\"\"Loader that uses PDFMiner to load PDF files as HTML content.\"\"\"\n def __init__(self, file_path: str):\n \"\"\"Initialize with file path.\"\"\"\n try:\n from pdfminer.high_level import extract_text_to_fp # noqa:F401\n except ImportError:\n raise ImportError(\n \"`pdfminer` package not found, please install it with \"\n \"`pip install pdfminer.six`\"\n )\n super().__init__(file_path)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load file.\"\"\"\n from pdfminer.high_level import extract_text_to_fp\n from pdfminer.layout import LAParams\n from pdfminer.utils import open_filename\n output_string = StringIO()\n with open_filename(self.file_path, \"rb\") as fp:\n extract_text_to_fp(\n fp, # type: ignore[arg-type]\n output_string,\n codec=\"\",\n laparams=LAParams(),\n output_type=\"html\",\n )\n metadata = {\"source\": self.file_path}\n return [Document(page_content=output_string.getvalue(), metadata=metadata)]\n[docs]class PyMuPDFLoader(BasePDFLoader):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "1de94920bfbf-5", "text": "[docs]class PyMuPDFLoader(BasePDFLoader):\n \"\"\"Loader that uses PyMuPDF to load PDF files.\"\"\"\n def __init__(self, file_path: str) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import fitz # noqa:F401\n except ImportError:\n raise ImportError(\n \"`PyMuPDF` package not found, please install it with \"\n \"`pip install pymupdf`\"\n )\n super().__init__(file_path)\n[docs] def load(self, **kwargs: Optional[Any]) -> List[Document]:\n \"\"\"Load file.\"\"\"\n parser = PyMuPDFParser(text_kwargs=kwargs)\n blob = Blob.from_path(self.file_path)\n return parser.parse(blob)\n# MathpixPDFLoader implementation taken largely from Daniel Gross's:\n# https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21\n[docs]class 
MathpixPDFLoader(BasePDFLoader):\n def __init__(\n self,\n file_path: str,\n processed_file_format: str = \"mmd\",\n max_wait_time_seconds: int = 500,\n should_clean_pdf: bool = False,\n **kwargs: Any,\n ) -> None:\n super().__init__(file_path)\n self.mathpix_api_key = get_from_dict_or_env(\n kwargs, \"mathpix_api_key\", \"MATHPIX_API_KEY\"\n )\n self.mathpix_api_id = get_from_dict_or_env(\n kwargs, \"mathpix_api_id\", \"MATHPIX_API_ID\"\n )\n self.processed_file_format = processed_file_format\n self.max_wait_time_seconds = max_wait_time_seconds\n self.should_clean_pdf = should_clean_pdf", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "1de94920bfbf-6", "text": "self.should_clean_pdf = should_clean_pdf\n @property\n def headers(self) -> dict:\n return {\"app_id\": self.mathpix_api_id, \"app_key\": self.mathpix_api_key}\n @property\n def url(self) -> str:\n return \"https://api.mathpix.com/v3/pdf\"\n @property\n def data(self) -> dict:\n options = {\"conversion_formats\": {self.processed_file_format: True}}\n return {\"options_json\": json.dumps(options)}\n[docs] def send_pdf(self) -> str:\n with open(self.file_path, \"rb\") as f:\n files = {\"file\": f}\n response = requests.post(\n self.url, headers=self.headers, files=files, data=self.data\n )\n response_data = response.json()\n if \"pdf_id\" in response_data:\n pdf_id = response_data[\"pdf_id\"]\n return pdf_id\n else:\n raise ValueError(\"Unable to send PDF to Mathpix.\")\n[docs] def wait_for_processing(self, pdf_id: str) -> None:\n url = self.url + \"/\" + pdf_id\n for _ in range(0, self.max_wait_time_seconds, 5):\n response = requests.get(url, headers=self.headers)\n response_data = response.json()\n status = response_data.get(\"status\", None)\n if status == \"completed\":\n return\n elif status == \"error\":\n raise ValueError(\"Unable to retrieve PDF from Mathpix\")\n else:\n print(f\"Status: {status}, waiting for processing to complete\")\n 
time.sleep(5)\n raise TimeoutError\n[docs] def get_processed_pdf(self, pdf_id: str) -> str:\n self.wait_for_processing(pdf_id)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "1de94920bfbf-7", "text": "self.wait_for_processing(pdf_id)\n url = f\"{self.url}/{pdf_id}.{self.processed_file_format}\"\n response = requests.get(url, headers=self.headers)\n return response.content.decode(\"utf-8\")\n[docs] def clean_pdf(self, contents: str) -> str:\n contents = \"\\n\".join(\n [line for line in contents.split(\"\\n\") if not line.startswith(\"![]\")]\n )\n # replace \\section{Title} with # Title\n contents = contents.replace(\"\\\\section{\", \"# \").replace(\"}\", \"\")\n # replace the \"\\\" slash that Mathpix adds to escape $, %, (, etc.\n contents = (\n contents.replace(r\"\\$\", \"$\")\n .replace(r\"\\%\", \"%\")\n .replace(r\"\\(\", \"(\")\n .replace(r\"\\)\", \")\")\n )\n return contents\n[docs] def load(self) -> List[Document]:\n pdf_id = self.send_pdf()\n contents = self.get_processed_pdf(pdf_id)\n if self.should_clean_pdf:\n contents = self.clean_pdf(contents)\n metadata = {\"source\": self.source, \"file_path\": self.source}\n return [Document(page_content=contents, metadata=metadata)]\n[docs]class PDFPlumberLoader(BasePDFLoader):\n \"\"\"Loader that uses pdfplumber to load PDF files.\"\"\"\n def __init__(\n self, file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None\n ) -> None:\n \"\"\"Initialize with file path.\"\"\"\n try:\n import pdfplumber # noqa:F401\n except ImportError:\n raise ImportError(\n \"pdfplumber package not found, please install it with \"\n \"`pip install pdfplumber`\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "1de94920bfbf-8", "text": "\"`pip install pdfplumber`\"\n )\n super().__init__(file_path)\n self.text_kwargs = text_kwargs or {}\n[docs] def load(self) -> List[Document]:\n \"\"\"Load 
file.\"\"\"\n parser = PDFPlumberParser(text_kwargs=self.text_kwargs)\n blob = Blob.from_path(self.file_path)\n return parser.parse(blob)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html"} +{"id": "f22a0d7f4b22-0", "text": "Source code for langchain.document_loaders.epub\n\"\"\"Loader that loads EPub files.\"\"\"\nfrom typing import List\nfrom langchain.document_loaders.unstructured import (\n UnstructuredFileLoader,\n satisfies_min_unstructured_version,\n)\n[docs]class UnstructuredEPubLoader(UnstructuredFileLoader):\n \"\"\"Loader that uses unstructured to load epub files.\"\"\"\n def _get_elements(self) -> List:\n min_unstructured_version = \"0.5.4\"\n if not satisfies_min_unstructured_version(min_unstructured_version):\n raise ValueError(\n \"Partitioning epub files is only supported in \"\n f\"unstructured>={min_unstructured_version}.\"\n )\n from unstructured.partition.epub import partition_epub\n return partition_epub(filename=self.file_path, **self.unstructured_kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/epub.html"} +{"id": "92ecc64fdba1-0", "text": "Source code for langchain.document_loaders.mastodon\n\"\"\"Mastodon document loader.\"\"\"\nfrom __future__ import annotations\nimport os\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nif TYPE_CHECKING:\n import mastodon\ndef _dependable_mastodon_import() -> mastodon:\n try:\n import mastodon\n except ImportError:\n raise ValueError(\n \"Mastodon.py package not found, \"\n \"please install it with `pip install Mastodon.py`\"\n )\n return mastodon\n[docs]class MastodonTootsLoader(BaseLoader):\n \"\"\"Mastodon toots loader.\"\"\"\n def __init__(\n self,\n mastodon_accounts: Sequence[str],\n number_toots: Optional[int] = 100,\n exclude_replies: bool = False,\n 
access_token: Optional[str] = None,\n api_base_url: str = \"https://mastodon.social\",\n ):\n \"\"\"Instantiate Mastodon toots loader.\n Args:\n mastodon_accounts: The list of Mastodon accounts to query.\n number_toots: How many toots to pull for each account.\n exclude_replies: Whether to exclude reply toots from the load.\n access_token: An access token if toots are loaded as a Mastodon app. Can\n also be specified via the environment variables \"MASTODON_ACCESS_TOKEN\".\n api_base_url: A Mastodon API base URL to talk to, if not using the default.\n \"\"\"\n mastodon = _dependable_mastodon_import()\n access_token = access_token or os.environ.get(\"MASTODON_ACCESS_TOKEN\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mastodon.html"} +{"id": "92ecc64fdba1-1", "text": "access_token = access_token or os.environ.get(\"MASTODON_ACCESS_TOKEN\")\n self.api = mastodon.Mastodon(\n access_token=access_token, api_base_url=api_base_url\n )\n self.mastodon_accounts = mastodon_accounts\n self.number_toots = number_toots\n self.exclude_replies = exclude_replies\n[docs] def load(self) -> List[Document]:\n \"\"\"Load toots into documents.\"\"\"\n results: List[Document] = []\n for account in self.mastodon_accounts:\n user = self.api.account_lookup(account)\n toots = self.api.account_statuses(\n user.id,\n only_media=False,\n pinned=False,\n exclude_replies=self.exclude_replies,\n exclude_reblogs=True,\n limit=self.number_toots,\n )\n docs = self._format_toots(toots, user)\n results.extend(docs)\n return results\n def _format_toots(\n self, toots: List[Dict[str, Any]], user_info: dict\n ) -> Iterable[Document]:\n \"\"\"Format toots into documents.\n Adding user info, and selected toot fields into the metadata.\n \"\"\"\n for toot in toots:\n metadata = {\n \"created_at\": toot[\"created_at\"],\n \"user_info\": user_info,\n \"is_reply\": toot[\"in_reply_to_id\"] is not None,\n }\n yield Document(\n page_content=toot[\"content\"],\n 
metadata=metadata,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/mastodon.html"} +{"id": "81c44ed2e5b0-0", "text": "Source code for langchain.document_loaders.college_confidential\n\"\"\"Loader that loads College Confidential.\"\"\"\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.web_base import WebBaseLoader\n[docs]class CollegeConfidentialLoader(WebBaseLoader):\n \"\"\"Loader that loads College Confidential webpages.\"\"\"\n[docs] def load(self) -> List[Document]:\n \"\"\"Load webpage.\"\"\"\n soup = self.scrape()\n text = soup.select_one(\"main[class='skin-handler']\").text\n metadata = {\"source\": self.web_path}\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/college_confidential.html"} +{"id": "28db0ec19fed-0", "text": "Source code for langchain.document_loaders.whatsapp_chat\nimport re\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef concatenate_rows(date: str, sender: str, text: str) -> str:\n \"\"\"Combine message information in a readable format ready to be used.\"\"\"\n return f\"{sender} on {date}: {text}\\n\\n\"\n[docs]class WhatsAppChatLoader(BaseLoader):\n \"\"\"Loader that loads WhatsApp messages text file.\"\"\"\n def __init__(self, path: str):\n \"\"\"Initialize with path.\"\"\"\n self.file_path = path\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n p = Path(self.file_path)\n text_content = \"\"\n with open(p, encoding=\"utf8\") as f:\n lines = f.readlines()\n message_line_regex = r\"\"\"\n \\[?\n (\n \\d{1,4}\n [\\/.]\n \\d{1,2}\n [\\/.]\n \\d{1,4}\n ,\\s\n \\d{1,2}\n :\\d{2}\n (?:\n :\\d{2}\n )?\n (?:[\\s_](?:AM|PM))?\n )\n \\]?\n [\\s-]*\n ([~\\w\\s]+)\n [:]+\n \\s\n (.+)\n \"\"\"\n for line in 
lines:\n result = re.match(\n message_line_regex, line.strip(), flags=re.VERBOSE | re.IGNORECASE\n )\n if result:\n date, sender, text = result.groups()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/whatsapp_chat.html"} +{"id": "28db0ec19fed-1", "text": ")\n if result:\n date, sender, text = result.groups()\n text_content += concatenate_rows(date, sender, text)\n metadata = {\"source\": str(p)}\n return [Document(page_content=text_content, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/whatsapp_chat.html"} +{"id": "19c5666fb2f6-0", "text": "Source code for langchain.document_loaders.spreedly\n\"\"\"Loader that fetches data from Spreedly API.\"\"\"\nimport json\nimport urllib.request\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nfrom langchain.utils import stringify_dict\nSPREEDLY_ENDPOINTS = {\n \"gateways_options\": \"https://core.spreedly.com/v1/gateways_options.json\",\n \"gateways\": \"https://core.spreedly.com/v1/gateways.json\",\n \"receivers_options\": \"https://core.spreedly.com/v1/receivers_options.json\",\n \"receivers\": \"https://core.spreedly.com/v1/receivers.json\",\n \"payment_methods\": \"https://core.spreedly.com/v1/payment_methods.json\",\n \"certificates\": \"https://core.spreedly.com/v1/certificates.json\",\n \"transactions\": \"https://core.spreedly.com/v1/transactions.json\",\n \"environments\": \"https://core.spreedly.com/v1/environments.json\",\n}\n[docs]class SpreedlyLoader(BaseLoader):\n \"\"\"Loader that fetches data from Spreedly API.\"\"\"\n def __init__(self, access_token: str, resource: str) -> None:\n self.access_token = access_token\n self.resource = resource\n self.headers = {\n \"Authorization\": f\"Bearer {self.access_token}\",\n \"Accept\": \"application/json\",\n }\n def _make_request(self, url: str) -> List[Document]:\n request = 
urllib.request.Request(url, headers=self.headers)\n with urllib.request.urlopen(request) as response:\n json_data = json.loads(response.read().decode())\n text = stringify_dict(json_data)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/spreedly.html"} +{"id": "19c5666fb2f6-1", "text": "text = stringify_dict(json_data)\n metadata = {\"source\": url}\n return [Document(page_content=text, metadata=metadata)]\n def _get_resource(self) -> List[Document]:\n endpoint = SPREEDLY_ENDPOINTS.get(self.resource)\n if endpoint is None:\n return []\n return self._make_request(endpoint)\n[docs] def load(self) -> List[Document]:\n return self._get_resource()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/spreedly.html"} +{"id": "36f947ba3d39-0", "text": "Source code for langchain.document_loaders.youtube\n\"\"\"Loader that loads YouTube transcript.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom urllib.parse import parse_qs, urlparse\nfrom pydantic import root_validator\nfrom pydantic.dataclasses import dataclass\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\nSCOPES = [\"https://www.googleapis.com/auth/youtube.readonly\"]\n[docs]@dataclass\nclass GoogleApiClient:\n \"\"\"A Generic Google Api Client.\n To use, you should have the ``google_auth_oauthlib,youtube_transcript_api,google``\n python package installed.\n As the google api expects credentials you need to set up a google account and\n register your Service. \"https://developers.google.com/docs/api/quickstart/python\"\n Example:\n .. 
code-block:: python\n from langchain.document_loaders import GoogleApiClient\n google_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n )\n \"\"\"\n credentials_path: Path = Path.home() / \".credentials\" / \"credentials.json\"\n service_account_path: Path = Path.home() / \".credentials\" / \"credentials.json\"\n token_path: Path = Path.home() / \".credentials\" / \"token.json\"\n def __post_init__(self) -> None:\n self.creds = self._load_credentials()\n[docs] @root_validator\n def validate_channel_or_videoIds_is_set(\n cls, values: Dict[str, Any]\n ) -> Dict[str, Any]:\n \"\"\"Validate that either folder_id or document_ids is set, but not both.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} +{"id": "36f947ba3d39-1", "text": "\"\"\"Validate that either folder_id or document_ids is set, but not both.\"\"\"\n if not values.get(\"credentials_path\") and not values.get(\n \"service_account_path\"\n ):\n raise ValueError(\"Must specify either channel_name or video_ids\")\n return values\n def _load_credentials(self) -> Any:\n \"\"\"Load credentials.\"\"\"\n # Adapted from https://developers.google.com/drive/api/v3/quickstart/python\n try:\n from google.auth.transport.requests import Request\n from google.oauth2 import service_account\n from google.oauth2.credentials import Credentials\n from google_auth_oauthlib.flow import InstalledAppFlow\n from youtube_transcript_api import YouTubeTranscriptApi # noqa: F401\n except ImportError:\n raise ImportError(\n \"You must run\"\n \"`pip install --upgrade \"\n \"google-api-python-client google-auth-httplib2 \"\n \"google-auth-oauthlib \"\n \"youtube-transcript-api` \"\n \"to use the Google Drive loader\"\n )\n creds = None\n if self.service_account_path.exists():\n return service_account.Credentials.from_service_account_file(\n str(self.service_account_path)\n )\n if self.token_path.exists():\n creds = 
Credentials.from_authorized_user_file(str(self.token_path), SCOPES)\n if not creds or not creds.valid:\n if creds and creds.expired and creds.refresh_token:\n creds.refresh(Request())\n else:\n flow = InstalledAppFlow.from_client_secrets_file(\n str(self.credentials_path), SCOPES\n )\n creds = flow.run_local_server(port=0)\n with open(self.token_path, \"w\") as token:\n token.write(creds.to_json())\n return creds", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} +{"id": "36f947ba3d39-2", "text": "token.write(creds.to_json())\n return creds\nALLOWED_SCHEMAS = {\"http\", \"https\"}\nALLOWED_NETLOCK = {\n \"youtu.be\",\n \"m.youtube.com\",\n \"youtube.com\",\n \"www.youtube.com\",\n \"www.youtube-nocookie.com\",\n \"vid.plus\",\n}\ndef _parse_video_id(url: str) -> Optional[str]:\n \"\"\"Parse a youtube url and return the video id if valid, otherwise None.\"\"\"\n parsed_url = urlparse(url)\n if parsed_url.scheme not in ALLOWED_SCHEMAS:\n return None\n if parsed_url.netloc not in ALLOWED_NETLOCK:\n return None\n path = parsed_url.path\n if path.endswith(\"/watch\"):\n query = parsed_url.query\n parsed_query = parse_qs(query)\n if \"v\" in parsed_query:\n ids = parsed_query[\"v\"]\n video_id = ids if isinstance(ids, str) else ids[0]\n else:\n return None\n else:\n path = parsed_url.path.lstrip(\"/\")\n video_id = path.split(\"/\")[-1]\n if len(video_id) != 11: # Video IDs are 11 characters long\n return None\n return video_id\n[docs]class YoutubeLoader(BaseLoader):\n \"\"\"Loader that loads Youtube transcripts.\"\"\"\n def __init__(\n self,\n video_id: str,\n add_video_info: bool = False,\n language: Union[str, Sequence[str]] = \"en\",\n translation: str = \"en\",\n continue_on_failure: bool = False,\n ):\n \"\"\"Initialize with YouTube video ID.\"\"\"\n self.video_id = video_id\n self.add_video_info = add_video_info\n self.language = language", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} +{"id": "36f947ba3d39-3", "text": "self.add_video_info = add_video_info\n self.language = language\n if isinstance(language, str):\n self.language = [language]\n else:\n self.language = language\n self.translation = translation\n self.continue_on_failure = continue_on_failure\n[docs] @staticmethod\n def extract_video_id(youtube_url: str) -> str:\n \"\"\"Extract video id from common YT urls.\"\"\"\n video_id = _parse_video_id(youtube_url)\n if not video_id:\n raise ValueError(\n f\"Could not determine the video ID for the URL {youtube_url}\"\n )\n return video_id\n[docs] @classmethod\n def from_youtube_url(cls, youtube_url: str, **kwargs: Any) -> YoutubeLoader:\n \"\"\"Given youtube URL, load video.\"\"\"\n video_id = cls.extract_video_id(youtube_url)\n return cls(video_id, **kwargs)\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n try:\n from youtube_transcript_api import (\n NoTranscriptFound,\n TranscriptsDisabled,\n YouTubeTranscriptApi,\n )\n except ImportError:\n raise ImportError(\n \"Could not import youtube_transcript_api python package. 
\"\n \"Please install it with `pip install youtube-transcript-api`.\"\n )\n metadata = {\"source\": self.video_id}\n if self.add_video_info:\n # Get more video meta info\n # Such as title, description, thumbnail url, publish_date\n video_info = self._get_video_info()\n metadata.update(video_info)\n try:\n transcript_list = YouTubeTranscriptApi.list_transcripts(self.video_id)\n except TranscriptsDisabled:\n return []\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} +{"id": "36f947ba3d39-4", "text": "except TranscriptsDisabled:\n return []\n try:\n transcript = transcript_list.find_transcript(self.language)\n except NoTranscriptFound:\n en_transcript = transcript_list.find_transcript([\"en\"])\n transcript = en_transcript.translate(self.translation)\n transcript_pieces = transcript.fetch()\n transcript = \" \".join([t[\"text\"].strip(\" \") for t in transcript_pieces])\n return [Document(page_content=transcript, metadata=metadata)]\n def _get_video_info(self) -> dict:\n \"\"\"Get important video information.\n Components are:\n - title\n - description\n - thumbnail url,\n - publish_date\n - channel_author\n - and more.\n \"\"\"\n try:\n from pytube import YouTube\n except ImportError:\n raise ImportError(\n \"Could not import pytube python package. 
\"\n \"Please install it with `pip install pytube`.\"\n )\n yt = YouTube(f\"https://www.youtube.com/watch?v={self.video_id}\")\n video_info = {\n \"title\": yt.title or \"Unknown\",\n \"description\": yt.description or \"Unknown\",\n \"view_count\": yt.views or 0,\n \"thumbnail_url\": yt.thumbnail_url or \"Unknown\",\n \"publish_date\": yt.publish_date.strftime(\"%Y-%m-%d %H:%M:%S\")\n if yt.publish_date\n else \"Unknown\",\n \"length\": yt.length or 0,\n \"author\": yt.author or \"Unknown\",\n }\n return video_info\n[docs]@dataclass\nclass GoogleApiYoutubeLoader(BaseLoader):\n \"\"\"Loader that loads all Videos from a Channel\n To use, you should have the ``googleapiclient,youtube_transcript_api``", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} +{"id": "36f947ba3d39-5", "text": "To use, you should have the ``googleapiclient,youtube_transcript_api``\n python package installed.\n As the service needs a google_api_client, you first have to initialize\n the GoogleApiClient.\n Additionally you have to either provide a channel name or a list of videoids\n \"https://developers.google.com/docs/api/quickstart/python\"\n Example:\n .. 
code-block:: python\n from langchain.document_loaders import GoogleApiClient\n from langchain.document_loaders import GoogleApiYoutubeLoader\n google_api_client = GoogleApiClient(\n service_account_path=Path(\"path_to_your_sec_file.json\")\n )\n loader = GoogleApiYoutubeLoader(\n google_api_client=google_api_client,\n channel_name = \"CodeAesthetic\"\n )\n loader.load()\n \"\"\"\n google_api_client: GoogleApiClient\n channel_name: Optional[str] = None\n video_ids: Optional[List[str]] = None\n add_video_info: bool = True\n captions_language: str = \"en\"\n continue_on_failure: bool = False\n def __post_init__(self) -> None:\n self.youtube_client = self._build_youtube_client(self.google_api_client.creds)\n def _build_youtube_client(self, creds: Any) -> Any:\n try:\n from googleapiclient.discovery import build\n from youtube_transcript_api import YouTubeTranscriptApi # noqa: F401\n except ImportError:\n raise ImportError(\n \"You must run\"\n \"`pip install --upgrade \"\n \"google-api-python-client google-auth-httplib2 \"\n \"google-auth-oauthlib \"\n \"youtube-transcript-api` \"\n \"to use the Google Drive loader\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"}
transcript_list.find_transcript([self.captions_language])\n except NoTranscriptFound:\n for available_transcript in transcript_list:\n transcript = available_transcript.translate(self.captions_language)\n continue\n transcript_pieces = transcript.fetch()\n return \" \".join([t[\"text\"].strip(\" \") for t in transcript_pieces])\n def _get_document_for_video_id(self, video_id: str, **kwargs: Any) -> Document:\n captions = self._get_transcripe_for_video_id(video_id)\n video_response = (\n self.youtube_client.videos()\n .list(\n part=\"id,snippet\",\n id=video_id,\n )\n .execute()\n )\n return Document(\n page_content=captions,\n metadata=video_response.get(\"items\")[0],\n )\n def _get_channel_id(self, channel_name: str) -> str:\n request = self.youtube_client.search().list(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} +{"id": "36f947ba3d39-7", "text": "request = self.youtube_client.search().list(\n part=\"id\",\n q=channel_name,\n type=\"channel\",\n maxResults=1, # we only need one result since channel names are unique\n )\n response = request.execute()\n channel_id = response[\"items\"][0][\"id\"][\"channelId\"]\n return channel_id\n def _get_document_for_channel(self, channel: str, **kwargs: Any) -> List[Document]:\n try:\n from youtube_transcript_api import (\n NoTranscriptFound,\n TranscriptsDisabled,\n )\n except ImportError:\n raise ImportError(\n \"You must run\"\n \"`pip install --upgrade \"\n \"youtube-transcript-api` \"\n \"to use the youtube loader\"\n )\n channel_id = self._get_channel_id(channel)\n request = self.youtube_client.search().list(\n part=\"id,snippet\",\n channelId=channel_id,\n maxResults=50, # adjust this value to retrieve more or fewer videos\n )\n video_ids = []\n while request is not None:\n response = request.execute()\n # Add each video ID to the list\n for item in response[\"items\"]:\n if not item[\"id\"].get(\"videoId\"):\n continue\n meta_data = {\"videoId\": 
item[\"id\"][\"videoId\"]}\n if self.add_video_info:\n item[\"snippet\"].pop(\"thumbnails\")\n meta_data.update(item[\"snippet\"])\n try:\n page_content = self._get_transcripe_for_video_id(\n item[\"id\"][\"videoId\"]\n )\n video_ids.append(\n Document(\n page_content=page_content,\n metadata=meta_data,\n )\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} +{"id": "36f947ba3d39-8", "text": "metadata=meta_data,\n )\n )\n except (TranscriptsDisabled, NoTranscriptFound) as e:\n if self.continue_on_failure:\n logger.error(\n \"Error fetching transcript \"\n + f\" {item['id']['videoId']}, exception: {e}\"\n )\n else:\n raise e\n pass\n request = self.youtube_client.search().list_next(request, response)\n return video_ids\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n document_list = []\n if self.channel_name:\n document_list.extend(self._get_document_for_channel(self.channel_name))\n elif self.video_ids:\n document_list.extend(\n [\n self._get_document_for_video_id(video_id)\n for video_id in self.video_ids\n ]\n )\n else:\n raise ValueError(\"Must specify either channel_name or video_ids\")\n return document_list", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/youtube.html"} +{"id": "891c40061533-0", "text": "Source code for langchain.document_loaders.hugging_face_dataset\n\"\"\"Loader that loads HuggingFace datasets.\"\"\"\nfrom typing import Iterator, List, Mapping, Optional, Sequence, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\n[docs]class HuggingFaceDatasetLoader(BaseLoader):\n \"\"\"Loading logic for loading documents from the Hugging Face Hub.\"\"\"\n def __init__(\n self,\n path: str,\n page_content_column: str = \"text\",\n name: Optional[str] = None,\n data_dir: Optional[str] = None,\n data_files: Optional[\n Union[str, Sequence[str], Mapping[str, Union[str, 
Sequence[str]]]]\n ] = None,\n cache_dir: Optional[str] = None,\n keep_in_memory: Optional[bool] = None,\n save_infos: bool = False,\n use_auth_token: Optional[Union[bool, str]] = None,\n num_proc: Optional[int] = None,\n ):\n \"\"\"Initialize the HuggingFaceDatasetLoader.\n Args:\n path: Path or name of the dataset.\n page_content_column: Page content column name.\n name: Name of the dataset configuration.\n data_dir: Data directory of the dataset configuration.\n data_files: Path(s) to source data file(s).\n cache_dir: Directory to read/write data.\n keep_in_memory: Whether to copy the dataset in-memory.\n save_infos: Save the dataset information (checksums/size/splits/...).\n use_auth_token: Bearer token for remote files on the Datasets Hub.\n num_proc: Number of processes.\n \"\"\"\n self.path = path\n self.page_content_column = page_content_column\n self.name = name", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/hugging_face_dataset.html"} +{"id": "891c40061533-1", "text": "self.page_content_column = page_content_column\n self.name = name\n self.data_dir = data_dir\n self.data_files = data_files\n self.cache_dir = cache_dir\n self.keep_in_memory = keep_in_memory\n self.save_infos = save_infos\n self.use_auth_token = use_auth_token\n self.num_proc = num_proc\n[docs] def lazy_load(\n self,\n ) -> Iterator[Document]:\n \"\"\"Load documents lazily.\"\"\"\n try:\n from datasets import load_dataset\n except ImportError:\n raise ImportError(\n \"Could not import datasets python package. 
\"\n \"Please install it with `pip install datasets`.\"\n )\n dataset = load_dataset(\n path=self.path,\n name=self.name,\n data_dir=self.data_dir,\n data_files=self.data_files,\n cache_dir=self.cache_dir,\n keep_in_memory=self.keep_in_memory,\n save_infos=self.save_infos,\n use_auth_token=self.use_auth_token,\n num_proc=self.num_proc,\n )\n yield from (\n Document(\n page_content=row.pop(self.page_content_column),\n metadata=row,\n )\n for key in dataset.keys()\n for row in dataset[key]\n )\n[docs] def load(self) -> List[Document]:\n \"\"\"Load documents.\"\"\"\n return list(self.lazy_load())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/hugging_face_dataset.html"} +{"id": "91d21b22bcfb-0", "text": "Source code for langchain.document_loaders.chatgpt\n\"\"\"Load conversations from ChatGPT data export\"\"\"\nimport datetime\nimport json\nfrom typing import List\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\ndef concatenate_rows(message: dict, title: str) -> str:\n \"\"\"\n Combine message information in a readable format ready to be used.\n Args:\n message: Message to be concatenated\n title: Title of the conversation\n Returns:\n Concatenated message\n \"\"\"\n if not message:\n return \"\"\n sender = message[\"author\"][\"role\"] if message[\"author\"] else \"unknown\"\n text = message[\"content\"][\"parts\"][0]\n date = datetime.datetime.fromtimestamp(message[\"create_time\"]).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n return f\"{title} - {sender} on {date}: {text}\\n\\n\"\n[docs]class ChatGPTLoader(BaseLoader):\n \"\"\"Loader that loads conversations from exported ChatGPT data.\"\"\"\n def __init__(self, log_file: str, num_logs: int = -1):\n self.log_file = log_file\n self.num_logs = num_logs\n[docs] def load(self) -> List[Document]:\n with open(self.log_file, encoding=\"utf8\") as f:\n data = json.load(f)\n if self.num_logs > 0:\n data = data[: self.num_logs]\n 
documents = []\n for d in data:\n title = d[\"title\"]\n messages = d[\"mapping\"]\n text = \"\".join(\n [\n concatenate_rows(messages[key][\"message\"], title)\n for idx, key in enumerate(messages)\n if not (\n idx == 0", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/chatgpt.html"} +{"id": "91d21b22bcfb-1", "text": "if not (\n idx == 0\n and messages[key][\"message\"][\"author\"][\"role\"] == \"system\"\n )\n ]\n )\n metadata = {\"source\": str(self.log_file)}\n documents.append(Document(page_content=text, metadata=metadata))\n return documents", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/chatgpt.html"} +{"id": "8ddb0e8d3225-0", "text": "Source code for langchain.document_loaders.html_bs\n\"\"\"Loader that uses bs4 to load HTML files, enriching metadata with page title.\"\"\"\nimport logging\nfrom typing import Dict, List, Union\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders.base import BaseLoader\nlogger = logging.getLogger(__name__)\n[docs]class BSHTMLLoader(BaseLoader):\n \"\"\"Loader that uses beautiful soup to parse HTML files.\"\"\"\n def __init__(\n self,\n file_path: str,\n open_encoding: Union[str, None] = None,\n bs_kwargs: Union[dict, None] = None,\n get_text_separator: str = \"\",\n ) -> None:\n \"\"\"Initialise with path, and optionally, file encoding to use, and any kwargs\n to pass to the BeautifulSoup object.\"\"\"\n try:\n import bs4 # noqa:F401\n except ImportError:\n raise ValueError(\n \"beautifulsoup4 package not found, please install it with \"\n \"`pip install beautifulsoup4`\"\n )\n self.file_path = file_path\n self.open_encoding = open_encoding\n if bs_kwargs is None:\n bs_kwargs = {\"features\": \"lxml\"}\n self.bs_kwargs = bs_kwargs\n self.get_text_separator = get_text_separator\n[docs] def load(self) -> List[Document]:\n \"\"\"Load HTML document into document objects.\"\"\"\n from bs4 import BeautifulSoup\n with 
open(self.file_path, \"r\", encoding=self.open_encoding) as f:\n soup = BeautifulSoup(f, **self.bs_kwargs)\n text = soup.get_text(self.get_text_separator)\n if soup.title:\n title = str(soup.title.string)\n else:\n title = \"\"\n metadata: Dict[str, Union[str, None]] = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/html_bs.html"} +{"id": "8ddb0e8d3225-1", "text": "title = \"\"\n metadata: Dict[str, Union[str, None]] = {\n \"source\": self.file_path,\n \"title\": title,\n }\n return [Document(page_content=text, metadata=metadata)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/html_bs.html"} +{"id": "ee50ed2b6b8a-0", "text": "Source code for langchain.document_loaders.blob_loaders.file_system\n\"\"\"Use to load blobs from the local file system.\"\"\"\nfrom pathlib import Path\nfrom typing import Callable, Iterable, Iterator, Optional, Sequence, TypeVar, Union\nfrom langchain.document_loaders.blob_loaders.schema import Blob, BlobLoader\nT = TypeVar(\"T\")\ndef _make_iterator(\n length_func: Callable[[], int], show_progress: bool = False\n) -> Callable[[Iterable[T]], Iterator[T]]:\n \"\"\"Create a function that optionally wraps an iterable in tqdm.\"\"\"\n if show_progress:\n try:\n from tqdm.auto import tqdm\n except ImportError:\n raise ImportError(\n \"You must install tqdm to use show_progress=True.\"\n \"You can install tqdm with `pip install tqdm`.\"\n )\n # Make sure to provide `total` here so that tqdm can show\n # a progress bar that takes into account the total number of files.\n def _with_tqdm(iterable: Iterable[T]) -> Iterator[T]:\n \"\"\"Wrap an iterable in a tqdm progress bar.\"\"\"\n return tqdm(iterable, total=length_func())\n iterator = _with_tqdm\n else:\n iterator = iter # type: ignore\n return iterator\n# PUBLIC API\n[docs]class FileSystemBlobLoader(BlobLoader):\n \"\"\"Blob loader for the local file system.\n Example:\n .. 
code-block:: python\n from langchain.document_loaders.blob_loaders import FileSystemBlobLoader\n loader = FileSystemBlobLoader(\"/path/to/directory\")\n for blob in loader.yield_blobs():\n print(blob)\n \"\"\"\n def __init__(\n self,\n path: Union[str, Path],\n *,\n glob: str = \"**/[!.]*\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/file_system.html"} +{"id": "ee50ed2b6b8a-1", "text": "*,\n glob: str = \"**/[!.]*\",\n suffixes: Optional[Sequence[str]] = None,\n show_progress: bool = False,\n ) -> None:\n \"\"\"Initialize with path to directory and how to glob over it.\n Args:\n path: Path to directory to load from\n glob: Glob pattern relative to the specified path\n by default set to pick up all non-hidden files\n suffixes: Provide to keep only files with these suffixes\n Useful when wanting to keep files with different suffixes\n Suffixes must include the dot, e.g. \".txt\"\n show_progress: If true, will show a progress bar as the files are loaded.\n This forces an iteration through all matching files\n to count them prior to loading them.\n Examples:\n .. 
code-block:: python\n # Recursively load all text files in a directory.\n loader = FileSystemBlobLoader(\"/path/to/directory\", glob=\"**/*.txt\")\n # Recursively load all non-hidden files in a directory.\n loader = FileSystemBlobLoader(\"/path/to/directory\", glob=\"**/[!.]*\")\n # Load all files in a directory without recursion.\n loader = FileSystemBlobLoader(\"/path/to/directory\", glob=\"*\")\n \"\"\"\n if isinstance(path, Path):\n _path = path\n elif isinstance(path, str):\n _path = Path(path)\n else:\n raise TypeError(f\"Expected str or Path, got {type(path)}\")\n self.path = _path\n self.glob = glob\n self.suffixes = set(suffixes or [])\n self.show_progress = show_progress\n[docs] def yield_blobs(\n self,\n ) -> Iterable[Blob]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/file_system.html"} +{"id": "ee50ed2b6b8a-2", "text": "self,\n ) -> Iterable[Blob]:\n \"\"\"Yield blobs that match the requested pattern.\"\"\"\n iterator = _make_iterator(\n length_func=self.count_matching_files, show_progress=self.show_progress\n )\n for path in iterator(self._yield_paths()):\n yield Blob.from_path(path)\n def _yield_paths(self) -> Iterable[Path]:\n \"\"\"Yield paths that match the requested pattern.\"\"\"\n paths = self.path.glob(self.glob)\n for path in paths:\n if path.is_file():\n if self.suffixes and path.suffix not in self.suffixes:\n continue\n yield path\n[docs] def count_matching_files(self) -> int:\n \"\"\"Count files that match the pattern without loading them.\"\"\"\n # Carry out a full iteration to count the files without\n # materializing anything expensive in memory.\n num = 0\n for _ in self._yield_paths():\n num += 1\n return num", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/file_system.html"} +{"id": "a79319187544-0", "text": "Source code for langchain.document_loaders.blob_loaders.youtube_audio\nfrom typing import Iterable, List\nfrom 
langchain.document_loaders.blob_loaders import FileSystemBlobLoader\nfrom langchain.document_loaders.blob_loaders.schema import Blob, BlobLoader\n[docs]class YoutubeAudioLoader(BlobLoader):\n \"\"\"Load YouTube urls as audio file(s).\"\"\"\n def __init__(self, urls: List[str], save_dir: str):\n if not isinstance(urls, list):\n raise TypeError(\"urls must be a list\")\n self.urls = urls\n self.save_dir = save_dir\n[docs] def yield_blobs(self) -> Iterable[Blob]:\n \"\"\"Yield audio blobs for each url.\"\"\"\n try:\n import yt_dlp\n except ImportError:\n raise ValueError(\n \"yt_dlp package not found, please install it with \"\n \"`pip install yt_dlp`\"\n )\n # Use yt_dlp to download audio given a YouTube url\n ydl_opts = {\n \"format\": \"m4a/bestaudio/best\",\n \"noplaylist\": True,\n \"outtmpl\": self.save_dir + \"/%(title)s.%(ext)s\",\n \"postprocessors\": [\n {\n \"key\": \"FFmpegExtractAudio\",\n \"preferredcodec\": \"m4a\",\n }\n ],\n }\n for url in self.urls:\n # Download file\n with yt_dlp.YoutubeDL(ydl_opts) as ydl:\n ydl.download(url)\n # Yield the written blobs\n loader = FileSystemBlobLoader(self.save_dir, glob=\"*.m4a\")\n for blob in loader.yield_blobs():\n yield blob", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/youtube_audio.html"} +{"id": "630e86db8368-0", "text": "Source code for langchain.document_loaders.blob_loaders.schema\n\"\"\"Schema for Blobs and Blob Loaders.\nThe goal is to facilitate decoupling of content loading from content parsing code.\nIn addition, content loading code should provide a lazy loading interface by default.\n\"\"\"\nfrom __future__ import annotations\nimport contextlib\nimport mimetypes\nfrom abc import ABC, abstractmethod\nfrom io import BufferedReader, BytesIO\nfrom pathlib import PurePath\nfrom typing import Any, Generator, Iterable, Mapping, Optional, Union\nfrom pydantic import BaseModel, root_validator\nPathLike = Union[str, PurePath]\n[docs]class 
Blob(BaseModel):\n \"\"\"A blob is used to represent raw data by either reference or value.\n Provides an interface to materialize the blob in different representations, and\n help to decouple the development of data loaders from the downstream parsing of\n the raw data.\n Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob\n \"\"\"\n data: Union[bytes, str, None] # Raw data\n mimetype: Optional[str] = None # Not to be confused with a file extension\n encoding: str = \"utf-8\" # Use utf-8 as default encoding, if decoding to string\n # Location where the original content was found\n # Represent location on the local file system\n # Useful for situations where downstream code assumes it must work with file paths\n # rather than in-memory content.\n path: Optional[PathLike] = None\n class Config:\n arbitrary_types_allowed = True\n frozen = True\n @property\n def source(self) -> Optional[str]:\n \"\"\"The source location of the blob as string if known otherwise none.\"\"\"\n return str(self.path) if self.path else None\n @root_validator(pre=True)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/schema.html"} +{"id": "630e86db8368-1", "text": "return str(self.path) if self.path else None\n @root_validator(pre=True)\n def check_blob_is_valid(cls, values: Mapping[str, Any]) -> Mapping[str, Any]:\n \"\"\"Verify that either data or path is provided.\"\"\"\n if \"data\" not in values and \"path\" not in values:\n raise ValueError(\"Either data or path must be provided\")\n return values\n[docs] def as_string(self) -> str:\n \"\"\"Read data as a string.\"\"\"\n if self.data is None and self.path:\n with open(str(self.path), \"r\", encoding=self.encoding) as f:\n return f.read()\n elif isinstance(self.data, bytes):\n return self.data.decode(self.encoding)\n elif isinstance(self.data, str):\n return self.data\n else:\n raise ValueError(f\"Unable to get string for blob {self}\")\n[docs] def as_bytes(self) -> 
bytes:\n \"\"\"Read data as bytes.\"\"\"\n if isinstance(self.data, bytes):\n return self.data\n elif isinstance(self.data, str):\n return self.data.encode(self.encoding)\n elif self.data is None and self.path:\n with open(str(self.path), \"rb\") as f:\n return f.read()\n else:\n raise ValueError(f\"Unable to get bytes for blob {self}\")\n[docs] @contextlib.contextmanager\n def as_bytes_io(self) -> Generator[Union[BytesIO, BufferedReader], None, None]:\n \"\"\"Read data as a byte stream.\"\"\"\n if isinstance(self.data, bytes):\n yield BytesIO(self.data)\n elif self.data is None and self.path:\n with open(str(self.path), \"rb\") as f:\n yield f\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/schema.html"} +{"id": "630e86db8368-2", "text": "yield f\n else:\n raise NotImplementedError(f\"Unable to convert blob {self}\")\n[docs] @classmethod\n def from_path(\n cls,\n path: PathLike,\n *,\n encoding: str = \"utf-8\",\n mime_type: Optional[str] = None,\n guess_type: bool = True,\n ) -> Blob:\n \"\"\"Load the blob from a path like object.\n Args:\n path: path like object to file to be read\n encoding: Encoding to use if decoding the bytes into a string\n mime_type: if provided, will be set as the mime-type of the data\n guess_type: If True, the mimetype will be guessed from the file extension,\n if a mime-type was not provided\n Returns:\n Blob instance\n \"\"\"\n if mime_type is None and guess_type:\n _mimetype = mimetypes.guess_type(path)[0] if guess_type else None\n else:\n _mimetype = mime_type\n # We do not load the data immediately, instead we treat the blob as a\n # reference to the underlying data.\n return cls(data=None, mimetype=_mimetype, encoding=encoding, path=path)\n[docs] @classmethod\n def from_data(\n cls,\n data: Union[str, bytes],\n *,\n encoding: str = \"utf-8\",\n mime_type: Optional[str] = None,\n path: Optional[str] = None,\n ) -> Blob:\n \"\"\"Initialize the blob from in-memory 
data.\n Args:\n data: the in-memory data associated with the blob\n encoding: Encoding to use if decoding the bytes into a string\n mime_type: if provided, will be set as the mime-type of the data", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/schema.html"} +{"id": "630e86db8368-3", "text": "mime_type: if provided, will be set as the mime-type of the data\n path: if provided, will be set as the source from which the data came\n Returns:\n Blob instance\n \"\"\"\n return cls(data=data, mimetype=mime_type, encoding=encoding, path=path)\n def __repr__(self) -> str:\n \"\"\"Define the blob representation.\"\"\"\n str_repr = f\"Blob {id(self)}\"\n if self.source:\n str_repr += f\" {self.source}\"\n return str_repr\n[docs]class BlobLoader(ABC):\n \"\"\"Abstract interface for blob loaders implementation.\n Implementer should be able to load raw content from a storage system according\n to some criteria and return the raw content lazily as a stream of blobs.\n \"\"\"\n[docs] @abstractmethod\n def yield_blobs(\n self,\n ) -> Iterable[Blob]:\n \"\"\"A lazy loader for raw data represented by LangChain's Blob object.\n Returns:\n A generator over blobs\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/blob_loaders/schema.html"} +{"id": "403043167ebc-0", "text": "Source code for langchain.embeddings.bedrock\nimport json\nimport os\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\n[docs]class BedrockEmbeddings(BaseModel, Embeddings):\n \"\"\"Embeddings provider to invoke Bedrock embedding models.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the 
~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Bedrock service.\n Example:\n .. code-block:: python\n from langchain.embeddings import BedrockEmbeddings\n \n region_name = \"us-east-1\"\n credentials_profile_name = \"default\"\n model_id = \"amazon.titan-e1t-medium\"\n be = BedrockEmbeddings(\n credentials_profile_name=credentials_profile_name,\n region_name=region_name,\n model_id=model_id\n )\n \"\"\"\n client: Any #: :meta private:\n region_name: Optional[str] = None\n \"\"\"The aws region e.g., `us-west-2`. Falls back to AWS_DEFAULT_REGION env variable\n or region specified in ~/.aws/config in case it is not provided here.\n \"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"} +{"id": "403043167ebc-1", "text": "If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n model_id: str = \"amazon.titan-e1t-medium\"\n \"\"\"Id of the model to call, e.g., amazon.titan-e1t-medium, this is\n equivalent to the modelId property in the list-foundation-models api\"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials and python package exist in environment.\"\"\"\n if values[\"client\"] is not None:\n return values\n try:\n import boto3\n if 
values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(profile_name=values[\"credentials_profile_name\"])\n else:\n # use default credentials\n session = boto3.Session()\n client_params = {}\n if values[\"region_name\"]:\n client_params[\"region_name\"] = values[\"region_name\"]\n values[\"client\"] = session.client(\"bedrock\", **client_params)\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"} +{"id": "403043167ebc-2", "text": "\"profile name are valid.\"\n ) from e\n return values\n def _embedding_func(self, text: str) -> List[float]:\n \"\"\"Call out to Bedrock embedding endpoint.\"\"\"\n # replace newlines, which can negatively affect performance.\n text = text.replace(os.linesep, \" \")\n _model_kwargs = self.model_kwargs or {}\n input_body = {**_model_kwargs}\n input_body[\"inputText\"] = text\n body = json.dumps(input_body)\n content_type = \"application/json\"\n accepts = \"application/json\"\n embeddings = []\n try:\n response = self.client.invoke_model(\n body=body,\n modelId=self.model_id,\n accept=accepts,\n contentType=content_type,\n )\n response_body = json.loads(response.get(\"body\").read())\n embeddings = response_body.get(\"embedding\")\n except Exception as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n return embeddings\n[docs] def embed_documents(\n self, texts: List[str], chunk_size: int = 1\n ) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a Bedrock model.\n Args:\n texts: The list of texts to embed.\n chunk_size: Bedrock currently only allows single string\n inputs, 
so chunk size is always 1. This input is here\n only for compatibility with the embeddings interface.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n results = []\n for text in texts:\n response = self._embedding_func(text)\n results.append(response)\n return results\n[docs] def embed_query(self, text: str) -> List[float]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"} +{"id": "403043167ebc-3", "text": "[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a Bedrock model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n return self._embedding_func(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html"} +{"id": "42d5f293a7ae-0", "text": "Source code for langchain.embeddings.self_hosted\n\"\"\"Running custom embedding models on self-hosted remote hardware.\"\"\"\nfrom typing import Any, Callable, List\nfrom pydantic import Extra\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.llms import SelfHostedPipeline\ndef _embed_documents(pipeline: Any, *args: Any, **kwargs: Any) -> List[List[float]]:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a sentence_transformer model_id and\n returns a list of embeddings for each document in the batch.\n \"\"\"\n return pipeline(*args, **kwargs)\n[docs]class SelfHostedEmbeddings(SelfHostedPipeline, Embeddings):\n \"\"\"Runs custom embedding models on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another\n cloud like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example using a model load function:\n .. 
code-block:: python\n from langchain.embeddings import SelfHostedEmbeddings\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n import runhouse as rh\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n def get_pipeline():\n model_id = \"facebook/bart-large\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n return pipeline(\"feature-extraction\", model=model, tokenizer=tokenizer)\n embeddings = SelfHostedEmbeddings(\n model_load_fn=get_pipeline,\n hardware=gpu", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"} +{"id": "42d5f293a7ae-1", "text": "model_load_fn=get_pipeline,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n Example passing in a pipeline path:\n .. code-block:: python\n from langchain.embeddings import SelfHostedHFEmbeddings\n import runhouse as rh\n from transformers import pipeline\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n pipeline = pipeline(model=\"bert-base-uncased\", task=\"feature-extraction\")\n rh.blob(pickle.dumps(pipeline),\n path=\"models/pipeline.pkl\").save().to(gpu, path=\"models\")\n embeddings = SelfHostedHFEmbeddings.from_pipeline(\n pipeline=\"models/pipeline.pkl\",\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n \"\"\"\n inference_fn: Callable = _embed_documents\n \"\"\"Inference function to extract the embeddings on the remote hardware.\"\"\"\n inference_kwargs: Any = None\n \"\"\"Any kwargs to pass to the model's inference function.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace transformer model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: 
x.replace(\"\\n\", \" \"), texts))\n embeddings = self.client(self.pipeline_ref, texts)\n if not isinstance(embeddings, list):\n return embeddings.tolist()\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"} +{"id": "42d5f293a7ae-2", "text": "[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace transformer model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n embeddings = self.client(self.pipeline_ref, text)\n if not isinstance(embeddings, list):\n return embeddings.tolist()\n return embeddings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted.html"} +{"id": "404d517de522-0", "text": "Source code for langchain.embeddings.aleph_alpha\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AlephAlphaAsymmetricSemanticEmbedding(BaseModel, Embeddings):\n \"\"\"\n Wrapper for Aleph Alpha's Asymmetric Embeddings\n AA provides you with an endpoint to embed a document and a query.\n The models were optimized to make the embeddings of documents and\n the query for a document as similar as possible.\n To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/\n Example:\n .. 
code-block:: python\n from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding\n embeddings = AlephAlphaAsymmetricSemanticEmbedding()\n document = \"This is the content of the document\"\n query = \"What is the content of the document?\"\n doc_result = embeddings.embed_documents([document])\n query_result = embeddings.embed_query(query)\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = \"luminous-base\"\n \"\"\"Model name to use.\"\"\"\n hosting: Optional[str] = \"https://api.aleph-alpha.com\"\n \"\"\"Optional parameter that specifies which datacenters may process the request.\"\"\"\n normalize: Optional[bool] = True\n \"\"\"Should returned embeddings be normalized\"\"\"\n compress_to_size: Optional[int] = 128\n \"\"\"Should the returned embeddings come back as an original 5120-dim vector, \n or should it be compressed to 128-dim.\"\"\"\n contextual_control_threshold: Optional[int] = None\n \"\"\"Attention control parameters only apply to those tokens that have", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} +{"id": "404d517de522-1", "text": "\"\"\"Attention control parameters only apply to those tokens that have \n explicitly been set in the request.\"\"\"\n control_log_additive: Optional[bool] = True\n \"\"\"Apply controls on prompt items by adding the log(control_factor) \n to attention scores.\"\"\"\n aleph_alpha_api_key: Optional[str] = None\n \"\"\"API key for Aleph Alpha API.\"\"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n aleph_alpha_api_key = get_from_dict_or_env(\n values, \"aleph_alpha_api_key\", \"ALEPH_ALPHA_API_KEY\"\n )\n try:\n from aleph_alpha_client import Client\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. 
\"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n values[\"client\"] = Client(token=aleph_alpha_api_key)\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Aleph Alpha's asymmetric Document endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n try:\n from aleph_alpha_client import (\n Prompt,\n SemanticEmbeddingRequest,\n SemanticRepresentation,\n )\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n document_embeddings = []\n for text in texts:\n document_params = {\n \"prompt\": Prompt.from_text(text),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} +{"id": "404d517de522-2", "text": "document_params = {\n \"prompt\": Prompt.from_text(text),\n \"representation\": SemanticRepresentation.Document,\n \"compress_to_size\": self.compress_to_size,\n \"normalize\": self.normalize,\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n }\n document_request = SemanticEmbeddingRequest(**document_params)\n document_response = self.client.semantic_embed(\n request=document_request, model=self.model\n )\n document_embeddings.append(document_response.embedding)\n return document_embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Aleph Alpha's asymmetric, query embedding endpoint\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n try:\n from aleph_alpha_client import (\n Prompt,\n SemanticEmbeddingRequest,\n SemanticRepresentation,\n )\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. 
\"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n symmetric_params = {\n \"prompt\": Prompt.from_text(text),\n \"representation\": SemanticRepresentation.Query,\n \"compress_to_size\": self.compress_to_size,\n \"normalize\": self.normalize,\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n }\n symmetric_request = SemanticEmbeddingRequest(**symmetric_params)\n symmetric_response = self.client.semantic_embed(\n request=symmetric_request, model=self.model\n )\n return symmetric_response.embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} +{"id": "404d517de522-3", "text": "request=symmetric_request, model=self.model\n )\n return symmetric_response.embedding\n[docs]class AlephAlphaSymmetricSemanticEmbedding(AlephAlphaAsymmetricSemanticEmbedding):\n \"\"\"The symmetric version of Aleph Alpha's semantic embeddings.\n The main difference is that here, both the documents and\n queries are embedded with a SemanticRepresentation.Symmetric\n Example:\n .. code-block:: python\n from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding\n embeddings = AlephAlphaSymmetricSemanticEmbedding()\n text = \"This is a test text\"\n doc_result = embeddings.embed_documents([text])\n query_result = embeddings.embed_query(text)\n \"\"\"\n def _embed(self, text: str) -> List[float]:\n try:\n from aleph_alpha_client import (\n Prompt,\n SemanticEmbeddingRequest,\n SemanticRepresentation,\n )\n except ImportError:\n raise ValueError(\n \"Could not import aleph_alpha_client python package. 
\"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n query_params = {\n \"prompt\": Prompt.from_text(text),\n \"representation\": SemanticRepresentation.Symmetric,\n \"compress_to_size\": self.compress_to_size,\n \"normalize\": self.normalize,\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n }\n query_request = SemanticEmbeddingRequest(**query_params)\n query_response = self.client.semantic_embed(\n request=query_request, model=self.model\n )\n return query_response.embedding\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Aleph Alpha's Document endpoint.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} +{"id": "404d517de522-4", "text": "\"\"\"Call out to Aleph Alpha's Document endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n document_embeddings = []\n for text in texts:\n document_embeddings.append(self._embed(text))\n return document_embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Aleph Alpha's asymmetric, query embedding endpoint\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n return self._embed(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/aleph_alpha.html"} +{"id": "28c15b4d6d9a-0", "text": "Source code for langchain.embeddings.modelscope_hub\n\"\"\"Wrapper around ModelScopeHub embedding models.\"\"\"\nfrom typing import Any, List\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\n[docs]class ModelScopeEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around modelscope_hub embedding models.\n To use, you should have the ``modelscope`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain.embeddings import ModelScopeEmbeddings\n model_id = \"damo/nlp_corom_sentence-embedding_english-base\"\n embed = ModelScopeEmbeddings(model_id=model_id)\n \"\"\"\n embed: Any\n model_id: str = \"damo/nlp_corom_sentence-embedding_english-base\"\n \"\"\"Model name to use.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the modelscope pipeline.\"\"\"\n super().__init__(**kwargs)\n try:\n from modelscope.pipelines import pipeline\n from modelscope.utils.constant import Tasks\n self.embed = pipeline(Tasks.sentence_embedding, model=self.model_id)\n except ImportError as e:\n raise ImportError(\n \"Could not import the modelscope python package. \"\n \"Please install it with `pip install modelscope`.\"\n ) from e\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a modelscope embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/modelscope_hub.html"} +{"id": "28c15b4d6d9a-1", "text": "texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n inputs = {\"source_sentence\": texts}\n embeddings = self.embed(input=inputs)[\"text_embedding\"]\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a modelscope embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n inputs = {\"source_sentence\": [text]}\n embedding = self.embed(input=inputs)[\"text_embedding\"][0]\n return embedding.tolist()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/modelscope_hub.html"} +{"id": "cf7b0c5a9aee-0", 
"text": "Source code for langchain.embeddings.huggingface\n\"\"\"Wrapper around HuggingFace embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, Field\nfrom langchain.embeddings.base import Embeddings\nDEFAULT_MODEL_NAME = \"sentence-transformers/all-mpnet-base-v2\"\nDEFAULT_INSTRUCT_MODEL = \"hkunlp/instructor-large\"\nDEFAULT_EMBED_INSTRUCTION = \"Represent the document for retrieval: \"\nDEFAULT_QUERY_INSTRUCTION = (\n \"Represent the question for retrieving supporting documents: \"\n)\n[docs]class HuggingFaceEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around sentence_transformers embedding models.\n To use, you should have the ``sentence_transformers`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import HuggingFaceEmbeddings\n model_name = \"sentence-transformers/all-mpnet-base-v2\"\n model_kwargs = {'device': 'cpu'}\n encode_kwargs = {'normalize_embeddings': False}\n hf = HuggingFaceEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n )\n \"\"\"\n client: Any #: :meta private:\n model_name: str = DEFAULT_MODEL_NAME\n \"\"\"Model name to use.\"\"\"\n cache_folder: Optional[str] = None\n \"\"\"Path to store models. 
\n Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n encode_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Keyword arguments to pass when calling the `encode` method of the model.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"} +{"id": "cf7b0c5a9aee-1", "text": "\"\"\"Keyword arguments to pass when calling the `encode` method of the model.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the sentence_transformer.\"\"\"\n super().__init__(**kwargs)\n try:\n import sentence_transformers\n except ImportError as exc:\n raise ImportError(\n \"Could not import sentence_transformers python package. \"\n \"Please install it with `pip install sentence_transformers`.\"\n ) from exc\n self.client = sentence_transformers.SentenceTransformer(\n self.model_name, cache_folder=self.cache_folder, **self.model_kwargs\n )\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace transformer model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n embeddings = self.client.encode(texts, **self.encode_kwargs)\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace transformer model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n embedding = self.client.encode(text, **self.encode_kwargs)\n return embedding.tolist()\n[docs]class HuggingFaceInstructEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around sentence_transformers 
embedding models.\n To use, you should have the ``sentence_transformers``", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"} +{"id": "cf7b0c5a9aee-2", "text": "To use, you should have the ``sentence_transformers``\n and ``InstructorEmbedding`` python packages installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import HuggingFaceInstructEmbeddings\n model_name = \"hkunlp/instructor-large\"\n model_kwargs = {'device': 'cpu'}\n encode_kwargs = {'normalize_embeddings': True}\n hf = HuggingFaceInstructEmbeddings(\n model_name=model_name,\n model_kwargs=model_kwargs,\n encode_kwargs=encode_kwargs\n )\n \"\"\"\n client: Any #: :meta private:\n model_name: str = DEFAULT_INSTRUCT_MODEL\n \"\"\"Model name to use.\"\"\"\n cache_folder: Optional[str] = None\n \"\"\"Path to store models. \n Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n encode_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Keyword arguments to pass when calling the `encode` method of the model.\"\"\"\n embed_instruction: str = DEFAULT_EMBED_INSTRUCTION\n \"\"\"Instruction to use for embedding documents.\"\"\"\n query_instruction: str = DEFAULT_QUERY_INSTRUCTION\n \"\"\"Instruction to use for embedding query.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the sentence_transformer.\"\"\"\n super().__init__(**kwargs)\n try:\n from InstructorEmbedding import INSTRUCTOR\n self.client = INSTRUCTOR(\n self.model_name, cache_folder=self.cache_folder, **self.model_kwargs\n )\n except ImportError as e:\n raise ValueError(\"Dependencies for InstructorEmbedding not found.\") from e\n class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"} +{"id": "cf7b0c5a9aee-3", "text": "raise ValueError(\"Dependencies for 
InstructorEmbedding not found.\") from e\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace instruct model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = [[self.embed_instruction, text] for text in texts]\n embeddings = self.client.encode(instruction_pairs, **self.encode_kwargs)\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a HuggingFace instruct model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = [self.query_instruction, text]\n embedding = self.client.encode([instruction_pair], **self.encode_kwargs)[0]\n return embedding.tolist()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface.html"} +{"id": "dcb5b9f19eb0-0", "text": "Source code for langchain.embeddings.tensorflow_hub\n\"\"\"Wrapper around TensorflowHub embedding models.\"\"\"\nfrom typing import Any, List\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\nDEFAULT_MODEL_URL = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\n[docs]class TensorflowHubEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around tensorflow_hub embedding models.\n To use, you should have the ``tensorflow_text`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain.embeddings import TensorflowHubEmbeddings\n url = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual/3\"\n tf = TensorflowHubEmbeddings(model_url=url)\n \"\"\"\n embed: Any #: :meta private:\n model_url: str = DEFAULT_MODEL_URL\n \"\"\"Model name to use.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the tensorflow_hub and tensorflow_text.\"\"\"\n super().__init__(**kwargs)\n try:\n import tensorflow_hub\n except ImportError:\n raise ImportError(\n \"Could not import tensorflow-hub python package. \"\n \"Please install it with `pip install tensorflow-hub`.\"\n )\n try:\n import tensorflow_text # noqa\n except ImportError:\n raise ImportError(\n \"Could not import tensorflow_text python package. \"\n \"Please install it with `pip install tensorflow_text`.\"\n )\n self.embed = tensorflow_hub.load(self.model_url)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/tensorflow_hub.html"} +{"id": "dcb5b9f19eb0-1", "text": "\"\"\"Compute doc embeddings using a TensorflowHub embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n embeddings = self.embed(texts).numpy()\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a TensorflowHub embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n text = text.replace(\"\\n\", \" \")\n embedding = self.embed([text]).numpy()[0]\n return embedding.tolist()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/tensorflow_hub.html"} +{"id": "b382cfa16473-0", "text": "Source code for 
langchain.embeddings.fake\nfrom typing import List\nimport numpy as np\nfrom pydantic import BaseModel\nfrom langchain.embeddings.base import Embeddings\n[docs]class FakeEmbeddings(Embeddings, BaseModel):\n size: int\n def _get_embedding(self) -> List[float]:\n return list(np.random.normal(size=self.size))\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n return [self._get_embedding() for _ in texts]\n[docs] def embed_query(self, text: str) -> List[float]:\n return self._get_embedding()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/fake.html"} +{"id": "50499e181c79-0", "text": "Source code for langchain.embeddings.openai\n\"\"\"Wrapper around OpenAI embedding models.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n Any,\n Callable,\n Dict,\n List,\n Literal,\n Optional,\n Sequence,\n Set,\n Tuple,\n Union,\n)\nimport numpy as np\nfrom pydantic import BaseModel, Extra, root_validator\nfrom tenacity import (\n AsyncRetrying,\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]:\n import openai\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(embeddings.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n 
before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef _async_retry_decorator(embeddings: OpenAIEmbeddings) -> Any:\n import openai", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-1", "text": "import openai\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n async_retrying = AsyncRetrying(\n reraise=True,\n stop=stop_after_attempt(embeddings.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n def wrap(func: Callable) -> Callable:\n async def wrapped_f(*args: Any, **kwargs: Any) -> Callable:\n async for _ in async_retrying:\n return await func(*args, **kwargs)\n raise AssertionError(\"this is unreachable\")\n return wrapped_f\n return wrap\ndef embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the embedding call.\"\"\"\n retry_decorator = _create_retry_decorator(embeddings)\n @retry_decorator\n def _embed_with_retry(**kwargs: Any) -> Any:\n return embeddings.client.create(**kwargs)\n return _embed_with_retry(**kwargs)\nasync def async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the embedding call.\"\"\"\n @_async_retry_decorator(embeddings)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-2", "text": "@_async_retry_decorator(embeddings)\n async def _async_embed_with_retry(**kwargs: Any) -> Any:\n return await 
embeddings.client.acreate(**kwargs)\n return await _async_embed_with_retry(**kwargs)\n[docs]class OpenAIEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around OpenAI embedding models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key or pass it\n as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import OpenAIEmbeddings\n openai = OpenAIEmbeddings(openai_api_key=\"my-api-key\")\n In order to use the library with Microsoft Azure endpoints, you need to set\n the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.\n The OPENAI_API_TYPE must be set to 'azure' and the others correspond to\n the properties of your endpoint.\n In addition, the deployment name must be passed as the model parameter.\n Example:\n .. code-block:: python\n import os\n os.environ[\"OPENAI_API_TYPE\"] = \"azure\"\n os.environ[\"OPENAI_API_BASE\"] = \"https:// Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n if values[\"openai_api_type\"] in (\"azure\", \"azure_ad\", \"azuread\"):\n default_api_version = \"2022-12-01\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-5", "text": "default_api_version = \"2022-12-01\"\n else:\n default_api_version = \"\"\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n 
\"OPENAI_API_VERSION\",\n default=default_api_version,\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n values[\"client\"] = openai.Embedding\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n return values\n @property\n def _invocation_params(self) -> Dict:\n openai_args = {\n \"engine\": self.deployment,\n \"request_timeout\": self.request_timeout,\n \"headers\": self.headers,\n \"api_key\": self.openai_api_key,\n \"organization\": self.openai_organization,\n \"api_base\": self.openai_api_base,\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n if self.openai_proxy:\n import openai\n openai.proxy = {\n \"http\": self.openai_proxy,\n \"https\": self.openai_proxy,\n } # type: ignore[assignment] # noqa: E501\n return openai_args\n # please refer to\n # https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb\n def _get_len_safe_embeddings(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-6", "text": "def _get_len_safe_embeddings(\n self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None\n ) -> List[List[float]]:\n embeddings: List[List[float]] = [[] for _ in range(len(texts))]\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to use OpenAIEmbeddings. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n tokens = []\n indices = []\n model_name = self.tiktoken_model_name or self.model\n try:\n encoding = tiktoken.encoding_for_model(model_name)\n except KeyError:\n logger.warning(\"Warning: model not found. 
Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n encoding = tiktoken.get_encoding(model)\n for i, text in enumerate(texts):\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n token = encoding.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )\n for j in range(0, len(token), self.embedding_ctx_length):\n tokens += [token[j : j + self.embedding_ctx_length]]\n indices += [i]\n batched_embeddings = []\n _chunk_size = chunk_size or self.chunk_size\n for i in range(0, len(tokens), _chunk_size):\n response = embed_with_retry(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-7", "text": "response = embed_with_retry(\n self,\n input=tokens[i : i + _chunk_size],\n **self._invocation_params,\n )\n batched_embeddings += [r[\"embedding\"] for r in response[\"data\"]]\n results: List[List[List[float]]] = [[] for _ in range(len(texts))]\n num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]\n for i in range(len(indices)):\n results[indices[i]].append(batched_embeddings[i])\n num_tokens_in_batch[indices[i]].append(len(tokens[i]))\n for i in range(len(texts)):\n _result = results[i]\n if len(_result) == 0:\n average = embed_with_retry(\n self,\n input=\"\",\n **self._invocation_params,\n )[\n \"data\"\n ][0][\"embedding\"]\n else:\n average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])\n embeddings[i] = (average / np.linalg.norm(average)).tolist()\n return embeddings\n # please refer to\n # https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb\n async def _aget_len_safe_embeddings(\n self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None\n ) -> List[List[float]]:\n embeddings: 
List[List[float]] = [[] for _ in range(len(texts))]\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to use OpenAIEmbeddings. \"\n \"Please install it with `pip install tiktoken`.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-8", "text": "\"Please install it with `pip install tiktoken`.\"\n )\n tokens = []\n indices = []\n model_name = self.tiktoken_model_name or self.model\n try:\n encoding = tiktoken.encoding_for_model(model_name)\n except KeyError:\n logger.warning(\"Warning: model not found. Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n encoding = tiktoken.get_encoding(model)\n for i, text in enumerate(texts):\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n token = encoding.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )\n for j in range(0, len(token), self.embedding_ctx_length):\n tokens += [token[j : j + self.embedding_ctx_length]]\n indices += [i]\n batched_embeddings = []\n _chunk_size = chunk_size or self.chunk_size\n for i in range(0, len(tokens), _chunk_size):\n response = await async_embed_with_retry(\n self,\n input=tokens[i : i + _chunk_size],\n **self._invocation_params,\n )\n batched_embeddings += [r[\"embedding\"] for r in response[\"data\"]]\n results: List[List[List[float]]] = [[] for _ in range(len(texts))]\n num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]\n for i in range(len(indices)):\n results[indices[i]].append(batched_embeddings[i])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-9", "text": 
"results[indices[i]].append(batched_embeddings[i])\n num_tokens_in_batch[indices[i]].append(len(tokens[i]))\n for i in range(len(texts)):\n _result = results[i]\n if len(_result) == 0:\n average = (\n await async_embed_with_retry(\n self,\n input=\"\",\n **self._invocation_params,\n )\n )[\"data\"][0][\"embedding\"]\n else:\n average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])\n embeddings[i] = (average / np.linalg.norm(average)).tolist()\n return embeddings\n def _embedding_func(self, text: str, *, engine: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint.\"\"\"\n # handle large input text\n if len(text) > self.embedding_ctx_length:\n return self._get_len_safe_embeddings([text], engine=engine)[0]\n else:\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n return embed_with_retry(\n self,\n input=[text],\n **self._invocation_params,\n )[\n \"data\"\n ][0][\"embedding\"]\n async def _aembedding_func(self, text: str, *, engine: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint.\"\"\"\n # handle large input text\n if len(text) > self.embedding_ctx_length:\n return (await self._aget_len_safe_embeddings([text], engine=engine))[0]\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-10", "text": "else:\n if self.model.endswith(\"001\"):\n # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500\n # replace newlines, which can negatively affect performance.\n text = text.replace(\"\\n\", \" \")\n return (\n await async_embed_with_retry(\n self,\n input=[text],\n **self._invocation_params,\n )\n )[\"data\"][0][\"embedding\"]\n[docs] def embed_documents(\n self, texts: List[str], chunk_size: Optional[int] = 0\n ) -> List[List[float]]:\n \"\"\"Call out 
to OpenAI's embedding endpoint for embedding search docs.\n Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size of embeddings. If None, will use the chunk size\n specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # NOTE: to keep things simple, we assume the list may contain texts longer\n # than the maximum context and use length-safe embedding function.\n return self._get_len_safe_embeddings(texts, engine=self.deployment)\n[docs] async def aembed_documents(\n self, texts: List[str], chunk_size: Optional[int] = 0\n ) -> List[List[float]]:\n \"\"\"Call out to OpenAI's embedding endpoint async for embedding search docs.\n Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size of embeddings. If None, will use the chunk size\n specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # NOTE: to keep things simple, we assume the list may contain texts longer", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "50499e181c79-11", "text": "# NOTE: to keep things simple, we assume the list may contain texts longer\n # than the maximum context and use length-safe embedding function.\n return await self._aget_len_safe_embeddings(texts, engine=self.deployment)\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embedding for the text.\n \"\"\"\n embedding = self._embedding_func(text, engine=self.deployment)\n return embedding\n[docs] async def aembed_query(self, text: str) -> List[float]:\n \"\"\"Call out to OpenAI's embedding endpoint async for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embedding for the text.\n \"\"\"\n embedding = await self._aembedding_func(text, engine=self.deployment)\n return embedding", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/openai.html"} +{"id": "a84e06b6b154-0", "text": "Source code for langchain.embeddings.huggingface_hub\n\"\"\"Wrapper around HuggingFace Hub embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_REPO_ID = \"sentence-transformers/all-mpnet-base-v2\"\nVALID_TASKS = (\"feature-extraction\",)\n[docs]class HuggingFaceHubEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around HuggingFaceHub embedding models.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import HuggingFaceHubEmbeddings\n repo_id = \"sentence-transformers/all-mpnet-base-v2\"\n hf = HuggingFaceHubEmbeddings(\n repo_id=repo_id,\n task=\"feature-extraction\",\n huggingfacehub_api_token=\"my-api-key\",\n )\n \"\"\"\n client: Any #: :meta private:\n repo_id: str = DEFAULT_REPO_ID\n \"\"\"Model name to use.\"\"\"\n task: Optional[str] = \"feature-extraction\"\n \"\"\"Task to call the model with.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"} +{"id": "a84e06b6b154-1", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingfacehub_api_token = 
get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.inference_api import InferenceApi\n repo_id = values[\"repo_id\"]\n if not repo_id.startswith(\"sentence-transformers\"):\n raise ValueError(\n \"Currently only 'sentence-transformers' embedding models \"\n f\"are supported. Got invalid 'repo_id' {repo_id}.\"\n )\n client = InferenceApi(\n repo_id=repo_id,\n token=huggingfacehub_api_token,\n task=values.get(\"task\"),\n )\n if client.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n values[\"client\"] = client\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. \"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to HuggingFaceHub's embedding endpoint for embedding search docs.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n # replace newlines, which can negatively affect performance.\n texts = [text.replace(\"\\n\", \" \") for text in texts]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"} +{"id": "a84e06b6b154-2", "text": "texts = [text.replace(\"\\n\", \" \") for text in texts]\n _model_kwargs = self.model_kwargs or {}\n responses = self.client(inputs=texts, params=_model_kwargs)\n return responses\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to HuggingFaceHub's embedding endpoint for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n response = self.embed_documents([text])[0]\n return response", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/huggingface_hub.html"} +{"id": "c7aeab3668e3-0", "text": 
"Source code for langchain.embeddings.deepinfra\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_MODEL_ID = \"sentence-transformers/clip-ViT-B-32\"\n[docs]class DeepInfraEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around Deep Infra's embedding inference service.\n To use, you should have the\n environment variable ``DEEPINFRA_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n There are multiple embeddings models available,\n see https://deepinfra.com/models?type=embeddings.\n Example:\n .. code-block:: python\n from langchain.embeddings import DeepInfraEmbeddings\n deepinfra_emb = DeepInfraEmbeddings(\n model_id=\"sentence-transformers/clip-ViT-B-32\",\n deepinfra_api_token=\"my-api-key\"\n )\n r1 = deepinfra_emb.embed_documents(\n [\n \"Alpha is the first letter of Greek alphabet\",\n \"Beta is the second letter of Greek alphabet\",\n ]\n )\n r2 = deepinfra_emb.embed_query(\n \"What is the second letter of Greek alphabet\"\n )\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Embeddings model to use.\"\"\"\n normalize: bool = False\n \"\"\"whether to normalize the computed embeddings\"\"\"\n embed_instruction: str = \"passage: \"\n \"\"\"Instruction used to embed documents.\"\"\"\n query_instruction: str = \"query: \"\n \"\"\"Instruction used to embed the query.\"\"\"\n model_kwargs: Optional[dict] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"} +{"id": "c7aeab3668e3-1", "text": "model_kwargs: Optional[dict] = None\n \"\"\"Other model keyword args\"\"\"\n deepinfra_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n 
\"\"\"Validate that api key and python package exists in environment.\"\"\"\n        deepinfra_api_token = get_from_dict_or_env(\n            values, \"deepinfra_api_token\", \"DEEPINFRA_API_TOKEN\"\n        )\n        values[\"deepinfra_api_token\"] = deepinfra_api_token\n        return values\n    @property\n    def _identifying_params(self) -> Mapping[str, Any]:\n        \"\"\"Get the identifying parameters.\"\"\"\n        return {\"model_id\": self.model_id}\n    def _embed(self, input: List[str]) -> List[List[float]]:\n        _model_kwargs = self.model_kwargs or {}\n        # HTTP headers for authorization\n        headers = {\n            \"Authorization\": f\"bearer {self.deepinfra_api_token}\",\n            \"Content-Type\": \"application/json\",\n        }\n        # send request\n        try:\n            res = requests.post(\n                f\"https://api.deepinfra.com/v1/inference/{self.model_id}\",\n                headers=headers,\n                json={\"inputs\": input, \"normalize\": self.normalize, **_model_kwargs},\n            )\n        except requests.exceptions.RequestException as e:\n            raise ValueError(f\"Error raised by inference endpoint: {e}\")\n        if res.status_code != 200:\n            raise ValueError(\n                \"Error raised by inference API HTTP code: %s, %s\"\n                % (res.status_code, res.text)\n            )\n        try:\n            t = res.json()\n            embeddings = t[\"embeddings\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"} +{"id": "c7aeab3668e3-2", "text": "try:\n            t = res.json()\n            embeddings = t[\"embeddings\"]\n        except requests.exceptions.JSONDecodeError as e:\n            raise ValueError(\n                f\"Error raised by inference API: {e}.\\nResponse: {res.text}\"\n            )\n        return embeddings\n[docs]    def embed_documents(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Embed documents using a Deep Infra deployed embedding model.\n        Args:\n            texts: The list of texts to embed.\n        Returns:\n            List of embeddings, one for each text.\n        \"\"\"\n        instruction_pairs = [f\"{self.embed_instruction}{text}\" for text in texts]\n        embeddings = self._embed(instruction_pairs)\n        return embeddings\n[docs]    def embed_query(self, text: str) -> List[float]:\n        
\"\"\"Embed a query using a Deep Infra deployed embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = f\"{self.query_instruction}{text}\"\n embedding = self._embed([instruction_pair])[0]\n return embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/deepinfra.html"} +{"id": "96a544e9c5b8-0", "text": "Source code for langchain.embeddings.elasticsearch\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List, Optional\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n from elasticsearch import Elasticsearch\n from elasticsearch.client import MlClient\nfrom langchain.embeddings.base import Embeddings\n[docs]class ElasticsearchEmbeddings(Embeddings):\n \"\"\"\n Wrapper around Elasticsearch embedding models.\n This class provides an interface to generate embeddings using a model deployed\n in an Elasticsearch cluster. It requires an Elasticsearch connection object\n and the model_id of the model deployed in the cluster.\n In Elasticsearch you need to have an embedding model loaded and deployed.\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html\n - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html\n \"\"\" # noqa: E501\n def __init__(\n self,\n client: MlClient,\n model_id: str,\n *,\n input_field: str = \"text_field\",\n ):\n \"\"\"\n Initialize the ElasticsearchEmbeddings instance.\n Args:\n client (MlClient): An Elasticsearch ML client object.\n model_id (str): The model_id of the model deployed in the Elasticsearch\n cluster.\n input_field (str): The name of the key for the input text field in the\n document. 
Defaults to 'text_field'.\n \"\"\"\n self.client = client\n self.model_id = model_id\n self.input_field = input_field\n[docs] @classmethod\n def from_credentials(\n cls,\n model_id: str,\n *,\n es_cloud_id: Optional[str] = None,\n es_user: Optional[str] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} +{"id": "96a544e9c5b8-1", "text": "es_user: Optional[str] = None,\n es_password: Optional[str] = None,\n input_field: str = \"text_field\",\n ) -> ElasticsearchEmbeddings:\n \"\"\"Instantiate embeddings from Elasticsearch credentials.\n Args:\n model_id (str): The model_id of the model deployed in the Elasticsearch\n cluster.\n input_field (str): The name of the key for the input text field in the\n document. Defaults to 'text_field'.\n es_cloud_id: (str, optional): The Elasticsearch cloud ID to connect to.\n es_user: (str, optional): Elasticsearch username.\n es_password: (str, optional): Elasticsearch password.\n Example:\n .. code-block:: python\n from langchain.embeddings import ElasticsearchEmbeddings\n # Define the model ID and input field name (if different from default)\n model_id = \"your_model_id\"\n # Optional, only if different from 'text_field'\n input_field = \"your_input_field\"\n # Credentials can be passed in two ways. 
Either set the env vars\n            # ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically\n            # pulled in, or pass them in directly as kwargs.\n            embeddings = ElasticsearchEmbeddings.from_credentials(\n                model_id,\n                input_field=input_field,\n                # es_cloud_id=\"foo\",\n                # es_user=\"bar\",\n                # es_password=\"baz\",\n            )\n            documents = [\n                \"This is an example document.\",\n                \"Another example document to generate embeddings for.\",\n            ]\n            embeddings.embed_documents(documents)\n        \"\"\"\n        try:\n            from elasticsearch import Elasticsearch\n            from elasticsearch.client import MlClient\n        except ImportError:\n            raise ImportError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} +{"id": "96a544e9c5b8-2", "text": "from elasticsearch.client import MlClient\n        except ImportError:\n            raise ImportError(\n                \"elasticsearch package not found, please install with 'pip install \"\n                \"elasticsearch'\"\n            )\n        es_cloud_id = es_cloud_id or get_from_env(\"es_cloud_id\", \"ES_CLOUD_ID\")\n        es_user = es_user or get_from_env(\"es_user\", \"ES_USER\")\n        es_password = es_password or get_from_env(\"es_password\", \"ES_PASSWORD\")\n        # Connect to Elasticsearch\n        es_connection = Elasticsearch(\n            cloud_id=es_cloud_id, basic_auth=(es_user, es_password)\n        )\n        client = MlClient(es_connection)\n        return cls(client, model_id, input_field=input_field)\n[docs]    @classmethod\n    def from_es_connection(\n        cls,\n        model_id: str,\n        es_connection: Elasticsearch,\n        input_field: str = \"text_field\",\n    ) -> ElasticsearchEmbeddings:\n        \"\"\"\n        Instantiate embeddings from an existing Elasticsearch connection.\n        This method provides a way to create an instance of the ElasticsearchEmbeddings\n        class using an existing Elasticsearch connection. 
The connection object is used\n        to create an MlClient, which is then used to initialize the\n        ElasticsearchEmbeddings instance.\n        Args:\n            model_id (str): The model_id of the model deployed in the Elasticsearch cluster.\n            es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch\n            connection object. input_field (str, optional): The name of the key for the\n            input text field in the document. Defaults to 'text_field'.\n        Returns:\n            ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.\n        Example:\n            .. code-block:: python\n                from elasticsearch import Elasticsearch", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} +{"id": "96a544e9c5b8-3", "text": "Example:\n            .. code-block:: python\n                from elasticsearch import Elasticsearch\n                from langchain.embeddings import ElasticsearchEmbeddings\n                # Define the model ID and input field name (if different from default)\n                model_id = \"your_model_id\"\n                # Optional, only if different from 'text_field'\n                input_field = \"your_input_field\"\n                # Create Elasticsearch connection\n                es_connection = Elasticsearch(\n                    hosts=[\"localhost:9200\"], http_auth=(\"user\", \"password\")\n                )\n                # Instantiate ElasticsearchEmbeddings using the existing connection\n                embeddings = ElasticsearchEmbeddings.from_es_connection(\n                    model_id,\n                    es_connection,\n                    input_field=input_field,\n                )\n                documents = [\n                    \"This is an example document.\",\n                    \"Another example document to generate embeddings for.\",\n                ]\n                embeddings.embed_documents(documents)\n        \"\"\"\n        # Importing MlClient from elasticsearch.client within the method to\n        # avoid unnecessary import if the method is not used\n        from elasticsearch.client import MlClient\n        # Create an MlClient from the given Elasticsearch connection\n        client = MlClient(es_connection)\n        # Return a new instance of the ElasticsearchEmbeddings class with\n        # the MlClient, model_id, and input_field\n        return cls(client, model_id, 
input_field=input_field)\n def _embedding_func(self, texts: List[str]) -> List[List[float]]:\n \"\"\"\n Generate embeddings for the given texts using the Elasticsearch model.\n Args:\n texts (List[str]): A list of text strings to generate embeddings for.\n Returns:\n List[List[float]]: A list of embeddings, one for each text in the input\n list.\n \"\"\"\n response = self.client.infer_trained_model(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} +{"id": "96a544e9c5b8-4", "text": "list.\n \"\"\"\n response = self.client.infer_trained_model(\n model_id=self.model_id, docs=[{self.input_field: text} for text in texts]\n )\n embeddings = [doc[\"predicted_value\"] for doc in response[\"inference_results\"]]\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"\n Generate embeddings for a list of documents.\n Args:\n texts (List[str]): A list of document text strings to generate embeddings\n for.\n Returns:\n List[List[float]]: A list of embeddings, one for each document in the input\n list.\n \"\"\"\n return self._embedding_func(texts)\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"\n Generate an embedding for a single query text.\n Args:\n text (str): The query text to generate an embedding for.\n Returns:\n List[float]: The embedding for the input query text.\n \"\"\"\n return self._embedding_func([text])[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/elasticsearch.html"} +{"id": "1ab3deffae77-0", "text": "Source code for langchain.embeddings.minimax\n\"\"\"Wrapper around MiniMax APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.embeddings.base import 
Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n    \"\"\"Returns a tenacity retry decorator.\"\"\"\n    multiplier = 1\n    min_seconds = 1\n    max_seconds = 4\n    max_retries = 6\n    return retry(\n        reraise=True,\n        stop=stop_after_attempt(max_retries),\n        wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n        before_sleep=before_sleep_log(logger, logging.WARNING),\n    )\ndef embed_with_retry(embeddings: MiniMaxEmbeddings, *args: Any, **kwargs: Any) -> Any:\n    \"\"\"Use tenacity to retry the embedding call.\"\"\"\n    retry_decorator = _create_retry_decorator()\n    @retry_decorator\n    def _embed_with_retry(*args: Any, **kwargs: Any) -> Any:\n        return embeddings.embed(*args, **kwargs)\n    return _embed_with_retry(*args, **kwargs)\n[docs]class MiniMaxEmbeddings(BaseModel, Embeddings):\n    \"\"\"Wrapper around MiniMax's embedding inference service.\n    To use, you should have the environment variable ``MINIMAX_GROUP_ID`` and\n    ``MINIMAX_API_KEY`` set with your API token, or pass it as a named parameter to\n    the constructor.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"} +{"id": "1ab3deffae77-1", "text": "the constructor.\n    Example:\n        .. 
code-block:: python\n from langchain.embeddings import MiniMaxEmbeddings\n embeddings = MiniMaxEmbeddings()\n query_text = \"This is a test query.\"\n query_result = embeddings.embed_query(query_text)\n document_text = \"This is a test document.\"\n document_result = embeddings.embed_documents([document_text])\n \"\"\"\n endpoint_url: str = \"https://api.minimax.chat/v1/embeddings\"\n \"\"\"Endpoint URL to use.\"\"\"\n model: str = \"embo-01\"\n \"\"\"Embeddings model name to use.\"\"\"\n embed_type_db: str = \"db\"\n \"\"\"For embed_documents\"\"\"\n embed_type_query: str = \"query\"\n \"\"\"For embed_query\"\"\"\n minimax_group_id: Optional[str] = None\n \"\"\"Group ID for MiniMax API.\"\"\"\n minimax_api_key: Optional[str] = None\n \"\"\"API Key for MiniMax API.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that group id and api key exists in environment.\"\"\"\n minimax_group_id = get_from_dict_or_env(\n values, \"minimax_group_id\", \"MINIMAX_GROUP_ID\"\n )\n minimax_api_key = get_from_dict_or_env(\n values, \"minimax_api_key\", \"MINIMAX_API_KEY\"\n )\n values[\"minimax_group_id\"] = minimax_group_id\n values[\"minimax_api_key\"] = minimax_api_key\n return values\n def embed(\n self,\n texts: List[str],\n embed_type: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"} +{"id": "1ab3deffae77-2", "text": "self,\n texts: List[str],\n embed_type: str,\n ) -> List[List[float]]:\n payload = {\n \"model\": self.model,\n \"type\": embed_type,\n \"texts\": texts,\n }\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"Bearer {self.minimax_api_key}\",\n \"Content-Type\": \"application/json\",\n }\n params = {\n \"GroupId\": self.minimax_group_id,\n }\n # send request\n response = requests.post(\n self.endpoint_url, params=params, headers=headers, 
json=payload\n )\n parsed_response = response.json()\n # check for errors\n if parsed_response[\"base_resp\"][\"status_code\"] != 0:\n raise ValueError(\n f\"MiniMax API returned an error: {parsed_response['base_resp']}\"\n )\n embeddings = parsed_response[\"vectors\"]\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed documents using a MiniMax embedding endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = embed_with_retry(self, texts=texts, embed_type=self.embed_type_db)\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using a MiniMax embedding endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embeddings = embed_with_retry(\n self, texts=[text], embed_type=self.embed_type_query\n )\n return embeddings[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/minimax.html"} +{"id": "117861fd0743-0", "text": "Source code for langchain.embeddings.cohere\n\"\"\"Wrapper around Cohere embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class CohereEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around Cohere embedding models.\n To use, you should have the ``cohere`` python package installed, and the\n environment variable ``COHERE_API_KEY`` set with your API key or pass it\n as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n from langchain.embeddings import CohereEmbeddings\n cohere = CohereEmbeddings(\n model=\"embed-english-light-v2.0\", cohere_api_key=\"my-api-key\"\n )\n \"\"\"\n client: Any #: :meta private:\n model: str = \"embed-english-v2.0\"\n \"\"\"Model name to use.\"\"\"\n truncate: Optional[str] = None\n \"\"\"Truncate embeddings that are too long from start or end (\"NONE\"|\"START\"|\"END\")\"\"\"\n cohere_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/cohere.html"} +{"id": "117861fd0743-1", "text": "except ImportError:\n raise ValueError(\n \"Could not import cohere python package. 
\"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to Cohere's embedding endpoint.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = self.client.embed(\n model=self.model, texts=texts, truncate=self.truncate\n ).embeddings\n return [list(map(float, e)) for e in embeddings]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to Cohere's embedding endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embedding = self.client.embed(\n model=self.model, texts=[text], truncate=self.truncate\n ).embeddings[0]\n return list(map(float, embedding))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/cohere.html"} +{"id": "d367230609e3-0", "text": "Source code for langchain.embeddings.mosaicml\n\"\"\"Wrapper around MosaicML APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional, Tuple\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n[docs]class MosaicMLInstructorEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around MosaicML's embedding inference service.\n To use, you should have the\n environment variable ``MOSAICML_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n            from langchain.embeddings import MosaicMLInstructorEmbeddings\n            endpoint_url = (\n                \"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict\"\n            )\n            mosaic_llm = MosaicMLInstructorEmbeddings(\n                endpoint_url=endpoint_url,\n                mosaicml_api_token=\"my-api-key\"\n            )\n    \"\"\"\n    endpoint_url: str = (\n        \"https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict\"\n    )\n    \"\"\"Endpoint URL to use.\"\"\"\n    embed_instruction: str = \"Represent the document for retrieval: \"\n    \"\"\"Instruction used to embed documents.\"\"\"\n    query_instruction: str = (\n        \"Represent the question for retrieving supporting documents: \"\n    )\n    \"\"\"Instruction used to embed the query.\"\"\"\n    retry_sleep: float = 1.0\n    \"\"\"How long to try sleeping for if a rate limit is encountered\"\"\"\n    mosaicml_api_token: Optional[str] = None\n    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"} +{"id": "d367230609e3-1", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid\n    @root_validator()\n    def validate_environment(cls, values: Dict) -> Dict:\n        \"\"\"Validate that api key and python package exists in environment.\"\"\"\n        mosaicml_api_token = get_from_dict_or_env(\n            values, \"mosaicml_api_token\", \"MOSAICML_API_TOKEN\"\n        )\n        values[\"mosaicml_api_token\"] = mosaicml_api_token\n        return values\n    @property\n    def _identifying_params(self) -> Mapping[str, Any]:\n        \"\"\"Get the identifying parameters.\"\"\"\n        return {\"endpoint_url\": self.endpoint_url}\n    def _embed(\n        self, input: List[Tuple[str, str]], is_retry: bool = False\n    ) -> List[List[float]]:\n        payload = {\"input_strings\": input}\n        # HTTP headers for authorization\n        headers = {\n            \"Authorization\": f\"{self.mosaicml_api_token}\",\n            \"Content-Type\": \"application/json\",\n        }\n        # send request\n        try:\n            response = 
requests.post(self.endpoint_url, headers=headers, json=payload)\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n try:\n parsed_response = response.json()\n if \"error\" in parsed_response:\n # if we get rate limited, try sleeping for 1 second\n if (\n not is_retry\n and \"rate limit exceeded\" in parsed_response[\"error\"].lower()\n ):\n import time\n time.sleep(self.retry_sleep)\n return self._embed(input, is_retry=True)\n raise ValueError(\n f\"Error raised by inference API: {parsed_response['error']}\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"} +{"id": "d367230609e3-2", "text": "f\"Error raised by inference API: {parsed_response['error']}\"\n )\n # The inference API has changed a couple of times, so we add some handling\n # to be robust to multiple response formats.\n if isinstance(parsed_response, dict):\n if \"data\" in parsed_response:\n output_item = parsed_response[\"data\"]\n elif \"output\" in parsed_response:\n output_item = parsed_response[\"output\"]\n else:\n raise ValueError(\n f\"No key data or output in response: {parsed_response}\"\n )\n if isinstance(output_item, list) and isinstance(output_item[0], list):\n embeddings = output_item\n else:\n embeddings = [output_item]\n elif isinstance(parsed_response, list):\n first_item = parsed_response[0]\n if isinstance(first_item, list):\n embeddings = parsed_response\n elif isinstance(first_item, dict):\n if \"output\" in first_item:\n embeddings = [item[\"output\"] for item in parsed_response]\n else:\n raise ValueError(\n f\"No key data or output in response: {parsed_response}\"\n )\n else:\n raise ValueError(f\"Unexpected response format: {parsed_response}\")\n else:\n raise ValueError(f\"Unexpected response type: {parsed_response}\")\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: 
{response.text}\"\n )\n return embeddings\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed documents using a MosaicML deployed instructor embedding model.\n Args:\n texts: The list of texts to embed.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"} +{"id": "d367230609e3-3", "text": "Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = [(self.embed_instruction, text) for text in texts]\n embeddings = self._embed(instruction_pairs)\n return embeddings\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using a MosaicML deployed instructor embedding model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = (self.query_instruction, text)\n embedding = self._embed([instruction_pair])[0]\n return embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/mosaicml.html"} +{"id": "c9e3e15a7952-0", "text": "Source code for langchain.embeddings.self_hosted_hugging_face\n\"\"\"Wrapper around HuggingFace embedding models for self-hosted remote hardware.\"\"\"\nimport importlib\nimport logging\nfrom typing import Any, Callable, List, Optional\nfrom langchain.embeddings.self_hosted import SelfHostedEmbeddings\nDEFAULT_MODEL_NAME = \"sentence-transformers/all-mpnet-base-v2\"\nDEFAULT_INSTRUCT_MODEL = \"hkunlp/instructor-large\"\nDEFAULT_EMBED_INSTRUCTION = \"Represent the document for retrieval: \"\nDEFAULT_QUERY_INSTRUCTION = (\n \"Represent the question for retrieving supporting documents: \"\n)\nlogger = logging.getLogger(__name__)\ndef _embed_documents(client: Any, *args: Any, **kwargs: Any) -> List[List[float]]:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a sentence_transformer model_id and\n returns a list of embeddings for each document in the batch.\n \"\"\"\n 
return client.encode(*args, **kwargs)\ndef load_embedding_model(model_id: str, instruct: bool = False, device: int = 0) -> Any:\n \"\"\"Load the embedding model.\"\"\"\n if not instruct:\n import sentence_transformers\n client = sentence_transformers.SentenceTransformer(model_id)\n else:\n from InstructorEmbedding import INSTRUCTOR\n client = INSTRUCTOR(model_id)\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )\n if device < 0 and cuda_device_count > 0:\n logger.warning(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} +{"id": "c9e3e15a7952-1", "text": "if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available\"\n \"GPUs for execution. deviceId is -1 for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n client = client.to(device)\n return client\n[docs]class SelfHostedHuggingFaceEmbeddings(SelfHostedEmbeddings):\n \"\"\"Runs sentence_transformers embedding models on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another cloud\n like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example:\n .. 
code-block:: python\n from langchain.embeddings import SelfHostedHuggingFaceEmbeddings\n import runhouse as rh\n model_name = \"sentence-transformers/all-mpnet-base-v2\"\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)\n \"\"\"\n client: Any #: :meta private:\n model_id: str = DEFAULT_MODEL_NAME\n \"\"\"Model name to use.\"\"\"\n model_reqs: List[str] = [\"./\", \"sentence_transformers\", \"torch\"]\n \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n hardware: Any\n \"\"\"Remote hardware to send the inference function to.\"\"\"\n model_load_fn: Callable = load_embedding_model", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} +{"id": "c9e3e15a7952-2", "text": "model_load_fn: Callable = load_embedding_model\n \"\"\"Function to load the model remotely on the server.\"\"\"\n load_fn_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model load function.\"\"\"\n inference_fn: Callable = _embed_documents\n \"\"\"Inference function to extract the embeddings.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the remote inference function.\"\"\"\n load_fn_kwargs = kwargs.pop(\"load_fn_kwargs\", {})\n load_fn_kwargs[\"model_id\"] = load_fn_kwargs.get(\"model_id\", DEFAULT_MODEL_NAME)\n load_fn_kwargs[\"instruct\"] = load_fn_kwargs.get(\"instruct\", False)\n load_fn_kwargs[\"device\"] = load_fn_kwargs.get(\"device\", 0)\n super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n[docs]class SelfHostedHuggingFaceInstructEmbeddings(SelfHostedHuggingFaceEmbeddings):\n \"\"\"Runs InstructorEmbedding embedding models on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another\n cloud like Paperspace, Coreweave, 
etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example:\n .. code-block:: python\n from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings\n import runhouse as rh\n model_name = \"hkunlp/instructor-large\"\n gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')\n hf = SelfHostedHuggingFaceInstructEmbeddings(\n model_name=model_name, hardware=gpu)\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} +{"id": "c9e3e15a7952-3", "text": "model_name=model_name, hardware=gpu)\n \"\"\"\n model_id: str = DEFAULT_INSTRUCT_MODEL\n \"\"\"Model name to use.\"\"\"\n embed_instruction: str = DEFAULT_EMBED_INSTRUCTION\n \"\"\"Instruction to use for embedding documents.\"\"\"\n query_instruction: str = DEFAULT_QUERY_INSTRUCTION\n \"\"\"Instruction to use for embedding query.\"\"\"\n model_reqs: List[str] = [\"./\", \"InstructorEmbedding\", \"torch\"]\n \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n def __init__(self, **kwargs: Any):\n \"\"\"Initialize the remote inference function.\"\"\"\n load_fn_kwargs = kwargs.pop(\"load_fn_kwargs\", {})\n load_fn_kwargs[\"model_id\"] = load_fn_kwargs.get(\n \"model_id\", DEFAULT_INSTRUCT_MODEL\n )\n load_fn_kwargs[\"instruct\"] = load_fn_kwargs.get(\"instruct\", True)\n load_fn_kwargs[\"device\"] = load_fn_kwargs.get(\"device\", 0)\n super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a HuggingFace instruct model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n instruction_pairs = []\n for text in texts:\n instruction_pairs.append([self.embed_instruction, text])\n embeddings = self.client(self.pipeline_ref, instruction_pairs)\n return embeddings.tolist()\n[docs] def embed_query(self, text: str) -> List[float]:\n 
\"\"\"Compute query embeddings using a HuggingFace instruct model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} +{"id": "c9e3e15a7952-4", "text": "Returns:\n Embeddings for the text.\n \"\"\"\n instruction_pair = [self.query_instruction, text]\n embedding = self.client(self.pipeline_ref, [instruction_pair])[0]\n return embedding.tolist()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/self_hosted_hugging_face.html"} +{"id": "97ac13dd59d1-0", "text": "Source code for langchain.embeddings.embaas\n\"\"\"Wrapper around embaas embeddings API.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom typing_extensions import NotRequired, TypedDict\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\n# Currently supported maximum batch size for embedding requests\nMAX_BATCH_SIZE = 256\nEMBAAS_API_URL = \"https://api.embaas.io/v1/embeddings/\"\nclass EmbaasEmbeddingsPayload(TypedDict):\n \"\"\"Payload for the embaas embeddings API.\"\"\"\n model: str\n texts: List[str]\n instruction: NotRequired[str]\n[docs]class EmbaasEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around embaas's embedding service.\n To use, you should have the\n environment variable ``EMBAAS_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n # Initialise with default model and instruction\n from langchain.embeddings import EmbaasEmbeddings\n emb = EmbaasEmbeddings()\n # Initialise with custom model and instruction\n from langchain.embeddings import EmbaasEmbeddings\n emb_model = \"instructor-large\"\n emb_inst = \"Represent the Wikipedia document for retrieval\"\n emb = EmbaasEmbeddings(\n model=emb_model,\n instruction=emb_inst\n )\n \"\"\"\n model: str = \"e5-large-v2\"\n \"\"\"The model used for embeddings.\"\"\"\n instruction: Optional[str] = None\n \"\"\"Instruction used for domain-specific embeddings.\"\"\"\n api_url: str = EMBAAS_API_URL", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/embaas.html"} +{"id": "97ac13dd59d1-1", "text": "api_url: str = EMBAAS_API_URL\n \"\"\"The URL for the embaas embeddings API.\"\"\"\n embaas_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n embaas_api_key = get_from_dict_or_env(\n values, \"embaas_api_key\", \"EMBAAS_API_KEY\"\n )\n values[\"embaas_api_key\"] = embaas_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying params.\"\"\"\n return {\"model\": self.model, \"instruction\": self.instruction}\n def _generate_payload(self, texts: List[str]) -> EmbaasEmbeddingsPayload:\n \"\"\"Generates payload for the API request.\"\"\"\n payload = EmbaasEmbeddingsPayload(texts=texts, model=self.model)\n if self.instruction:\n payload[\"instruction\"] = self.instruction\n return payload\n def _handle_request(self, payload: EmbaasEmbeddingsPayload) -> List[List[float]]:\n \"\"\"Sends a request to the Embaas API and handles the response.\"\"\"\n headers = {\n \"Authorization\": f\"Bearer {self.embaas_api_key}\",\n 
\"Content-Type\": \"application/json\",\n }\n response = requests.post(self.api_url, headers=headers, json=payload)\n response.raise_for_status()\n parsed_response = response.json()\n embeddings = [item[\"embedding\"] for item in parsed_response[\"data\"]]\n return embeddings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/embaas.html"} +{"id": "97ac13dd59d1-2", "text": "return embeddings\n def _generate_embeddings(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Generate embeddings using the Embaas API.\"\"\"\n payload = self._generate_payload(texts)\n try:\n return self._handle_request(payload)\n except requests.exceptions.RequestException as e:\n if e.response is None or not e.response.text:\n raise ValueError(f\"Error raised by embaas embeddings API: {e}\")\n parsed_response = e.response.json()\n if \"message\" in parsed_response:\n raise ValueError(\n \"Validation Error raised by embaas embeddings API:\"\n f\"{parsed_response['message']}\"\n )\n raise\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Get embeddings for a list of texts.\n Args:\n texts: The list of texts to get embeddings for.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n batches = [\n texts[i : i + MAX_BATCH_SIZE] for i in range(0, len(texts), MAX_BATCH_SIZE)\n ]\n embeddings = [self._generate_embeddings(batch) for batch in batches]\n # flatten the list of lists into a single list\n return [embedding for batch in embeddings for embedding in batch]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Get embeddings for a single text.\n Args:\n text: The text to get embeddings for.\n Returns:\n List of embeddings.\n \"\"\"\n return self.embed_documents([text])[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/embaas.html"} +{"id": "6d0aab9f0020-0", "text": "Source code for langchain.embeddings.sagemaker_endpoint\n\"\"\"Wrapper around Sagemaker 
InvokeEndpoint API.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.llms.sagemaker_endpoint import ContentHandlerBase\nclass EmbeddingsContentHandler(ContentHandlerBase[List[str], List[List[float]]]):\n \"\"\"Content handler for embedding endpoints.\"\"\"\n[docs]class SagemakerEndpointEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around custom Sagemaker Inference Endpoints.\n To use, you must supply the endpoint name from your deployed\n Sagemaker model & the region where it is deployed.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Sagemaker endpoint.\n See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\n \"\"\"\n \"\"\"\n Example:\n .. 
code-block:: python\n from langchain.embeddings import SagemakerEndpointEmbeddings\n endpoint_name = (\n \"my-endpoint-name\"\n )\n region_name = (\n \"us-west-2\"\n )\n credentials_profile_name = (\n \"default\"\n )\n se = SagemakerEndpointEmbeddings(\n endpoint_name=endpoint_name,\n region_name=region_name,\n credentials_profile_name=credentials_profile_name\n )\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} +{"id": "6d0aab9f0020-1", "text": "credentials_profile_name=credentials_profile_name\n )\n \"\"\"\n client: Any #: :meta private:\n endpoint_name: str = \"\"\n \"\"\"The name of the endpoint from the deployed Sagemaker model.\n Must be unique within an AWS Region.\"\"\"\n region_name: str = \"\"\n \"\"\"The aws region where the Sagemaker model is deployed, eg. `us-west-2`.\"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n content_handler: EmbeddingsContentHandler\n \"\"\"The content handler class that provides an input and\n output transform functions to handle formats between LLM\n and the endpoint.\n \"\"\"\n \"\"\"\n Example:\n .. 
code-block:: python\n from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler\n class ContentHandler(EmbeddingsContentHandler):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"prompts\": prompts, **model_kwargs})\n return input_str.encode('utf-8')\n def transform_output(self, output: bytes) -> List[List[float]]:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[\"vectors\"]\n \"\"\" # noqa: E501\n model_kwargs: Optional[Dict] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} +{"id": "6d0aab9f0020-2", "text": "\"\"\" # noqa: E501\n model_kwargs: Optional[Dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n endpoint_kwargs: Optional[Dict] = None\n \"\"\"Optional attributes passed to the invoke_endpoint\n function. See `boto3`_. docs for more info.\n .. _boto3: \n \"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials and the boto3 python package exist in the environment.\"\"\"\n try:\n import boto3\n try:\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(\n profile_name=values[\"credentials_profile_name\"]\n )\n else:\n # use default credentials\n session = boto3.Session()\n values[\"client\"] = session.client(\n \"sagemaker-runtime\", region_name=values[\"region_name\"]\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n except ImportError:\n raise ValueError(\n \"Could not import boto3 python package. 
\"\n \"Please install it with `pip install boto3`.\"\n )\n return values\n def _embedding_func(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to SageMaker Inference embedding endpoint.\"\"\"\n # replace newlines, which can negatively affect performance.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} +{"id": "6d0aab9f0020-3", "text": "# replace newlines, which can negatively affect performance.\n texts = list(map(lambda x: x.replace(\"\\n\", \" \"), texts))\n _model_kwargs = self.model_kwargs or {}\n _endpoint_kwargs = self.endpoint_kwargs or {}\n body = self.content_handler.transform_input(texts, _model_kwargs)\n content_type = self.content_handler.content_type\n accepts = self.content_handler.accepts\n # send request\n try:\n response = self.client.invoke_endpoint(\n EndpointName=self.endpoint_name,\n Body=body,\n ContentType=content_type,\n Accept=accepts,\n **_endpoint_kwargs,\n )\n except Exception as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n return self.content_handler.transform_output(response[\"Body\"])\n[docs] def embed_documents(\n self, texts: List[str], chunk_size: int = 64\n ) -> List[List[float]]:\n \"\"\"Compute doc embeddings using a SageMaker Inference Endpoint.\n Args:\n texts: The list of texts to embed.\n chunk_size: The chunk size defines how many input texts will\n be grouped together as a request. 
If None, will use the\n chunk size specified by the class.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n results = []\n _chunk_size = len(texts) if chunk_size > len(texts) else chunk_size\n for i in range(0, len(texts), _chunk_size):\n response = self._embedding_func(texts[i : i + _chunk_size])\n results.extend(response)\n return results\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Compute query embeddings using a SageMaker inference endpoint.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} +{"id": "6d0aab9f0020-4", "text": "\"\"\"Compute query embeddings using a SageMaker inference endpoint.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n return self._embedding_func([text])[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html"} +{"id": "b9ce2f7fc1c3-0", "text": "Source code for langchain.embeddings.llamacpp\n\"\"\"Wrapper around llama.cpp embedding models.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.embeddings.base import Embeddings\n[docs]class LlamaCppEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around llama.cpp embedding models.\n To use, you should have the llama-cpp-python library installed, and provide the\n path to the Llama model as a named parameter to the constructor.\n Check out: https://github.com/abetlen/llama-cpp-python\n Example:\n .. code-block:: python\n from langchain.embeddings import LlamaCppEmbeddings\n llama = LlamaCppEmbeddings(model_path=\"/path/to/model.bin\")\n \"\"\"\n client: Any #: :meta private:\n model_path: str\n n_ctx: int = Field(512, alias=\"n_ctx\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into. 
\n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(-1, alias=\"seed\")\n \"\"\"Seed. If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(False, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"} +{"id": "b9ce2f7fc1c3-1", "text": "use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n n_threads: Optional[int] = Field(None, alias=\"n_threads\")\n \"\"\"Number of threads to use. If None, the number \n of threads is automatically determined.\"\"\"\n n_batch: Optional[int] = Field(8, alias=\"n_batch\")\n \"\"\"Number of tokens to process in parallel.\n Should be a number between 1 and n_ctx.\"\"\"\n n_gpu_layers: Optional[int] = Field(None, alias=\"n_gpu_layers\")\n \"\"\"Number of layers to be loaded into gpu memory. 
Default None.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that llama-cpp-python library is installed.\"\"\"\n model_path = values[\"model_path\"]\n model_param_names = [\n \"n_ctx\",\n \"n_parts\",\n \"seed\",\n \"f16_kv\",\n \"logits_all\",\n \"vocab_only\",\n \"use_mlock\",\n \"n_threads\",\n \"n_batch\",\n ]\n model_params = {k: values[k] for k in model_param_names}\n # For backwards compatibility, only include if non-null.\n if values[\"n_gpu_layers\"] is not None:\n model_params[\"n_gpu_layers\"] = values[\"n_gpu_layers\"]\n try:\n from llama_cpp import Llama\n values[\"client\"] = Llama(model_path, embedding=True, **model_params)\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import llama-cpp-python library. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"} +{"id": "b9ce2f7fc1c3-2", "text": "raise ModuleNotFoundError(\n \"Could not import llama-cpp-python library. \"\n \"Please install the llama-cpp-python library to \"\n \"use this embedding model: pip install llama-cpp-python\"\n )\n except Exception as e:\n raise ValueError(\n f\"Could not load Llama model from path: {model_path}. 
\"\n f\"Received error {e}\"\n )\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Embed a list of documents using the Llama model.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = [self.client.embed(text) for text in texts]\n return [list(map(float, e)) for e in embeddings]\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Embed a query using the Llama model.\n Args:\n text: The text to embed.\n Returns:\n Embeddings for the text.\n \"\"\"\n embedding = self.client.embed(text)\n return list(map(float, embedding))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/llamacpp.html"} +{"id": "315ba205e080-0", "text": "Source code for langchain.embeddings.dashscope\n\"\"\"Wrapper around DashScope embedding models.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import (\n Any,\n Callable,\n Dict,\n List,\n Optional,\n)\nfrom pydantic import BaseModel, Extra, root_validator\nfrom requests.exceptions import HTTPError\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(embeddings: DashScopeEmbeddings) -> Callable[[Any], Any]:\n multiplier = 1\n min_seconds = 1\n max_seconds = 4\n # Wait 2^x * 1 second between each retry starting with\n # 1 seconds, then up to 4 seconds, then 4 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(embeddings.max_retries),\n wait=wait_exponential(multiplier, min=min_seconds, max=max_seconds),\n retry=(retry_if_exception_type(HTTPError)),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef embed_with_retry(embeddings: DashScopeEmbeddings, **kwargs: Any) -> Any:\n \"\"\"Use 
tenacity to retry the embedding call.\"\"\"\n retry_decorator = _create_retry_decorator(embeddings)\n @retry_decorator\n def _embed_with_retry(**kwargs: Any) -> Any:\n resp = embeddings.client.call(**kwargs)\n if resp.status_code == 200:\n return resp.output[\"embeddings\"]\n elif resp.status_code in [400, 401]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/dashscope.html"} +{"id": "315ba205e080-1", "text": "elif resp.status_code in [400, 401]:\n raise ValueError(\n f\"status_code: {resp.status_code} \\n \"\n f\"code: {resp.code} \\n message: {resp.message}\"\n )\n else:\n raise HTTPError(\n f\"HTTP error occurred: status_code: {resp.status_code} \\n \"\n f\"code: {resp.code} \\n message: {resp.message}\"\n )\n return _embed_with_retry(**kwargs)\n[docs]class DashScopeEmbeddings(BaseModel, Embeddings):\n \"\"\"Wrapper around DashScope embedding models.\n To use, you should have the ``dashscope`` python package installed, and the\n environment variable ``DASHSCOPE_API_KEY`` set with your API key or pass it\n as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.embeddings import DashScopeEmbeddings\n embeddings = DashScopeEmbeddings(dashscope_api_key=\"my-api-key\")\n Example:\n .. 
code-block:: python\n import os\n os.environ[\"DASHSCOPE_API_KEY\"] = \"your DashScope API KEY\"\n from langchain.embeddings.dashscope import DashScopeEmbeddings\n embeddings = DashScopeEmbeddings(\n model=\"text-embedding-v1\",\n )\n text = \"This is a test query.\"\n query_result = embeddings.embed_query(text)\n \"\"\"\n client: Any #: :meta private:\n model: str = \"text-embedding-v1\"\n dashscope_api_key: Optional[str] = None\n max_retries: int = 5\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/dashscope.html"} +{"id": "315ba205e080-2", "text": "class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n values[\"dashscope_api_key\"] = get_from_dict_or_env(\n values, \"dashscope_api_key\", \"DASHSCOPE_API_KEY\"\n )\n try:\n import dashscope\n except ImportError:\n raise ImportError(\n \"Could not import dashscope python package. \"\n \"Please install it with `pip install dashscope`.\"\n )\n dashscope.api_key = values[\"dashscope_api_key\"]\n values[\"client\"] = dashscope.TextEmbedding\n return values\n[docs] def embed_documents(self, texts: List[str]) -> List[List[float]]:\n \"\"\"Call out to DashScope's embedding endpoint for embedding search docs.\n Args:\n texts: The list of texts to embed.\n Returns:\n List of embeddings, one for each text.\n \"\"\"\n embeddings = embed_with_retry(\n self, input=texts, text_type=\"document\", model=self.model\n )\n embedding_list = [item[\"embedding\"] for item in embeddings]\n return embedding_list\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Call out to DashScope's embedding endpoint for embedding query text.\n Args:\n text: The text to embed.\n Returns:\n Embedding for the text.\n \"\"\"\n embedding = embed_with_retry(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/dashscope.html"}
+{"id": "315ba205e080-3", "text": "Embedding for the text.\n \"\"\"\n embedding = embed_with_retry(\n self, input=text, text_type=\"query\", model=self.model\n )[0][\"embedding\"]\n return embedding", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/dashscope.html"} +{"id": "ba98ed991532-0", "text": "Source code for langchain.memory.motorhead_memory\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema import get_buffer_string\nMANAGED_URL = \"https://api.getmetal.io/v1/motorhead\"\n# LOCAL_URL = \"http://localhost:8080\"\n[docs]class MotorheadMemory(BaseChatMemory):\n url: str = MANAGED_URL\n timeout = 3000\n memory_key = \"history\"\n session_id: str\n context: Optional[str] = None\n # Managed Params\n api_key: Optional[str] = None\n client_id: Optional[str] = None\n def __get_headers(self) -> Dict[str, str]:\n is_managed = self.url == MANAGED_URL\n headers = {\n \"Content-Type\": \"application/json\",\n }\n if is_managed and not (self.api_key and self.client_id):\n raise ValueError(\n \"\"\"\n You must provide an API key and a client ID to use the managed\n version of Motorhead. 
Visit https://getmetal.io for more information.\n \"\"\"\n )\n if is_managed and self.api_key and self.client_id:\n headers[\"x-metal-api-key\"] = self.api_key\n headers[\"x-metal-client-id\"] = self.client_id\n return headers\n[docs] async def init(self) -> None:\n res = requests.get(\n f\"{self.url}/sessions/{self.session_id}/memory\",\n timeout=self.timeout,\n headers=self.__get_headers(),\n )\n res_data = res.json()\n res_data = res_data.get(\"data\", res_data) # Handle Managed Version\n messages = res_data.get(\"messages\", [])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/motorhead_memory.html"} +{"id": "ba98ed991532-1", "text": "messages = res_data.get(\"messages\", [])\n context = res_data.get(\"context\", \"NONE\")\n for message in reversed(messages):\n if message[\"role\"] == \"AI\":\n self.chat_memory.add_ai_message(message[\"content\"])\n else:\n self.chat_memory.add_user_message(message[\"content\"])\n if context and context != \"NONE\":\n self.context = context\n[docs] def load_memory_variables(self, values: Dict[str, Any]) -> Dict[str, Any]:\n if self.return_messages:\n return {self.memory_key: self.chat_memory.messages}\n else:\n return {self.memory_key: get_buffer_string(self.chat_memory.messages)}\n @property\n def memory_variables(self) -> List[str]:\n return [self.memory_key]\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n input_str, output_str = self._get_input_output(inputs, outputs)\n requests.post(\n f\"{self.url}/sessions/{self.session_id}/memory\",\n timeout=self.timeout,\n json={\n \"messages\": [\n {\"role\": \"Human\", \"content\": f\"{input_str}\"},\n {\"role\": \"AI\", \"content\": f\"{output_str}\"},\n ]\n },\n headers=self.__get_headers(),\n )\n super().save_context(inputs, outputs)\n[docs] def delete_session(self) -> None:\n \"\"\"Delete a session\"\"\"\n requests.delete(f\"{self.url}/sessions/{self.session_id}/memory\")", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/memory/motorhead_memory.html"} +{"id": "ab43006de074-0", "text": "Source code for langchain.memory.entity\nimport logging\nfrom abc import ABC, abstractmethod\nfrom itertools import islice\nfrom typing import Any, Dict, Iterable, List, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import (\n ENTITY_EXTRACTION_PROMPT,\n ENTITY_SUMMARIZATION_PROMPT,\n)\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import BaseMessage, get_buffer_string\nlogger = logging.getLogger(__name__)\nclass BaseEntityStore(BaseModel, ABC):\n @abstractmethod\n def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n \"\"\"Get entity value from store.\"\"\"\n pass\n @abstractmethod\n def set(self, key: str, value: Optional[str]) -> None:\n \"\"\"Set entity value in store.\"\"\"\n pass\n @abstractmethod\n def delete(self, key: str) -> None:\n \"\"\"Delete entity value from store.\"\"\"\n pass\n @abstractmethod\n def exists(self, key: str) -> bool:\n \"\"\"Check if entity exists in store.\"\"\"\n pass\n @abstractmethod\n def clear(self) -> None:\n \"\"\"Delete all entities from store.\"\"\"\n pass\n[docs]class InMemoryEntityStore(BaseEntityStore):\n \"\"\"Basic in-memory entity store.\"\"\"\n store: Dict[str, Optional[str]] = {}\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n return self.store.get(key, default)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": "ab43006de074-1", "text": "return self.store.get(key, default)\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n self.store[key] = value\n[docs] def delete(self, key: str) -> None:\n del 
self.store[key]\n[docs] def exists(self, key: str) -> bool:\n return key in self.store\n[docs] def clear(self) -> None:\n return self.store.clear()\n[docs]class RedisEntityStore(BaseEntityStore):\n \"\"\"Redis-backed Entity store. Entities get a TTL of 1 day by default, and\n that TTL is extended by 3 days every time the entity is read back.\n \"\"\"\n redis_client: Any\n session_id: str = \"default\"\n key_prefix: str = \"memory_store\"\n ttl: Optional[int] = 60 * 60 * 24\n recall_ttl: Optional[int] = 60 * 60 * 24 * 3\n def __init__(\n self,\n session_id: str = \"default\",\n url: str = \"redis://localhost:6379/0\",\n key_prefix: str = \"memory_store\",\n ttl: Optional[int] = 60 * 60 * 24,\n recall_ttl: Optional[int] = 60 * 60 * 24 * 3,\n *args: Any,\n **kwargs: Any,\n ):\n try:\n import redis\n except ImportError:\n raise ImportError(\n \"Could not import redis python package. \"\n \"Please install it with `pip install redis`.\"\n )\n super().__init__(*args, **kwargs)\n try:\n self.redis_client = redis.Redis.from_url(url=url, decode_responses=True)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": "ab43006de074-2", "text": "self.redis_client = redis.Redis.from_url(url=url, decode_responses=True)\n except redis.exceptions.ConnectionError as error:\n logger.error(error)\n self.session_id = session_id\n self.key_prefix = key_prefix\n self.ttl = ttl\n self.recall_ttl = recall_ttl or ttl\n @property\n def full_key_prefix(self) -> str:\n return f\"{self.key_prefix}:{self.session_id}\"\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n res = (\n self.redis_client.getex(f\"{self.full_key_prefix}:{key}\", ex=self.recall_ttl)\n or default\n or \"\"\n )\n logger.debug(f\"REDIS MEM get '{self.full_key_prefix}:{key}': '{res}'\")\n return res\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n if not value:\n return self.delete(key)\n 
self.redis_client.set(f\"{self.full_key_prefix}:{key}\", value, ex=self.ttl)\n logger.debug(\n f\"REDIS MEM set '{self.full_key_prefix}:{key}': '{value}' EX {self.ttl}\"\n )\n[docs] def delete(self, key: str) -> None:\n self.redis_client.delete(f\"{self.full_key_prefix}:{key}\")\n[docs] def exists(self, key: str) -> bool:\n return self.redis_client.exists(f\"{self.full_key_prefix}:{key}\") == 1\n[docs] def clear(self) -> None:\n # iterate a list in batches of size batch_size\n def batched(iterable: Iterable[Any], batch_size: int) -> Iterable[Any]:\n iterator = iter(iterable)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": "ab43006de074-3", "text": "iterator = iter(iterable)\n while batch := list(islice(iterator, batch_size)):\n yield batch\n for keybatch in batched(\n self.redis_client.scan_iter(f\"{self.full_key_prefix}:*\"), 500\n ):\n self.redis_client.delete(*keybatch)\n[docs]class SQLiteEntityStore(BaseEntityStore):\n \"\"\"SQLite-backed Entity store\"\"\"\n session_id: str = \"default\"\n table_name: str = \"memory_store\"\n def __init__(\n self,\n session_id: str = \"default\",\n db_file: str = \"entities.db\",\n table_name: str = \"memory_store\",\n *args: Any,\n **kwargs: Any,\n ):\n try:\n import sqlite3\n except ImportError:\n raise ImportError(\n \"Could not import sqlite3 python package. 
\"\n \"Please install it with `pip install sqlite3`.\"\n )\n super().__init__(*args, **kwargs)\n self.conn = sqlite3.connect(db_file)\n self.session_id = session_id\n self.table_name = table_name\n self._create_table_if_not_exists()\n @property\n def full_table_name(self) -> str:\n return f\"{self.table_name}_{self.session_id}\"\n def _create_table_if_not_exists(self) -> None:\n create_table_query = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.full_table_name} (\n key TEXT PRIMARY KEY,\n value TEXT\n )\n \"\"\"\n with self.conn:\n self.conn.execute(create_table_query)\n[docs] def get(self, key: str, default: Optional[str] = None) -> Optional[str]:\n query = f\"\"\"\n SELECT value", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": "ab43006de074-4", "text": "query = f\"\"\"\n SELECT value\n FROM {self.full_table_name}\n WHERE key = ?\n \"\"\"\n cursor = self.conn.execute(query, (key,))\n result = cursor.fetchone()\n if result is not None:\n value = result[0]\n return value\n return default\n[docs] def set(self, key: str, value: Optional[str]) -> None:\n if not value:\n return self.delete(key)\n query = f\"\"\"\n INSERT OR REPLACE INTO {self.full_table_name} (key, value)\n VALUES (?, ?)\n \"\"\"\n with self.conn:\n self.conn.execute(query, (key, value))\n[docs] def delete(self, key: str) -> None:\n query = f\"\"\"\n DELETE FROM {self.full_table_name}\n WHERE key = ?\n \"\"\"\n with self.conn:\n self.conn.execute(query, (key,))\n[docs] def exists(self, key: str) -> bool:\n query = f\"\"\"\n SELECT 1\n FROM {self.full_table_name}\n WHERE key = ?\n LIMIT 1\n \"\"\"\n cursor = self.conn.execute(query, (key,))\n result = cursor.fetchone()\n return result is not None\n[docs] def clear(self) -> None:\n query = f\"\"\"\n DELETE FROM {self.full_table_name}\n \"\"\"\n with self.conn:\n self.conn.execute(query)\n[docs]class ConversationEntityMemory(BaseChatMemory):\n \"\"\"Entity extractor & summarizer memory.\n Extracts named 
entities from the recent chat history and generates summaries.\n    With a swappable entity store, persisting entities across conversations.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": "ab43006de074-5", "text": "With a swappable entity store, persisting entities across conversations.\n    Defaults to an in-memory entity store, and can be swapped out for a Redis,\n    SQLite, or other entity store.\n    \"\"\"\n    human_prefix: str = \"Human\"\n    ai_prefix: str = \"AI\"\n    llm: BaseLanguageModel\n    entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT\n    entity_summarization_prompt: BasePromptTemplate = ENTITY_SUMMARIZATION_PROMPT\n    # Cache of recently detected entity names, if any\n    # It is updated when load_memory_variables is called:\n    entity_cache: List[str] = []\n    # Number of recent message pairs to consider when updating entities:\n    k: int = 3\n    chat_history_key: str = \"history\"\n    # Store to manage entity-related data:\n    entity_store: BaseEntityStore = Field(default_factory=InMemoryEntityStore)\n    @property\n    def buffer(self) -> List[BaseMessage]:\n        \"\"\"Access chat memory messages.\"\"\"\n        return self.chat_memory.messages\n    @property\n    def memory_variables(self) -> List[str]:\n        \"\"\"Will always return list of memory variables.\n        :meta private:\n        \"\"\"\n        return [\"entities\", self.chat_history_key]\n[docs]    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Returns chat history and all generated entities with summaries if available,\n        and updates or clears the recent entity cache.\n        New entity names can be found when calling this method, before the entity\n        summaries are generated, so the entity cache values may be empty if no entity\n        descriptions are generated yet.\n        \"\"\"\n        # Create an LLMChain for predicting entity names from the recent chat history:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": 
"ab43006de074-6", "text": "# Create an LLMChain for predicting entity names from the recent chat history:\n chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n # Extract an arbitrary window of the last message pairs from\n # the chat history, where the hyperparameter k is the\n # number of message pairs:\n buffer_string = get_buffer_string(\n self.buffer[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n # Generates a comma-separated list of named entities,\n # e.g. \"Jane, White House, UFO\"\n # or \"NONE\" if no named entities are extracted:\n output = chain.predict(\n history=buffer_string,\n input=inputs[prompt_input_key],\n )\n # If no named entities are extracted, assigns an empty list.\n if output.strip() == \"NONE\":\n entities = []\n else:\n # Make a list of the extracted entities:\n entities = [w.strip() for w in output.split(\",\")]\n # Make a dictionary of entities with summary if exists:\n entity_summaries = {}\n for entity in entities:\n entity_summaries[entity] = self.entity_store.get(entity, \"\")\n # Replaces the entity name cache with the most recently discussed entities,\n # or if no entities were extracted, clears the cache:\n self.entity_cache = entities\n # Should we return as message objects or as a string?\n if self.return_messages:\n # Get last `k` pair of chat messages:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": "ab43006de074-7", "text": "if self.return_messages:\n # Get last `k` pair of chat messages:\n buffer: Any = self.buffer[-self.k * 2 :]\n else:\n # Reuse the string we made earlier:\n buffer = buffer_string\n return {\n self.chat_history_key: buffer,\n \"entities\": entity_summaries,\n }\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n 
\"\"\"\n Save context from this conversation history to the entity store.\n Generates a summary for each entity in the entity cache by prompting\n the model, and saves these summaries to the entity store.\n \"\"\"\n super().save_context(inputs, outputs)\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n # Extract an arbitrary window of the last message pairs from\n # the chat history, where the hyperparameter k is the\n # number of message pairs:\n buffer_string = get_buffer_string(\n self.buffer[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n input_data = inputs[prompt_input_key]\n # Create an LLMChain for predicting entity summarization from the context\n chain = LLMChain(llm=self.llm, prompt=self.entity_summarization_prompt)\n # Generate new summaries for entities and save them in the entity store\n for entity in self.entity_cache:\n # Get existing summary if it exists\n existing_summary = self.entity_store.get(entity, \"\")\n output = chain.predict(\n summary=existing_summary,\n entity=entity,\n history=buffer_string,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": "ab43006de074-8", "text": "summary=existing_summary,\n entity=entity,\n history=buffer_string,\n input=input_data,\n )\n # Save the updated summary to the entity store\n self.entity_store.set(entity, output.strip())\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n self.chat_memory.clear()\n self.entity_cache.clear()\n self.entity_store.clear()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/entity.html"} +{"id": "3486c9174b8a-0", "text": "Source code for langchain.memory.buffer\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.memory.chat_memory import BaseChatMemory, BaseMemory\nfrom langchain.memory.utils 
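The extraction chain above returns either the literal string `"NONE"` or a comma-separated list of entity names, and `save_context` then re-summarizes each cached entity and writes it back to the store. A standalone sketch of both steps, with a stub callable standing in for the `LLMChain` (the helper names are illustrative):

```python
from typing import Callable, Dict, List

def parse_entity_output(output: str) -> List[str]:
    """Parse the extraction chain's comma-separated output ("NONE" => no entities)."""
    if output.strip() == "NONE":
        return []
    return [w.strip() for w in output.split(",")]

def update_summaries(
    store: Dict[str, str],
    entities: List[str],
    summarize: Callable[[str, str], str],
) -> None:
    """Mirror of save_context: re-summarize each cached entity and persist it."""
    for entity in entities:
        existing = store.get(entity, "")
        store[entity] = summarize(entity, existing).strip()

store: Dict[str, str] = {}
entities = parse_entity_output("Jane, White House , UFO")
# Stub summarizer in place of the entity_summarization_prompt chain:
update_summaries(store, entities, lambda e, old: f"{e}: mentioned recently. ")
```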
import get_prompt_input_key\nfrom langchain.schema import get_buffer_string\n[docs]class ConversationBufferMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n memory_key: str = \"history\" #: :meta private:\n @property\n def buffer(self) -> Any:\n \"\"\"String buffer of memory.\"\"\"\n if self.return_messages:\n return self.chat_memory.messages\n else:\n return get_buffer_string(\n self.chat_memory.messages,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n return {self.memory_key: self.buffer}\n[docs]class ConversationStringBufferMemory(BaseMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n \"\"\"Prefix to use for AI generated responses.\"\"\"\n buffer: str = \"\"\n output_key: Optional[str] = None\n input_key: Optional[str] = None\n memory_key: str = \"history\" #: :meta private:\n @root_validator()\n def validate_chains(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/buffer.html"} +{"id": "3486c9174b8a-1", "text": "def validate_chains(cls, values: Dict) -> Dict:\n \"\"\"Validate that return messages is not True.\"\"\"\n if values.get(\"return_messages\", False):\n raise ValueError(\n \"return_messages must be False for ConversationStringBufferMemory\"\n )\n return values\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return history buffer.\"\"\"\n 
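`ConversationBufferMemory.buffer` either returns the message objects directly (`return_messages=True`) or flattens them through `get_buffer_string` into one `"Prefix: content"` line per message. A rough standalone sketch of that flattening, where `(role, content)` tuples stand in for LangChain's `BaseMessage` objects:

```python
from typing import List, Tuple

def render_buffer(
    messages: List[Tuple[str, str]],
    human_prefix: str = "Human",
    ai_prefix: str = "AI",
) -> str:
    """Approximate get_buffer_string: one prefixed line per message."""
    lines = []
    for role, content in messages:
        prefix = ai_prefix if role == "ai" else human_prefix
        lines.append(f"{prefix}: {content}")
    return "\n".join(lines)

rendered = render_buffer([("human", "Hi"), ("ai", "Hello!")])
```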
return {self.memory_key: self.buffer}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n if self.input_key is None:\n prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)\n else:\n prompt_input_key = self.input_key\n if self.output_key is None:\n if len(outputs) != 1:\n raise ValueError(f\"One output key expected, got {outputs.keys()}\")\n output_key = list(outputs.keys())[0]\n else:\n output_key = self.output_key\n human = f\"{self.human_prefix}: \" + inputs[prompt_input_key]\n ai = f\"{self.ai_prefix}: \" + outputs[output_key]\n self.buffer += \"\\n\" + \"\\n\".join([human, ai])\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n self.buffer = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/buffer.html"} +{"id": "4b7da26a10e5-0", "text": "Source code for langchain.memory.vectorstore\n\"\"\"Class for a VectorStore-backed memory object.\"\"\"\nfrom typing import Any, Dict, List, Optional, Union\nfrom pydantic import Field\nfrom langchain.memory.chat_memory import BaseMemory\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.schema import Document\nfrom langchain.vectorstores.base import VectorStoreRetriever\n[docs]class VectorStoreRetrieverMemory(BaseMemory):\n \"\"\"Class for a VectorStore-backed memory object.\"\"\"\n retriever: VectorStoreRetriever = Field(exclude=True)\n \"\"\"VectorStoreRetriever object to connect to.\"\"\"\n memory_key: str = \"history\" #: :meta private:\n \"\"\"Key name to locate the memories in the result of load_memory_variables.\"\"\"\n input_key: Optional[str] = None\n \"\"\"Key name to index the inputs to load_memory_variables.\"\"\"\n return_docs: bool = False\n \"\"\"Whether or not to return the result of querying the database directly.\"\"\"\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"The list of keys emitted from the 
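`ConversationStringBufferMemory.save_context` above resolves the input and output keys, prefixes both sides of the exchange, and appends them to a single growing string. A standalone sketch of that append step (note the leading newline, which the original also produces on the first turn):

```python
from typing import Dict

def append_turn(
    buffer: str,
    inputs: Dict[str, str],
    outputs: Dict[str, str],
    input_key: str,
    output_key: str,
    human_prefix: str = "Human",
    ai_prefix: str = "AI",
) -> str:
    """Append one prefixed human/AI exchange, as save_context does."""
    human = f"{human_prefix}: " + inputs[input_key]
    ai = f"{ai_prefix}: " + outputs[output_key]
    return buffer + "\n" + "\n".join([human, ai])

buf = ""
buf = append_turn(buf, {"input": "Hi there"}, {"response": "Hello!"}, "input", "response")
```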
load_memory_variables method.\"\"\"\n return [self.memory_key]\n def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:\n \"\"\"Get the input key for the prompt.\"\"\"\n if self.input_key is None:\n return get_prompt_input_key(inputs, self.memory_variables)\n return self.input_key\n[docs] def load_memory_variables(\n self, inputs: Dict[str, Any]\n ) -> Dict[str, Union[List[Document], str]]:\n \"\"\"Return history buffer.\"\"\"\n input_key = self._get_prompt_input_key(inputs)\n query = inputs[input_key]\n docs = self.retriever.get_relevant_documents(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/vectorstore.html"} +{"id": "4b7da26a10e5-1", "text": "docs = self.retriever.get_relevant_documents(query)\n result: Union[List[Document], str]\n if not self.return_docs:\n result = \"\\n\".join([doc.page_content for doc in docs])\n else:\n result = docs\n return {self.memory_key: result}\n def _form_documents(\n self, inputs: Dict[str, Any], outputs: Dict[str, str]\n ) -> List[Document]:\n \"\"\"Format context from this conversation to buffer.\"\"\"\n # Each document should only include the current turn, not the chat history\n filtered_inputs = {k: v for k, v in inputs.items() if k != self.memory_key}\n texts = [\n f\"{k}: {v}\"\n for k, v in list(filtered_inputs.items()) + list(outputs.items())\n ]\n page_content = \"\\n\".join(texts)\n return [Document(page_content=page_content)]\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n documents = self._form_documents(inputs, outputs)\n self.retriever.add_documents(documents)\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/vectorstore.html"} +{"id": "8520a5ee727d-0", "text": "Source code for langchain.memory.buffer_window\nfrom typing import Any, Dict, List\nfrom 
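`_form_documents` above collapses one conversation turn into a single `Document`, excluding the injected history key so each stored document contains only the current exchange. The string-building part in isolation (the key names here are illustrative):

```python
from typing import Any, Dict

def form_document_text(
    inputs: Dict[str, Any],
    outputs: Dict[str, str],
    memory_key: str = "history",
) -> str:
    """Collapse one turn into 'key: value' lines, excluding the history variable."""
    filtered_inputs = {k: v for k, v in inputs.items() if k != memory_key}
    texts = [
        f"{k}: {v}"
        for k, v in list(filtered_inputs.items()) + list(outputs.items())
    ]
    return "\n".join(texts)

page = form_document_text(
    {"input": "What is 2+2?", "history": "…prior turns…"},
    {"response": "4"},
)
```

The resulting string becomes the `page_content` that the retriever embeds and indexes.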
langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema import BaseMessage, get_buffer_string\n[docs]class ConversationBufferWindowMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n memory_key: str = \"history\" #: :meta private:\n k: int = 5\n @property\n def buffer(self) -> List[BaseMessage]:\n \"\"\"String buffer of memory.\"\"\"\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return history buffer.\"\"\"\n buffer: Any = self.buffer[-self.k * 2 :] if self.k > 0 else []\n if not self.return_messages:\n buffer = get_buffer_string(\n buffer,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n return {self.memory_key: buffer}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/buffer_window.html"} +{"id": "442d9c81ca8d-0", "text": "Source code for langchain.memory.summary\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Type\nfrom pydantic import BaseModel, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import SUMMARY_PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n SystemMessage,\n get_buffer_string,\n)\nclass SummarizerMixin(BaseModel):\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n llm: BaseLanguageModel\n prompt: BasePromptTemplate = SUMMARY_PROMPT\n summary_message_cls: Type[BaseMessage] = SystemMessage\n def predict_new_summary(\n self, messages: List[BaseMessage], existing_summary: str\n ) -> str:\n 
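`ConversationBufferWindowMemory` keeps the last `k` human/AI pairs by slicing the last `2*k` messages off the buffer, returning an empty window when `k == 0`. The slicing in isolation:

```python
from typing import List, Tuple

def window(messages: List[Tuple[str, str]], k: int) -> List[Tuple[str, str]]:
    """Keep only the last k human/AI pairs, i.e. the last 2*k messages."""
    return messages[-k * 2 :] if k > 0 else []

msgs = [
    ("human", "m1"), ("ai", "m2"),
    ("human", "m3"), ("ai", "m4"),
    ("human", "m5"), ("ai", "m6"),
]
recent = window(msgs, 2)
```

The `k > 0` guard matters: without it, `messages[-0:]` would return the whole list rather than an empty one.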
new_lines = get_buffer_string(\n messages,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n chain = LLMChain(llm=self.llm, prompt=self.prompt)\n return chain.predict(summary=existing_summary, new_lines=new_lines)\n[docs]class ConversationSummaryMemory(BaseChatMemory, SummarizerMixin):\n \"\"\"Conversation summarizer to memory.\"\"\"\n buffer: str = \"\"\n memory_key: str = \"history\" #: :meta private:\n[docs] @classmethod\n def from_messages(\n cls,\n llm: BaseLanguageModel,\n chat_memory: BaseChatMessageHistory,\n *,\n summarize_step: int = 2,\n **kwargs: Any,\n ) -> ConversationSummaryMemory:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary.html"} +{"id": "442d9c81ca8d-1", "text": "**kwargs: Any,\n ) -> ConversationSummaryMemory:\n obj = cls(llm=llm, chat_memory=chat_memory, **kwargs)\n for i in range(0, len(obj.chat_memory.messages), summarize_step):\n obj.buffer = obj.predict_new_summary(\n obj.chat_memory.messages[i : i + summarize_step], obj.buffer\n )\n return obj\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n if self.return_messages:\n buffer: Any = [self.summary_message_cls(content=self.buffer)]\n else:\n buffer = self.buffer\n return {self.memory_key: buffer}\n @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = {\"summary\", \"new_lines\"}\n if expected_keys != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. 
The prompt expects \"\n                f\"{prompt_variables}, but it should have {expected_keys}.\"\n            )\n        return values\n[docs]    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n        \"\"\"Save context from this conversation to buffer.\"\"\"\n        super().save_context(inputs, outputs)\n        self.buffer = self.predict_new_summary(\n            self.chat_memory.messages[-2:], self.buffer\n        )\n[docs]    def clear(self) -> None:\n        \"\"\"Clear memory contents.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary.html"} +{"id": "442d9c81ca8d-2", "text": "[docs]    def clear(self) -> None:\n        \"\"\"Clear memory contents.\"\"\"\n        super().clear()\n        self.buffer = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary.html"} +{"id": "c813748e0185-0", "text": "Source code for langchain.memory.combined\nimport warnings\nfrom typing import Any, Dict, List, Set\nfrom pydantic import validator\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema import BaseMemory\n[docs]class CombinedMemory(BaseMemory):\n    \"\"\"Class for combining multiple memories' data together.\"\"\"\n    memories: List[BaseMemory]\n    \"\"\"For tracking all the memories that should be accessed.\"\"\"\n    @validator(\"memories\")\n    def check_repeated_memory_variable(\n        cls, value: List[BaseMemory]\n    ) -> List[BaseMemory]:\n        all_variables: Set[str] = set()\n        for val in value:\n            overlap = all_variables.intersection(val.memory_variables)\n            if overlap:\n                raise ValueError(\n                    f\"The same variables {overlap} are found in multiple \"\n                    \"memory objects, which is not allowed by CombinedMemory.\"\n                )\n            all_variables |= set(val.memory_variables)\n        return value\n    @validator(\"memories\")\n    def check_input_key(cls, value: List[BaseMemory]) -> List[BaseMemory]:\n        \"\"\"Check that if memories are of type BaseChatMemory that input keys exist.\"\"\"\n        for val in value:\n            if isinstance(val, BaseChatMemory):\n                if val.input_key is None:\n                    
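`ConversationSummaryMemory.from_messages` above primes the summary buffer by folding pre-existing history into `predict_new_summary`, `summarize_step` messages at a time. A standalone sketch of that stepping loop, with a stub callable in place of the `LLMChain` (the stub just counts messages folded in, to make the fold observable):

```python
from typing import Callable, List

def build_summary(
    messages: List[str],
    summarize_step: int,
    summarize: Callable[[List[str], str], str],
) -> str:
    """Fold messages into a running summary, summarize_step messages at a time."""
    buffer = ""
    for i in range(0, len(messages), summarize_step):
        buffer = summarize(messages[i : i + summarize_step], buffer)
    return buffer

# Stub summarizer: the "summary" is just a running message count.
stub = lambda chunk, prev: f"{(int(prev) if prev else 0) + len(chunk)}"
total = build_summary(["a", "b", "c", "d", "e"], 2, stub)
```

Each iteration receives the previous summary plus the next chunk, so the cost is one LLM call per `summarize_step` messages.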
warnings.warn(\n                        \"When using CombinedMemory, \"\n                        \"input keys should be set so the input is known. \"\n                        f\" Was not set on {val}\"\n                    )\n        return value\n    @property\n    def memory_variables(self) -> List[str]:\n        \"\"\"All the memory variables that this instance provides,\n        collected from all the linked memories.\n        \"\"\"\n        memory_variables = []\n        for memory in self.memories:\n            memory_variables.extend(memory.memory_variables)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/combined.html"} +{"id": "c813748e0185-1", "text": "for memory in self.memories:\n            memory_variables.extend(memory.memory_variables)\n        return memory_variables\n[docs]    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n        \"\"\"Load all vars from sub-memories.\"\"\"\n        memory_data: Dict[str, Any] = {}\n        # Collect vars from all sub-memories\n        for memory in self.memories:\n            data = memory.load_memory_variables(inputs)\n            memory_data = {\n                **memory_data,\n                **data,\n            }\n        return memory_data\n[docs]    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n        \"\"\"Save context from this session for every memory.\"\"\"\n        # Save context for all sub-memories\n        for memory in self.memories:\n            memory.save_context(inputs, outputs)\n[docs]    def clear(self) -> None:\n        \"\"\"Clear context from this session for every memory.\"\"\"\n        for memory in self.memories:\n            memory.clear()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/combined.html"} +{"id": "9a881b3f6a7a-0", "text": "Source code for langchain.memory.readonly\nfrom typing import Any, Dict, List\nfrom langchain.schema import BaseMemory\n[docs]class ReadOnlySharedMemory(BaseMemory):\n    \"\"\"A memory wrapper that is read-only and cannot be changed.\"\"\"\n    memory: BaseMemory\n    @property\n    def memory_variables(self) -> List[str]:\n        \"\"\"Return memory variables.\"\"\"\n        return self.memory.memory_variables\n[docs]    def load_memory_variables(self, 
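`CombinedMemory`'s `check_repeated_memory_variable` validator rejects configurations where two sub-memories expose the same variable name, since the later one would silently overwrite the earlier in the merged dict. The check in isolation, with plain lists of variable names standing in for the memory objects:

```python
from typing import List, Set

def check_no_repeated_variables(memory_variable_lists: List[List[str]]) -> None:
    """Raise if any variable name appears in more than one sub-memory."""
    all_variables: Set[str] = set()
    for variables in memory_variable_lists:
        overlap = all_variables.intersection(variables)
        if overlap:
            raise ValueError(
                f"The same variables {overlap} are found in multiple "
                "memory objects, which is not allowed by CombinedMemory."
            )
        all_variables |= set(variables)

check_no_repeated_variables([["history"], ["entities"]])  # distinct names: ok
try:
    check_no_repeated_variables([["history"], ["history", "entities"]])
    clashed = False
except ValueError:
    clashed = True
```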
inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Load memory variables from memory.\"\"\"\n return self.memory.load_memory_variables(inputs)\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Nothing should be saved or changed\"\"\"\n pass\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear, got a memory like a vault.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/readonly.html"} +{"id": "71d05a5188a7-0", "text": "Source code for langchain.memory.simple\nfrom typing import Any, Dict, List\nfrom langchain.schema import BaseMemory\n[docs]class SimpleMemory(BaseMemory):\n \"\"\"Simple memory for storing context or other bits of information that shouldn't\n ever change between prompts.\n \"\"\"\n memories: Dict[str, Any] = dict()\n @property\n def memory_variables(self) -> List[str]:\n return list(self.memories.keys())\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n return self.memories\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Nothing should be saved or changed, my memory is set in stone.\"\"\"\n pass\n[docs] def clear(self) -> None:\n \"\"\"Nothing to clear, got a memory like a vault.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/simple.html"} +{"id": "380a3fecde3b-0", "text": "Source code for langchain.memory.token_buffer\nfrom typing import Any, Dict, List\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.schema import BaseMessage, get_buffer_string\n[docs]class ConversationTokenBufferMemory(BaseChatMemory):\n \"\"\"Buffer for storing conversation memory.\"\"\"\n human_prefix: str = \"Human\"\n ai_prefix: str = \"AI\"\n llm: BaseLanguageModel\n memory_key: str = \"history\"\n max_token_limit: int = 2000\n @property\n def buffer(self) -> 
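`ReadOnlySharedMemory` above is a pure delegation wrapper: reads pass through to the wrapped memory, while `save_context` and `clear` are deliberate no-ops. The same pattern in miniature, with a plain dict standing in for the wrapped memory object (class and method names here are illustrative):

```python
from typing import Dict

class ReadOnlyView:
    """Delegate reads to a shared backing store; swallow all writes."""

    def __init__(self, backing: Dict[str, str]) -> None:
        self._backing = backing

    def load(self) -> Dict[str, str]:
        return dict(self._backing)

    def save(self, key: str, value: str) -> None:
        pass  # writes are intentionally ignored

    def clear(self) -> None:
        pass  # nothing to clear, got a memory like a vault

shared = {"history": "Human: hi\nAI: hello"}
view = ReadOnlyView(shared)
view.save("history", "overwritten")
view.clear()
```

This is useful when several chains share one memory but only one of them should be allowed to update it.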
List[BaseMessage]:\n \"\"\"String buffer of memory.\"\"\"\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n buffer: Any = self.buffer\n if self.return_messages:\n final_buffer: Any = buffer\n else:\n final_buffer = get_buffer_string(\n buffer,\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n return {self.memory_key: final_buffer}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer. Pruned.\"\"\"\n super().save_context(inputs, outputs)\n # Prune buffer if it exceeds max token limit\n buffer = self.chat_memory.messages\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)\n if curr_buffer_length > self.max_token_limit:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/token_buffer.html"} +{"id": "380a3fecde3b-1", "text": "if curr_buffer_length > self.max_token_limit:\n pruned_memory = []\n while curr_buffer_length > self.max_token_limit:\n pruned_memory.append(buffer.pop(0))\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/token_buffer.html"} +{"id": "90e9147f6de0-0", "text": "Source code for langchain.memory.summary_buffer\nfrom typing import Any, Dict, List\nfrom pydantic import root_validator\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.summary import SummarizerMixin\nfrom langchain.schema import BaseMessage, get_buffer_string\n[docs]class ConversationSummaryBufferMemory(BaseChatMemory, SummarizerMixin):\n \"\"\"Buffer with summarizer for storing conversation memory.\"\"\"\n max_token_limit: int = 2000\n 
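The pruning in `ConversationTokenBufferMemory.save_context` pops the oldest messages until the buffer's token count drops back under `max_token_limit`. The same loop standalone, with a stub token counter (character counts stand in for `llm.get_num_tokens_from_messages`; the empty-buffer guard is an addition for safety):

```python
from typing import Callable, List

def prune_to_token_limit(
    messages: List[str],
    max_token_limit: int,
    count_tokens: Callable[[List[str]], int],
) -> List[str]:
    """Trim the buffer in place from the front; return the evicted messages."""
    pruned: List[str] = []
    while messages and count_tokens(messages) > max_token_limit:
        pruned.append(messages.pop(0))
    return pruned

# Stub counter: one "token" per character, summed over messages.
count = lambda msgs: sum(len(m) for m in msgs)
buf = ["aaaa", "bbbb", "cc"]
evicted = prune_to_token_limit(buf, 6, count)
```

In the token-buffer class the evicted messages are simply discarded; the summary-buffer variant below instead folds them into a running summary.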
moving_summary_buffer: str = \"\"\n memory_key: str = \"history\"\n @property\n def buffer(self) -> List[BaseMessage]:\n return self.chat_memory.messages\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Return history buffer.\"\"\"\n buffer = self.buffer\n if self.moving_summary_buffer != \"\":\n first_messages: List[BaseMessage] = [\n self.summary_message_cls(content=self.moving_summary_buffer)\n ]\n buffer = first_messages + buffer\n if self.return_messages:\n final_buffer: Any = buffer\n else:\n final_buffer = get_buffer_string(\n buffer, human_prefix=self.human_prefix, ai_prefix=self.ai_prefix\n )\n return {self.memory_key: final_buffer}\n @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = {\"summary\", \"new_lines\"}\n if expected_keys != set(prompt_variables):\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary_buffer.html"} +{"id": "90e9147f6de0-1", "text": "if expected_keys != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. 
The prompt expects \"\n f\"{prompt_variables}, but it should have {expected_keys}.\"\n )\n return values\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n super().save_context(inputs, outputs)\n self.prune()\n[docs] def prune(self) -> None:\n \"\"\"Prune buffer if it exceeds max token limit\"\"\"\n buffer = self.chat_memory.messages\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)\n if curr_buffer_length > self.max_token_limit:\n pruned_memory = []\n while curr_buffer_length > self.max_token_limit:\n pruned_memory.append(buffer.pop(0))\n curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)\n self.moving_summary_buffer = self.predict_new_summary(\n pruned_memory, self.moving_summary_buffer\n )\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n super().clear()\n self.moving_summary_buffer = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/summary_buffer.html"} +{"id": "d3067a1e5395-0", "text": "Source code for langchain.memory.kg\nfrom typing import Any, Dict, List, Type, Union\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs import NetworkxEntityGraph\nfrom langchain.graphs.networkx_graph import KnowledgeTriple, get_entities, parse_triples\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.memory.prompt import (\n ENTITY_EXTRACTION_PROMPT,\n KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT,\n)\nfrom langchain.memory.utils import get_prompt_input_key\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import (\n BaseMessage,\n SystemMessage,\n get_buffer_string,\n)\n[docs]class ConversationKGMemory(BaseChatMemory):\n \"\"\"Knowledge graph memory for storing conversation memory.\n Integrates with external knowledge graph to store and retrieve\n 
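`ConversationSummaryBufferMemory.prune` above evicts the oldest messages past the token limit and folds them into `moving_summary_buffer` via `predict_new_summary`, so old context is compressed rather than lost. A miniature version with stub callables in place of the token counter and the `LLMChain`:

```python
from typing import Callable, List

def prune_with_summary(
    messages: List[str],
    moving_summary: str,
    max_token_limit: int,
    count_tokens: Callable[[List[str]], int],
    summarize: Callable[[List[str], str], str],
) -> str:
    """Evict oldest messages past the limit; fold them into the moving summary."""
    if count_tokens(messages) <= max_token_limit:
        return moving_summary
    pruned: List[str] = []
    while count_tokens(messages) > max_token_limit:
        pruned.append(messages.pop(0))
    return summarize(pruned, moving_summary)

# Stubs: character-count "tokens" and concatenation as "summarization".
count = lambda msgs: sum(len(m) for m in msgs)
summarize = lambda chunk, prev: (prev + " " + " ".join(chunk)).strip()
buf = ["hello", "there", "hi"]
summary = prune_with_summary(buf, "", 8, count, summarize)
```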
information about knowledge triples in the conversation.\n    \"\"\"\n    k: int = 2\n    \"\"\"Number of previous utterances to include in the context.\"\"\"\n    human_prefix: str = \"Human\"\n    ai_prefix: str = \"AI\"\n    kg: NetworkxEntityGraph = Field(default_factory=NetworkxEntityGraph)\n    knowledge_extraction_prompt: BasePromptTemplate = KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT\n    entity_extraction_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT\n    llm: BaseLanguageModel\n    summary_message_cls: Type[BaseMessage] = SystemMessage\n    memory_key: str = \"history\"  #: :meta private:\n[docs]    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Return history buffer.\"\"\"\n        entities = self._get_current_entities(inputs)\n        summary_strings = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/kg.html"} +{"id": "d3067a1e5395-1", "text": "entities = self._get_current_entities(inputs)\n        summary_strings = []\n        for entity in entities:\n            knowledge = self.kg.get_entity_knowledge(entity)\n            if knowledge:\n                summary = f\"On {entity}: {'. 
'.join(knowledge)}.\"\n summary_strings.append(summary)\n context: Union[str, List]\n if not summary_strings:\n context = [] if self.return_messages else \"\"\n elif self.return_messages:\n context = [\n self.summary_message_cls(content=text) for text in summary_strings\n ]\n else:\n context = \"\\n\".join(summary_strings)\n return {self.memory_key: context}\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Will always return list of memory variables.\n :meta private:\n \"\"\"\n return [self.memory_key]\n def _get_prompt_input_key(self, inputs: Dict[str, Any]) -> str:\n \"\"\"Get the input key for the prompt.\"\"\"\n if self.input_key is None:\n return get_prompt_input_key(inputs, self.memory_variables)\n return self.input_key\n def _get_prompt_output_key(self, outputs: Dict[str, Any]) -> str:\n \"\"\"Get the output key for the prompt.\"\"\"\n if self.output_key is None:\n if len(outputs) != 1:\n raise ValueError(f\"One output key expected, got {outputs.keys()}\")\n return list(outputs.keys())[0]\n return self.output_key\n[docs] def get_current_entities(self, input_string: str) -> List[str]:\n chain = LLMChain(llm=self.llm, prompt=self.entity_extraction_prompt)\n buffer_string = get_buffer_string(\n self.chat_memory.messages[-self.k * 2 :],\n human_prefix=self.human_prefix,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/kg.html"} +{"id": "d3067a1e5395-2", "text": "human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n output = chain.predict(\n history=buffer_string,\n input=input_string,\n )\n return get_entities(output)\n def _get_current_entities(self, inputs: Dict[str, Any]) -> List[str]:\n \"\"\"Get the current entities in the conversation.\"\"\"\n prompt_input_key = self._get_prompt_input_key(inputs)\n return self.get_current_entities(inputs[prompt_input_key])\n[docs] def get_knowledge_triplets(self, input_string: str) -> List[KnowledgeTriple]:\n chain = LLMChain(llm=self.llm, 
prompt=self.knowledge_extraction_prompt)\n buffer_string = get_buffer_string(\n self.chat_memory.messages[-self.k * 2 :],\n human_prefix=self.human_prefix,\n ai_prefix=self.ai_prefix,\n )\n output = chain.predict(\n history=buffer_string,\n input=input_string,\n verbose=True,\n )\n knowledge = parse_triples(output)\n return knowledge\n def _get_and_update_kg(self, inputs: Dict[str, Any]) -> None:\n \"\"\"Get and update knowledge graph from the conversation history.\"\"\"\n prompt_input_key = self._get_prompt_input_key(inputs)\n knowledge = self.get_knowledge_triplets(inputs[prompt_input_key])\n for triple in knowledge:\n self.kg.add_triple(triple)\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:\n \"\"\"Save context from this conversation to buffer.\"\"\"\n super().save_context(inputs, outputs)\n self._get_and_update_kg(inputs)\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/kg.html"} +{"id": "d3067a1e5395-3", "text": "[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n super().clear()\n self.kg.clear()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/kg.html"} +{"id": "f92dd48dc452-0", "text": "Source code for langchain.memory.chat_message_histories.zep\nfrom __future__ import annotations\nimport logging\nfrom typing import TYPE_CHECKING, Dict, List, Optional\nfrom langchain.schema import (\n AIMessage,\n BaseChatMessageHistory,\n BaseMessage,\n HumanMessage,\n)\nif TYPE_CHECKING:\n from zep_python import Memory, MemorySearchResult, Message, NotFoundError\nlogger = logging.getLogger(__name__)\n[docs]class ZepChatMessageHistory(BaseChatMessageHistory):\n \"\"\"A ChatMessageHistory implementation that uses Zep as a backend.\n Recommended usage::\n # Set up Zep Chat History\n zep_chat_history = ZepChatMessageHistory(\n session_id=session_id,\n url=ZEP_API_URL,\n )\n # Use a 
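`ConversationKGMemory` stores extracted `(subject, predicate, object)` triples in a graph and, on load, renders each entity's knowledge as an `"On <entity>: fact. fact."` summary string. A standalone sketch where a dict of lists stands in for the `NetworkxEntityGraph` (helper names are illustrative):

```python
from typing import Dict, List

kg: Dict[str, List[str]] = {}

def add_triple(subject: str, predicate: str, obj: str) -> None:
    """Record one knowledge triple under its subject entity."""
    kg.setdefault(subject, []).append(f"{predicate} {obj}")

def entity_summary(entity: str) -> str:
    """Render an entity's facts in the same shape as load_memory_variables."""
    knowledge = kg.get(entity, [])
    if not knowledge:
        return ""
    return f"On {entity}: {'. '.join(knowledge)}."

add_triple("Jane", "works at", "Acme")
add_triple("Jane", "lives in", "Paris")
line = entity_summary("Jane")
```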
standard ConversationBufferMemory to encapsulate the Zep chat history\n memory = ConversationBufferMemory(\n memory_key=\"chat_history\", chat_memory=zep_chat_history\n )\n Zep provides long-term conversation storage for LLM apps. The server stores,\n summarizes, embeds, indexes, and enriches conversational AI chat\n histories, and exposes them via simple, low-latency APIs.\n For server installation instructions and more, see: https://getzep.github.io/\n This class is a thin wrapper around the zep-python package. Additional\n Zep functionality is exposed via the `zep_summary` and `zep_messages`\n properties.\n For more information on the zep-python package, see:\n https://github.com/getzep/zep-python\n \"\"\"\n def __init__(\n self,\n session_id: str,\n url: str = \"http://localhost:8000\",\n ) -> None:\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/zep.html"} +{"id": "f92dd48dc452-1", "text": ") -> None:\n try:\n from zep_python import ZepClient\n except ImportError:\n raise ValueError(\n \"Could not import zep-python package. 
\"\n \"Please install it with `pip install zep-python`.\"\n )\n self.zep_client = ZepClient(base_url=url)\n self.session_id = session_id\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve messages from Zep memory\"\"\"\n zep_memory: Optional[Memory] = self._get_memory()\n if not zep_memory:\n return []\n messages: List[BaseMessage] = []\n # Extract summary, if present, and messages\n if zep_memory.summary:\n if len(zep_memory.summary.content) > 0:\n messages.append(HumanMessage(content=zep_memory.summary.content))\n if zep_memory.messages:\n msg: Message\n for msg in zep_memory.messages:\n if msg.role == \"ai\":\n messages.append(AIMessage(content=msg.content))\n else:\n messages.append(HumanMessage(content=msg.content))\n return messages\n @property\n def zep_messages(self) -> List[Message]:\n \"\"\"Retrieve messages from Zep memory\"\"\"\n zep_memory: Optional[Memory] = self._get_memory()\n if not zep_memory:\n return []\n return zep_memory.messages\n @property\n def zep_summary(self) -> Optional[str]:\n \"\"\"Retrieve summary from Zep memory\"\"\"\n zep_memory: Optional[Memory] = self._get_memory()\n if not zep_memory or not zep_memory.summary:\n return None\n return zep_memory.summary.content", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/zep.html"} +{"id": "f92dd48dc452-2", "text": "return None\n return zep_memory.summary.content\n def _get_memory(self) -> Optional[Memory]:\n \"\"\"Retrieve memory from Zep\"\"\"\n from zep_python import NotFoundError\n try:\n zep_memory: Memory = self.zep_client.get_memory(self.session_id)\n except NotFoundError:\n logger.warning(\n f\"Session {self.session_id} not found in Zep. 
Returning None\"\n )\n return None\n return zep_memory\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the Zep memory history\"\"\"\n from zep_python import Memory, Message\n zep_message: Message\n if isinstance(message, HumanMessage):\n zep_message = Message(content=message.content, role=\"human\")\n else:\n zep_message = Message(content=message.content, role=\"ai\")\n zep_memory = Memory(messages=[zep_message])\n self.zep_client.add_memory(self.session_id, zep_memory)\n[docs] def search(\n self, query: str, metadata: Optional[Dict] = None, limit: Optional[int] = None\n ) -> List[MemorySearchResult]:\n \"\"\"Search Zep memory for messages matching the query\"\"\"\n from zep_python import MemorySearchPayload\n payload: MemorySearchPayload = MemorySearchPayload(\n text=query, metadata=metadata\n )\n return self.zep_client.search_memory(self.session_id, payload, limit=limit)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from Zep. Note that Zep is long-term storage for memory\n and this is not advised unless you have specific data retention requirements.\n \"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/zep.html"} +{"id": "f92dd48dc452-3", "text": "\"\"\"\n try:\n self.zep_client.delete_memory(self.session_id)\n except NotFoundError:\n logger.warning(\n f\"Session {self.session_id} not found in Zep. 
Skipping delete.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/zep.html"} +{"id": "90e5aed4021f-0", "text": "Source code for langchain.memory.chat_message_histories.cosmos_db\n\"\"\"Azure CosmosDB Memory History.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom types import TracebackType\nfrom typing import TYPE_CHECKING, Any, List, Optional, Type\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n messages_from_dict,\n messages_to_dict,\n)\nlogger = logging.getLogger(__name__)\nif TYPE_CHECKING:\n from azure.cosmos import ContainerProxy\n[docs]class CosmosDBChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat history backed by Azure CosmosDB.\"\"\"\n def __init__(\n self,\n cosmos_endpoint: str,\n cosmos_database: str,\n cosmos_container: str,\n session_id: str,\n user_id: str,\n credential: Any = None,\n connection_string: Optional[str] = None,\n ttl: Optional[int] = None,\n cosmos_client_kwargs: Optional[dict] = None,\n ):\n \"\"\"\n Initializes a new instance of the CosmosDBChatMessageHistory class.\n Make sure to call prepare_cosmos or use the context manager to make\n sure your database is ready.\n Either a credential or a connection string must be provided.\n :param cosmos_endpoint: The connection endpoint for the Azure Cosmos DB account.\n :param cosmos_database: The name of the database to use.\n :param cosmos_container: The name of the container to use.\n :param session_id: The session ID to use, can be overwritten while loading.\n :param user_id: The user ID to use, can be overwritten while loading.\n :param credential: The credential to use to authenticate to Azure Cosmos DB.\n :param connection_string: The connection string to use to authenticate.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cosmos_db.html"} +{"id": "90e5aed4021f-1", "text": ":param connection_string: The connection 
string to use to authenticate.\n :param ttl: The time to live (in seconds) to use for documents in the container.\n :param cosmos_client_kwargs: Additional kwargs to pass to the CosmosClient.\n \"\"\"\n self.cosmos_endpoint = cosmos_endpoint\n self.cosmos_database = cosmos_database\n self.cosmos_container = cosmos_container\n self.credential = credential\n self.conn_string = connection_string\n self.session_id = session_id\n self.user_id = user_id\n self.ttl = ttl\n self.messages: List[BaseMessage] = []\n try:\n from azure.cosmos import ( # pylint: disable=import-outside-toplevel # noqa: E501\n CosmosClient,\n )\n except ImportError as exc:\n raise ImportError(\n \"You must install the azure-cosmos package to use the CosmosDBChatMessageHistory.\" # noqa: E501\n ) from exc\n if self.credential:\n self._client = CosmosClient(\n url=self.cosmos_endpoint,\n credential=self.credential,\n **cosmos_client_kwargs or {},\n )\n elif self.conn_string:\n self._client = CosmosClient.from_connection_string(\n conn_str=self.conn_string,\n **cosmos_client_kwargs or {},\n )\n else:\n raise ValueError(\"Either a connection string or a credential must be set.\")\n self._container: Optional[ContainerProxy] = None\n[docs] def prepare_cosmos(self) -> None:\n \"\"\"Prepare the CosmosDB client.\n Use this function or the context manager to make sure your database is ready.\n \"\"\"\n try:\n from azure.cosmos import ( # pylint: disable=import-outside-toplevel # noqa: E501", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cosmos_db.html"} +{"id": "90e5aed4021f-2", "text": "PartitionKey,\n )\n except ImportError as exc:\n raise ImportError(\n \"You must install the azure-cosmos package to use the CosmosDBChatMessageHistory.\" # noqa: E501\n ) from exc\n database = self._client.create_database_if_not_exists(self.cosmos_database)\n self._container = database.create_container_if_not_exists(\n self.cosmos_container,\n 
partition_key=PartitionKey(\"/user_id\"),\n default_ttl=self.ttl,\n )\n self.load_messages()\n def __enter__(self) -> \"CosmosDBChatMessageHistory\":\n \"\"\"Context manager entry point.\"\"\"\n self._client.__enter__()\n self.prepare_cosmos()\n return self\n def __exit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n traceback: Optional[TracebackType],\n ) -> None:\n \"\"\"Context manager exit\"\"\"\n self.upsert_messages()\n self._client.__exit__(exc_type, exc_val, traceback)\n[docs] def load_messages(self) -> None:\n \"\"\"Retrieve the messages from Cosmos\"\"\"\n if not self._container:\n raise ValueError(\"Container not initialized\")\n try:\n from azure.cosmos.exceptions import ( # pylint: disable=import-outside-toplevel # noqa: E501\n CosmosHttpResponseError,\n )\n except ImportError as exc:\n raise ImportError(\n \"You must install the azure-cosmos package to use the CosmosDBChatMessageHistory.\" # noqa: E501\n ) from exc\n try:\n item = self._container.read_item(\n item=self.session_id, partition_key=self.user_id\n )\n except CosmosHttpResponseError:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cosmos_db.html"} +{"id": "90e5aed4021f-3", "text": ")\n except CosmosHttpResponseError:\n logger.info(\"no session found\")\n return\n if \"messages\" in item and len(item[\"messages\"]) > 0:\n self.messages = messages_from_dict(item[\"messages\"])\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Add a self-created message to the store\"\"\"\n self.messages.append(message)\n self.upsert_messages()\n[docs] def upsert_messages(self) -> None:\n \"\"\"Update the cosmosdb item.\"\"\"\n if not self._container:\n raise ValueError(\"Container not initialized\")\n self._container.upsert_item(\n body={\n \"id\": self.session_id,\n \"user_id\": self.user_id,\n \"messages\": messages_to_dict(self.messages),\n }\n )\n[docs] def clear(self) -> None:\n 
\"\"\"Clear session memory from this memory and cosmos.\"\"\"\n self.messages = []\n if self._container:\n self._container.delete_item(\n item=self.session_id, partition_key=self.user_id\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cosmos_db.html"} +{"id": "e39f0d4c97c6-0", "text": "Source code for langchain.memory.chat_message_histories.in_memory\nfrom typing import List\nfrom pydantic import BaseModel\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n)\n[docs]class ChatMessageHistory(BaseChatMessageHistory, BaseModel):\n messages: List[BaseMessage] = []\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Add a self-created message to the store\"\"\"\n self.messages.append(message)\n[docs] def clear(self) -> None:\n self.messages = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/in_memory.html"} +{"id": "26aed2d5d85e-0", "text": "Source code for langchain.memory.chat_message_histories.cassandra\nimport json\nimport logging\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\nDEFAULT_KEYSPACE_NAME = \"chat_history\"\nDEFAULT_TABLE_NAME = \"message_store\"\nDEFAULT_USERNAME = \"cassandra\"\nDEFAULT_PASSWORD = \"cassandra\"\nDEFAULT_PORT = 9042\n[docs]class CassandraChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history that stores history in Cassandra.\n Args:\n contact_points: list of ips to connect to Cassandra cluster\n session_id: arbitrary key that is used to store the messages\n of a single chat session.\n port: port to connect to Cassandra cluster\n username: username to connect to Cassandra cluster\n password: password to connect to Cassandra cluster\n keyspace_name: name of the keyspace to use\n table_name: name of the table to use\n \"\"\"\n def 
__init__(\n self,\n contact_points: List[str],\n session_id: str,\n port: int = DEFAULT_PORT,\n username: str = DEFAULT_USERNAME,\n password: str = DEFAULT_PASSWORD,\n keyspace_name: str = DEFAULT_KEYSPACE_NAME,\n table_name: str = DEFAULT_TABLE_NAME,\n ):\n self.contact_points = contact_points\n self.session_id = session_id\n self.port = port\n self.username = username\n self.password = password\n self.keyspace_name = keyspace_name\n self.table_name = table_name\n try:\n from cassandra import (\n AuthenticationFailed,\n OperationTimedOut,\n UnresolvableContactPoints,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"} +{"id": "26aed2d5d85e-1", "text": "OperationTimedOut,\n UnresolvableContactPoints,\n )\n from cassandra.cluster import Cluster, PlainTextAuthProvider\n except ImportError:\n raise ValueError(\n \"Could not import cassandra-driver python package. \"\n \"Please install it with `pip install cassandra-driver`.\"\n )\n self.cluster: Cluster = Cluster(\n contact_points,\n port=port,\n auth_provider=PlainTextAuthProvider(\n username=self.username, password=self.password\n ),\n )\n try:\n self.session = self.cluster.connect()\n except (\n AuthenticationFailed,\n UnresolvableContactPoints,\n OperationTimedOut,\n ) as error:\n logger.error(\n \"Unable to establish connection with \\\n cassandra chat message history database\"\n )\n raise error\n self._prepare_cassandra()\n def _prepare_cassandra(self) -> None:\n \"\"\"Create the keyspace and table if they don't exist yet\"\"\"\n from cassandra import OperationTimedOut, Unavailable\n try:\n self.session.execute(\n f\"\"\"CREATE KEYSPACE IF NOT EXISTS \n {self.keyspace_name} WITH REPLICATION = \n {{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }};\"\"\"\n )\n except (OperationTimedOut, Unavailable) as error:\n logger.error(\n f\"Unable to create cassandra \\\n chat message history keyspace: {self.keyspace_name}.\"\n )\n raise 
error\n self.session.set_keyspace(self.keyspace_name)\n try:\n self.session.execute(\n f\"\"\"CREATE TABLE IF NOT EXISTS \n {self.table_name} (id UUID, session_id varchar,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"} +{"id": "26aed2d5d85e-2", "text": "{self.table_name} (id UUID, session_id varchar, \n history text, PRIMARY KEY ((session_id), id) );\"\"\"\n )\n except (OperationTimedOut, Unavailable) as error:\n logger.error(\n f\"Unable to create cassandra \\\n chat message history table: {self.table_name}\"\n )\n raise error\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from Cassandra\"\"\"\n from cassandra import ReadFailure, ReadTimeout, Unavailable\n try:\n rows = self.session.execute(\n f\"\"\"SELECT * FROM {self.table_name}\n WHERE session_id = '{self.session_id}' ;\"\"\"\n )\n except (Unavailable, ReadTimeout, ReadFailure) as error:\n logger.error(\"Unable to retrieve chat history messages from cassandra\")\n raise error\n if rows:\n items = [json.loads(row.history) for row in rows]\n else:\n items = []\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in Cassandra\"\"\"\n import uuid\n from cassandra import Unavailable, WriteFailure, WriteTimeout\n try:\n self.session.execute(\n f\"\"\"INSERT INTO {self.table_name}\n (id, session_id, history) VALUES (%s, %s, %s);\"\"\",\n (uuid.uuid4(), self.session_id, json.dumps(_message_to_dict(message))),\n )\n except (Unavailable, WriteTimeout, WriteFailure) as error:\n logger.error(\"Unable to write chat history messages to cassandra\")\n raise error", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"} +{"id": "26aed2d5d85e-3", "text": "logger.error(\"Unable to write chat history messages to cassandra\")\n raise 
error\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from Cassandra\"\"\"\n from cassandra import OperationTimedOut, Unavailable\n try:\n self.session.execute(\n f\"DELETE FROM {self.table_name} WHERE session_id = '{self.session_id}';\"\n )\n except (Unavailable, OperationTimedOut) as error:\n logger.error(\"Unable to clear chat history messages from cassandra\")\n raise error\n def __del__(self) -> None:\n if self.session:\n self.session.shutdown()\n if self.cluster:\n self.cluster.shutdown()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/cassandra.html"} +{"id": "f03e723a0847-0", "text": "Source code for langchain.memory.chat_message_histories.momento\nfrom __future__ import annotations\nimport json\nfrom datetime import timedelta\nfrom typing import TYPE_CHECKING, Any, Optional\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n import momento\ndef _ensure_cache_exists(cache_client: momento.CacheClient, cache_name: str) -> None:\n \"\"\"Create cache if it doesn't exist.\n Raises:\n SdkException: Momento service or network error\n Exception: Unexpected response\n \"\"\"\n from momento.responses import CreateCache\n create_cache_response = cache_client.create_cache(cache_name)\n if isinstance(create_cache_response, CreateCache.Success) or isinstance(\n create_cache_response, CreateCache.CacheAlreadyExists\n ):\n return None\n elif isinstance(create_cache_response, CreateCache.Error):\n raise create_cache_response.inner_exception\n else:\n raise Exception(f\"Unexpected response cache creation: {create_cache_response}\")\n[docs]class MomentoChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history cache that uses Momento as a backend.\n See https://gomomento.com/\"\"\"\n def __init__(\n self,\n session_id: str,\n cache_client: momento.CacheClient,\n 
cache_name: str,\n *,\n key_prefix: str = \"message_store:\",\n ttl: Optional[timedelta] = None,\n ensure_cache_exists: bool = True,\n ):\n \"\"\"Instantiate a chat message history cache that uses Momento as a backend.\n Note: to instantiate the cache client passed to MomentoChatMessageHistory,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"} +{"id": "f03e723a0847-1", "text": "Note: to instantiate the cache client passed to MomentoChatMessageHistory,\n you must have a Momento account at https://gomomento.com/.\n Args:\n session_id (str): The session ID to use for this chat session.\n cache_client (CacheClient): The Momento cache client.\n cache_name (str): The name of the cache to use to store the messages.\n key_prefix (str, optional): The prefix to apply to the cache key.\n Defaults to \"message_store:\".\n ttl (Optional[timedelta], optional): The TTL to use for the messages.\n Defaults to None, ie the default TTL of the cache will be used.\n ensure_cache_exists (bool, optional): Create the cache if it doesn't exist.\n Defaults to True.\n Raises:\n ImportError: Momento python package is not installed.\n TypeError: cache_client is not of type momento.CacheClientObject\n \"\"\"\n try:\n from momento import CacheClient\n from momento.requests import CollectionTtl\n except ImportError:\n raise ImportError(\n \"Could not import momento python package. 
\"\n \"Please install it with `pip install momento`.\"\n )\n if not isinstance(cache_client, CacheClient):\n raise TypeError(\"cache_client must be a momento.CacheClient object.\")\n if ensure_cache_exists:\n _ensure_cache_exists(cache_client, cache_name)\n self.key = key_prefix + session_id\n self.cache_client = cache_client\n self.cache_name = cache_name\n if ttl is not None:\n self.ttl = CollectionTtl.of(ttl)\n else:\n self.ttl = CollectionTtl.from_cache_ttl()\n[docs] @classmethod\n def from_client_params(\n cls,\n session_id: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"} +{"id": "f03e723a0847-2", "text": "def from_client_params(\n cls,\n session_id: str,\n cache_name: str,\n ttl: timedelta,\n *,\n configuration: Optional[momento.config.Configuration] = None,\n auth_token: Optional[str] = None,\n **kwargs: Any,\n ) -> MomentoChatMessageHistory:\n \"\"\"Construct cache from CacheClient parameters.\"\"\"\n try:\n from momento import CacheClient, Configurations, CredentialProvider\n except ImportError:\n raise ImportError(\n \"Could not import momento python package. 
\"\n \"Please install it with `pip install momento`.\"\n )\n if configuration is None:\n configuration = Configurations.Laptop.v1()\n auth_token = auth_token or get_from_env(\"auth_token\", \"MOMENTO_AUTH_TOKEN\")\n credentials = CredentialProvider.from_string(auth_token)\n cache_client = CacheClient(configuration, credentials, default_ttl=ttl)\n return cls(session_id, cache_client, cache_name, ttl=ttl, **kwargs)\n @property\n def messages(self) -> list[BaseMessage]: # type: ignore[override]\n \"\"\"Retrieve the messages from Momento.\n Raises:\n SdkException: Momento service or network error\n Exception: Unexpected response\n Returns:\n list[BaseMessage]: List of cached messages\n \"\"\"\n from momento.responses import CacheListFetch\n fetch_response = self.cache_client.list_fetch(self.cache_name, self.key)\n if isinstance(fetch_response, CacheListFetch.Hit):\n items = [json.loads(m) for m in fetch_response.value_list_string]\n return messages_from_dict(items)\n elif isinstance(fetch_response, CacheListFetch.Miss):\n return []\n elif isinstance(fetch_response, CacheListFetch.Error):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"} +{"id": "f03e723a0847-3", "text": "return []\n elif isinstance(fetch_response, CacheListFetch.Error):\n raise fetch_response.inner_exception\n else:\n raise Exception(f\"Unexpected response: {fetch_response}\")\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Store a message in the cache.\n Args:\n message (BaseMessage): The message object to store.\n Raises:\n SdkException: Momento service or network error.\n Exception: Unexpected response.\n \"\"\"\n from momento.responses import CacheListPushBack\n item = json.dumps(_message_to_dict(message))\n push_response = self.cache_client.list_push_back(\n self.cache_name, self.key, item, ttl=self.ttl\n )\n if isinstance(push_response, CacheListPushBack.Success):\n return None\n elif 
isinstance(push_response, CacheListPushBack.Error):\n raise push_response.inner_exception\n else:\n raise Exception(f\"Unexpected response: {push_response}\")\n[docs] def clear(self) -> None:\n \"\"\"Remove the session's messages from the cache.\n Raises:\n SdkException: Momento service or network error.\n Exception: Unexpected response.\n \"\"\"\n from momento.responses import CacheDelete\n delete_response = self.cache_client.delete(self.cache_name, self.key)\n if isinstance(delete_response, CacheDelete.Success):\n return None\n elif isinstance(delete_response, CacheDelete.Error):\n raise delete_response.inner_exception\n else:\n raise Exception(f\"Unexpected response: {delete_response}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/momento.html"} +{"id": "68ddf579b500-0", "text": "Source code for langchain.memory.chat_message_histories.sql\nimport json\nimport logging\nfrom typing import List\nfrom sqlalchemy import Column, Integer, Text, create_engine\ntry:\n from sqlalchemy.orm import declarative_base\nexcept ImportError:\n from sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import sessionmaker\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\ndef create_message_model(table_name, DynamicBase): # type: ignore\n \"\"\"\n Create a message model for a given table name.\n Args:\n table_name: The name of the table to use.\n DynamicBase: The base class to use for the model.\n Returns:\n The model class.\n \"\"\"\n # Model declared inside a function to have a dynamic table name\n class Message(DynamicBase):\n __tablename__ = table_name\n id = Column(Integer, primary_key=True)\n session_id = Column(Text)\n message = Column(Text)\n return Message\n[docs]class SQLChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history stored in an SQL database.\"\"\"\n def 
__init__(\n self,\n session_id: str,\n connection_string: str,\n table_name: str = \"message_store\",\n ):\n self.table_name = table_name\n self.connection_string = connection_string\n self.engine = create_engine(connection_string, echo=False)\n self._create_table_if_not_exists()\n self.session_id = session_id\n self.Session = sessionmaker(self.engine)\n def _create_table_if_not_exists(self) -> None:\n DynamicBase = declarative_base()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/sql.html"} +{"id": "68ddf579b500-1", "text": "DynamicBase = declarative_base()\n self.Message = create_message_model(self.table_name, DynamicBase)\n # Create all does the check for us in case the table exists.\n DynamicBase.metadata.create_all(self.engine)\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve all messages from db\"\"\"\n with self.Session() as session:\n result = session.query(self.Message).where(\n self.Message.session_id == self.session_id\n )\n items = [json.loads(record.message) for record in result]\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in db\"\"\"\n with self.Session() as session:\n jsonstr = json.dumps(_message_to_dict(message))\n session.add(self.Message(session_id=self.session_id, message=jsonstr))\n session.commit()\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from db\"\"\"\n with self.Session() as session:\n session.query(self.Message).filter(\n self.Message.session_id == self.session_id\n ).delete()\n session.commit()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/sql.html"} +{"id": "f2dcfa9df0e4-0", "text": "Source code for langchain.memory.chat_message_histories.file\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import List\nfrom langchain.schema import 
(\n BaseChatMessageHistory,\n BaseMessage,\n messages_from_dict,\n messages_to_dict,\n)\nlogger = logging.getLogger(__name__)\n[docs]class FileChatMessageHistory(BaseChatMessageHistory):\n \"\"\"\n Chat message history that stores history in a local file.\n Args:\n file_path: path of the local file to store the messages.\n \"\"\"\n def __init__(self, file_path: str):\n self.file_path = Path(file_path)\n if not self.file_path.exists():\n self.file_path.touch()\n self.file_path.write_text(json.dumps([]))\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from the local file\"\"\"\n items = json.loads(self.file_path.read_text())\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in the local file\"\"\"\n messages = messages_to_dict(self.messages)\n messages.append(messages_to_dict([message])[0])\n self.file_path.write_text(json.dumps(messages))\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from the local file\"\"\"\n self.file_path.write_text(json.dumps([]))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/file.html"} +{"id": "da4a8150ec28-0", "text": "Source code for langchain.memory.chat_message_histories.dynamodb\nimport logging\nfrom typing import List, Optional\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n messages_to_dict,\n)\nlogger = logging.getLogger(__name__)\n[docs]class DynamoDBChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history that stores history in AWS DynamoDB.\n This class expects that a DynamoDB table with name `table_name`\n and a partition Key of `SessionId` is present.\n Args:\n table_name: name of the DynamoDB table\n session_id: arbitrary key that is used to store the messages\n of a single chat session.\n endpoint_url: URL of the 
AWS endpoint to connect to. This argument\n is optional and useful for test purposes, like using Localstack.\n If you plan to use AWS cloud service, you normally don't have to\n worry about setting the endpoint_url.\n \"\"\"\n def __init__(\n self, table_name: str, session_id: str, endpoint_url: Optional[str] = None\n ):\n import boto3\n if endpoint_url:\n client = boto3.resource(\"dynamodb\", endpoint_url=endpoint_url)\n else:\n client = boto3.resource(\"dynamodb\")\n self.table = client.Table(table_name)\n self.session_id = session_id\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from DynamoDB\"\"\"\n from botocore.exceptions import ClientError\n response = None\n try:\n response = self.table.get_item(Key={\"SessionId\": self.session_id})\n except ClientError as error:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/dynamodb.html"} +{"id": "da4a8150ec28-1", "text": "except ClientError as error:\n if error.response[\"Error\"][\"Code\"] == \"ResourceNotFoundException\":\n logger.warning(\"No record found with session id: %s\", self.session_id)\n else:\n logger.error(error)\n if response and \"Item\" in response:\n items = response[\"Item\"][\"History\"]\n else:\n items = []\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in DynamoDB\"\"\"\n from botocore.exceptions import ClientError\n messages = messages_to_dict(self.messages)\n _message = _message_to_dict(message)\n messages.append(_message)\n try:\n self.table.put_item(\n Item={\"SessionId\": self.session_id, \"History\": messages}\n )\n except ClientError as err:\n logger.error(err)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from DynamoDB\"\"\"\n from botocore.exceptions import ClientError\n try:\n self.table.delete_item(Key={\"SessionId\": self.session_id})\n except 
ClientError as err:\n logger.error(err)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/dynamodb.html"} +{"id": "ad40ad93b269-0", "text": "Source code for langchain.memory.chat_message_histories.mongodb\nimport json\nimport logging\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\nDEFAULT_DBNAME = \"chat_history\"\nDEFAULT_COLLECTION_NAME = \"message_store\"\n[docs]class MongoDBChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history that stores history in MongoDB.\n Args:\n connection_string: connection string to connect to MongoDB\n session_id: arbitrary key that is used to store the messages\n of a single chat session.\n database_name: name of the database to use\n collection_name: name of the collection to use\n \"\"\"\n def __init__(\n self,\n connection_string: str,\n session_id: str,\n database_name: str = DEFAULT_DBNAME,\n collection_name: str = DEFAULT_COLLECTION_NAME,\n ):\n from pymongo import MongoClient, errors\n self.connection_string = connection_string\n self.session_id = session_id\n self.database_name = database_name\n self.collection_name = collection_name\n try:\n self.client: MongoClient = MongoClient(connection_string)\n except errors.ConnectionFailure as error:\n logger.error(error)\n self.db = self.client[database_name]\n self.collection = self.db[collection_name]\n self.collection.create_index(\"SessionId\")\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from MongoDB\"\"\"\n from pymongo import errors\n try:\n cursor = self.collection.find({\"SessionId\": self.session_id})\n except errors.OperationFailure as error:\n logger.error(error)\n if cursor:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/mongodb.html"} +{"id": 
"ad40ad93b269-1", "text": "except errors.OperationFailure as error:\n logger.error(error)\n if cursor:\n items = [json.loads(document[\"History\"]) for document in cursor]\n else:\n items = []\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in MongoDB\"\"\"\n from pymongo import errors\n try:\n self.collection.insert_one(\n {\n \"SessionId\": self.session_id,\n \"History\": json.dumps(_message_to_dict(message)),\n }\n )\n except errors.WriteError as err:\n logger.error(err)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from MongoDB\"\"\"\n from pymongo import errors\n try:\n self.collection.delete_many({\"SessionId\": self.session_id})\n except errors.WriteError as err:\n logger.error(err)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/mongodb.html"} +{"id": "7662a8aa06af-0", "text": "Source code for langchain.memory.chat_message_histories.redis\nimport json\nimport logging\nfrom typing import List, Optional\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\n[docs]class RedisChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history stored in a Redis database.\"\"\"\n def __init__(\n self,\n session_id: str,\n url: str = \"redis://localhost:6379/0\",\n key_prefix: str = \"message_store:\",\n ttl: Optional[int] = None,\n ):\n try:\n import redis\n except ImportError:\n raise ImportError(\n \"Could not import redis python package. 
\"\n \"Please install it with `pip install redis`.\"\n )\n try:\n self.redis_client = redis.Redis.from_url(url=url)\n except redis.exceptions.ConnectionError as error:\n logger.error(error)\n self.session_id = session_id\n self.key_prefix = key_prefix\n self.ttl = ttl\n @property\n def key(self) -> str:\n \"\"\"Construct the record key to use\"\"\"\n return self.key_prefix + self.session_id\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from Redis\"\"\"\n _items = self.redis_client.lrange(self.key, 0, -1)\n items = [json.loads(m.decode(\"utf-8\")) for m in _items[::-1]]\n messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in Redis\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/redis.html"} +{"id": "7662a8aa06af-1", "text": "\"\"\"Append the message to the record in Redis\"\"\"\n self.redis_client.lpush(self.key, json.dumps(_message_to_dict(message)))\n if self.ttl:\n self.redis_client.expire(self.key, self.ttl)\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from Redis\"\"\"\n self.redis_client.delete(self.key)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/redis.html"} +{"id": "aac3c1be102a-0", "text": "Source code for langchain.memory.chat_message_histories.postgres\nimport json\nimport logging\nfrom typing import List\nfrom langchain.schema import (\n BaseChatMessageHistory,\n BaseMessage,\n _message_to_dict,\n messages_from_dict,\n)\nlogger = logging.getLogger(__name__)\nDEFAULT_CONNECTION_STRING = \"postgresql://postgres:mypassword@localhost/chat_history\"\n[docs]class PostgresChatMessageHistory(BaseChatMessageHistory):\n \"\"\"Chat message history stored in a Postgres database.\"\"\"\n def __init__(\n self,\n session_id: str,\n connection_string: str = 
DEFAULT_CONNECTION_STRING,\n table_name: str = \"message_store\",\n ):\n import psycopg\n from psycopg.rows import dict_row\n try:\n self.connection = psycopg.connect(connection_string)\n self.cursor = self.connection.cursor(row_factory=dict_row)\n except psycopg.OperationalError as error:\n logger.error(error)\n self.session_id = session_id\n self.table_name = table_name\n self._create_table_if_not_exists()\n def _create_table_if_not_exists(self) -> None:\n create_table_query = f\"\"\"CREATE TABLE IF NOT EXISTS {self.table_name} (\n id SERIAL PRIMARY KEY,\n session_id TEXT NOT NULL,\n message JSONB NOT NULL\n );\"\"\"\n self.cursor.execute(create_table_query)\n self.connection.commit()\n @property\n def messages(self) -> List[BaseMessage]: # type: ignore\n \"\"\"Retrieve the messages from PostgreSQL\"\"\"\n query = f\"SELECT message FROM {self.table_name} WHERE session_id = %s;\"\n self.cursor.execute(query, (self.session_id,))\n items = [record[\"message\"] for record in self.cursor.fetchall()]\n messages = messages_from_dict(items)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/postgres.html"} +{"id": "aac3c1be102a-1", "text": "messages = messages_from_dict(items)\n return messages\n[docs] def add_message(self, message: BaseMessage) -> None:\n \"\"\"Append the message to the record in PostgreSQL\"\"\"\n from psycopg import sql\n query = sql.SQL(\"INSERT INTO {} (session_id, message) VALUES (%s, %s);\").format(\n sql.Identifier(self.table_name)\n )\n self.cursor.execute(\n query, (self.session_id, json.dumps(_message_to_dict(message)))\n )\n self.connection.commit()\n[docs] def clear(self) -> None:\n \"\"\"Clear session memory from PostgreSQL\"\"\"\n query = f\"DELETE FROM {self.table_name} WHERE session_id = %s;\"\n self.cursor.execute(query, (self.session_id,))\n self.connection.commit()\n def __del__(self) -> None:\n if self.cursor:\n self.cursor.close()\n if self.connection:\n self.connection.close()", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/memory/chat_message_histories/postgres.html"} +{"id": "a44ea63966c1-0", "text": "Source code for langchain.agents.loading\n\"\"\"Functionality for loading agents.\"\"\"\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Union\nimport yaml\nfrom langchain.agents.agent import BaseMultiActionAgent, BaseSingleActionAgent\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.types import AGENT_TO_CLASS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.loading import load_chain, load_chain_from_config\nfrom langchain.utilities.loading import try_load_from_hub\nlogger = logging.getLogger(__file__)\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/agents/\"\ndef _load_agent_from_tools(\n config: dict, llm: BaseLanguageModel, tools: List[Tool], **kwargs: Any\n) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:\n config_type = config.pop(\"_type\")\n if config_type not in AGENT_TO_CLASS:\n raise ValueError(f\"Loading {config_type} agent not supported\")\n agent_cls = AGENT_TO_CLASS[config_type]\n combined_config = {**config, **kwargs}\n return agent_cls.from_llm_and_tools(llm, tools, **combined_config)\ndef load_agent_from_config(\n config: dict,\n llm: Optional[BaseLanguageModel] = None,\n tools: Optional[List[Tool]] = None,\n **kwargs: Any,\n) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:\n \"\"\"Load agent from Config Dict.\"\"\"\n if \"_type\" not in config:\n raise ValueError(\"Must specify an agent Type in config\")\n load_from_tools = config.pop(\"load_from_llm_and_tools\", False)\n if load_from_tools:\n if llm is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/loading.html"} +{"id": "a44ea63966c1-1", "text": "if load_from_tools:\n if llm is None:\n raise ValueError(\n \"If `load_from_llm_and_tools` is set to True, \"\n \"then LLM 
must be provided\"\n )\n if tools is None:\n raise ValueError(\n \"If `load_from_llm_and_tools` is set to True, \"\n \"then tools must be provided\"\n )\n return _load_agent_from_tools(config, llm, tools, **kwargs)\n config_type = config.pop(\"_type\")\n if config_type not in AGENT_TO_CLASS:\n raise ValueError(f\"Loading {config_type} agent not supported\")\n agent_cls = AGENT_TO_CLASS[config_type]\n if \"llm_chain\" in config:\n config[\"llm_chain\"] = load_chain_from_config(config.pop(\"llm_chain\"))\n elif \"llm_chain_path\" in config:\n config[\"llm_chain\"] = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` and `llm_chain_path` should be specified.\")\n if \"output_parser\" in config:\n logger.warning(\n \"Currently loading output parsers on agent is not supported, \"\n \"will just use the default one.\"\n )\n del config[\"output_parser\"]\n combined_config = {**config, **kwargs}\n return agent_cls(**combined_config) # type: ignore\n[docs]def load_agent(\n path: Union[str, Path], **kwargs: Any\n) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:\n \"\"\"Unified method for loading a agent from LangChainHub or local fs.\"\"\"\n if hub_result := try_load_from_hub(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/loading.html"} +{"id": "a44ea63966c1-2", "text": "if hub_result := try_load_from_hub(\n path, _load_agent_from_file, \"agents\", {\"json\", \"yaml\"}\n ):\n return hub_result\n else:\n return _load_agent_from_file(path, **kwargs)\ndef _load_agent_from_file(\n file: Union[str, Path], **kwargs: Any\n) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:\n \"\"\"Load agent from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif file_path.suffix == \".yaml\":\n with 
open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n else:\n raise ValueError(\"File type must be json or yaml\")\n # Load the agent from the config now.\n return load_agent_from_config(config, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/loading.html"} +{"id": "db617d351d5d-0", "text": "Source code for langchain.agents.initialize\n\"\"\"Load agent.\"\"\"\nfrom typing import Any, Optional, Sequence\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.loading import AGENT_TO_CLASS, load_agent\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.tools.base import BaseTool\n[docs]def initialize_agent(\n tools: Sequence[BaseTool],\n llm: BaseLanguageModel,\n agent: Optional[AgentType] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n agent_path: Optional[str] = None,\n agent_kwargs: Optional[dict] = None,\n *,\n tags: Optional[Sequence[str]] = None,\n **kwargs: Any,\n) -> AgentExecutor:\n \"\"\"Load an agent executor given tools and LLM.\n Args:\n tools: List of tools this agent has access to.\n llm: Language model to use as the agent.\n agent: Agent type to use. If None and agent_path is also None, will default to\n AgentType.ZERO_SHOT_REACT_DESCRIPTION.\n callback_manager: CallbackManager to use. Global callback manager is used if\n not provided. 
Defaults to None.\n agent_path: Path to serialized agent to use.\n agent_kwargs: Additional key word arguments to pass to the underlying agent\n tags: Tags to apply to the traced runs.\n **kwargs: Additional key word arguments passed to the agent executor\n Returns:\n An agent executor\n \"\"\"\n tags_ = list(tags) if tags else []\n if agent is None and agent_path is None:\n agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/initialize.html"} +{"id": "db617d351d5d-1", "text": "agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION\n if agent is not None and agent_path is not None:\n raise ValueError(\n \"Both `agent` and `agent_path` are specified, \"\n \"but at most only one should be.\"\n )\n if agent is not None:\n if agent not in AGENT_TO_CLASS:\n raise ValueError(\n f\"Got unknown agent type: {agent}. \"\n f\"Valid types are: {AGENT_TO_CLASS.keys()}.\"\n )\n tags_.append(agent.value if isinstance(agent, AgentType) else agent)\n agent_cls = AGENT_TO_CLASS[agent]\n agent_kwargs = agent_kwargs or {}\n agent_obj = agent_cls.from_llm_and_tools(\n llm, tools, callback_manager=callback_manager, **agent_kwargs\n )\n elif agent_path is not None:\n agent_obj = load_agent(\n agent_path, llm=llm, tools=tools, callback_manager=callback_manager\n )\n try:\n # TODO: Add tags from the serialized object directly.\n tags_.append(agent_obj._agent_type)\n except NotImplementedError:\n pass\n else:\n raise ValueError(\n \"Somehow both `agent` and `agent_path` are None, \"\n \"this should never happen.\"\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent_obj,\n tools=tools,\n callback_manager=callback_manager,\n tags=tags_,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/initialize.html"} +{"id": "9315a95a386b-0", "text": "Source code for langchain.agents.load_tools\n# flake8: noqa\n\"\"\"Load tools.\"\"\"\nimport warnings\nfrom typing import Any, Dict, 
List, Optional, Callable, Tuple\nfrom mypy_extensions import Arg, KwArg\nfrom langchain.agents.tools import Tool\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.api import news_docs, open_meteo_docs, podcast_docs, tmdb_docs\nfrom langchain.chains.api.base import APIChain\nfrom langchain.chains.llm_math.base import LLMMathChain\nfrom langchain.chains.pal.base import PALChain\nfrom langchain.requests import TextRequestsWrapper\nfrom langchain.tools.arxiv.tool import ArxivQueryRun\nfrom langchain.tools.pubmed.tool import PubmedQueryRun\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.bing_search.tool import BingSearchRun\nfrom langchain.tools.ddg_search.tool import DuckDuckGoSearchRun\nfrom langchain.tools.google_search.tool import GoogleSearchResults, GoogleSearchRun\nfrom langchain.tools.metaphor_search.tool import MetaphorSearchResults\nfrom langchain.tools.google_serper.tool import GoogleSerperResults, GoogleSerperRun\nfrom langchain.tools.graphql.tool import BaseGraphQLTool\nfrom langchain.tools.human.tool import HumanInputRun\nfrom langchain.tools.python.tool import PythonREPLTool\nfrom langchain.tools.requests.tool import (\n RequestsDeleteTool,\n RequestsGetTool,\n RequestsPatchTool,\n RequestsPostTool,\n RequestsPutTool,\n)\nfrom langchain.tools.scenexplain.tool import SceneXplainTool\nfrom langchain.tools.searx_search.tool import SearxSearchResults, SearxSearchRun\nfrom langchain.tools.shell.tool import ShellTool", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-1", "text": "from langchain.tools.shell.tool import ShellTool\nfrom langchain.tools.sleep.tool import SleepTool\nfrom langchain.tools.wikipedia.tool import WikipediaQueryRun\nfrom langchain.tools.wolfram_alpha.tool import WolframAlphaQueryRun\nfrom 
langchain.tools.openweathermap.tool import OpenWeatherMapQueryRun\nfrom langchain.utilities import ArxivAPIWrapper\nfrom langchain.utilities import PubMedAPIWrapper\nfrom langchain.utilities.bing_search import BingSearchAPIWrapper\nfrom langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper\nfrom langchain.utilities.google_search import GoogleSearchAPIWrapper\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\nfrom langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper\nfrom langchain.utilities.awslambda import LambdaWrapper\nfrom langchain.utilities.graphql import GraphQLAPIWrapper\nfrom langchain.utilities.searx_search import SearxSearchWrapper\nfrom langchain.utilities.serpapi import SerpAPIWrapper\nfrom langchain.utilities.twilio import TwilioAPIWrapper\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\nfrom langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper\nfrom langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper\ndef _get_python_repl() -> BaseTool:\n return PythonREPLTool()\ndef _get_tools_requests_get() -> BaseTool:\n return RequestsGetTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_post() -> BaseTool:\n return RequestsPostTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_patch() -> BaseTool:\n return RequestsPatchTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_put() -> BaseTool:\n return RequestsPutTool(requests_wrapper=TextRequestsWrapper())\ndef _get_tools_requests_delete() -> BaseTool:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-2", "text": "def _get_tools_requests_delete() -> BaseTool:\n return RequestsDeleteTool(requests_wrapper=TextRequestsWrapper())\ndef _get_terminal() -> BaseTool:\n return ShellTool()\ndef _get_sleep() -> BaseTool:\n return SleepTool()\n_BASE_TOOLS: Dict[str, Callable[[], BaseTool]] = {\n 
\"python_repl\": _get_python_repl,\n \"requests\": _get_tools_requests_get, # preserved for backwards compatability\n \"requests_get\": _get_tools_requests_get,\n \"requests_post\": _get_tools_requests_post,\n \"requests_patch\": _get_tools_requests_patch,\n \"requests_put\": _get_tools_requests_put,\n \"requests_delete\": _get_tools_requests_delete,\n \"terminal\": _get_terminal,\n \"sleep\": _get_sleep,\n}\ndef _get_pal_math(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"PAL-MATH\",\n description=\"A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.\",\n func=PALChain.from_math_prompt(llm).run,\n )\ndef _get_pal_colored_objects(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"PAL-COLOR-OBJ\",\n description=\"A language model that is really good at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. Make sure to include all information about the objects AND the final question you want to answer.\",\n func=PALChain.from_colored_object_prompt(llm).run,\n )\ndef _get_llm_math(llm: BaseLanguageModel) -> BaseTool:\n return Tool(\n name=\"Calculator\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-3", "text": "return Tool(\n name=\"Calculator\",\n description=\"Useful for when you need to answer questions about math.\",\n func=LLMMathChain.from_llm(llm=llm).run,\n coroutine=LLMMathChain.from_llm(llm=llm).arun,\n )\ndef _get_open_meteo_api(llm: BaseLanguageModel) -> BaseTool:\n chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS)\n return Tool(\n name=\"Open Meteo API\",\n description=\"Useful for when you want to get weather information from the OpenMeteo API. 
The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\n_LLM_TOOLS: Dict[str, Callable[[BaseLanguageModel], BaseTool]] = {\n \"pal-math\": _get_pal_math,\n \"pal-colored-objects\": _get_pal_colored_objects,\n \"llm-math\": _get_llm_math,\n \"open-meteo-api\": _get_open_meteo_api,\n}\ndef _get_news_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n news_api_key = kwargs[\"news_api_key\"]\n chain = APIChain.from_llm_and_api_docs(\n llm, news_docs.NEWS_DOCS, headers={\"X-Api-Key\": news_api_key}\n )\n return Tool(\n name=\"News API\",\n description=\"Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-4", "text": "func=chain.run,\n )\ndef _get_tmdb_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n tmdb_bearer_token = kwargs[\"tmdb_bearer_token\"]\n chain = APIChain.from_llm_and_api_docs(\n llm,\n tmdb_docs.TMDB_DOCS,\n headers={\"Authorization\": f\"Bearer {tmdb_bearer_token}\"},\n )\n return Tool(\n name=\"TMDB API\",\n description=\"Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\ndef _get_podcast_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:\n listen_api_key = kwargs[\"listen_api_key\"]\n chain = APIChain.from_llm_and_api_docs(\n llm,\n podcast_docs.PODCAST_DOCS,\n headers={\"X-ListenAPI-Key\": listen_api_key},\n )\n return Tool(\n name=\"Podcast API\",\n description=\"Use the Listen Notes Podcast API to search all podcasts or episodes. 
The input should be a question in natural language that this API can answer.\",\n func=chain.run,\n )\ndef _get_lambda_api(**kwargs: Any) -> BaseTool:\n return Tool(\n name=kwargs[\"awslambda_tool_name\"],\n description=kwargs[\"awslambda_tool_description\"],\n func=LambdaWrapper(**kwargs).run,\n )\ndef _get_wolfram_alpha(**kwargs: Any) -> BaseTool:\n return WolframAlphaQueryRun(api_wrapper=WolframAlphaAPIWrapper(**kwargs))\ndef _get_google_search(**kwargs: Any) -> BaseTool:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-5", "text": "def _get_google_search(**kwargs: Any) -> BaseTool:\n return GoogleSearchRun(api_wrapper=GoogleSearchAPIWrapper(**kwargs))\ndef _get_wikipedia(**kwargs: Any) -> BaseTool:\n return WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(**kwargs))\ndef _get_arxiv(**kwargs: Any) -> BaseTool:\n return ArxivQueryRun(api_wrapper=ArxivAPIWrapper(**kwargs))\ndef _get_pupmed(**kwargs: Any) -> BaseTool:\n return PubmedQueryRun(api_wrapper=PubMedAPIWrapper(**kwargs))\ndef _get_google_serper(**kwargs: Any) -> BaseTool:\n return GoogleSerperRun(api_wrapper=GoogleSerperAPIWrapper(**kwargs))\ndef _get_google_serper_results_json(**kwargs: Any) -> BaseTool:\n return GoogleSerperResults(api_wrapper=GoogleSerperAPIWrapper(**kwargs))\ndef _get_google_search_results_json(**kwargs: Any) -> BaseTool:\n return GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper(**kwargs))\ndef _get_serpapi(**kwargs: Any) -> BaseTool:\n return Tool(\n name=\"Search\",\n description=\"A search engine. Useful for when you need to answer questions about current events. 
Input should be a search query.\",\n func=SerpAPIWrapper(**kwargs).run,\n coroutine=SerpAPIWrapper(**kwargs).arun,\n )\ndef _get_twilio(**kwargs: Any) -> BaseTool:\n return Tool(\n name=\"Text Message\",\n description=\"Useful for when you need to send a text message to a provided phone number.\",\n func=TwilioAPIWrapper(**kwargs).run,\n )\ndef _get_searx_search(**kwargs: Any) -> BaseTool:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-6", "text": ")\ndef _get_searx_search(**kwargs: Any) -> BaseTool:\n return SearxSearchRun(wrapper=SearxSearchWrapper(**kwargs))\ndef _get_searx_search_results_json(**kwargs: Any) -> BaseTool:\n wrapper_kwargs = {k: v for k, v in kwargs.items() if k != \"num_results\"}\n return SearxSearchResults(wrapper=SearxSearchWrapper(**wrapper_kwargs), **kwargs)\ndef _get_bing_search(**kwargs: Any) -> BaseTool:\n return BingSearchRun(api_wrapper=BingSearchAPIWrapper(**kwargs))\ndef _get_metaphor_search(**kwargs: Any) -> BaseTool:\n return MetaphorSearchResults(api_wrapper=MetaphorSearchAPIWrapper(**kwargs))\ndef _get_ddg_search(**kwargs: Any) -> BaseTool:\n return DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper(**kwargs))\ndef _get_human_tool(**kwargs: Any) -> BaseTool:\n return HumanInputRun(**kwargs)\ndef _get_scenexplain(**kwargs: Any) -> BaseTool:\n return SceneXplainTool(**kwargs)\ndef _get_graphql_tool(**kwargs: Any) -> BaseTool:\n graphql_endpoint = kwargs[\"graphql_endpoint\"]\n wrapper = GraphQLAPIWrapper(graphql_endpoint=graphql_endpoint)\n return BaseGraphQLTool(graphql_wrapper=wrapper)\ndef _get_openweathermap(**kwargs: Any) -> BaseTool:\n return OpenWeatherMapQueryRun(api_wrapper=OpenWeatherMapAPIWrapper(**kwargs))\n_EXTRA_LLM_TOOLS: Dict[\n str,\n Tuple[Callable[[Arg(BaseLanguageModel, \"llm\"), KwArg(Any)], BaseTool], List[str]],\n] = {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": 
"9315a95a386b-7", "text": "] = {\n \"news-api\": (_get_news_api, [\"news_api_key\"]),\n \"tmdb-api\": (_get_tmdb_api, [\"tmdb_bearer_token\"]),\n \"podcast-api\": (_get_podcast_api, [\"listen_api_key\"]),\n}\n_EXTRA_OPTIONAL_TOOLS: Dict[str, Tuple[Callable[[KwArg(Any)], BaseTool], List[str]]] = {\n \"wolfram-alpha\": (_get_wolfram_alpha, [\"wolfram_alpha_appid\"]),\n \"google-search\": (_get_google_search, [\"google_api_key\", \"google_cse_id\"]),\n \"google-search-results-json\": (\n _get_google_search_results_json,\n [\"google_api_key\", \"google_cse_id\", \"num_results\"],\n ),\n \"searx-search-results-json\": (\n _get_searx_search_results_json,\n [\"searx_host\", \"engines\", \"num_results\", \"aiosession\"],\n ),\n \"bing-search\": (_get_bing_search, [\"bing_subscription_key\", \"bing_search_url\"]),\n \"metaphor-search\": (_get_metaphor_search, [\"metaphor_api_key\"]),\n \"ddg-search\": (_get_ddg_search, []),\n \"google-serper\": (_get_google_serper, [\"serper_api_key\", \"aiosession\"]),\n \"google-serper-results-json\": (\n _get_google_serper_results_json,\n [\"serper_api_key\", \"aiosession\"],\n ),\n \"serpapi\": (_get_serpapi, [\"serpapi_api_key\", \"aiosession\"]),\n \"twilio\": (_get_twilio, [\"account_sid\", \"auth_token\", \"from_number\"]),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-8", "text": "\"searx-search\": (_get_searx_search, [\"searx_host\", \"engines\", \"aiosession\"]),\n \"wikipedia\": (_get_wikipedia, [\"top_k_results\", \"lang\"]),\n \"arxiv\": (\n _get_arxiv,\n [\"top_k_results\", \"load_max_docs\", \"load_all_available_meta\"],\n ),\n \"pupmed\": (\n _get_pupmed,\n [\"top_k_results\", \"load_max_docs\", \"load_all_available_meta\"],\n ),\n \"human\": (_get_human_tool, [\"prompt_func\", \"input_func\"]),\n \"awslambda\": (\n _get_lambda_api,\n [\"awslambda_tool_name\", \"awslambda_tool_description\", \"function_name\"],\n ),\n \"sceneXplain\": 
(_get_scenexplain, []),\n \"graphql\": (_get_graphql_tool, [\"graphql_endpoint\"]),\n \"openweathermap-api\": (_get_openweathermap, [\"openweathermap_api_key\"]),\n}\ndef _handle_callbacks(\n callback_manager: Optional[BaseCallbackManager], callbacks: Callbacks\n) -> Callbacks:\n if callback_manager is not None:\n warnings.warn(\n \"callback_manager is deprecated. Please use callbacks instead.\",\n DeprecationWarning,\n )\n if callbacks is not None:\n raise ValueError(\n \"Cannot specify both callback_manager and callbacks arguments.\"\n )\n return callback_manager\n return callbacks\n[docs]def load_huggingface_tool(\n task_or_repo_id: str,\n model_repo_id: Optional[str] = None,\n token: Optional[str] = None,\n remote: bool = False,\n **kwargs: Any,\n) -> BaseTool:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-9", "text": "**kwargs: Any,\n) -> BaseTool:\n \"\"\"Loads a tool from the HuggingFace Hub.\n Args:\n task_or_repo_id: Task or model repo id.\n model_repo_id: Optional model repo id.\n token: Optional token.\n remote: Optional remote. 
Defaults to False.\n **kwargs:\n Returns:\n A tool.\n \"\"\"\n try:\n from transformers import load_tool\n except ImportError:\n raise ImportError(\n \"HuggingFace tools require the libraries `transformers>=4.29.0`\"\n \" and `huggingface_hub>=0.14.1` to be installed.\"\n \" Please install it with\"\n \" `pip install --upgrade transformers huggingface_hub`.\"\n )\n hf_tool = load_tool(\n task_or_repo_id,\n model_repo_id=model_repo_id,\n token=token,\n remote=remote,\n **kwargs,\n )\n outputs = hf_tool.outputs\n if set(outputs) != {\"text\"}:\n raise NotImplementedError(\"Multimodal outputs not supported yet.\")\n inputs = hf_tool.inputs\n if set(inputs) != {\"text\"}:\n raise NotImplementedError(\"Multimodal inputs not supported yet.\")\n return Tool.from_function(\n hf_tool.__call__, name=hf_tool.name, description=hf_tool.description\n )\n[docs]def load_tools(\n tool_names: List[str],\n llm: Optional[BaseLanguageModel] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n) -> List[BaseTool]:\n \"\"\"Load tools based on their name.\n Args:\n tool_names: name of tools to load.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-10", "text": "Args:\n tool_names: name of tools to load.\n llm: Optional language model, may be needed to initialize certain tools.\n callbacks: Optional callback manager or list of callback handlers.\n If not provided, default global callback manager will be used.\n Returns:\n List of tools.\n \"\"\"\n tools = []\n callbacks = _handle_callbacks(\n callback_manager=kwargs.get(\"callback_manager\"), callbacks=callbacks\n )\n for name in tool_names:\n if name == \"requests\":\n warnings.warn(\n \"tool name `requests` is deprecated - \"\n \"please use `requests_all` or specify the requests method\"\n )\n if name == \"requests_all\":\n # expand requests into various methods\n requests_method_tools = [\n _tool for _tool in _BASE_TOOLS if _tool.startswith(\"requests_\")\n ]\n 
tool_names.extend(requests_method_tools)\n elif name in _BASE_TOOLS:\n tools.append(_BASE_TOOLS[name]())\n elif name in _LLM_TOOLS:\n if llm is None:\n raise ValueError(f\"Tool {name} requires an LLM to be provided\")\n tool = _LLM_TOOLS[name](llm)\n tools.append(tool)\n elif name in _EXTRA_LLM_TOOLS:\n if llm is None:\n raise ValueError(f\"Tool {name} requires an LLM to be provided\")\n _get_llm_tool_func, extra_keys = _EXTRA_LLM_TOOLS[name]\n missing_keys = set(extra_keys).difference(kwargs)\n if missing_keys:\n raise ValueError(\n f\"Tool {name} requires some parameters that were not \"\n f\"provided: {missing_keys}\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "9315a95a386b-11", "text": "f\"provided: {missing_keys}\"\n )\n sub_kwargs = {k: kwargs[k] for k in extra_keys}\n tool = _get_llm_tool_func(llm=llm, **sub_kwargs)\n tools.append(tool)\n elif name in _EXTRA_OPTIONAL_TOOLS:\n _get_tool_func, extra_keys = _EXTRA_OPTIONAL_TOOLS[name]\n sub_kwargs = {k: kwargs[k] for k in extra_keys if k in kwargs}\n tool = _get_tool_func(**sub_kwargs)\n tools.append(tool)\n else:\n raise ValueError(f\"Got unknown tool {name}\")\n if callbacks is not None:\n for tool in tools:\n tool.callbacks = callbacks\n return tools\n[docs]def get_all_tool_names() -> List[str]:\n \"\"\"Get a list of all possible tool names.\"\"\"\n return (\n list(_BASE_TOOLS)\n + list(_EXTRA_OPTIONAL_TOOLS)\n + list(_EXTRA_LLM_TOOLS)\n + list(_LLM_TOOLS)\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html"} +{"id": "1e075621b75c-0", "text": "Source code for langchain.agents.agent_types\nfrom enum import Enum\n[docs]class AgentType(str, Enum):\n \"\"\"Enumerator with the Agent types.\"\"\"\n ZERO_SHOT_REACT_DESCRIPTION = \"zero-shot-react-description\"\n REACT_DOCSTORE = \"react-docstore\"\n SELF_ASK_WITH_SEARCH = \"self-ask-with-search\"\n CONVERSATIONAL_REACT_DESCRIPTION = 
\"conversational-react-description\"\n CHAT_ZERO_SHOT_REACT_DESCRIPTION = \"chat-zero-shot-react-description\"\n CHAT_CONVERSATIONAL_REACT_DESCRIPTION = \"chat-conversational-react-description\"\n STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = (\n \"structured-chat-zero-shot-react-description\"\n )\n OPENAI_FUNCTIONS = \"openai-functions\"\n OPENAI_MULTI_FUNCTIONS = \"openai-multi-functions\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_types.html"} +{"id": "c33eb77314f9-0", "text": "Source code for langchain.agents.agent\n\"\"\"Chain that takes in an input and produces an action and action input.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nimport json\nimport logging\nimport time\nfrom abc import abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union\nimport yaml\nfrom pydantic import BaseModel, root_validator\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.tools import InvalidTool\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n AsyncCallbackManagerForToolRun,\n CallbackManagerForChainRun,\n CallbackManagerForToolRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.input import get_color_mapping\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.few_shot import FewShotPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n AgentAction,\n AgentFinish,\n BaseMessage,\n BaseOutputParser,\n OutputParserException,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.asyncio import asyncio_timeout\nlogger = logging.getLogger(__name__)\n[docs]class BaseSingleActionAgent(BaseModel):\n \"\"\"Base Agent class.\"\"\"\n 
@property\n def return_values(self) -> List[str]:\n \"\"\"Return values of the agent.\"\"\"\n return [\"output\"]\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return None\n[docs] @abstractmethod\n def plan(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-1", "text": "return None\n[docs] @abstractmethod\n def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n[docs] @abstractmethod\n async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n @property\n @abstractmethod\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` just returns a constant string\n return AgentFinish(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-2", "text": "# `force` just returns a constant string\n return AgentFinish(\n {\"output\": \"Agent stopped due to iteration limit or time limit.\"}, \"\"\n )\n else:\n 
raise ValueError(\n f\"Got unsupported early_stopping_method `{early_stopping_method}`\"\n )\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n **kwargs: Any,\n ) -> BaseSingleActionAgent:\n raise NotImplementedError\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n raise NotImplementedError\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n _type = self._agent_type\n if isinstance(_type, AgentType):\n _dict[\"_type\"] = str(_type.value)\n else:\n _dict[\"_type\"] = _type\n return _dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the agent.\n Args:\n file_path: Path to file to save the agent to.\n Example:\n .. code-block:: python\n # If working with agent executor\n agent.agent.save(file_path=\"path/agent.yaml\")\n \"\"\"\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-3", "text": "directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n agent_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(agent_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(agent_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {}\n[docs]class BaseMultiActionAgent(BaseModel):\n \"\"\"Base Agent class.\"\"\"\n @property\n def return_values(self) -> List[str]:\n \"\"\"Return 
values of the agent.\"\"\"\n return [\"output\"]\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return None\n[docs] @abstractmethod\n def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Actions specifying what tool to use.\n \"\"\"\n[docs] @abstractmethod\n async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-4", "text": "**kwargs: Any,\n ) -> Union[List[AgentAction], AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Actions specifying what tool to use.\n \"\"\"\n @property\n @abstractmethod\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` just returns a constant string\n return AgentFinish({\"output\": \"Agent stopped due to max iterations.\"}, \"\")\n else:\n raise ValueError(\n f\"Got unsupported early_stopping_method `{early_stopping_method}`\"\n )\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n raise NotImplementedError\n[docs] def dict(self, **kwargs: Any) -> 
Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n _dict[\"_type\"] = str(self._agent_type)\n return _dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the agent.\n Args:\n file_path: Path to file to save the agent to.\n Example:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-5", "text": "Example:\n .. code-block:: python\n # If working with agent executor\n agent.agent.save(file_path=\"path/agent.yaml\")\n \"\"\"\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n agent_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(agent_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(agent_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {}\n[docs]class AgentOutputParser(BaseOutputParser):\n[docs] @abstractmethod\n def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n \"\"\"Parse text into agent action/finish.\"\"\"\n[docs]class LLMSingleActionAgent(BaseSingleActionAgent):\n llm_chain: LLMChain\n output_parser: AgentOutputParser\n stop: List[str]\n @property\n def input_keys(self) -> List[str]:\n return list(set(self.llm_chain.input_keys) - {\"intermediate_steps\"})\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n del _dict[\"output_parser\"]\n return _dict\n[docs] def plan(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-6", "text": "return 
_dict\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n output = self.llm_chain.run(\n intermediate_steps=intermediate_steps,\n stop=self.stop,\n callbacks=callbacks,\n **kwargs,\n )\n return self.output_parser.parse(output)\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n output = await self.llm_chain.arun(\n intermediate_steps=intermediate_steps,\n stop=self.stop,\n callbacks=callbacks,\n **kwargs,\n )\n return self.output_parser.parse(output)\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {\n \"llm_prefix\": \"\",\n \"observation_prefix\": \"\" if len(self.stop) == 0 else self.stop[0],\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-7", "text": "}\n[docs]class Agent(BaseSingleActionAgent):\n \"\"\"Class responsible for calling the language model and deciding the action.\n This is driven by an LLMChain. 
The prompt in the LLMChain MUST include\n a variable called \"agent_scratchpad\" where the agent can put its\n intermediary work.\n \"\"\"\n llm_chain: LLMChain\n output_parser: AgentOutputParser\n allowed_tools: Optional[List[str]] = None\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of agent.\"\"\"\n _dict = super().dict()\n del _dict[\"output_parser\"]\n return _dict\n[docs] def get_allowed_tools(self) -> Optional[List[str]]:\n return self.allowed_tools\n @property\n def return_values(self) -> List[str]:\n return [\"output\"]\n def _fix_text(self, text: str) -> str:\n \"\"\"Fix the text.\"\"\"\n raise ValueError(\"fix_text not implemented for this agent.\")\n @property\n def _stop(self) -> List[str]:\n return [\n f\"\\n{self.observation_prefix.rstrip()}\",\n f\"\\n\\t{self.observation_prefix.rstrip()}\",\n ]\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> Union[str, List[BaseMessage]]:\n \"\"\"Construct the scratchpad that lets the agent continue its thought process.\"\"\"\n thoughts = \"\"\n for action, observation in intermediate_steps:\n thoughts += action.log\n thoughts += f\"\\n{self.observation_prefix}{observation}\\n{self.llm_prefix}\"\n return thoughts\n[docs] def plan(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-8", "text": "return thoughts\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)\n full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)\n 
return self.output_parser.parse(full_output)\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n callbacks: Callbacks to run.\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)\n full_output = await self.llm_chain.apredict(callbacks=callbacks, **full_inputs)\n return self.output_parser.parse(full_output)\n[docs] def get_full_inputs(\n self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n ) -> Dict[str, Any]:\n \"\"\"Create the full inputs for the LLMChain from intermediate steps.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-9", "text": "\"\"\"Create the full inputs for the LLMChain from intermediate steps.\"\"\"\n thoughts = self._construct_scratchpad(intermediate_steps)\n new_inputs = {\"agent_scratchpad\": thoughts, \"stop\": self._stop}\n full_inputs = {**kwargs, **new_inputs}\n return full_inputs\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return list(set(self.llm_chain.input_keys) - {\"agent_scratchpad\"})\n @root_validator()\n def validate_prompt(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt matches format.\"\"\"\n prompt = values[\"llm_chain\"].prompt\n if \"agent_scratchpad\" not in prompt.input_variables:\n logger.warning(\n \"`agent_scratchpad` should be a variable in prompt.input_variables.\"\n \" Did not find it, so adding it at the end.\"\n )\n prompt.input_variables.append(\"agent_scratchpad\")\n if isinstance(prompt, PromptTemplate):\n prompt.template += \"\\n{agent_scratchpad}\"\n elif isinstance(prompt, FewShotPromptTemplate):\n 
prompt.suffix += \"\\n{agent_scratchpad}\"\n else:\n raise ValueError(f\"Got unexpected prompt type {type(prompt)}\")\n return values\n @property\n @abstractmethod\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n @property\n @abstractmethod\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n[docs] @classmethod\n @abstractmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Create a prompt for this class.\"\"\"\n @classmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-10", "text": "\"\"\"Create a prompt for this class.\"\"\"\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n \"\"\"Validate that appropriate tools are passed in.\"\"\"\n pass\n @classmethod\n @abstractmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n \"\"\"Get default output parser for this class.\"\"\"\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n llm_chain = LLMChain(\n llm=llm,\n prompt=cls.create_prompt(tools),\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser()\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n[docs] def return_stopped_response(\n self,\n early_stopping_method: str,\n intermediate_steps: List[Tuple[AgentAction, str]],\n **kwargs: Any,\n ) -> AgentFinish:\n \"\"\"Return response when agent has been stopped due to max iterations.\"\"\"\n if early_stopping_method == \"force\":\n # `force` 
just returns a constant string\n return AgentFinish(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-11", "text": "# `force` just returns a constant string\n return AgentFinish(\n {\"output\": \"Agent stopped due to iteration limit or time limit.\"}, \"\"\n )\n elif early_stopping_method == \"generate\":\n # Generate does one final forward pass\n thoughts = \"\"\n for action, observation in intermediate_steps:\n thoughts += action.log\n thoughts += (\n f\"\\n{self.observation_prefix}{observation}\\n{self.llm_prefix}\"\n )\n # Adding to the previous steps, we now tell the LLM to make a final prediction\n thoughts += (\n \"\\n\\nI now need to return a final answer based on the previous steps:\"\n )\n new_inputs = {\"agent_scratchpad\": thoughts, \"stop\": self._stop}\n full_inputs = {**kwargs, **new_inputs}\n full_output = self.llm_chain.predict(**full_inputs)\n # We try to extract a final answer\n parsed_output = self.output_parser.parse(full_output)\n if isinstance(parsed_output, AgentFinish):\n # If we can extract a final answer, we return it\n return parsed_output\n else:\n # If we cannot extract a final answer,\n # we just return the full output\n return AgentFinish({\"output\": full_output}, full_output)\n else:\n raise ValueError(\n \"early_stopping_method should be one of `force` or `generate`, \"\n f\"got {early_stopping_method}\"\n )\n[docs] def tool_run_logging_kwargs(self) -> Dict:\n return {\n \"llm_prefix\": self.llm_prefix,\n \"observation_prefix\": self.observation_prefix,\n }\nclass ExceptionTool(BaseTool):\n name = \"_Exception\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-12", "text": "}\nclass ExceptionTool(BaseTool):\n name = \"_Exception\"\n description = \"Exception tool\"\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n return query\n 
async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return query\n[docs]class AgentExecutor(Chain):\n \"\"\"Consists of an agent using tools.\"\"\"\n agent: Union[BaseSingleActionAgent, BaseMultiActionAgent]\n \"\"\"The agent to run for creating a plan and determining actions\n to take at each step of the execution loop.\"\"\"\n tools: Sequence[BaseTool]\n \"\"\"The valid tools the agent can call.\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Whether to return the agent's trajectory of intermediate steps\n at the end in addition to the final output.\"\"\"\n max_iterations: Optional[int] = 15\n \"\"\"The maximum number of steps to take before ending the execution\n loop.\n \n Setting to 'None' could lead to an infinite loop.\"\"\"\n max_execution_time: Optional[float] = None\n \"\"\"The maximum amount of wall clock time to spend in the execution\n loop.\n \"\"\"\n early_stopping_method: str = \"force\"\n \"\"\"The method to use for early stopping if the agent never\n returns `AgentFinish`. 
Either 'force' or 'generate'.\n `\"force\"` returns a string saying that it stopped because it met a\n time or iteration limit.\n \n `\"generate\"` calls the agent's LLM Chain one final time to generate", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-13", "text": "`\"generate\"` calls the agent's LLM Chain one final time to generate\n a final answer based on the previous steps.\n \"\"\"\n handle_parsing_errors: Union[\n bool, str, Callable[[OutputParserException], str]\n ] = False\n \"\"\"How to handle errors raised by the agent's output parser.\n Defaults to `False`, which raises the error.\n If `True`, the error will be sent back to the LLM as an observation.\n If a string, the string itself will be sent to the LLM as an observation.\n If a callable function, the function will be called with the exception\n as an argument, and the result of that function will be passed to the agent\n as an observation.\n \"\"\"\n[docs] @classmethod\n def from_agent_and_tools(\n cls,\n agent: Union[BaseSingleActionAgent, BaseMultiActionAgent],\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n **kwargs: Any,\n ) -> AgentExecutor:\n \"\"\"Create from agent and tools.\"\"\"\n return cls(\n agent=agent, tools=tools, callback_manager=callback_manager, **kwargs\n )\n @root_validator()\n def validate_tools(cls, values: Dict) -> Dict:\n \"\"\"Validate that tools are compatible with agent.\"\"\"\n agent = values[\"agent\"]\n tools = values[\"tools\"]\n allowed_tools = agent.get_allowed_tools()\n if allowed_tools is not None:\n if set(allowed_tools) != set([tool.name for tool in tools]):\n raise ValueError(\n f\"Allowed tools ({allowed_tools}) different than \"\n f\"provided tools ({[tool.name for tool in tools]})\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-14", "text": "f\"provided tools ({[tool.name for 
tool in tools]})\"\n )\n return values\n @root_validator()\n def validate_return_direct_tool(cls, values: Dict) -> Dict:\n \"\"\"Validate that tools are compatible with agent.\"\"\"\n agent = values[\"agent\"]\n tools = values[\"tools\"]\n if isinstance(agent, BaseMultiActionAgent):\n for tool in tools:\n if tool.return_direct:\n raise ValueError(\n \"Tools that have `return_direct=True` are not allowed \"\n \"in multi-action agents\"\n )\n return values\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Raise error - saving not supported for Agent Executors.\"\"\"\n raise ValueError(\n \"Saving not supported for agent executors. \"\n \"If you are trying to save the agent, please use the \"\n \"`.save_agent(...)`\"\n )\n[docs] def save_agent(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the underlying agent.\"\"\"\n return self.agent.save(file_path)\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return self.agent.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if self.return_intermediate_steps:\n return self.agent.return_values + [\"intermediate_steps\"]\n else:\n return self.agent.return_values\n[docs] def lookup_tool(self, name: str) -> BaseTool:\n \"\"\"Lookup tool by name.\"\"\"\n return {tool.name: tool for tool in self.tools}[name]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-15", "text": "return {tool.name: tool for tool in self.tools}[name]\n def _should_continue(self, iterations: int, time_elapsed: float) -> bool:\n if self.max_iterations is not None and iterations >= self.max_iterations:\n return False\n if (\n self.max_execution_time is not None\n and time_elapsed >= self.max_execution_time\n ):\n return False\n return True\n def _return(\n self,\n output: AgentFinish,\n intermediate_steps: list,\n run_manager: 
Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n if run_manager:\n run_manager.on_agent_finish(output, color=\"green\", verbose=self.verbose)\n final_output = output.return_values\n if self.return_intermediate_steps:\n final_output[\"intermediate_steps\"] = intermediate_steps\n return final_output\n async def _areturn(\n self,\n output: AgentFinish,\n intermediate_steps: list,\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n if run_manager:\n await run_manager.on_agent_finish(\n output, color=\"green\", verbose=self.verbose\n )\n final_output = output.return_values\n if self.return_intermediate_steps:\n final_output[\"intermediate_steps\"] = intermediate_steps\n return final_output\n def _take_next_step(\n self,\n name_to_tool_map: Dict[str, BaseTool],\n color_mapping: Dict[str, str],\n inputs: Dict[str, str],\n intermediate_steps: List[Tuple[AgentAction, str]],\n run_manager: Optional[CallbackManagerForChainRun] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-16", "text": "run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:\n \"\"\"Take a single step in the thought-action-observation loop.\n Override this to take control of how the agent makes and acts on choices.\n \"\"\"\n try:\n # Call the LLM to see what to do.\n output = self.agent.plan(\n intermediate_steps,\n callbacks=run_manager.get_child() if run_manager else None,\n **inputs,\n )\n except OutputParserException as e:\n if isinstance(self.handle_parsing_errors, bool):\n raise_error = not self.handle_parsing_errors\n else:\n raise_error = False\n if raise_error:\n raise e\n text = str(e)\n if isinstance(self.handle_parsing_errors, bool):\n if e.send_to_llm:\n observation = str(e.observation)\n text = str(e.llm_output)\n else:\n observation = \"Invalid or incomplete response\"\n elif 
isinstance(self.handle_parsing_errors, str):\n observation = self.handle_parsing_errors\n elif callable(self.handle_parsing_errors):\n observation = self.handle_parsing_errors(e)\n else:\n raise ValueError(\"Got unexpected type of `handle_parsing_errors`\")\n output = AgentAction(\"_Exception\", observation, text)\n if run_manager:\n run_manager.on_agent_action(output, color=\"green\")\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = ExceptionTool().run(\n output.tool_input,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-17", "text": "**tool_run_kwargs,\n )\n return [(output, observation)]\n # If the tool chosen is the finishing tool, then we end and return.\n if isinstance(output, AgentFinish):\n return output\n actions: List[AgentAction]\n if isinstance(output, AgentAction):\n actions = [output]\n else:\n actions = output\n result = []\n for agent_action in actions:\n if run_manager:\n run_manager.on_agent_action(agent_action, color=\"green\")\n # Otherwise we lookup the tool\n if agent_action.tool in name_to_tool_map:\n tool = name_to_tool_map[agent_action.tool]\n return_direct = tool.return_direct\n color = color_mapping[agent_action.tool]\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n if return_direct:\n tool_run_kwargs[\"llm_prefix\"] = \"\"\n # We then call the tool on the tool input to get an observation\n observation = tool.run(\n agent_action.tool_input,\n verbose=self.verbose,\n color=color,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n else:\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = InvalidTool().run(\n agent_action.tool,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n 
result.append((agent_action, observation))\n return result\n async def _atake_next_step(\n self,\n name_to_tool_map: Dict[str, BaseTool],\n color_mapping: Dict[str, str],\n inputs: Dict[str, str],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-18", "text": "color_mapping: Dict[str, str],\n inputs: Dict[str, str],\n intermediate_steps: List[Tuple[AgentAction, str]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:\n \"\"\"Take a single step in the thought-action-observation loop.\n Override this to take control of how the agent makes and acts on choices.\n \"\"\"\n try:\n # Call the LLM to see what to do.\n output = await self.agent.aplan(\n intermediate_steps,\n callbacks=run_manager.get_child() if run_manager else None,\n **inputs,\n )\n except OutputParserException as e:\n if isinstance(self.handle_parsing_errors, bool):\n raise_error = not self.handle_parsing_errors\n else:\n raise_error = False\n if raise_error:\n raise e\n text = str(e)\n if isinstance(self.handle_parsing_errors, bool):\n if e.send_to_llm:\n observation = str(e.observation)\n text = str(e.llm_output)\n else:\n observation = \"Invalid or incomplete response\"\n elif isinstance(self.handle_parsing_errors, str):\n observation = self.handle_parsing_errors\n elif callable(self.handle_parsing_errors):\n observation = self.handle_parsing_errors(e)\n else:\n raise ValueError(\"Got unexpected type of `handle_parsing_errors`\")\n output = AgentAction(\"_Exception\", observation, text)\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = await ExceptionTool().arun(\n output.tool_input,\n verbose=self.verbose,\n color=None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-19", "text": "output.tool_input,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() 
if run_manager else None,\n **tool_run_kwargs,\n )\n return [(output, observation)]\n # If the tool chosen is the finishing tool, then we end and return.\n if isinstance(output, AgentFinish):\n return output\n actions: List[AgentAction]\n if isinstance(output, AgentAction):\n actions = [output]\n else:\n actions = output\n async def _aperform_agent_action(\n agent_action: AgentAction,\n ) -> Tuple[AgentAction, str]:\n if run_manager:\n await run_manager.on_agent_action(\n agent_action, verbose=self.verbose, color=\"green\"\n )\n # Otherwise we lookup the tool\n if agent_action.tool in name_to_tool_map:\n tool = name_to_tool_map[agent_action.tool]\n return_direct = tool.return_direct\n color = color_mapping[agent_action.tool]\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n if return_direct:\n tool_run_kwargs[\"llm_prefix\"] = \"\"\n # We then call the tool on the tool input to get an observation\n observation = await tool.arun(\n agent_action.tool_input,\n verbose=self.verbose,\n color=color,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n else:\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = await InvalidTool().arun(\n agent_action.tool,\n verbose=self.verbose,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-20", "text": "**tool_run_kwargs,\n )\n return agent_action, observation\n # Use asyncio.gather to run multiple tool.arun() calls concurrently\n result = await asyncio.gather(\n *[_aperform_agent_action(agent_action) for agent_action in actions]\n )\n return list(result)\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run text through and get agent response.\"\"\"\n # Construct a mapping of tool name to tool for easy lookup\n 
name_to_tool_map = {tool.name: tool for tool in self.tools}\n # We construct a mapping from each tool to a color, used for logging.\n color_mapping = get_color_mapping(\n [tool.name for tool in self.tools], excluded_colors=[\"green\", \"red\"]\n )\n intermediate_steps: List[Tuple[AgentAction, str]] = []\n # Let's start tracking the number of iterations and time elapsed\n iterations = 0\n time_elapsed = 0.0\n start_time = time.time()\n # We now enter the agent loop (until it returns something).\n while self._should_continue(iterations, time_elapsed):\n next_step_output = self._take_next_step(\n name_to_tool_map,\n color_mapping,\n inputs,\n intermediate_steps,\n run_manager=run_manager,\n )\n if isinstance(next_step_output, AgentFinish):\n return self._return(\n next_step_output, intermediate_steps, run_manager=run_manager\n )\n intermediate_steps.extend(next_step_output)\n if len(next_step_output) == 1:\n next_step_action = next_step_output[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-21", "text": "next_step_action = next_step_output[0]\n # See if tool should return directly\n tool_return = self._get_tool_return(next_step_action)\n if tool_return is not None:\n return self._return(\n tool_return, intermediate_steps, run_manager=run_manager\n )\n iterations += 1\n time_elapsed = time.time() - start_time\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return self._return(output, intermediate_steps, run_manager=run_manager)\n async def _acall(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Run text through and get agent response.\"\"\"\n # Construct a mapping of tool name to tool for easy lookup\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # We construct a mapping from each tool to a color, used for logging.\n color_mapping = 
get_color_mapping(\n [tool.name for tool in self.tools], excluded_colors=[\"green\"]\n )\n intermediate_steps: List[Tuple[AgentAction, str]] = []\n # Let's start tracking the number of iterations and time elapsed\n iterations = 0\n time_elapsed = 0.0\n start_time = time.time()\n # We now enter the agent loop (until it returns something).\n async with asyncio_timeout(self.max_execution_time):\n try:\n while self._should_continue(iterations, time_elapsed):\n next_step_output = await self._atake_next_step(\n name_to_tool_map,\n color_mapping,\n inputs,\n intermediate_steps,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-22", "text": "color_mapping,\n inputs,\n intermediate_steps,\n run_manager=run_manager,\n )\n if isinstance(next_step_output, AgentFinish):\n return await self._areturn(\n next_step_output,\n intermediate_steps,\n run_manager=run_manager,\n )\n intermediate_steps.extend(next_step_output)\n if len(next_step_output) == 1:\n next_step_action = next_step_output[0]\n # See if tool should return directly\n tool_return = self._get_tool_return(next_step_action)\n if tool_return is not None:\n return await self._areturn(\n tool_return, intermediate_steps, run_manager=run_manager\n )\n iterations += 1\n time_elapsed = time.time() - start_time\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return await self._areturn(\n output, intermediate_steps, run_manager=run_manager\n )\n except TimeoutError:\n # stop early when interrupted by the async timeout\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return await self._areturn(\n output, intermediate_steps, run_manager=run_manager\n )\n def _get_tool_return(\n self, next_step_output: Tuple[AgentAction, str]\n ) -> Optional[AgentFinish]:\n \"\"\"Check if the tool is a returning tool.\"\"\"\n agent_action, observation = 
next_step_output\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # Invalid tools won't be in the map, so we return None.\n if agent_action.tool in name_to_tool_map:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "c33eb77314f9-23", "text": "if agent_action.tool in name_to_tool_map:\n if name_to_tool_map[agent_action.tool].return_direct:\n return AgentFinish(\n {self.agent.return_values[0]: observation},\n \"\",\n )\n return None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent.html"} +{"id": "93a525adace4-0", "text": "Source code for langchain.agents.structured_chat.base\nimport re\nfrom typing import Any, List, Optional, Sequence, Tuple\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.structured_chat.output_parser import (\n StructuredChatOutputParserWithRetries,\n)\nfrom langchain.agents.structured_chat.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n SystemMessagePromptTemplate,\n)\nfrom langchain.schema import AgentAction\nfrom langchain.tools import BaseTool\nHUMAN_MESSAGE_TEMPLATE = \"{input}\\n\\n{agent_scratchpad}\"\n[docs]class StructuredChatAgent(Agent):\n output_parser: AgentOutputParser = Field(\n default_factory=StructuredChatOutputParserWithRetries\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) 
-> str:\n agent_scratchpad = super()._construct_scratchpad(intermediate_steps)\n if not isinstance(agent_scratchpad, str):\n raise ValueError(\"agent_scratchpad should be of type string.\")\n if agent_scratchpad:\n return (\n f\"This was your previous work \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"} +{"id": "93a525adace4-1", "text": "return (\n f\"This was your previous work \"\n f\"(but I haven't seen any of it! I only see what \"\n f\"you return as final answer):\\n{agent_scratchpad}\"\n )\n else:\n return agent_scratchpad\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n pass\n @classmethod\n def _get_default_output_parser(\n cls, llm: Optional[BaseLanguageModel] = None, **kwargs: Any\n ) -> AgentOutputParser:\n return StructuredChatOutputParserWithRetries.from_llm(llm=llm)\n @property\n def _stop(self) -> List[str]:\n return [\"Observation:\"]\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n human_message_template: str = HUMAN_MESSAGE_TEMPLATE,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n memory_prompts: Optional[List[BasePromptTemplate]] = None,\n ) -> BasePromptTemplate:\n tool_strings = []\n for tool in tools:\n args_schema = re.sub(\"}\", \"}}}}\", re.sub(\"{\", \"{{{{\", str(tool.args)))\n tool_strings.append(f\"{tool.name}: {tool.description}, args: {args_schema}\")\n formatted_tools = \"\\n\".join(tool_strings)\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(tool_names=tool_names)\n template = \"\\n\\n\".join([prefix, formatted_tools, format_instructions, suffix])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"} +{"id": "93a525adace4-2", "text": "template = \"\\n\\n\".join([prefix, formatted_tools, 
format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]\n _memory_prompts = memory_prompts or []\n messages = [\n SystemMessagePromptTemplate.from_template(template),\n *_memory_prompts,\n HumanMessagePromptTemplate.from_template(human_message_template),\n ]\n return ChatPromptTemplate(input_variables=input_variables, messages=messages)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n human_message_template: str = HUMAN_MESSAGE_TEMPLATE,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n memory_prompts: Optional[List[BasePromptTemplate]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n human_message_template=human_message_template,\n format_instructions=format_instructions,\n input_variables=input_variables,\n memory_prompts=memory_prompts,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"} +{"id": "93a525adace4-3", "text": ")\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser(llm=llm)\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n @property\n def _agent_type(self) -> str:\n raise ValueError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/structured_chat/base.html"} +{"id": "491ea6b63e3b-0", "text": "Source code for 
langchain.agents.agent_toolkits.sql.base\n\"\"\"SQL agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor, BaseSingleActionAgent\nfrom langchain.agents.agent_toolkits.sql.prompt import (\n SQL_FUNCTIONS_SUFFIX,\n SQL_PREFIX,\n SQL_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n MessagesPlaceholder,\n)\nfrom langchain.schema import AIMessage, SystemMessage\n[docs]def create_sql_agent(\n llm: BaseLanguageModel,\n toolkit: SQLDatabaseToolkit,\n agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = SQL_PREFIX,\n suffix: Optional[str] = None,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n top_k: int = 10,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/base.html"} +{"id": "491ea6b63e3b-1", "text": "**kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a sql agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prefix = prefix.format(dialect=toolkit.dialect, top_k=top_k)\n agent: BaseSingleActionAgent\n if agent_type == 
AgentType.ZERO_SHOT_REACT_DESCRIPTION:\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix or SQL_SUFFIX,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n elif agent_type == AgentType.OPENAI_FUNCTIONS:\n messages = [\n SystemMessage(content=prefix),\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n AIMessage(content=suffix or SQL_FUNCTIONS_SUFFIX),\n MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n input_variables = [\"input\", \"agent_scratchpad\"]\n _prompt = ChatPromptTemplate(input_variables=input_variables, messages=messages)\n agent = OpenAIFunctionsAgent(\n llm=llm,\n prompt=_prompt,\n tools=tools,\n callback_manager=callback_manager,\n **kwargs,\n )\n else:\n raise ValueError(f\"Agent type {agent_type} not supported at the moment.\")\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/base.html"} +{"id": "491ea6b63e3b-2", "text": "tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/base.html"} +{"id": "57fff97f93d6-0", "text": "Source code for langchain.agents.agent_toolkits.sql.toolkit\n\"\"\"Toolkit for interacting with a SQL database.\"\"\"\nfrom typing import List\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.base_language import 
BaseLanguageModel\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.tools import BaseTool\nfrom langchain.tools.sql_database.tool import (\n InfoSQLDatabaseTool,\n ListSQLDatabaseTool,\n QuerySQLCheckerTool,\n QuerySQLDataBaseTool,\n)\n[docs]class SQLDatabaseToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with SQL databases.\"\"\"\n db: SQLDatabase = Field(exclude=True)\n llm: BaseLanguageModel = Field(exclude=True)\n @property\n def dialect(self) -> str:\n \"\"\"Return string representation of dialect to use.\"\"\"\n return self.db.dialect\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n query_sql_database_tool_description = (\n \"Input to this tool is a detailed and correct SQL query, output is a \"\n \"result from the database. If the query is not correct, an error message \"\n \"will be returned. If an error is returned, rewrite the query, check the \"\n \"query, and try again. If you encounter an issue with Unknown column \"\n \"'xxxx' in 'field list', use schema_sql_db to query the correct table \"\n \"fields.\"\n )\n info_sql_database_tool_description = (\n \"Input to this tool is a comma-separated list of tables, output is the \"\n \"schema and sample rows for those tables. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/toolkit.html"} +{"id": "57fff97f93d6-1", "text": "\"schema and sample rows for those tables. \"\n \"Be sure that the tables actually exist by calling list_tables_sql_db \"\n \"first! 
Example Input: 'table1, table2, table3'\"\n )\n return [\n QuerySQLDataBaseTool(\n db=self.db, description=query_sql_database_tool_description\n ),\n InfoSQLDatabaseTool(\n db=self.db, description=info_sql_database_tool_description\n ),\n ListSQLDatabaseTool(db=self.db),\n QuerySQLCheckerTool(db=self.db, llm=self.llm),\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/sql/toolkit.html"} +{"id": "724edefbe883-0", "text": "Source code for langchain.agents.agent_toolkits.python.base\n\"\"\"Python agent.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom langchain.agents.agent import AgentExecutor, BaseSingleActionAgent\nfrom langchain.agents.agent_toolkits.python.prompt import PREFIX\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent\nfrom langchain.agents.types import AgentType\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.schema import SystemMessage\nfrom langchain.tools.python.tool import PythonREPLTool\n[docs]def create_python_agent(\n llm: BaseLanguageModel,\n tool: PythonREPLTool,\n agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n callback_manager: Optional[BaseCallbackManager] = None,\n verbose: bool = False,\n prefix: str = PREFIX,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a python agent from an LLM and tool.\"\"\"\n tools = [tool]\n agent: BaseSingleActionAgent\n if agent_type == AgentType.ZERO_SHOT_REACT_DESCRIPTION:\n prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n elif 
agent_type == AgentType.OPENAI_FUNCTIONS:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/python/base.html"} +{"id": "724edefbe883-1", "text": "elif agent_type == AgentType.OPENAI_FUNCTIONS:\n system_message = SystemMessage(content=prefix)\n _prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)\n agent = OpenAIFunctionsAgent(\n llm=llm,\n prompt=_prompt,\n tools=tools,\n callback_manager=callback_manager,\n **kwargs,\n )\n else:\n raise ValueError(f\"Agent type {agent_type} not supported at the moment.\")\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/python/base.html"} +{"id": "6444ba80f633-0", "text": "Source code for langchain.agents.agent_toolkits.nla.toolkit\n\"\"\"Toolkit for interacting with API's using natural language.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.agents.agent_toolkits.nla.tool import NLATool\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.requests import Requests\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.openapi.utils.openapi_utils import OpenAPISpec\nfrom langchain.tools.plugin import AIPlugin\n[docs]class NLAToolkit(BaseToolkit):\n \"\"\"Natural Language API Toolkit Definition.\"\"\"\n nla_tools: Sequence[NLATool] = Field(...)\n \"\"\"List of API Endpoint Tools.\"\"\"\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools for all the API operations.\"\"\"\n return list(self.nla_tools)\n @staticmethod\n def _get_http_operation_tools(\n llm: BaseLanguageModel,\n spec: OpenAPISpec,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n 
**kwargs: Any,\n ) -> List[NLATool]:\n \"\"\"Get the tools for all the API operations.\"\"\"\n if not spec.paths:\n return []\n http_operation_tools = []\n for path in spec.paths:\n for method in spec.get_methods_for_path(path):\n endpoint_tool = NLATool.from_llm_and_method(\n llm=llm,\n path=path,\n method=method,\n spec=spec,\n requests=requests,\n verbose=verbose,\n **kwargs,\n )\n http_operation_tools.append(endpoint_tool)\n return http_operation_tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/nla/toolkit.html"} +{"id": "6444ba80f633-1", "text": ")\n http_operation_tools.append(endpoint_tool)\n return http_operation_tools\n[docs] @classmethod\n def from_llm_and_spec(\n cls,\n llm: BaseLanguageModel,\n spec: OpenAPISpec,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit by creating tools for each operation.\"\"\"\n http_operation_tools = cls._get_http_operation_tools(\n llm=llm, spec=spec, requests=requests, verbose=verbose, **kwargs\n )\n return cls(nla_tools=http_operation_tools)\n[docs] @classmethod\n def from_llm_and_url(\n cls,\n llm: BaseLanguageModel,\n open_api_url: str,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit from an OpenAPI Spec URL\"\"\"\n spec = OpenAPISpec.from_url(open_api_url)\n return cls.from_llm_and_spec(\n llm=llm, spec=spec, requests=requests, verbose=verbose, **kwargs\n )\n[docs] @classmethod\n def from_llm_and_ai_plugin(\n cls,\n llm: BaseLanguageModel,\n ai_plugin: AIPlugin,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit from an OpenAPI Spec URL\"\"\"\n spec = OpenAPISpec.from_url(ai_plugin.api.url)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/nla/toolkit.html"} +{"id": 
"6444ba80f633-2", "text": "spec = OpenAPISpec.from_url(ai_plugin.api.url)\n # TODO: Merge optional Auth information with the `requests` argument\n return cls.from_llm_and_spec(\n llm=llm,\n spec=spec,\n requests=requests,\n verbose=verbose,\n **kwargs,\n )\n[docs] @classmethod\n def from_llm_and_ai_plugin_url(\n cls,\n llm: BaseLanguageModel,\n ai_plugin_url: str,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n **kwargs: Any,\n ) -> NLAToolkit:\n \"\"\"Instantiate the toolkit from an OpenAPI Spec URL\"\"\"\n plugin = AIPlugin.from_url(ai_plugin_url)\n return cls.from_llm_and_ai_plugin(\n llm=llm, ai_plugin=plugin, requests=requests, verbose=verbose, **kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/nla/toolkit.html"} +{"id": "fd501c32544b-0", "text": "Source code for langchain.agents.agent_toolkits.powerbi.base\n\"\"\"Power BI agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents import AgentExecutor\nfrom langchain.agents.agent_toolkits.powerbi.prompt import (\n POWERBI_PREFIX,\n POWERBI_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.powerbi.toolkit import PowerBIToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.utilities.powerbi import PowerBIDataset\n[docs]def create_pbi_agent(\n llm: BaseLanguageModel,\n toolkit: Optional[PowerBIToolkit],\n powerbi: Optional[PowerBIDataset] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = POWERBI_PREFIX,\n suffix: str = POWERBI_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n examples: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n top_k: int = 10,\n verbose: bool = False,\n agent_executor_kwargs: 
Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a pbi agent from an LLM and tools.\"\"\"\n if toolkit is None:\n if powerbi is None:\n raise ValueError(\"Must provide either a toolkit or powerbi dataset\")\n toolkit = PowerBIToolkit(powerbi=powerbi, llm=llm, examples=examples)\n tools = toolkit.get_tools()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/base.html"} +{"id": "fd501c32544b-1", "text": "tools = toolkit.get_tools()\n agent = ZeroShotAgent(\n llm_chain=LLMChain(\n llm=llm,\n prompt=ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix.format(top_k=top_k),\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n ),\n callback_manager=callback_manager, # type: ignore\n verbose=verbose,\n ),\n allowed_tools=[tool.name for tool in tools],\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/base.html"} +{"id": "5fce85560262-0", "text": "Source code for langchain.agents.agent_toolkits.powerbi.toolkit\n\"\"\"Toolkit for interacting with a Power BI dataset.\"\"\"\nfrom typing import List, Optional\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.tools import BaseTool\nfrom langchain.tools.powerbi.prompt import QUESTION_TO_QUERY\nfrom langchain.tools.powerbi.tool import (\n InfoPowerBITool,\n ListPowerBITool,\n QueryPowerBITool,\n)\nfrom langchain.utilities.powerbi import PowerBIDataset\n[docs]class 
PowerBIToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with PowerBI dataset.\"\"\"\n powerbi: PowerBIDataset = Field(exclude=True)\n llm: BaseLanguageModel = Field(exclude=True)\n examples: Optional[str] = None\n max_iterations: int = 5\n callback_manager: Optional[BaseCallbackManager] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n if self.callback_manager:\n chain = LLMChain(\n llm=self.llm,\n callback_manager=self.callback_manager,\n prompt=PromptTemplate(\n template=QUESTION_TO_QUERY,\n input_variables=[\"tool_input\", \"tables\", \"schemas\", \"examples\"],\n ),\n )\n else:\n chain = LLMChain(\n llm=self.llm,\n prompt=PromptTemplate(\n template=QUESTION_TO_QUERY,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/toolkit.html"} +{"id": "5fce85560262-1", "text": "prompt=PromptTemplate(\n template=QUESTION_TO_QUERY,\n input_variables=[\"tool_input\", \"tables\", \"schemas\", \"examples\"],\n ),\n )\n return [\n QueryPowerBITool(\n llm_chain=chain,\n powerbi=self.powerbi,\n examples=self.examples,\n max_iterations=self.max_iterations,\n ),\n InfoPowerBITool(powerbi=self.powerbi),\n ListPowerBITool(powerbi=self.powerbi),\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/toolkit.html"} +{"id": "b00e8def370a-0", "text": "Source code for langchain.agents.agent_toolkits.powerbi.chat_base\n\"\"\"Power BI agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents import AgentExecutor\nfrom langchain.agents.agent import AgentOutputParser\nfrom langchain.agents.agent_toolkits.powerbi.prompt import (\n POWERBI_CHAT_PREFIX,\n POWERBI_CHAT_SUFFIX,\n)\nfrom langchain.agents.agent_toolkits.powerbi.toolkit import PowerBIToolkit\nfrom langchain.agents.conversational_chat.base import 
ConversationalChatAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.memory import ConversationBufferMemory\nfrom langchain.memory.chat_memory import BaseChatMemory\nfrom langchain.utilities.powerbi import PowerBIDataset\n[docs]def create_pbi_chat_agent(\n llm: BaseChatModel,\n toolkit: Optional[PowerBIToolkit],\n powerbi: Optional[PowerBIDataset] = None,\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = POWERBI_CHAT_PREFIX,\n suffix: str = POWERBI_CHAT_SUFFIX,\n examples: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n memory: Optional[BaseChatMemory] = None,\n top_k: int = 10,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a pbi agent from a chat LLM and tools.\n If you supply only a toolkit and no powerbi dataset, the same LLM is used for both.\n \"\"\"\n if toolkit is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/chat_base.html"} +{"id": "b00e8def370a-1", "text": "\"\"\"\n if toolkit is None:\n if powerbi is None:\n raise ValueError(\"Must provide either a toolkit or powerbi dataset\")\n toolkit = PowerBIToolkit(powerbi=powerbi, llm=llm, examples=examples)\n tools = toolkit.get_tools()\n agent = ConversationalChatAgent.from_llm_and_tools(\n llm=llm,\n tools=tools,\n system_message=prefix.format(top_k=top_k),\n human_message=suffix,\n input_variables=input_variables,\n callback_manager=callback_manager,\n output_parser=output_parser,\n verbose=verbose,\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n memory=memory\n or ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True),\n verbose=verbose,\n 
**(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/powerbi/chat_base.html"} +{"id": "b4c449989b1b-0", "text": "Source code for langchain.agents.agent_toolkits.json.base\n\"\"\"Json agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.json.prompt import JSON_PREFIX, JSON_SUFFIX\nfrom langchain.agents.agent_toolkits.json.toolkit import JsonToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_json_agent(\n llm: BaseLanguageModel,\n toolkit: JsonToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = JSON_PREFIX,\n suffix: str = JSON_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a json agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/json/base.html"} +{"id": "b4c449989b1b-1", "text": "return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n 
callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/json/base.html"} +{"id": "25f7a18530a3-0", "text": "Source code for langchain.agents.agent_toolkits.json.toolkit\n\"\"\"Toolkit for interacting with a JSON spec.\"\"\"\nfrom __future__ import annotations\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.json.tool import JsonGetValueTool, JsonListKeysTool, JsonSpec\n[docs]class JsonToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with a JSON spec.\"\"\"\n spec: JsonSpec\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return [\n JsonListKeysTool(spec=self.spec),\n JsonGetValueTool(spec=self.spec),\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/json/toolkit.html"} +{"id": "f4b63b4c3b14-0", "text": "Source code for langchain.agents.agent_toolkits.pandas.base\n\"\"\"Agent for working with pandas objects.\"\"\"\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom langchain.agents.agent import AgentExecutor, BaseSingleActionAgent\nfrom langchain.agents.agent_toolkits.pandas.prompt import (\n FUNCTIONS_WITH_DF,\n FUNCTIONS_WITH_MULTI_DF,\n MULTI_DF_PREFIX,\n MULTI_DF_PREFIX_FUNCTIONS,\n PREFIX,\n PREFIX_FUNCTIONS,\n SUFFIX_NO_DF,\n SUFFIX_WITH_DF,\n SUFFIX_WITH_MULTI_DF,\n)\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent\nfrom langchain.agents.types import AgentType\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import SystemMessage\nfrom langchain.tools.python.tool 
import PythonAstREPLTool\ndef _get_multi_prompt(\n dfs: List[Any],\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n num_dfs = len(dfs)\n if suffix is not None:\n suffix_to_use = suffix\n include_dfs_head = True\n elif include_df_in_prompt:\n suffix_to_use = SUFFIX_WITH_MULTI_DF\n include_dfs_head = True\n else:\n suffix_to_use = SUFFIX_NO_DF\n include_dfs_head = False\n if input_variables is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"} +{"id": "f4b63b4c3b14-1", "text": "include_dfs_head = False\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\", \"num_dfs\"]\n if include_dfs_head:\n input_variables += [\"dfs_head\"]\n if prefix is None:\n prefix = MULTI_DF_PREFIX\n df_locals = {}\n for i, dataframe in enumerate(dfs):\n df_locals[f\"df{i + 1}\"] = dataframe\n tools = [PythonAstREPLTool(locals=df_locals)]\n prompt = ZeroShotAgent.create_prompt(\n tools, prefix=prefix, suffix=suffix_to_use, input_variables=input_variables\n )\n partial_prompt = prompt.partial()\n if \"dfs_head\" in input_variables:\n dfs_head = \"\\n\\n\".join([d.head().to_markdown() for d in dfs])\n partial_prompt = partial_prompt.partial(num_dfs=str(num_dfs), dfs_head=dfs_head)\n if \"num_dfs\" in input_variables:\n partial_prompt = partial_prompt.partial(num_dfs=str(num_dfs))\n return partial_prompt, tools\ndef _get_single_prompt(\n df: Any,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n if suffix is not None:\n suffix_to_use = suffix\n include_df_head = True\n elif include_df_in_prompt:\n suffix_to_use = SUFFIX_WITH_DF\n include_df_head = True\n else:\n 
suffix_to_use = SUFFIX_NO_DF\n include_df_head = False\n if input_variables is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"} +{"id": "f4b63b4c3b14-2", "text": "include_df_head = False\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]\n if include_df_head:\n input_variables += [\"df_head\"]\n if prefix is None:\n prefix = PREFIX\n tools = [PythonAstREPLTool(locals={\"df\": df})]\n prompt = ZeroShotAgent.create_prompt(\n tools, prefix=prefix, suffix=suffix_to_use, input_variables=input_variables\n )\n partial_prompt = prompt.partial()\n if \"df_head\" in input_variables:\n partial_prompt = partial_prompt.partial(df_head=str(df.head().to_markdown()))\n return partial_prompt, tools\ndef _get_prompt_and_tools(\n df: Any,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n try:\n import pandas as pd\n except ImportError:\n raise ValueError(\n \"pandas package not found, please install with `pip install pandas`\"\n )\n if include_df_in_prompt is not None and suffix is not None:\n raise ValueError(\"If suffix is specified, include_df_in_prompt should not be.\")\n if isinstance(df, list):\n for item in df:\n if not isinstance(item, pd.DataFrame):\n raise ValueError(f\"Expected pandas object, got {type(item)}\")\n return _get_multi_prompt(\n df,\n prefix=prefix,\n suffix=suffix,\n input_variables=input_variables,\n include_df_in_prompt=include_df_in_prompt,\n )\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"}
df,\n prefix=prefix,\n suffix=suffix,\n input_variables=input_variables,\n include_df_in_prompt=include_df_in_prompt,\n )\ndef _get_functions_single_prompt(\n df: Any,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n if suffix is not None:\n suffix_to_use = suffix\n if include_df_in_prompt:\n suffix_to_use = suffix_to_use.format(df_head=str(df.head().to_markdown()))\n elif include_df_in_prompt:\n suffix_to_use = FUNCTIONS_WITH_DF.format(df_head=str(df.head().to_markdown()))\n else:\n suffix_to_use = \"\"\n if prefix is None:\n prefix = PREFIX_FUNCTIONS\n tools = [PythonAstREPLTool(locals={\"df\": df})]\n system_message = SystemMessage(content=prefix + suffix_to_use)\n prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)\n return prompt, tools\ndef _get_functions_multi_prompt(\n dfs: Any,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n if suffix is not None:\n suffix_to_use = suffix\n if include_df_in_prompt:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"} +{"id": "f4b63b4c3b14-4", "text": "suffix_to_use = suffix\n if include_df_in_prompt:\n dfs_head = \"\\n\\n\".join([d.head().to_markdown() for d in dfs])\n suffix_to_use = suffix_to_use.format(\n dfs_head=dfs_head,\n )\n elif include_df_in_prompt:\n dfs_head = \"\\n\\n\".join([d.head().to_markdown() for d in dfs])\n suffix_to_use = FUNCTIONS_WITH_MULTI_DF.format(\n dfs_head=dfs_head,\n )\n else:\n suffix_to_use = \"\"\n if prefix is None:\n prefix = MULTI_DF_PREFIX_FUNCTIONS\n prefix = prefix.format(num_dfs=str(len(dfs)))\n df_locals = {}\n for i, dataframe in enumerate(dfs):\n df_locals[f\"df{i + 1}\"] = dataframe\n tools = [PythonAstREPLTool(locals=df_locals)]\n system_message = 
SystemMessage(content=prefix + suffix_to_use)\n prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)\n return prompt, tools\ndef _get_functions_prompt_and_tools(\n df: Any,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n include_df_in_prompt: Optional[bool] = True,\n) -> Tuple[BasePromptTemplate, List[PythonAstREPLTool]]:\n try:\n import pandas as pd\n except ImportError:\n raise ValueError(\n \"pandas package not found, please install with `pip install pandas`\"\n )\n if input_variables is not None:\n raise ValueError(\"`input_variables` is not supported at the moment.\")\n if include_df_in_prompt is not None and suffix is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"} +{"id": "f4b63b4c3b14-5", "text": "if include_df_in_prompt is not None and suffix is not None:\n raise ValueError(\"If suffix is specified, include_df_in_prompt should not be.\")\n if isinstance(df, list):\n for item in df:\n if not isinstance(item, pd.DataFrame):\n raise ValueError(f\"Expected pandas object, got {type(item)}\")\n return _get_functions_multi_prompt(\n df,\n prefix=prefix,\n suffix=suffix,\n include_df_in_prompt=include_df_in_prompt,\n )\n else:\n if not isinstance(df, pd.DataFrame):\n raise ValueError(f\"Expected pandas object, got {type(df)}\")\n return _get_functions_single_prompt(\n df,\n prefix=prefix,\n suffix=suffix,\n include_df_in_prompt=include_df_in_prompt,\n )\n[docs]def create_pandas_dataframe_agent(\n llm: BaseLanguageModel,\n df: Any,\n agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: Optional[str] = None,\n suffix: Optional[str] = None,\n input_variables: Optional[List[str]] = None,\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = 
None,\n early_stopping_method: str = \"force\",\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n include_df_in_prompt: Optional[bool] = True,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a pandas agent from an LLM and dataframe.\"\"\"\n agent: BaseSingleActionAgent", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"} +{"id": "f4b63b4c3b14-6", "text": "agent: BaseSingleActionAgent\n if agent_type == AgentType.ZERO_SHOT_REACT_DESCRIPTION:\n prompt, tools = _get_prompt_and_tools(\n df,\n prefix=prefix,\n suffix=suffix,\n input_variables=input_variables,\n include_df_in_prompt=include_df_in_prompt,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n callback_manager=callback_manager,\n **kwargs,\n )\n elif agent_type == AgentType.OPENAI_FUNCTIONS:\n _prompt, tools = _get_functions_prompt_and_tools(\n df,\n prefix=prefix,\n suffix=suffix,\n input_variables=input_variables,\n include_df_in_prompt=include_df_in_prompt,\n )\n agent = OpenAIFunctionsAgent(\n llm=llm,\n prompt=_prompt,\n tools=tools,\n callback_manager=callback_manager,\n **kwargs,\n )\n else:\n raise ValueError(f\"Agent type {agent_type} not supported at the moment.\")\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/pandas/base.html"} +{"id": "bf8921b00538-0", "text": "Source code for langchain.agents.agent_toolkits.gmail.toolkit\nfrom __future__ import 
annotations\nfrom typing import TYPE_CHECKING, List\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.gmail.create_draft import GmailCreateDraft\nfrom langchain.tools.gmail.get_message import GmailGetMessage\nfrom langchain.tools.gmail.get_thread import GmailGetThread\nfrom langchain.tools.gmail.search import GmailSearch\nfrom langchain.tools.gmail.send_message import GmailSendMessage\nfrom langchain.tools.gmail.utils import build_resource_service\nif TYPE_CHECKING:\n # This is for linting and IDE typehints\n from googleapiclient.discovery import Resource\nelse:\n try:\n # We do this so pydantic can resolve the types when instantiating\n from googleapiclient.discovery import Resource\n except ImportError:\n pass\nSCOPES = [\"https://mail.google.com/\"]\n[docs]class GmailToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with Gmail.\"\"\"\n api_resource: Resource = Field(default_factory=build_resource_service)\n class Config:\n \"\"\"Pydantic config.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return [\n GmailCreateDraft(api_resource=self.api_resource),\n GmailSendMessage(api_resource=self.api_resource),\n GmailSearch(api_resource=self.api_resource),\n GmailGetMessage(api_resource=self.api_resource),\n GmailGetThread(api_resource=self.api_resource),\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/gmail/toolkit.html"} +{"id": "2a67c55ccbf7-0", "text": "Source code for langchain.agents.agent_toolkits.vectorstore.base\n\"\"\"VectorStore agent.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.vectorstore.prompt import PREFIX, ROUTER_PREFIX\nfrom langchain.agents.agent_toolkits.vectorstore.toolkit import (\n VectorStoreRouterToolkit,\n 
VectorStoreToolkit,\n)\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_vectorstore_agent(\n llm: BaseLanguageModel,\n toolkit: VectorStoreToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = PREFIX,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a vectorstore agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )\n[docs]def create_vectorstore_router_agent(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/base.html"} +{"id": "2a67c55ccbf7-1", "text": ")\n[docs]def create_vectorstore_router_agent(\n llm: BaseLanguageModel,\n toolkit: VectorStoreRouterToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = ROUTER_PREFIX,\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a vectorstore router agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix)\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, 
**kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/base.html"} +{"id": "d503454acc6f-0", "text": "Source code for langchain.agents.agent_toolkits.vectorstore.toolkit\n\"\"\"Toolkit for interacting with a vector store.\"\"\"\nfrom typing import List\nfrom pydantic import BaseModel, Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.llms.openai import OpenAI\nfrom langchain.tools import BaseTool\nfrom langchain.tools.vectorstore.tool import (\n VectorStoreQATool,\n VectorStoreQAWithSourcesTool,\n)\nfrom langchain.vectorstores.base import VectorStore\n[docs]class VectorStoreInfo(BaseModel):\n \"\"\"Information about a vectorstore.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n name: str\n description: str\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs]class VectorStoreToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with a vector store.\"\"\"\n vectorstore_info: VectorStoreInfo = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n description = VectorStoreQATool.get_description(\n self.vectorstore_info.name, self.vectorstore_info.description\n )\n qa_tool = VectorStoreQATool(\n name=self.vectorstore_info.name,\n description=description,\n vectorstore=self.vectorstore_info.vectorstore,\n llm=self.llm,\n )\n description = VectorStoreQAWithSourcesTool.get_description(\n self.vectorstore_info.name, self.vectorstore_info.description\n 
)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/toolkit.html"} +{"id": "d503454acc6f-1", "text": "self.vectorstore_info.name, self.vectorstore_info.description\n )\n qa_with_sources_tool = VectorStoreQAWithSourcesTool(\n name=f\"{self.vectorstore_info.name}_with_sources\",\n description=description,\n vectorstore=self.vectorstore_info.vectorstore,\n llm=self.llm,\n )\n return [qa_tool, qa_with_sources_tool]\n[docs]class VectorStoreRouterToolkit(BaseToolkit):\n \"\"\"Toolkit for routing between vector stores.\"\"\"\n vectorstores: List[VectorStoreInfo] = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n tools: List[BaseTool] = []\n for vectorstore_info in self.vectorstores:\n description = VectorStoreQATool.get_description(\n vectorstore_info.name, vectorstore_info.description\n )\n qa_tool = VectorStoreQATool(\n name=vectorstore_info.name,\n description=description,\n vectorstore=vectorstore_info.vectorstore,\n llm=self.llm,\n )\n tools.append(qa_tool)\n return tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/vectorstore/toolkit.html"} +{"id": "67e62ab43b82-0", "text": "Source code for langchain.agents.agent_toolkits.spark.base\n\"\"\"Agent for working with Spark objects.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.spark.prompt import PREFIX, SUFFIX\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\nfrom langchain.llms.base import BaseLLM\nfrom langchain.tools.python.tool import PythonAstREPLTool\ndef 
_validate_spark_df(df: Any) -> bool:\n try:\n from pyspark.sql import DataFrame as SparkLocalDataFrame\n return isinstance(df, SparkLocalDataFrame)\n except ImportError:\n return False\ndef _validate_spark_connect_df(df: Any) -> bool:\n try:\n from pyspark.sql.connect.dataframe import DataFrame as SparkConnectDataFrame\n return isinstance(df, SparkConnectDataFrame)\n except ImportError:\n return False\n[docs]def create_spark_dataframe_agent(\n llm: BaseLLM,\n df: Any,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a spark agent from an LLM and dataframe.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark/base.html"} +{"id": "67e62ab43b82-1", "text": ") -> AgentExecutor:\n \"\"\"Construct a spark agent from an LLM and dataframe.\"\"\"\n if not _validate_spark_df(df) and not _validate_spark_connect_df(df):\n raise ValueError(\"Spark is not installed. 
Run `pip install pyspark`.\")\n if input_variables is None:\n input_variables = [\"df\", \"input\", \"agent_scratchpad\"]\n tools = [PythonAstREPLTool(locals={\"df\": df})]\n prompt = ZeroShotAgent.create_prompt(\n tools, prefix=prefix, suffix=suffix, input_variables=input_variables\n )\n partial_prompt = prompt.partial(df=str(df.first()))\n llm_chain = LLMChain(\n llm=llm,\n prompt=partial_prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n callback_manager=callback_manager,\n **kwargs,\n )\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark/base.html"} +{"id": "7d822fb2c9ff-0", "text": "Source code for langchain.agents.agent_toolkits.playwright.toolkit\n\"\"\"Playwright web browser toolkit.\"\"\"\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, List, Optional, Type, cast\nfrom pydantic import Extra, root_validator\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.playwright.base import (\n BaseBrowserTool,\n lazy_import_playwright_browsers,\n)\nfrom langchain.tools.playwright.click import ClickTool\nfrom langchain.tools.playwright.current_page import CurrentWebPageTool\nfrom langchain.tools.playwright.extract_hyperlinks import ExtractHyperlinksTool\nfrom langchain.tools.playwright.extract_text import ExtractTextTool\nfrom langchain.tools.playwright.get_elements import GetElementsTool\nfrom langchain.tools.playwright.navigate import NavigateTool\nfrom 
langchain.tools.playwright.navigate_back import NavigateBackTool\nif TYPE_CHECKING:\n from playwright.async_api import Browser as AsyncBrowser\n from playwright.sync_api import Browser as SyncBrowser\nelse:\n try:\n # We do this so pydantic can resolve the types when instantiating\n from playwright.async_api import Browser as AsyncBrowser\n from playwright.sync_api import Browser as SyncBrowser\n except ImportError:\n pass\n[docs]class PlayWrightBrowserToolkit(BaseToolkit):\n \"\"\"Toolkit for web browser tools.\"\"\"\n sync_browser: Optional[\"SyncBrowser\"] = None\n async_browser: Optional[\"AsyncBrowser\"] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator\n def validate_imports_and_browser_provided(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n lazy_import_playwright_browsers()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/playwright/toolkit.html"} +{"id": "7d822fb2c9ff-1", "text": "\"\"\"Check that the arguments are valid.\"\"\"\n lazy_import_playwright_browsers()\n if values.get(\"async_browser\") is None and values.get(\"sync_browser\") is None:\n raise ValueError(\"Either async_browser or sync_browser must be specified.\")\n return values\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n tool_classes: List[Type[BaseBrowserTool]] = [\n ClickTool,\n NavigateTool,\n NavigateBackTool,\n ExtractTextTool,\n ExtractHyperlinksTool,\n GetElementsTool,\n CurrentWebPageTool,\n ]\n tools = [\n tool_cls.from_browser(\n sync_browser=self.sync_browser, async_browser=self.async_browser\n )\n for tool_cls in tool_classes\n ]\n return cast(List[BaseTool], tools)\n[docs] @classmethod\n def from_browser(\n cls,\n sync_browser: Optional[SyncBrowser] = None,\n async_browser: Optional[AsyncBrowser] = None,\n ) -> PlayWrightBrowserToolkit:\n 
\"\"\"Instantiate the toolkit.\"\"\"\n # This is to raise a better error than the forward ref ones Pydantic would have\n lazy_import_playwright_browsers()\n return cls(sync_browser=sync_browser, async_browser=async_browser)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/playwright/toolkit.html"} +{"id": "716e69f1713f-0", "text": "Source code for langchain.agents.agent_toolkits.csv.base\n\"\"\"Agent for working with csvs.\"\"\"\nfrom typing import Any, List, Optional, Union\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent\nfrom langchain.base_language import BaseLanguageModel\n[docs]def create_csv_agent(\n llm: BaseLanguageModel,\n path: Union[str, List[str]],\n pandas_kwargs: Optional[dict] = None,\n **kwargs: Any,\n) -> AgentExecutor:\n \"\"\"Create a csv agent by loading the csv into a dataframe and using the pandas agent.\"\"\"\n try:\n import pandas as pd\n except ImportError:\n raise ValueError(\n \"pandas package not found, please install with `pip install pandas`\"\n )\n _kwargs = pandas_kwargs or {}\n if isinstance(path, str):\n df = pd.read_csv(path, **_kwargs)\n elif isinstance(path, list):\n df = []\n for item in path:\n if not isinstance(item, str):\n raise ValueError(f\"Expected str, got {type(item)}\")\n df.append(pd.read_csv(item, **_kwargs))\n else:\n raise ValueError(f\"Expected str or list, got {type(path)}\")\n return create_pandas_dataframe_agent(llm, df, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/csv/base.html"} +{"id": "5026fcf6dda7-0", "text": "Source code for langchain.agents.agent_toolkits.openapi.base\n\"\"\"OpenAPI spec agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.openapi.prompt import (\n OPENAPI_PREFIX,\n OPENAPI_SUFFIX,\n)\nfrom 
langchain.agents.agent_toolkits.openapi.toolkit import OpenAPIToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_openapi_agent(\n llm: BaseLanguageModel,\n toolkit: OpenAPIToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = OPENAPI_PREFIX,\n suffix: str = OPENAPI_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n max_iterations: Optional[int] = 15,\n max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct an OpenAPI agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/base.html"} +{"id": "5026fcf6dda7-1", "text": "input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/base.html"} +{"id": "e97945b27eb4-0", "text": "Source code for langchain.agents.agent_toolkits.openapi.toolkit\n\"\"\"Requests toolkit.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.agents.agent_toolkits.json.base import create_json_agent\nfrom langchain.agents.agent_toolkits.json.toolkit import JsonToolkit\nfrom langchain.agents.agent_toolkits.openapi.prompt import DESCRIPTION\nfrom langchain.agents.tools import Tool\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.requests import TextRequestsWrapper\nfrom langchain.tools import BaseTool\nfrom langchain.tools.json.tool import JsonSpec\nfrom langchain.tools.requests.tool import (\n RequestsDeleteTool,\n RequestsGetTool,\n RequestsPatchTool,\n RequestsPostTool,\n RequestsPutTool,\n)\nclass RequestsToolkit(BaseToolkit):\n \"\"\"Toolkit for making requests.\"\"\"\n requests_wrapper: TextRequestsWrapper\n def get_tools(self) -> List[BaseTool]:\n \"\"\"Return a list of tools.\"\"\"\n return [\n RequestsGetTool(requests_wrapper=self.requests_wrapper),\n RequestsPostTool(requests_wrapper=self.requests_wrapper),\n RequestsPatchTool(requests_wrapper=self.requests_wrapper),\n RequestsPutTool(requests_wrapper=self.requests_wrapper),\n RequestsDeleteTool(requests_wrapper=self.requests_wrapper),\n ]\n[docs]class OpenAPIToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with an OpenAPI API.\"\"\"\n json_agent: AgentExecutor\n requests_wrapper: TextRequestsWrapper\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n json_agent_tool = Tool(\n name=\"json_explorer\",\n func=self.json_agent.run,\n description=DESCRIPTION,\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/toolkit.html"} +{"id": "e97945b27eb4-1", "text": "func=self.json_agent.run,\n description=DESCRIPTION,\n )\n request_toolkit = RequestsToolkit(requests_wrapper=self.requests_wrapper)\n return [*request_toolkit.get_tools(), json_agent_tool]\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n json_spec: JsonSpec,\n requests_wrapper: TextRequestsWrapper,\n **kwargs: Any,\n ) -> OpenAPIToolkit:\n \"\"\"Create json agent from llm, then initialize.\"\"\"\n json_agent = create_json_agent(llm, JsonToolkit(spec=json_spec), **kwargs)\n return cls(json_agent=json_agent, requests_wrapper=requests_wrapper)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/openapi/toolkit.html"} +{"id": "10f6c29fa3ea-0", "text": "Source code for langchain.agents.agent_toolkits.jira.toolkit\n\"\"\"Jira Toolkit.\"\"\"\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.jira.tool import JiraAction\nfrom langchain.utilities.jira import JiraAPIWrapper\n[docs]class JiraToolkit(BaseToolkit):\n \"\"\"Jira Toolkit.\"\"\"\n tools: List[BaseTool] = []\n[docs] @classmethod\n def from_jira_api_wrapper(cls, jira_api_wrapper: JiraAPIWrapper) -> \"JiraToolkit\":\n actions = jira_api_wrapper.list()\n tools = [\n JiraAction(\n name=action[\"name\"],\n description=action[\"description\"],\n mode=action[\"mode\"],\n api_wrapper=jira_api_wrapper,\n )\n for action in actions\n ]\n return cls(tools=tools)\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return self.tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/jira/toolkit.html"} +{"id": "dbaea494b47d-0", "text": "Source code for langchain.agents.agent_toolkits.azure_cognitive_services.toolkit\nfrom __future__ import 
annotations\nimport sys\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools.azure_cognitive_services import (\n AzureCogsFormRecognizerTool,\n AzureCogsImageAnalysisTool,\n AzureCogsSpeech2TextTool,\n AzureCogsText2SpeechTool,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class AzureCognitiveServicesToolkit(BaseToolkit):\n \"\"\"Toolkit for Azure Cognitive Services.\"\"\"\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n tools = [\n AzureCogsFormRecognizerTool(),\n AzureCogsSpeech2TextTool(),\n AzureCogsText2SpeechTool(),\n ]\n # TODO: Remove check once azure-ai-vision supports MacOS.\n if sys.platform.startswith(\"linux\") or sys.platform.startswith(\"win\"):\n tools.append(AzureCogsImageAnalysisTool())\n return tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/azure_cognitive_services/toolkit.html"} +{"id": "151bff97a57e-0", "text": "Source code for langchain.agents.agent_toolkits.spark_sql.base\n\"\"\"Spark SQL agent.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.agents.agent import AgentExecutor\nfrom langchain.agents.agent_toolkits.spark_sql.prompt import SQL_PREFIX, SQL_SUFFIX\nfrom langchain.agents.agent_toolkits.spark_sql.toolkit import SparkSQLToolkit\nfrom langchain.agents.mrkl.base import ZeroShotAgent\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains.llm import LLMChain\n[docs]def create_spark_sql_agent(\n llm: BaseLanguageModel,\n toolkit: SparkSQLToolkit,\n callback_manager: Optional[BaseCallbackManager] = None,\n prefix: str = SQL_PREFIX,\n suffix: str = SQL_SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n top_k: int = 10,\n max_iterations: Optional[int] = 15,\n 
max_execution_time: Optional[float] = None,\n early_stopping_method: str = \"force\",\n verbose: bool = False,\n agent_executor_kwargs: Optional[Dict[str, Any]] = None,\n **kwargs: Dict[str, Any],\n) -> AgentExecutor:\n \"\"\"Construct a sql agent from an LLM and tools.\"\"\"\n tools = toolkit.get_tools()\n prefix = prefix.format(top_k=top_k)\n prompt = ZeroShotAgent.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark_sql/base.html"} +{"id": "151bff97a57e-1", "text": "llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)\n return AgentExecutor.from_agent_and_tools(\n agent=agent,\n tools=tools,\n callback_manager=callback_manager,\n verbose=verbose,\n max_iterations=max_iterations,\n max_execution_time=max_execution_time,\n early_stopping_method=early_stopping_method,\n **(agent_executor_kwargs or {}),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark_sql/base.html"} +{"id": "0837ef9291ac-0", "text": "Source code for langchain.agents.agent_toolkits.spark_sql.toolkit\n\"\"\"Toolkit for interacting with Spark SQL.\"\"\"\nfrom typing import List\nfrom pydantic import Field\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.tools import BaseTool\nfrom langchain.tools.spark_sql.tool import (\n InfoSparkSQLTool,\n ListSparkSQLTool,\n QueryCheckerTool,\n QuerySparkSQLTool,\n)\nfrom langchain.utilities.spark_sql import SparkSQL\n[docs]class SparkSQLToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with Spark SQL.\"\"\"\n db: SparkSQL = 
Field(exclude=True)\n llm: BaseLanguageModel = Field(exclude=True)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return [\n QuerySparkSQLTool(db=self.db),\n InfoSparkSQLTool(db=self.db),\n ListSparkSQLTool(db=self.db),\n QueryCheckerTool(db=self.db, llm=self.llm),\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/spark_sql/toolkit.html"} +{"id": "143a9d18da7a-0", "text": "Source code for langchain.agents.agent_toolkits.zapier.toolkit\n\"\"\"Zapier Toolkit.\"\"\"\nfrom typing import List\nfrom langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.zapier.tool import ZapierNLARunAction\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n[docs]class ZapierToolkit(BaseToolkit):\n \"\"\"Zapier Toolkit.\"\"\"\n tools: List[BaseTool] = []\n[docs] @classmethod\n def from_zapier_nla_wrapper(\n cls, zapier_nla_wrapper: ZapierNLAWrapper\n ) -> \"ZapierToolkit\":\n \"\"\"Create a toolkit from a ZapierNLAWrapper.\"\"\"\n actions = zapier_nla_wrapper.list()\n tools = [\n ZapierNLARunAction(\n action_id=action[\"id\"],\n zapier_description=action[\"description\"],\n params_schema=action[\"params\"],\n api_wrapper=zapier_nla_wrapper,\n )\n for action in actions\n ]\n return cls(tools=tools)\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n return self.tools", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/zapier/toolkit.html"} +{"id": "de97b485dcdb-0", "text": "Source code for langchain.agents.agent_toolkits.file_management.toolkit\n\"\"\"Toolkit for interacting with the local filesystem.\"\"\"\nfrom __future__ import annotations\nfrom typing import List, Optional\nfrom pydantic import root_validator\nfrom 
langchain.agents.agent_toolkits.base import BaseToolkit\nfrom langchain.tools import BaseTool\nfrom langchain.tools.file_management.copy import CopyFileTool\nfrom langchain.tools.file_management.delete import DeleteFileTool\nfrom langchain.tools.file_management.file_search import FileSearchTool\nfrom langchain.tools.file_management.list_dir import ListDirectoryTool\nfrom langchain.tools.file_management.move import MoveFileTool\nfrom langchain.tools.file_management.read import ReadFileTool\nfrom langchain.tools.file_management.write import WriteFileTool\n_FILE_TOOLS = {\n tool_cls.__fields__[\"name\"].default: tool_cls\n for tool_cls in [\n CopyFileTool,\n DeleteFileTool,\n FileSearchTool,\n MoveFileTool,\n ReadFileTool,\n WriteFileTool,\n ListDirectoryTool,\n ]\n}\n[docs]class FileManagementToolkit(BaseToolkit):\n \"\"\"Toolkit for interacting with local files.\"\"\"\n root_dir: Optional[str] = None\n \"\"\"If specified, all file operations are made relative to root_dir.\"\"\"\n selected_tools: Optional[List[str]] = None\n \"\"\"If provided, only include the selected tools. 
Defaults to all.\"\"\"\n @root_validator\n def validate_tools(cls, values: dict) -> dict:\n selected_tools = values.get(\"selected_tools\") or []\n for tool_name in selected_tools:\n if tool_name not in _FILE_TOOLS:\n raise ValueError(\n f\"File Tool of name {tool_name} not supported.\"\n f\" Permitted tools: {list(_FILE_TOOLS)}\"\n )\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/file_management/toolkit.html"} +{"id": "de97b485dcdb-1", "text": ")\n return values\n[docs] def get_tools(self) -> List[BaseTool]:\n \"\"\"Get the tools in the toolkit.\"\"\"\n allowed_tools = self.selected_tools or _FILE_TOOLS.keys()\n tools: List[BaseTool] = []\n for tool in allowed_tools:\n tool_cls = _FILE_TOOLS[tool]\n tools.append(tool_cls(root_dir=self.root_dir)) # type: ignore\n return tools\n__all__ = [\"FileManagementToolkit\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/agent_toolkits/file_management/toolkit.html"} +{"id": "f799e084c205-0", "text": "Source code for langchain.agents.conversational_chat.base\n\"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence, Tuple\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.conversational_chat.output_parser import ConvoOutputParser\nfrom langchain.agents.conversational_chat.prompt import (\n PREFIX,\n SUFFIX,\n TEMPLATE_TOOL_RESPONSE,\n)\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n MessagesPlaceholder,\n SystemMessagePromptTemplate,\n)\nfrom 
langchain.schema import (\n AgentAction,\n AIMessage,\n BaseMessage,\n BaseOutputParser,\n HumanMessage,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class ConversationalChatAgent(Agent):\n \"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=ConvoOutputParser)\n template_tool_response: str = TEMPLATE_TOOL_RESPONSE\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return ConvoOutputParser()\n @property\n def _agent_type(self) -> str:\n raise NotImplementedError\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"} +{"id": "f799e084c205-1", "text": "return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n super()._validate_tools(tools)\n validate_tools_single_input(cls.__name__, tools)\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n system_message: str = PREFIX,\n human_message: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n output_parser: Optional[BaseOutputParser] = None,\n ) -> BasePromptTemplate:\n tool_strings = \"\\n\".join(\n [f\"> {tool.name}: {tool.description}\" for tool in tools]\n )\n tool_names = \", \".join([tool.name for tool in tools])\n _output_parser = output_parser or cls._get_default_output_parser()\n format_instructions = human_message.format(\n format_instructions=_output_parser.get_format_instructions()\n )\n final_prompt = format_instructions.format(\n tool_names=tool_names, tools=tool_strings\n )\n if input_variables is None:\n input_variables = [\"input\", \"chat_history\", \"agent_scratchpad\"]\n 
messages = [\n SystemMessagePromptTemplate.from_template(system_message),\n MessagesPlaceholder(variable_name=\"chat_history\"),\n HumanMessagePromptTemplate.from_template(final_prompt),\n MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n return ChatPromptTemplate(input_variables=input_variables, messages=messages)\n def _construct_scratchpad(\n self, intermediate_steps: List[Tuple[AgentAction, str]]\n ) -> List[BaseMessage]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"} +{"id": "f799e084c205-2", "text": ") -> List[BaseMessage]:\n \"\"\"Construct the scratchpad that lets the agent continue its thought process.\"\"\"\n thoughts: List[BaseMessage] = []\n for action, observation in intermediate_steps:\n thoughts.append(AIMessage(content=action.log))\n human_message = HumanMessage(\n content=self.template_tool_response.format(observation=observation)\n )\n thoughts.append(human_message)\n return thoughts\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n system_message: str = PREFIX,\n human_message: str = SUFFIX,\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n _output_parser = output_parser or cls._get_default_output_parser()\n prompt = cls.create_prompt(\n tools,\n system_message=system_message,\n human_message=human_message,\n input_variables=input_variables,\n output_parser=_output_parser,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational_chat/base.html"} +{"id": "f95b17fd55b6-0", "text": "Source code for langchain.agents.openai_functions_agent.base\n\"\"\"Module implements an agent that uses OpenAI's APIs function enabled API.\"\"\"\nimport json\nfrom dataclasses import dataclass\nfrom json import JSONDecodeError\nfrom typing import Any, List, Optional, Sequence, Tuple, Union\nfrom pydantic import root_validator\nfrom langchain.agents import BaseSingleActionAgent\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chat_models.openai import ChatOpenAI\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.chat import (\n BaseMessagePromptTemplate,\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n MessagesPlaceholder,\n)\nfrom langchain.schema import (\n AgentAction,\n AgentFinish,\n AIMessage,\n BaseMessage,\n FunctionMessage,\n OutputParserException,\n SystemMessage,\n)\nfrom langchain.tools import BaseTool\nfrom langchain.tools.convert_to_openai import format_tool_to_openai_function\n@dataclass\nclass _FunctionsAgentAction(AgentAction):\n message_log: List[BaseMessage]\ndef _convert_agent_action_to_messages(\n agent_action: AgentAction, observation: str\n) -> List[BaseMessage]:\n \"\"\"Convert an agent action to a message.\n This code is used to reconstruct the original AI message from the agent action.\n Args:\n agent_action: Agent action to convert.\n Returns:\n AIMessage that corresponds to the original tool invocation.\n \"\"\"\n if isinstance(agent_action, _FunctionsAgentAction):\n return agent_action.message_log + [\n _create_function_message(agent_action, observation)\n ]\n else:\n return [AIMessage(content=agent_action.log)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} +{"id": 
"f95b17fd55b6-1", "text": "]\n else:\n return [AIMessage(content=agent_action.log)]\ndef _create_function_message(\n agent_action: AgentAction, observation: str\n) -> FunctionMessage:\n \"\"\"Convert agent action and observation into a function message.\n Args:\n agent_action: the tool invocation request from the agent\n observation: the result of the tool invocation\n Returns:\n FunctionMessage that corresponds to the original tool invocation\n \"\"\"\n if not isinstance(observation, str):\n try:\n content = json.dumps(observation, ensure_ascii=False)\n except Exception:\n content = str(observation)\n else:\n content = observation\n return FunctionMessage(\n name=agent_action.tool,\n content=content,\n )\ndef _format_intermediate_steps(\n intermediate_steps: List[Tuple[AgentAction, str]],\n) -> List[BaseMessage]:\n \"\"\"Format intermediate steps.\n Args:\n intermediate_steps: Steps the LLM has taken to date, along with observations\n Returns:\n list of messages to send to the LLM for the next prediction\n \"\"\"\n messages = []\n for intermediate_step in intermediate_steps:\n agent_action, observation = intermediate_step\n messages.extend(_convert_agent_action_to_messages(agent_action, observation))\n return messages\ndef _parse_ai_message(message: BaseMessage) -> Union[AgentAction, AgentFinish]:\n \"\"\"Parse an AI message.\"\"\"\n if not isinstance(message, AIMessage):\n raise TypeError(f\"Expected an AI message got {type(message)}\")\n function_call = message.additional_kwargs.get(\"function_call\", {})\n if function_call:\n function_call = message.additional_kwargs[\"function_call\"]\n function_name = function_call[\"name\"]\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} +{"id": "f95b17fd55b6-2", "text": "function_name = function_call[\"name\"]\n try:\n _tool_input = json.loads(function_call[\"arguments\"])\n except JSONDecodeError:\n raise OutputParserException(\n f\"Could not parse 
tool input: {function_call} because \"\n f\"the `arguments` is not valid JSON.\"\n )\n # HACK HACK HACK:\n # The code that encodes tool input into Open AI uses a special variable\n # name called `__arg1` to handle old style tools that do not expose a\n # schema and expect a single string argument as an input.\n # We unpack the argument here if it exists.\n # Open AI does not support passing in a JSON array as an argument.\n if \"__arg1\" in _tool_input:\n tool_input = _tool_input[\"__arg1\"]\n else:\n tool_input = _tool_input\n content_msg = \"responded: {content}\\n\" if message.content else \"\\n\"\n return _FunctionsAgentAction(\n tool=function_name,\n tool_input=tool_input,\n log=f\"\\nInvoking: `{function_name}` with `{tool_input}`\\n{content_msg}\\n\",\n message_log=[message],\n )\n return AgentFinish(return_values={\"output\": message.content}, log=message.content)\n[docs]class OpenAIFunctionsAgent(BaseSingleActionAgent):\n \"\"\"An agent driven by OpenAI's function-powered API.\n Args:\n llm: This should be an instance of ChatOpenAI, specifically a model\n that supports using `functions`.\n tools: The tools this agent has access to.\n prompt: The prompt for this agent, should support agent_scratchpad as one\n of the variables. For an easy way to construct this prompt, use", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} +{"id": "f95b17fd55b6-3", "text": "of the variables. 
For an easy way to construct this prompt, use\n `OpenAIFunctionsAgent.create_prompt(...)`\n \"\"\"\n llm: BaseLanguageModel\n tools: Sequence[BaseTool]\n prompt: BasePromptTemplate\n[docs] def get_allowed_tools(self) -> List[str]:\n \"\"\"Get allowed tools.\"\"\"\n return list([t.name for t in self.tools])\n @root_validator\n def validate_llm(cls, values: dict) -> dict:\n if not isinstance(values[\"llm\"], ChatOpenAI):\n raise ValueError(\"Only supported with ChatOpenAI models.\")\n return values\n @root_validator\n def validate_prompt(cls, values: dict) -> dict:\n prompt: BasePromptTemplate = values[\"prompt\"]\n if \"agent_scratchpad\" not in prompt.input_variables:\n raise ValueError(\n \"`agent_scratchpad` should be one of the variables in the prompt, \"\n f\"got {prompt.input_variables}\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Get input keys. Input refers to user input here.\"\"\"\n return [\"input\"]\n @property\n def functions(self) -> List[dict]:\n return [dict(format_tool_to_openai_function(t)) for t in self.tools]\n[docs] def plan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date, along with observations\n **kwargs: User inputs.\n Returns:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} +{"id": "f95b17fd55b6-4", "text": "**kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n agent_scratchpad = _format_intermediate_steps(intermediate_steps)\n selected_inputs = {\n k: kwargs[k] for k in self.prompt.input_variables if k != \"agent_scratchpad\"\n }\n full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)\n prompt = self.prompt.format_prompt(**full_inputs)\n messages = prompt.to_messages()\n predicted_message = 
self.llm.predict_messages(\n messages, functions=self.functions, callbacks=callbacks\n )\n agent_decision = _parse_ai_message(predicted_message)\n return agent_decision\n[docs] async def aplan(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Union[AgentAction, AgentFinish]:\n \"\"\"Given input, decide what to do.\n Args:\n intermediate_steps: Steps the LLM has taken to date,\n along with observations\n **kwargs: User inputs.\n Returns:\n Action specifying what tool to use.\n \"\"\"\n agent_scratchpad = _format_intermediate_steps(intermediate_steps)\n selected_inputs = {\n k: kwargs[k] for k in self.prompt.input_variables if k != \"agent_scratchpad\"\n }\n full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)\n prompt = self.prompt.format_prompt(**full_inputs)\n messages = prompt.to_messages()\n predicted_message = await self.llm.apredict_messages(\n messages, functions=self.functions, callbacks=callbacks\n )\n agent_decision = _parse_ai_message(predicted_message)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} +{"id": "f95b17fd55b6-5", "text": ")\n agent_decision = _parse_ai_message(predicted_message)\n return agent_decision\n[docs] @classmethod\n def create_prompt(\n cls,\n system_message: Optional[SystemMessage] = SystemMessage(\n content=\"You are a helpful AI assistant.\"\n ),\n extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None,\n ) -> BasePromptTemplate:\n \"\"\"Create prompt for this agent.\n Args:\n system_message: Message to use as the system message that will be the\n first in the prompt.\n extra_prompt_messages: Prompt messages that will be placed between the\n system message and the new human input.\n Returns:\n A prompt template to pass into this agent.\n \"\"\"\n _prompts = extra_prompt_messages or []\n messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]\n if 
system_message:\n messages = [system_message]\n else:\n messages = []\n messages.extend(\n [\n *_prompts,\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n )\n return ChatPromptTemplate(messages=messages)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None,\n system_message: Optional[SystemMessage] = SystemMessage(\n content=\"You are a helpful AI assistant.\"\n ),\n **kwargs: Any,\n ) -> BaseSingleActionAgent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} +{"id": "f95b17fd55b6-6", "text": "\"\"\"Construct an agent from an LLM and tools.\"\"\"\n if not isinstance(llm, ChatOpenAI):\n raise ValueError(\"Only supported with ChatOpenAI models.\")\n prompt = cls.create_prompt(\n extra_prompt_messages=extra_prompt_messages,\n system_message=system_message,\n )\n return cls(\n llm=llm,\n prompt=prompt,\n tools=tools,\n callback_manager=callback_manager,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/openai_functions_agent/base.html"} +{"id": "2fa1c3af04b2-0", "text": "Source code for langchain.agents.mrkl.base\n\"\"\"Attempt to implement MRKL systems as described in arxiv.org/pdf/2205.00445.pdf.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Callable, List, NamedTuple, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.mrkl.output_parser import MRKLOutputParser\nfrom langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.agents.tools import 
Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.tools.base import BaseTool\nclass ChainConfig(NamedTuple):\n \"\"\"Configuration for chain to use in MRKL system.\n Args:\n action_name: Name of the action.\n action: Action function to call.\n action_description: Description of the action.\n \"\"\"\n action_name: str\n action: Callable\n action_description: str\n[docs]class ZeroShotAgent(Agent):\n \"\"\"Agent for the MRKL chain.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=MRKLOutputParser)\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return MRKLOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.ZERO_SHOT_REACT_DESCRIPTION\n @property\n def observation_prefix(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} +{"id": "2fa1c3af04b2-1", "text": "@property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n ) -> PromptTemplate:\n \"\"\"Create prompt in the style of the zero shot agent.\n Args:\n tools: List of tools the agent will have access to, used to format the\n prompt.\n prefix: String to put before the list of tools.\n suffix: String to put after the list of tools.\n input_variables: List of input variables the final prompt will 
expect.\n Returns:\n A PromptTemplate with the template assembled from the pieces here.\n \"\"\"\n tool_strings = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(tool_names=tool_names)\n template = \"\\n\\n\".join([prefix, tool_strings, format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"agent_scratchpad\"]\n return PromptTemplate(template=template, input_variables=input_variables)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} +{"id": "2fa1c3af04b2-2", "text": "llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser()\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n output_parser=_output_parser,\n **kwargs,\n )\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n if len(tools) == 0:\n raise ValueError(\n f\"Got no tools for {cls.__name__}. 
At least one tool must be provided.\"\n )\n for tool in tools:\n if tool.description is None:\n raise ValueError(\n f\"Got a tool {tool.name} without a description. For this agent, \"\n f\"a description must always be provided.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} +{"id": "2fa1c3af04b2-3", "text": "f\"a description must always be provided.\"\n )\n super()._validate_tools(tools)\n[docs]class MRKLChain(AgentExecutor):\n \"\"\"Chain that implements the MRKL system.\n Example:\n .. code-block:: python\n from langchain import OpenAI, MRKLChain\n from langchain.chains.mrkl.base import ChainConfig\n llm = OpenAI(temperature=0)\n chains = [...]\n mrkl = MRKLChain.from_chains(llm=llm, chains=chains)\n \"\"\"\n[docs] @classmethod\n def from_chains(\n cls, llm: BaseLanguageModel, chains: List[ChainConfig], **kwargs: Any\n ) -> AgentExecutor:\n \"\"\"User-friendly way to initialize the MRKL chain.\n This is intended to be an easy way to get up and running with the\n MRKL chain.\n Args:\n llm: The LLM to use as the agent LLM.\n chains: The chains the MRKL system has access to.\n **kwargs: parameters to be passed to initialization.\n Returns:\n An initialized MRKL chain.\n Example:\n .. 
code-block:: python\n from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain\n from langchain.chains.mrkl.base import ChainConfig\n llm = OpenAI(temperature=0)\n search = SerpAPIWrapper()\n llm_math_chain = LLMMathChain(llm=llm)\n chains = [\n ChainConfig(\n action_name = \"Search\",\n action=search.search,\n action_description=\"useful for searching\"\n ),\n ChainConfig(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} +{"id": "2fa1c3af04b2-4", "text": "action_description=\"useful for searching\"\n ),\n ChainConfig(\n action_name=\"Calculator\",\n action=llm_math_chain.run,\n action_description=\"useful for doing math\"\n )\n ]\n mrkl = MRKLChain.from_chains(llm, chains)\n \"\"\"\n tools = [\n Tool(\n name=c.action_name,\n func=c.action,\n description=c.action_description,\n )\n for c in chains\n ]\n agent = ZeroShotAgent.from_llm_and_tools(llm, tools)\n return cls(agent=agent, tools=tools, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/mrkl/base.html"} +{"id": "5a55e9747bbc-0", "text": "Source code for langchain.agents.react.base\n\"\"\"Chain that implements the ReAct paper from https://arxiv.org/pdf/2210.03629.pdf.\"\"\"\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.react.output_parser import ReActOutputParser\nfrom langchain.agents.react.textworld_prompt import TEXTWORLD_PROMPT\nfrom langchain.agents.react.wiki_prompt import WIKI_PROMPT\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.tools.base 
import BaseTool\nclass ReActDocstoreAgent(Agent):\n \"\"\"Agent for the ReAct chain.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=ReActOutputParser)\n @classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return ReActOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.REACT_DOCSTORE\n @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Return default prompt.\"\"\"\n return WIKI_PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 2:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"} +{"id": "5a55e9747bbc-1", "text": "super()._validate_tools(tools)\n if len(tools) != 2:\n raise ValueError(f\"Exactly two tools must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Lookup\", \"Search\"}:\n raise ValueError(\n f\"Tool names should be Lookup and Search, got {tool_names}\"\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def _stop(self) -> List[str]:\n return [\"\\nObservation:\"]\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n return \"Thought:\"\nclass DocstoreExplorer:\n \"\"\"Class to assist with exploration of a document store.\"\"\"\n def __init__(self, docstore: Docstore):\n \"\"\"Initialize with a docstore, and set initial document to None.\"\"\"\n self.docstore = docstore\n self.document: Optional[Document] = None\n self.lookup_str = \"\"\n self.lookup_index = 0\n def search(self, term: str) -> str:\n \"\"\"Search for a term in the docstore, and if found save.\"\"\"\n result = self.docstore.search(term)\n if 
isinstance(result, Document):\n self.document = result\n return self._summary\n else:\n self.document = None\n return result\n def lookup(self, term: str) -> str:\n \"\"\"Lookup a term in document (if saved).\"\"\"\n if self.document is None:\n raise ValueError(\"Cannot lookup without a successful search first\")\n if term.lower() != self.lookup_str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"} +{"id": "5a55e9747bbc-2", "text": "if term.lower() != self.lookup_str:\n self.lookup_str = term.lower()\n self.lookup_index = 0\n else:\n self.lookup_index += 1\n lookups = [p for p in self._paragraphs if self.lookup_str in p.lower()]\n if len(lookups) == 0:\n return \"No Results\"\n elif self.lookup_index >= len(lookups):\n return \"No More Results\"\n else:\n result_prefix = f\"(Result {self.lookup_index + 1}/{len(lookups)})\"\n return f\"{result_prefix} {lookups[self.lookup_index]}\"\n @property\n def _summary(self) -> str:\n return self._paragraphs[0]\n @property\n def _paragraphs(self) -> List[str]:\n if self.document is None:\n raise ValueError(\"Cannot get paragraphs without a document\")\n return self.document.page_content.split(\"\\n\\n\")\n[docs]class ReActTextWorldAgent(ReActDocstoreAgent):\n \"\"\"Agent for the ReAct TextWorld chain.\"\"\"\n[docs] @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Return default prompt.\"\"\"\n return TEXTWORLD_PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 1:\n raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Play\"}:\n raise ValueError(f\"Tool name should be Play, got {tool_names}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"} +{"id": 
"5a55e9747bbc-3", "text": "raise ValueError(f\"Tool name should be Play, got {tool_names}\")\n[docs]class ReActChain(AgentExecutor):\n \"\"\"Chain that implements the ReAct paper.\n Example:\n .. code-block:: python\n from langchain import ReActChain, OpenAI\n react = ReAct(llm=OpenAI())\n \"\"\"\n def __init__(self, llm: BaseLanguageModel, docstore: Docstore, **kwargs: Any):\n \"\"\"Initialize with the LLM and a docstore.\"\"\"\n docstore_explorer = DocstoreExplorer(docstore)\n tools = [\n Tool(\n name=\"Search\",\n func=docstore_explorer.search,\n description=\"Search for a term in the docstore.\",\n ),\n Tool(\n name=\"Lookup\",\n func=docstore_explorer.lookup,\n description=\"Lookup a term in the docstore.\",\n ),\n ]\n agent = ReActDocstoreAgent.from_llm_and_tools(llm, tools)\n super().__init__(agent=agent, tools=tools, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/react/base.html"} +{"id": "fdcef5c744a0-0", "text": "Source code for langchain.agents.self_ask_with_search.base\n\"\"\"Chain that does self ask with search.\"\"\"\nfrom typing import Any, Sequence, Union\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentExecutor, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.self_ask_with_search.output_parser import SelfAskOutputParser\nfrom langchain.agents.self_ask_with_search.prompt import PROMPT\nfrom langchain.agents.tools import Tool\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\nfrom langchain.utilities.serpapi import SerpAPIWrapper\nclass SelfAskWithSearchAgent(Agent):\n \"\"\"Agent for the self-ask-with-search paper.\"\"\"\n output_parser: AgentOutputParser = Field(default_factory=SelfAskOutputParser)\n 
@classmethod\n def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:\n return SelfAskOutputParser()\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.SELF_ASK_WITH_SEARCH\n @classmethod\n def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:\n \"\"\"Prompt does not depend on tools.\"\"\"\n return PROMPT\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n validate_tools_single_input(cls.__name__, tools)\n super()._validate_tools(tools)\n if len(tools) != 1:\n raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/self_ask_with_search/base.html"} +{"id": "fdcef5c744a0-1", "text": "raise ValueError(f\"Exactly one tool must be specified, but got {tools}\")\n tool_names = {tool.name for tool in tools}\n if tool_names != {\"Intermediate Answer\"}:\n raise ValueError(\n f\"Tool name should be Intermediate Answer, got {tool_names}\"\n )\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Intermediate answer: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the LLM call with.\"\"\"\n return \"\"\n[docs]class SelfAskWithSearchChain(AgentExecutor):\n \"\"\"Chain that does self ask with search.\n Example:\n .. 
code-block:: python\n from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper\n search_chain = GoogleSerperAPIWrapper()\n self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)\n \"\"\"\n def __init__(\n self,\n llm: BaseLanguageModel,\n search_chain: Union[GoogleSerperAPIWrapper, SerpAPIWrapper],\n **kwargs: Any,\n ):\n \"\"\"Initialize with just an LLM and a search chain.\"\"\"\n search_tool = Tool(\n name=\"Intermediate Answer\",\n func=search_chain.run,\n coroutine=search_chain.arun,\n description=\"Search\",\n )\n agent = SelfAskWithSearchAgent.from_llm_and_tools(llm, [search_tool])\n super().__init__(agent=agent, tools=[search_tool], **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/self_ask_with_search/base.html"} +{"id": "cd82155c77d6-0", "text": "Source code for langchain.agents.conversational.base\n\"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, List, Optional, Sequence\nfrom pydantic import Field\nfrom langchain.agents.agent import Agent, AgentOutputParser\nfrom langchain.agents.agent_types import AgentType\nfrom langchain.agents.conversational.output_parser import ConvoOutputParser\nfrom langchain.agents.conversational.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX\nfrom langchain.agents.utils import validate_tools_single_input\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.tools.base import BaseTool\n[docs]class ConversationalAgent(Agent):\n \"\"\"An agent designed to hold a conversation in addition to using tools.\"\"\"\n ai_prefix: str = \"AI\"\n output_parser: AgentOutputParser = Field(default_factory=ConvoOutputParser)\n @classmethod\n def _get_default_output_parser(\n cls, ai_prefix: str = \"AI\", **kwargs: 
Any\n ) -> AgentOutputParser:\n return ConvoOutputParser(ai_prefix=ai_prefix)\n @property\n def _agent_type(self) -> str:\n \"\"\"Return Identifier of agent type.\"\"\"\n return AgentType.CONVERSATIONAL_REACT_DESCRIPTION\n @property\n def observation_prefix(self) -> str:\n \"\"\"Prefix to append the observation with.\"\"\"\n return \"Observation: \"\n @property\n def llm_prefix(self) -> str:\n \"\"\"Prefix to append the llm call with.\"\"\"\n return \"Thought:\"\n[docs] @classmethod\n def create_prompt(\n cls,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"} +{"id": "cd82155c77d6-1", "text": "[docs] @classmethod\n def create_prompt(\n cls,\n tools: Sequence[BaseTool],\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n ai_prefix: str = \"AI\",\n human_prefix: str = \"Human\",\n input_variables: Optional[List[str]] = None,\n ) -> PromptTemplate:\n \"\"\"Create prompt in the style of the zero shot agent.\n Args:\n tools: List of tools the agent will have access to, used to format the\n prompt.\n prefix: String to put before the list of tools.\n suffix: String to put after the list of tools.\n ai_prefix: String to use before AI output.\n human_prefix: String to use before human output.\n input_variables: List of input variables the final prompt will expect.\n Returns:\n A PromptTemplate with the template assembled from the pieces here.\n \"\"\"\n tool_strings = \"\\n\".join(\n [f\"> {tool.name}: {tool.description}\" for tool in tools]\n )\n tool_names = \", \".join([tool.name for tool in tools])\n format_instructions = format_instructions.format(\n tool_names=tool_names, ai_prefix=ai_prefix, human_prefix=human_prefix\n )\n template = \"\\n\\n\".join([prefix, tool_strings, format_instructions, suffix])\n if input_variables is None:\n input_variables = [\"input\", \"chat_history\", \"agent_scratchpad\"]\n return PromptTemplate(template=template, 
input_variables=input_variables)\n @classmethod\n def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:\n super()._validate_tools(tools)\n validate_tools_single_input(cls.__name__, tools)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"} +{"id": "cd82155c77d6-2", "text": "validate_tools_single_input(cls.__name__, tools)\n[docs] @classmethod\n def from_llm_and_tools(\n cls,\n llm: BaseLanguageModel,\n tools: Sequence[BaseTool],\n callback_manager: Optional[BaseCallbackManager] = None,\n output_parser: Optional[AgentOutputParser] = None,\n prefix: str = PREFIX,\n suffix: str = SUFFIX,\n format_instructions: str = FORMAT_INSTRUCTIONS,\n ai_prefix: str = \"AI\",\n human_prefix: str = \"Human\",\n input_variables: Optional[List[str]] = None,\n **kwargs: Any,\n ) -> Agent:\n \"\"\"Construct an agent from an LLM and tools.\"\"\"\n cls._validate_tools(tools)\n prompt = cls.create_prompt(\n tools,\n ai_prefix=ai_prefix,\n human_prefix=human_prefix,\n prefix=prefix,\n suffix=suffix,\n format_instructions=format_instructions,\n input_variables=input_variables,\n )\n llm_chain = LLMChain(\n llm=llm,\n prompt=prompt,\n callback_manager=callback_manager,\n )\n tool_names = [tool.name for tool in tools]\n _output_parser = output_parser or cls._get_default_output_parser(\n ai_prefix=ai_prefix\n )\n return cls(\n llm_chain=llm_chain,\n allowed_tools=tool_names,\n ai_prefix=ai_prefix,\n output_parser=_output_parser,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/agents/conversational/base.html"} +{"id": "87c035800629-0", "text": "Source code for langchain.chains.loading\n\"\"\"Functionality for loading chains.\"\"\"\nimport json\nfrom pathlib import Path\nfrom typing import Any, Union\nimport yaml\nfrom langchain.chains.api.base import APIChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom 
langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain\nfrom langchain.chains.combine_documents.refine import RefineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.graph_qa.cypher import GraphCypherQAChain\nfrom langchain.chains.hyde.base import HypotheticalDocumentEmbedder\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_bash.base import LLMBashChain\nfrom langchain.chains.llm_checker.base import LLMCheckerChain\nfrom langchain.chains.llm_math.base import LLMMathChain\nfrom langchain.chains.llm_requests import LLMRequestsChain\nfrom langchain.chains.pal.base import PALChain\nfrom langchain.chains.qa_with_sources.base import QAWithSourcesChain\nfrom langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain\nfrom langchain.chains.retrieval_qa.base import RetrievalQA, VectorDBQA\nfrom langchain.chains.sql_database.base import SQLDatabaseChain\nfrom langchain.llms.loading import load_llm, load_llm_from_config\nfrom langchain.prompts.loading import (\n _load_output_parser,\n load_prompt,\n load_prompt_from_config,\n)\nfrom langchain.utilities.loading import try_load_from_hub\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/chains/\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-1", "text": "def _load_llm_chain(config: dict, **kwargs: Any) -> LLMChain:\n \"\"\"Load LLM chain from config dict.\"\"\"\n if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n 
else:\n raise ValueError(\"One of `prompt` or `prompt_path` must be present.\")\n _load_output_parser(config)\n return LLMChain(llm=llm, prompt=prompt, **config)\ndef _load_hyde_chain(config: dict, **kwargs: Any) -> HypotheticalDocumentEmbedder:\n \"\"\"Load hypothetical document embedder chain from config dict.\"\"\"\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"embeddings\" in kwargs:\n embeddings = kwargs.pop(\"embeddings\")\n else:\n raise ValueError(\"`embeddings` must be present.\")\n return HypotheticalDocumentEmbedder(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-2", "text": "return HypotheticalDocumentEmbedder(\n llm_chain=llm_chain, base_embeddings=embeddings, **config\n )\ndef _load_stuff_documents_chain(config: dict, **kwargs: Any) -> StuffDocumentsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n if not isinstance(llm_chain, LLMChain):\n raise ValueError(f\"Expected LLMChain, got {llm_chain}\")\n if \"document_prompt\" in config:\n prompt_config = config.pop(\"document_prompt\")\n document_prompt = load_prompt_from_config(prompt_config)\n elif \"document_prompt_path\" in config:\n document_prompt = load_prompt(config.pop(\"document_prompt_path\"))\n else:\n raise ValueError(\n \"One of `document_prompt` or `document_prompt_path` must be present.\"\n )\n return StuffDocumentsChain(\n llm_chain=llm_chain, document_prompt=document_prompt, 
**config\n )\ndef _load_map_reduce_documents_chain(\n config: dict, **kwargs: Any\n) -> MapReduceDocumentsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-3", "text": "llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n if not isinstance(llm_chain, LLMChain):\n raise ValueError(f\"Expected LLMChain, got {llm_chain}\")\n if \"combine_document_chain\" in config:\n combine_document_chain_config = config.pop(\"combine_document_chain\")\n combine_document_chain = load_chain_from_config(combine_document_chain_config)\n elif \"combine_document_chain_path\" in config:\n combine_document_chain = load_chain(config.pop(\"combine_document_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_document_chain` or \"\n \"`combine_document_chain_path` must be present.\"\n )\n if \"collapse_document_chain\" in config:\n collapse_document_chain_config = config.pop(\"collapse_document_chain\")\n if collapse_document_chain_config is None:\n collapse_document_chain = None\n else:\n collapse_document_chain = load_chain_from_config(\n collapse_document_chain_config\n )\n elif \"collapse_document_chain_path\" in config:\n collapse_document_chain = load_chain(config.pop(\"collapse_document_chain_path\"))\n return MapReduceDocumentsChain(\n llm_chain=llm_chain,\n combine_document_chain=combine_document_chain,\n collapse_document_chain=collapse_document_chain,\n **config,\n )\ndef _load_llm_bash_chain(config: dict, **kwargs: Any) -> LLMBashChain:\n llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = 
load_chain_from_config(llm_chain_config)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-4", "text": "llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # it's here to support old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n if llm_chain:\n return LLMBashChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return LLMBashChain(llm=llm, prompt=prompt, **config)\ndef _load_llm_checker_chain(config: dict, **kwargs: Any) -> LLMCheckerChain:\n if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"}
list_assertions_prompt_config = config.pop(\"list_assertions_prompt\")\n list_assertions_prompt = load_prompt_from_config(list_assertions_prompt_config)\n elif \"list_assertions_prompt_path\" in config:\n list_assertions_prompt = load_prompt(config.pop(\"list_assertions_prompt_path\"))\n if \"check_assertions_prompt\" in config:\n check_assertions_prompt_config = config.pop(\"check_assertions_prompt\")\n check_assertions_prompt = load_prompt_from_config(\n check_assertions_prompt_config\n )\n elif \"check_assertions_prompt_path\" in config:\n check_assertions_prompt = load_prompt(\n config.pop(\"check_assertions_prompt_path\")\n )\n if \"revised_answer_prompt\" in config:\n revised_answer_prompt_config = config.pop(\"revised_answer_prompt\")\n revised_answer_prompt = load_prompt_from_config(revised_answer_prompt_config)\n elif \"revised_answer_prompt_path\" in config:\n revised_answer_prompt = load_prompt(config.pop(\"revised_answer_prompt_path\"))\n return LLMCheckerChain(\n llm=llm,\n create_draft_answer_prompt=create_draft_answer_prompt,\n list_assertions_prompt=list_assertions_prompt,\n check_assertions_prompt=check_assertions_prompt,\n revised_answer_prompt=revised_answer_prompt,\n **config,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-6", "text": "revised_answer_prompt=revised_answer_prompt,\n **config,\n )\ndef _load_llm_math_chain(config: dict, **kwargs: Any) -> LLMMathChain:\n llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # it's here to support 
old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n if llm_chain:\n return LLMMathChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return LLMMathChain(llm=llm, prompt=prompt, **config)\ndef _load_map_rerank_documents_chain(\n config: dict, **kwargs: Any\n) -> MapRerankDocumentsChain:\n if \"llm_chain\" in config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-7", "text": "if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_config` must be present.\")\n return MapRerankDocumentsChain(llm_chain=llm_chain, **config)\ndef _load_pal_chain(config: dict, **kwargs: Any) -> PALChain:\n llm_chain = None\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n # llm attribute is deprecated in favor of llm_chain, here to support old configs\n elif \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n # llm_path attribute is deprecated in favor of llm_chain_path,\n # it's here to support old configs\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = 
load_prompt_from_config(prompt_config)\n elif \"prompt_path\" in config:\n prompt = load_prompt(config.pop(\"prompt_path\"))\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-8", "text": "prompt = load_prompt(config.pop(\"prompt_path\"))\n else:\n raise ValueError(\"One of `prompt` or `prompt_path` must be present.\")\n if llm_chain:\n return PALChain(llm_chain=llm_chain, prompt=prompt, **config)\n else:\n return PALChain(llm=llm, prompt=prompt, **config)\ndef _load_refine_documents_chain(config: dict, **kwargs: Any) -> RefineDocumentsChain:\n if \"initial_llm_chain\" in config:\n initial_llm_chain_config = config.pop(\"initial_llm_chain\")\n initial_llm_chain = load_chain_from_config(initial_llm_chain_config)\n elif \"initial_llm_chain_path\" in config:\n initial_llm_chain = load_chain(config.pop(\"initial_llm_chain_path\"))\n else:\n raise ValueError(\n \"One of `initial_llm_chain` or `initial_llm_chain_config` must be present.\"\n )\n if \"refine_llm_chain\" in config:\n refine_llm_chain_config = config.pop(\"refine_llm_chain\")\n refine_llm_chain = load_chain_from_config(refine_llm_chain_config)\n elif \"refine_llm_chain_path\" in config:\n refine_llm_chain = load_chain(config.pop(\"refine_llm_chain_path\"))\n else:\n raise ValueError(\n \"One of `refine_llm_chain` or `refine_llm_chain_config` must be present.\"\n )\n if \"document_prompt\" in config:\n prompt_config = config.pop(\"document_prompt\")\n document_prompt = load_prompt_from_config(prompt_config)\n elif \"document_prompt_path\" in config:\n document_prompt = load_prompt(config.pop(\"document_prompt_path\"))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-9", "text": "document_prompt = load_prompt(config.pop(\"document_prompt_path\"))\n return RefineDocumentsChain(\n initial_llm_chain=initial_llm_chain,\n refine_llm_chain=refine_llm_chain,\n 
document_prompt=document_prompt,\n **config,\n )\ndef _load_qa_with_sources_chain(config: dict, **kwargs: Any) -> QAWithSourcesChain:\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return QAWithSourcesChain(combine_documents_chain=combine_documents_chain, **config)\ndef _load_sql_database_chain(config: dict, **kwargs: Any) -> SQLDatabaseChain:\n if \"database\" in kwargs:\n database = kwargs.pop(\"database\")\n else:\n raise ValueError(\"`database` must be present.\")\n if \"llm\" in config:\n llm_config = config.pop(\"llm\")\n llm = load_llm_from_config(llm_config)\n elif \"llm_path\" in config:\n llm = load_llm(config.pop(\"llm_path\"))\n else:\n raise ValueError(\"One of `llm` or `llm_path` must be present.\")\n if \"prompt\" in config:\n prompt_config = config.pop(\"prompt\")\n prompt = load_prompt_from_config(prompt_config)\n else:\n prompt = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-10", "text": "prompt = load_prompt_from_config(prompt_config)\n else:\n prompt = None\n return SQLDatabaseChain.from_llm(llm, database, prompt=prompt, **config)\ndef _load_vector_db_qa_with_sources_chain(\n config: dict, **kwargs: Any\n) -> VectorDBQAWithSourcesChain:\n if \"vectorstore\" in kwargs:\n vectorstore = kwargs.pop(\"vectorstore\")\n else:\n raise ValueError(\"`vectorstore` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = 
load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return VectorDBQAWithSourcesChain(\n combine_documents_chain=combine_documents_chain,\n vectorstore=vectorstore,\n **config,\n )\ndef _load_retrieval_qa(config: dict, **kwargs: Any) -> RetrievalQA:\n if \"retriever\" in kwargs:\n retriever = kwargs.pop(\"retriever\")\n else:\n raise ValueError(\"`retriever` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-11", "text": "else:\n raise ValueError(\n \"One of `combine_documents_chain` or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return RetrievalQA(\n combine_documents_chain=combine_documents_chain,\n retriever=retriever,\n **config,\n )\ndef _load_vector_db_qa(config: dict, **kwargs: Any) -> VectorDBQA:\n if \"vectorstore\" in kwargs:\n vectorstore = kwargs.pop(\"vectorstore\")\n else:\n raise ValueError(\"`vectorstore` must be present.\")\n if \"combine_documents_chain\" in config:\n combine_documents_chain_config = config.pop(\"combine_documents_chain\")\n combine_documents_chain = load_chain_from_config(combine_documents_chain_config)\n elif \"combine_documents_chain_path\" in config:\n combine_documents_chain = load_chain(config.pop(\"combine_documents_chain_path\"))\n else:\n raise ValueError(\n \"One of `combine_documents_chain` 
or \"\n \"`combine_documents_chain_path` must be present.\"\n )\n return VectorDBQA(\n combine_documents_chain=combine_documents_chain,\n vectorstore=vectorstore,\n **config,\n )\ndef _load_graph_cypher_chain(config: dict, **kwargs: Any) -> GraphCypherQAChain:\n if \"graph\" in kwargs:\n graph = kwargs.pop(\"graph\")\n else:\n raise ValueError(\"`graph` must be present.\")\n if \"cypher_generation_chain\" in config:\n cypher_generation_chain_config = config.pop(\"cypher_generation_chain\")\n cypher_generation_chain = load_chain_from_config(cypher_generation_chain_config)\n else:\n raise ValueError(\"`cypher_generation_chain` must be present.\")\n if \"qa_chain\" in config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-12", "text": "if \"qa_chain\" in config:\n qa_chain_config = config.pop(\"qa_chain\")\n qa_chain = load_chain_from_config(qa_chain_config)\n else:\n raise ValueError(\"`qa_chain` must be present.\")\n return GraphCypherQAChain(\n graph=graph,\n cypher_generation_chain=cypher_generation_chain,\n qa_chain=qa_chain,\n **config,\n )\ndef _load_api_chain(config: dict, **kwargs: Any) -> APIChain:\n if \"api_request_chain\" in config:\n api_request_chain_config = config.pop(\"api_request_chain\")\n api_request_chain = load_chain_from_config(api_request_chain_config)\n elif \"api_request_chain_path\" in config:\n api_request_chain = load_chain(config.pop(\"api_request_chain_path\"))\n else:\n raise ValueError(\n \"One of `api_request_chain` or `api_request_chain_path` must be present.\"\n )\n if \"api_answer_chain\" in config:\n api_answer_chain_config = config.pop(\"api_answer_chain\")\n api_answer_chain = load_chain_from_config(api_answer_chain_config)\n elif \"api_answer_chain_path\" in config:\n api_answer_chain = load_chain(config.pop(\"api_answer_chain_path\"))\n else:\n raise ValueError(\n \"One of `api_answer_chain` or `api_answer_chain_path` must be present.\"\n )\n if 
\"requests_wrapper\" in kwargs:\n requests_wrapper = kwargs.pop(\"requests_wrapper\")\n else:\n raise ValueError(\"`requests_wrapper` must be present.\")\n return APIChain(\n api_request_chain=api_request_chain,\n api_answer_chain=api_answer_chain,\n requests_wrapper=requests_wrapper,\n **config,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-13", "text": "requests_wrapper=requests_wrapper,\n **config,\n )\ndef _load_llm_requests_chain(config: dict, **kwargs: Any) -> LLMRequestsChain:\n if \"llm_chain\" in config:\n llm_chain_config = config.pop(\"llm_chain\")\n llm_chain = load_chain_from_config(llm_chain_config)\n elif \"llm_chain_path\" in config:\n llm_chain = load_chain(config.pop(\"llm_chain_path\"))\n else:\n raise ValueError(\"One of `llm_chain` or `llm_chain_path` must be present.\")\n if \"requests_wrapper\" in kwargs:\n requests_wrapper = kwargs.pop(\"requests_wrapper\")\n return LLMRequestsChain(\n llm_chain=llm_chain, requests_wrapper=requests_wrapper, **config\n )\n else:\n return LLMRequestsChain(llm_chain=llm_chain, **config)\ntype_to_loader_dict = {\n \"api_chain\": _load_api_chain,\n \"hyde_chain\": _load_hyde_chain,\n \"llm_chain\": _load_llm_chain,\n \"llm_bash_chain\": _load_llm_bash_chain,\n \"llm_checker_chain\": _load_llm_checker_chain,\n \"llm_math_chain\": _load_llm_math_chain,\n \"llm_requests_chain\": _load_llm_requests_chain,\n \"pal_chain\": _load_pal_chain,\n \"qa_with_sources_chain\": _load_qa_with_sources_chain,\n \"stuff_documents_chain\": _load_stuff_documents_chain,\n \"map_reduce_documents_chain\": _load_map_reduce_documents_chain,\n \"map_rerank_documents_chain\": _load_map_rerank_documents_chain,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-14", "text": "\"map_rerank_documents_chain\": _load_map_rerank_documents_chain,\n \"refine_documents_chain\": _load_refine_documents_chain,\n 
\"sql_database_chain\": _load_sql_database_chain,\n \"vector_db_qa_with_sources_chain\": _load_vector_db_qa_with_sources_chain,\n \"vector_db_qa\": _load_vector_db_qa,\n \"retrieval_qa\": _load_retrieval_qa,\n \"graph_cypher_chain\": _load_graph_cypher_chain,\n}\ndef load_chain_from_config(config: dict, **kwargs: Any) -> Chain:\n \"\"\"Load chain from Config Dict.\"\"\"\n if \"_type\" not in config:\n raise ValueError(\"Must specify a chain Type in config\")\n config_type = config.pop(\"_type\")\n if config_type not in type_to_loader_dict:\n raise ValueError(f\"Loading {config_type} chain not supported\")\n chain_loader = type_to_loader_dict[config_type]\n return chain_loader(config, **kwargs)\n[docs]def load_chain(path: Union[str, Path], **kwargs: Any) -> Chain:\n \"\"\"Unified method for loading a chain from LangChainHub or local fs.\"\"\"\n if hub_result := try_load_from_hub(\n path, _load_chain_from_file, \"chains\", {\"json\", \"yaml\"}, **kwargs\n ):\n return hub_result\n else:\n return _load_chain_from_file(path, **kwargs)\ndef _load_chain_from_file(file: Union[str, Path], **kwargs: Any) -> Chain:\n \"\"\"Load chain from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "87c035800629-15", "text": "else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n else:\n raise ValueError(\"File type must be json or yaml\")\n # Override default 'verbose' and 'memory' for the chain\n if \"verbose\" in kwargs:\n config[\"verbose\"] = kwargs.pop(\"verbose\")\n if \"memory\" in kwargs:\n config[\"memory\"] = kwargs.pop(\"memory\")\n # Load the chain from the config 
now.\n return load_chain_from_config(config, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/loading.html"} +{"id": "a750a2cef08a-0", "text": "Source code for langchain.chains.llm\n\"\"\"Chain that just formats a prompt and calls an LLM.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Union\nfrom pydantic import Extra, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManager,\n AsyncCallbackManagerForChainRun,\n CallbackManager,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.input import get_colored_text\nfrom langchain.load.dump import dumpd\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n BaseLLMOutputParser,\n LLMResult,\n NoOpOutputParser,\n PromptValue,\n)\n[docs]class LLMChain(Chain):\n \"\"\"Chain to run queries against LLMs.\n Example:\n .. 
code-block:: python\n from langchain import LLMChain, OpenAI, PromptTemplate\n prompt_template = \"Tell me a {adjective} joke\"\n prompt = PromptTemplate(\n input_variables=[\"adjective\"], template=prompt_template\n )\n llm = LLMChain(llm=OpenAI(), prompt=prompt)\n \"\"\"\n @property\n def lc_serializable(self) -> bool:\n return True\n prompt: BasePromptTemplate\n \"\"\"Prompt object to use.\"\"\"\n llm: BaseLanguageModel\n \"\"\"Language model to call.\"\"\"\n output_key: str = \"text\" #: :meta private:\n output_parser: BaseLLMOutputParser = Field(default_factory=NoOpOutputParser)\n \"\"\"Output parser to use.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} +{"id": "a750a2cef08a-1", "text": "\"\"\"Output parser to use.\n Defaults to one that takes the most likely string but does not change it \n otherwise.\"\"\"\n return_final_only: bool = True\n \"\"\"Whether to return only the final parsed result. Defaults to True.\n If false, will return a bunch of extra information about the generation.\"\"\"\n llm_kwargs: dict = Field(default_factory=dict)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the prompt expects.\n :meta private:\n \"\"\"\n return self.prompt.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.\n :meta private:\n \"\"\"\n if self.return_final_only:\n return [self.output_key]\n else:\n return [self.output_key, \"full_generation\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n response = self.generate([inputs], run_manager=run_manager)\n return self.create_outputs(response)[0]\n[docs] def generate(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> 
LLMResult:\n \"\"\"Generate LLM result from inputs.\"\"\"\n prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)\n return self.llm.generate_prompt(\n prompts,\n stop,\n callbacks=run_manager.get_child() if run_manager else None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} +{"id": "a750a2cef08a-2", "text": "stop,\n callbacks=run_manager.get_child() if run_manager else None,\n **self.llm_kwargs,\n )\n[docs] async def agenerate(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> LLMResult:\n \"\"\"Generate LLM result from inputs.\"\"\"\n prompts, stop = await self.aprep_prompts(input_list, run_manager=run_manager)\n return await self.llm.agenerate_prompt(\n prompts,\n stop,\n callbacks=run_manager.get_child() if run_manager else None,\n **self.llm_kwargs,\n )\n[docs] def prep_prompts(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Tuple[List[PromptValue], Optional[List[str]]]:\n \"\"\"Prepare prompts from inputs.\"\"\"\n stop = None\n if \"stop\" in input_list[0]:\n stop = input_list[0][\"stop\"]\n prompts = []\n for inputs in input_list:\n selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}\n prompt = self.prompt.format_prompt(**selected_inputs)\n _colored_text = get_colored_text(prompt.to_string(), \"green\")\n _text = \"Prompt after formatting:\\n\" + _colored_text\n if run_manager:\n run_manager.on_text(_text, end=\"\\n\", verbose=self.verbose)\n if \"stop\" in inputs and inputs[\"stop\"] != stop:\n raise ValueError(\n \"If `stop` is present in any inputs, should be present in all.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} +{"id": "a750a2cef08a-3", "text": ")\n prompts.append(prompt)\n return prompts, stop\n[docs] async def aprep_prompts(\n self,\n input_list: List[Dict[str, Any]],\n run_manager: 
Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Tuple[List[PromptValue], Optional[List[str]]]:\n \"\"\"Prepare prompts from inputs.\"\"\"\n stop = None\n if \"stop\" in input_list[0]:\n stop = input_list[0][\"stop\"]\n prompts = []\n for inputs in input_list:\n selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}\n prompt = self.prompt.format_prompt(**selected_inputs)\n _colored_text = get_colored_text(prompt.to_string(), \"green\")\n _text = \"Prompt after formatting:\\n\" + _colored_text\n if run_manager:\n await run_manager.on_text(_text, end=\"\\n\", verbose=self.verbose)\n if \"stop\" in inputs and inputs[\"stop\"] != stop:\n raise ValueError(\n \"If `stop` is present in any inputs, should be present in all.\"\n )\n prompts.append(prompt)\n return prompts, stop\n[docs] def apply(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> List[Dict[str, str]]:\n \"\"\"Utilize the LLM generate method for speed gains.\"\"\"\n callback_manager = CallbackManager.configure(\n callbacks, self.callbacks, self.verbose\n )\n run_manager = callback_manager.on_chain_start(\n dumpd(self),\n {\"input_list\": input_list},\n )\n try:\n response = self.generate(input_list, run_manager=run_manager)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} +{"id": "a750a2cef08a-4", "text": "try:\n response = self.generate(input_list, run_manager=run_manager)\n except (KeyboardInterrupt, Exception) as e:\n run_manager.on_chain_error(e)\n raise e\n outputs = self.create_outputs(response)\n run_manager.on_chain_end({\"outputs\": outputs})\n return outputs\n[docs] async def aapply(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> List[Dict[str, str]]:\n \"\"\"Utilize the LLM generate method for speed gains.\"\"\"\n callback_manager = AsyncCallbackManager.configure(\n callbacks, self.callbacks, self.verbose\n )\n run_manager = await callback_manager.on_chain_start(\n 
dumpd(self),\n {\"input_list\": input_list},\n )\n try:\n response = await self.agenerate(input_list, run_manager=run_manager)\n except (KeyboardInterrupt, Exception) as e:\n await run_manager.on_chain_error(e)\n raise e\n outputs = self.create_outputs(response)\n await run_manager.on_chain_end({\"outputs\": outputs})\n return outputs\n @property\n def _run_output_key(self) -> str:\n return self.output_key\n[docs] def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:\n \"\"\"Create outputs from response.\"\"\"\n result = [\n # Get the text of the top generated string.\n {\n self.output_key: self.output_parser.parse_result(generation),\n \"full_generation\": generation,\n }\n for generation in llm_result.generations\n ]\n if self.return_final_only:\n result = [{self.output_key: r[self.output_key]} for r in result]\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} +{"id": "a750a2cef08a-5", "text": "return result\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n response = await self.agenerate([inputs], run_manager=run_manager)\n return self.create_outputs(response)[0]\n[docs] def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:\n \"\"\"Format prompt with kwargs and pass to LLM.\n Args:\n callbacks: Callbacks to pass to LLMChain\n **kwargs: Keys to pass to prompt template.\n Returns:\n Completion from LLM.\n Example:\n .. code-block:: python\n completion = llm.predict(adjective=\"funny\")\n \"\"\"\n return self(kwargs, callbacks=callbacks)[self.output_key]\n[docs] async def apredict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:\n \"\"\"Format prompt with kwargs and pass to LLM.\n Args:\n callbacks: Callbacks to pass to LLMChain\n **kwargs: Keys to pass to prompt template.\n Returns:\n Completion from LLM.\n Example:\n .. 
code-block:: python\n completion = llm.predict(adjective=\"funny\")\n \"\"\"\n return (await self.acall(kwargs, callbacks=callbacks))[self.output_key]\n[docs] def predict_and_parse(\n self, callbacks: Callbacks = None, **kwargs: Any\n ) -> Union[str, List[str], Dict[str, Any]]:\n \"\"\"Call predict and then parse the results.\"\"\"\n warnings.warn(\n \"The predict_and_parse method is deprecated, \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} +{"id": "a750a2cef08a-6", "text": "warnings.warn(\n \"The predict_and_parse method is deprecated, \"\n \"instead pass an output parser directly to LLMChain.\"\n )\n result = self.predict(callbacks=callbacks, **kwargs)\n if self.prompt.output_parser is not None:\n return self.prompt.output_parser.parse(result)\n else:\n return result\n[docs] async def apredict_and_parse(\n self, callbacks: Callbacks = None, **kwargs: Any\n ) -> Union[str, List[str], Dict[str, str]]:\n \"\"\"Call apredict and then parse the results.\"\"\"\n warnings.warn(\n \"The apredict_and_parse method is deprecated, \"\n \"instead pass an output parser directly to LLMChain.\"\n )\n result = await self.apredict(callbacks=callbacks, **kwargs)\n if self.prompt.output_parser is not None:\n return self.prompt.output_parser.parse(result)\n else:\n return result\n[docs] def apply_and_parse(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n \"\"\"Call apply and then parse the results.\"\"\"\n warnings.warn(\n \"The apply_and_parse method is deprecated, \"\n \"instead pass an output parser directly to LLMChain.\"\n )\n result = self.apply(input_list, callbacks=callbacks)\n return self._parse_generation(result)\n def _parse_generation(\n self, generation: List[Dict[str, str]]\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n if self.prompt.output_parser is not None:\n return [\n self.prompt.output_parser.parse(res[self.output_key])\n 
for res in generation", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} +{"id": "a750a2cef08a-7", "text": "self.prompt.output_parser.parse(res[self.output_key])\n for res in generation\n ]\n else:\n return generation\n[docs] async def aapply_and_parse(\n self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None\n ) -> Sequence[Union[str, List[str], Dict[str, str]]]:\n \"\"\"Call apply and then parse the results.\"\"\"\n warnings.warn(\n \"The aapply_and_parse method is deprecated, \"\n \"instead pass an output parser directly to LLMChain.\"\n )\n result = await self.aapply(input_list, callbacks=callbacks)\n return self._parse_generation(result)\n @property\n def _chain_type(self) -> str:\n return \"llm_chain\"\n[docs] @classmethod\n def from_string(cls, llm: BaseLanguageModel, template: str) -> LLMChain:\n \"\"\"Create LLMChain from LLM and template.\"\"\"\n prompt_template = PromptTemplate.from_template(template)\n return cls(llm=llm, prompt=prompt_template)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html"} +{"id": "682a16784371-0", "text": "Source code for langchain.chains.moderation\n\"\"\"Pass input through a moderation endpoint.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.utils import get_from_dict_or_env\n[docs]class OpenAIModerationChain(Chain):\n \"\"\"Pass input through a moderation endpoint.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
code-block:: python\n from langchain.chains import OpenAIModerationChain\n moderation = OpenAIModerationChain()\n \"\"\"\n client: Any #: :meta private:\n model_name: Optional[str] = None\n \"\"\"Moderation model name to use.\"\"\"\n error: bool = False\n \"\"\"Whether or not to error if bad content was found.\"\"\"\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n openai_api_key: Optional[str] = None\n openai_organization: Optional[str] = None\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n openai_api_key = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n openai_organization = get_from_dict_or_env(\n values,\n \"openai_organization\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/moderation.html"} +{"id": "682a16784371-1", "text": "values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n openai.api_key = openai_api_key\n if openai_organization:\n openai.organization = openai_organization\n values[\"client\"] = openai.Moderation\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. 
\"\n \"Please install it with `pip install openai`.\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _moderate(self, text: str, results: dict) -> str:\n if results[\"flagged\"]:\n error_str = \"Text was found that violates OpenAI's content policy.\"\n if self.error:\n raise ValueError(error_str)\n else:\n return error_str\n return text\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n text = inputs[self.input_key]\n results = self.client.create(text)\n output = self._moderate(text, results[\"results\"][0])\n return {self.output_key: output}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/moderation.html"} +{"id": "88af0da816bd-0", "text": "Source code for langchain.chains.transform\n\"\"\"Chain that runs an arbitrary python function.\"\"\"\nfrom typing import Callable, Dict, List, Optional\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\n[docs]class TransformChain(Chain):\n \"\"\"Chain that transforms chain output.\n Example:\n .. 
code-block:: python\n from langchain import TransformChain\n transform_chain = TransformChain(input_variables=[\"text\"],\n output_variables=[\"entities\"], transform=func)\n \"\"\"\n input_variables: List[str]\n output_variables: List[str]\n transform: Callable[[Dict[str, str]], Dict[str, str]]\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input keys.\n :meta private:\n \"\"\"\n return self.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output keys.\n :meta private:\n \"\"\"\n return self.output_variables\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n return self.transform(inputs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/transform.html"} +{"id": "c40598880724-0", "text": "Source code for langchain.chains.sequential\n\"\"\"Chain pipeline where the outputs of one step feed directly into next.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.input import get_color_mapping\n[docs]class SequentialChain(Chain):\n \"\"\"Chain where the outputs of one chain feed directly into next.\"\"\"\n chains: List[Chain]\n input_variables: List[str]\n output_variables: List[str] #: :meta private:\n return_all: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return expected input keys to the chain.\n :meta private:\n \"\"\"\n return self.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return self.output_variables\n @root_validator(pre=True)\n def validate_chains(cls, values: Dict) 
-> Dict:\n \"\"\"Validate that the correct inputs exist for all chains.\"\"\"\n chains = values[\"chains\"]\n input_variables = values[\"input_variables\"]\n memory_keys = list()\n if \"memory\" in values and values[\"memory\"] is not None:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n memory_keys = values[\"memory\"].memory_variables\n if set(input_variables).intersection(set(memory_keys)):\n overlapping_keys = set(input_variables) & set(memory_keys)\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} +{"id": "c40598880724-1", "text": "overlapping_keys = set(input_variables) & set(memory_keys)\n raise ValueError(\n f\"The input key(s) {', '.join(overlapping_keys)} are found \"\n f\"in the Memory keys ({memory_keys}) - please use input and \"\n f\"memory keys that don't overlap.\"\n )\n known_variables = set(input_variables + memory_keys)\n for chain in chains:\n missing_vars = set(chain.input_keys).difference(known_variables)\n if missing_vars:\n raise ValueError(\n f\"Missing required input keys: {missing_vars}, \"\n f\"only had {known_variables}\"\n )\n overlapping_keys = known_variables.intersection(chain.output_keys)\n if overlapping_keys:\n raise ValueError(\n f\"Chain returned keys that already exist: {overlapping_keys}\"\n )\n known_variables |= set(chain.output_keys)\n if \"output_variables\" not in values:\n if values.get(\"return_all\", False):\n output_keys = known_variables.difference(input_variables)\n else:\n output_keys = chains[-1].output_keys\n values[\"output_variables\"] = output_keys\n else:\n missing_vars = set(values[\"output_variables\"]).difference(known_variables)\n if missing_vars:\n raise ValueError(\n f\"Expected output variables that were not found: {missing_vars}.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n known_values = 
inputs.copy()\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n for i, chain in enumerate(self.chains):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} +{"id": "c40598880724-2", "text": "for i, chain in enumerate(self.chains):\n callbacks = _run_manager.get_child()\n outputs = chain(known_values, return_only_outputs=True, callbacks=callbacks)\n known_values.update(outputs)\n return {k: known_values[k] for k in self.output_variables}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n known_values = inputs.copy()\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n for i, chain in enumerate(self.chains):\n outputs = await chain.acall(\n known_values, return_only_outputs=True, callbacks=callbacks\n )\n known_values.update(outputs)\n return {k: known_values[k] for k in self.output_variables}\n[docs]class SimpleSequentialChain(Chain):\n \"\"\"Simple chain where the outputs of one step feed directly into next.\"\"\"\n chains: List[Chain]\n strip_outputs: bool = False\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n @root_validator()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} +{"id": "c40598880724-3", "text": "\"\"\"\n return [self.output_key]\n @root_validator()\n def validate_chains(cls, values: Dict) -> Dict:\n \"\"\"Validate that chains are all 
single input/output.\"\"\"\n for chain in values[\"chains\"]:\n if len(chain.input_keys) != 1:\n raise ValueError(\n \"Chains used in SimpleSequentialChain should all have one input, got \"\n f\"{chain} with {len(chain.input_keys)} inputs.\"\n )\n if len(chain.output_keys) != 1:\n raise ValueError(\n \"Chains used in SimpleSequentialChain should all have one output, got \"\n f\"{chain} with {len(chain.output_keys)} outputs.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _input = inputs[self.input_key]\n color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])\n for i, chain in enumerate(self.chains):\n _input = chain.run(_input, callbacks=_run_manager.get_child(f\"step_{i+1}\"))\n if self.strip_outputs:\n _input = _input.strip()\n _run_manager.on_text(\n _input, color=color_mapping[str(i)], end=\"\\n\", verbose=self.verbose\n )\n return {self.output_key: _input}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} +{"id": "c40598880724-4", "text": ") -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n _input = inputs[self.input_key]\n color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])\n for i, chain in enumerate(self.chains):\n _input = await chain.arun(_input, callbacks=callbacks)\n if self.strip_outputs:\n _input = _input.strip()\n await _run_manager.on_text(\n _input, color=color_mapping[str(i)], end=\"\\n\", verbose=self.verbose\n )\n return {self.output_key: _input}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sequential.html"} +{"id": 
"1b5463676e5e-0", "text": "Source code for langchain.chains.llm_requests\n\"\"\"Chain that hits a URL and then uses an LLM to parse results.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains import LLMChain\nfrom langchain.chains.base import Chain\nfrom langchain.requests import TextRequestsWrapper\nDEFAULT_HEADERS = {\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36\" # noqa: E501\n}\n[docs]class LLMRequestsChain(Chain):\n \"\"\"Chain that hits a URL and then uses an LLM to parse results.\"\"\"\n llm_chain: LLMChain\n requests_wrapper: TextRequestsWrapper = Field(\n default_factory=lambda: TextRequestsWrapper(headers=DEFAULT_HEADERS),\n exclude=True,\n )\n text_length: int = 8000\n requests_key: str = \"requests_result\" #: :meta private:\n input_key: str = \"url\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the prompt expects.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.\n :meta private:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html"} +{"id": "1b5463676e5e-1", "text": "\"\"\"Will always return text key.\n :meta private:\n \"\"\"\n return [self.output_key]\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError:\n raise ValueError(\n \"Could not import bs4 
python package. \"\n \"Please install it with `pip install bs4`.\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n from bs4 import BeautifulSoup\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n url = inputs[self.input_key]\n res = self.requests_wrapper.get(url)\n # extract the text from the html\n soup = BeautifulSoup(res, \"html.parser\")\n other_keys[self.requests_key] = soup.get_text()[: self.text_length]\n result = self.llm_chain.predict(\n callbacks=_run_manager.get_child(), **other_keys\n )\n return {self.output_key: result}\n @property\n def _chain_type(self) -> str:\n return \"llm_requests_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_requests.html"} +{"id": "8a832ff0b3a4-0", "text": "Source code for langchain.chains.mapreduce\n\"\"\"Map-reduce chain.\nSplits up a document, sends the smaller parts to the LLM with one prompt,\nthen combines the results with another one.\n\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun, Callbacks\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.text_splitter import TextSplitter\n[docs]class MapReduceChain(Chain):\n \"\"\"Map-reduce 
chain.\"\"\"\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine documents.\"\"\"\n text_splitter: TextSplitter\n \"\"\"Text splitter to use.\"\"\"\n input_key: str = \"input_text\" #: :meta private:\n output_key: str = \"output_text\" #: :meta private:\n[docs] @classmethod\n def from_params(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate,\n text_splitter: TextSplitter,\n callbacks: Callbacks = None,\n combine_chain_kwargs: Optional[Mapping[str, Any]] = None,\n reduce_chain_kwargs: Optional[Mapping[str, Any]] = None,\n **kwargs: Any,\n ) -> MapReduceChain:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"} +{"id": "8a832ff0b3a4-1", "text": "**kwargs: Any,\n ) -> MapReduceChain:\n \"\"\"Construct a map-reduce chain that uses the chain for map and reduce.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)\n reduce_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n callbacks=callbacks,\n **(reduce_chain_kwargs if reduce_chain_kwargs else {}),\n )\n combine_documents_chain = MapReduceDocumentsChain(\n llm_chain=llm_chain,\n combine_document_chain=reduce_chain,\n callbacks=callbacks,\n **(combine_chain_kwargs if combine_chain_kwargs else {}),\n )\n return cls(\n combine_documents_chain=combine_documents_chain,\n text_splitter=text_splitter,\n callbacks=callbacks,\n **kwargs,\n )\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or 
CallbackManagerForChainRun.get_noop_manager()\n # Split the larger text into smaller chunks.\n doc_text = inputs.pop(self.input_key)\n texts = self.text_splitter.split_text(doc_text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"} +{"id": "8a832ff0b3a4-2", "text": "texts = self.text_splitter.split_text(doc_text)\n docs = [Document(page_content=text) for text in texts]\n _inputs: Dict[str, Any] = {\n **inputs,\n self.combine_documents_chain.input_key: docs,\n }\n outputs = self.combine_documents_chain.run(\n _inputs, callbacks=_run_manager.get_child()\n )\n return {self.output_key: outputs}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/mapreduce.html"} +{"id": "fb8f4dad48c0-0", "text": "Source code for langchain.chains.router.multi_retrieval_qa\n\"\"\"Use a single chain to route an input to one of multiple retrieval qa chains.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains import ConversationChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.conversation.prompt import DEFAULT_TEMPLATE\nfrom langchain.chains.retrieval_qa.base import BaseRetrievalQA, RetrievalQA\nfrom langchain.chains.router.base import MultiRouteChain\nfrom langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser\nfrom langchain.chains.router.multi_retrieval_prompt import (\n MULTI_RETRIEVAL_ROUTER_TEMPLATE,\n)\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema import BaseRetriever\n[docs]class MultiRetrievalQAChain(MultiRouteChain):\n \"\"\"A multi-route chain that uses an LLM router chain to choose amongst retrieval\n qa chains.\"\"\"\n router_chain: LLMRouterChain\n \"\"\"Chain for deciding a destination chain and the input to it.\"\"\"\n destination_chains: Mapping[str, 
BaseRetrievalQA]\n \"\"\"Map of name to candidate chains that inputs can be routed to.\"\"\"\n default_chain: Chain\n \"\"\"Default chain to use when router doesn't map input to one of the destinations.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n return [\"result\"]\n[docs] @classmethod\n def from_retrievers(\n cls,\n llm: BaseLanguageModel,\n retriever_infos: List[Dict[str, Any]],\n default_retriever: Optional[BaseRetriever] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_retrieval_qa.html"} +{"id": "fb8f4dad48c0-1", "text": "default_retriever: Optional[BaseRetriever] = None,\n default_prompt: Optional[PromptTemplate] = None,\n default_chain: Optional[Chain] = None,\n **kwargs: Any,\n ) -> MultiRetrievalQAChain:\n if default_prompt and not default_retriever:\n raise ValueError(\n \"`default_retriever` must be specified if `default_prompt` is \"\n \"provided. Received only `default_prompt`.\"\n )\n destinations = [f\"{r['name']}: {r['description']}\" for r in retriever_infos]\n destinations_str = \"\\n\".join(destinations)\n router_template = MULTI_RETRIEVAL_ROUTER_TEMPLATE.format(\n destinations=destinations_str\n )\n router_prompt = PromptTemplate(\n template=router_template,\n input_variables=[\"input\"],\n output_parser=RouterOutputParser(next_inputs_inner_key=\"query\"),\n )\n router_chain = LLMRouterChain.from_llm(llm, router_prompt)\n destination_chains = {}\n for r_info in retriever_infos:\n prompt = r_info.get(\"prompt\")\n retriever = r_info[\"retriever\"]\n chain = RetrievalQA.from_llm(llm, prompt=prompt, retriever=retriever)\n name = r_info[\"name\"]\n destination_chains[name] = chain\n if default_chain:\n _default_chain = default_chain\n elif default_retriever:\n _default_chain = RetrievalQA.from_llm(\n llm, prompt=default_prompt, retriever=default_retriever\n )\n else:\n prompt_template = DEFAULT_TEMPLATE.replace(\"input\", \"query\")\n prompt = PromptTemplate(", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_retrieval_qa.html"} +{"id": "fb8f4dad48c0-2", "text": "prompt = PromptTemplate(\n template=prompt_template, input_variables=[\"history\", \"query\"]\n )\n _default_chain = ConversationChain(\n llm=ChatOpenAI(), prompt=prompt, input_key=\"query\", output_key=\"result\"\n )\n return cls(\n router_chain=router_chain,\n destination_chains=destination_chains,\n default_chain=_default_chain,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_retrieval_qa.html"} +{"id": "c8be2718eed5-0", "text": "Source code for langchain.chains.router.base\n\"\"\"Base classes for chain routing.\"\"\"\nfrom __future__ import annotations\nfrom abc import ABC\nfrom typing import Any, Dict, List, Mapping, NamedTuple, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nclass Route(NamedTuple):\n destination: Optional[str]\n next_inputs: Dict[str, Any]\n[docs]class RouterChain(Chain, ABC):\n \"\"\"Chain that outputs the name of a destination chain and the inputs to it.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n return [\"destination\", \"next_inputs\"]\n[docs] def route(self, inputs: Dict[str, Any], callbacks: Callbacks = None) -> Route:\n result = self(inputs, callbacks=callbacks)\n return Route(result[\"destination\"], result[\"next_inputs\"])\n[docs] async def aroute(\n self, inputs: Dict[str, Any], callbacks: Callbacks = None\n ) -> Route:\n result = await self.acall(inputs, callbacks=callbacks)\n return Route(result[\"destination\"], result[\"next_inputs\"])\n[docs]class MultiRouteChain(Chain):\n \"\"\"Use a single chain to route an input to one of multiple candidate chains.\"\"\"\n router_chain: RouterChain\n \"\"\"Chain that routes inputs to destination chains.\"\"\"\n 
destination_chains: Mapping[str, Chain]\n \"\"\"Chains that return final answer to inputs.\"\"\"\n default_chain: Chain\n \"\"\"Default chain to use when none of the destination chains are suitable.\"\"\"\n silent_errors: bool = False\n \"\"\"If True, use default_chain when an invalid destination name is provided.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/base.html"} +{"id": "c8be2718eed5-1", "text": "\"\"\"If True, use default_chain when an invalid destination name is provided. \n Defaults to False.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the router chain prompt expects.\n :meta private:\n \"\"\"\n return self.router_chain.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Will always return text key.\n :meta private:\n \"\"\"\n return []\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n route = self.router_chain.route(inputs, callbacks=callbacks)\n _run_manager.on_text(\n str(route.destination) + \": \" + str(route.next_inputs), verbose=self.verbose\n )\n if not route.destination:\n return self.default_chain(route.next_inputs, callbacks=callbacks)\n elif route.destination in self.destination_chains:\n return self.destination_chains[route.destination](\n route.next_inputs, callbacks=callbacks\n )\n elif self.silent_errors:\n return self.default_chain(route.next_inputs, callbacks=callbacks)\n else:\n raise ValueError(\n f\"Received invalid destination chain name '{route.destination}'\"\n )\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/base.html"} +{"id": "c8be2718eed5-2", "text": "run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n route = await self.router_chain.aroute(inputs, callbacks=callbacks)\n _run_manager.on_text(\n str(route.destination) + \": \" + str(route.next_inputs), verbose=self.verbose\n )\n if not route.destination:\n return await self.default_chain.acall(\n route.next_inputs, callbacks=callbacks\n )\n elif route.destination in self.destination_chains:\n return await self.destination_chains[route.destination].acall(\n route.next_inputs, callbacks=callbacks\n )\n elif self.silent_errors:\n return await self.default_chain.acall(\n route.next_inputs, callbacks=callbacks\n )\n else:\n raise ValueError(\n f\"Received invalid destination chain name '{route.destination}'\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/base.html"} +{"id": "8f34344aff0b-0", "text": "Source code for langchain.chains.router.multi_prompt\n\"\"\"Use a single chain to route an input to one of multiple llm chains.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains import ConversationChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.router.base import MultiRouteChain, RouterChain\nfrom langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser\nfrom langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE\nfrom langchain.prompts import PromptTemplate\n[docs]class MultiPromptChain(MultiRouteChain):\n \"\"\"A multi-route chain that uses an LLM router chain to choose amongst prompts.\"\"\"\n router_chain: RouterChain\n \"\"\"Chain for deciding a destination 
chain and the input to it.\"\"\"\n destination_chains: Mapping[str, LLMChain]\n \"\"\"Map of name to candidate chains that inputs can be routed to.\"\"\"\n default_chain: LLMChain\n \"\"\"Default chain to use when router doesn't map input to one of the destinations.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n return [\"text\"]\n[docs] @classmethod\n def from_prompts(\n cls,\n llm: BaseLanguageModel,\n prompt_infos: List[Dict[str, str]],\n default_chain: Optional[LLMChain] = None,\n **kwargs: Any,\n ) -> MultiPromptChain:\n \"\"\"Convenience constructor for instantiating from destination prompts.\"\"\"\n destinations = [f\"{p['name']}: {p['description']}\" for p in prompt_infos]\n destinations_str = \"\\n\".join(destinations)\n router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_prompt.html"} +{"id": "8f34344aff0b-1", "text": "router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(\n destinations=destinations_str\n )\n router_prompt = PromptTemplate(\n template=router_template,\n input_variables=[\"input\"],\n output_parser=RouterOutputParser(),\n )\n router_chain = LLMRouterChain.from_llm(llm, router_prompt)\n destination_chains = {}\n for p_info in prompt_infos:\n name = p_info[\"name\"]\n prompt_template = p_info[\"prompt_template\"]\n prompt = PromptTemplate(template=prompt_template, input_variables=[\"input\"])\n chain = LLMChain(llm=llm, prompt=prompt)\n destination_chains[name] = chain\n _default_chain = default_chain or ConversationChain(llm=llm, output_key=\"text\")\n return cls(\n router_chain=router_chain,\n destination_chains=destination_chains,\n default_chain=_default_chain,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/multi_prompt.html"} +{"id": "106ec7c634f3-0", "text": "Source code for langchain.chains.router.llm_router\n\"\"\"Base classes for LLM-powered router chains.\"\"\"\nfrom 
__future__ import annotations\nfrom typing import Any, Dict, List, Optional, Type, cast\nfrom pydantic import root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains import LLMChain\nfrom langchain.chains.router.base import RouterChain\nfrom langchain.output_parsers.json import parse_and_check_json_markdown\nfrom langchain.prompts import BasePromptTemplate\nfrom langchain.schema import BaseOutputParser, OutputParserException\n[docs]class LLMRouterChain(RouterChain):\n \"\"\"A router chain that uses an LLM chain to perform routing.\"\"\"\n llm_chain: LLMChain\n \"\"\"LLM chain used to perform routing\"\"\"\n @root_validator()\n def validate_prompt(cls, values: dict) -> dict:\n prompt = values[\"llm_chain\"].prompt\n if prompt.output_parser is None:\n raise ValueError(\n \"LLMRouterChain requires base llm_chain prompt to have an output\"\n \" parser that converts LLM text output to a dictionary with keys\"\n \" 'destination' and 'next_inputs'. 
Received a prompt with no output\"\n \" parser.\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Will be whatever keys the LLM chain prompt expects.\n :meta private:\n \"\"\"\n return self.llm_chain.input_keys\n def _validate_outputs(self, outputs: Dict[str, Any]) -> None:\n super()._validate_outputs(outputs)\n if not isinstance(outputs[\"next_inputs\"], dict):\n raise ValueError\n def _call(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/llm_router.html"} +{"id": "106ec7c634f3-1", "text": "raise ValueError\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n output = cast(\n Dict[str, Any],\n self.llm_chain.predict_and_parse(callbacks=callbacks, **inputs),\n )\n return output\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n output = cast(\n Dict[str, Any],\n await self.llm_chain.apredict_and_parse(callbacks=callbacks, **inputs),\n )\n return output\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, prompt: BasePromptTemplate, **kwargs: Any\n ) -> LLMRouterChain:\n \"\"\"Convenience constructor.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, **kwargs)\nclass RouterOutputParser(BaseOutputParser[Dict[str, str]]):\n \"\"\"Parser for output of router chain in the multi-prompt chain.\"\"\"\n default_destination: str = \"DEFAULT\"\n next_inputs_type: Type = str\n next_inputs_inner_key: str = \"input\"\n def parse(self, text: str) -> Dict[str, Any]:\n try:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/llm_router.html"} +{"id": "106ec7c634f3-2", "text": "def parse(self, text: str) -> Dict[str, Any]:\n try:\n expected_keys = [\"destination\", \"next_inputs\"]\n parsed = parse_and_check_json_markdown(text, expected_keys)\n if not isinstance(parsed[\"destination\"], str):\n raise ValueError(\"Expected 'destination' to be a string.\")\n if not isinstance(parsed[\"next_inputs\"], self.next_inputs_type):\n raise ValueError(\n f\"Expected 'next_inputs' to be {self.next_inputs_type}.\"\n )\n parsed[\"next_inputs\"] = {self.next_inputs_inner_key: parsed[\"next_inputs\"]}\n if (\n parsed[\"destination\"].strip().lower()\n == self.default_destination.lower()\n ):\n parsed[\"destination\"] = None\n else:\n parsed[\"destination\"] = parsed[\"destination\"].strip()\n return parsed\n except Exception as e:\n raise OutputParserException(\n f\"Parsing text\\n{text}\\n raised following error:\\n{e}\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/router/llm_router.html"} +{"id": "cfa93e891e5b-0", "text": "Source code for langchain.chains.natbot.base\n\"\"\"Implement an LLM driven browser.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.natbot.prompt import PROMPT\nfrom langchain.llms.openai import OpenAI\n[docs]class NatBotChain(Chain):\n \"\"\"Implement an LLM driven browser.\n Example:\n .. 
code-block:: python\n from langchain import NatBotChain\n natbot = NatBotChain.from_default(\"Buy me a new hat.\")\n \"\"\"\n llm_chain: LLMChain\n objective: str\n \"\"\"Objective that NatBot is tasked with completing.\"\"\"\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n input_url_key: str = \"url\" #: :meta private:\n input_browser_content_key: str = \"browser_content\" #: :meta private:\n previous_command: str = \"\" #: :meta private:\n output_key: str = \"command\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating a NatBotChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain argument or using the from_llm \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/base.html"} +{"id": "cfa93e891e5b-1", "text": "\"Please instantiate with llm_chain argument or using the from_llm \"\n \"class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=PROMPT)\n return values\n[docs] @classmethod\n def from_default(cls, objective: str, **kwargs: Any) -> NatBotChain:\n \"\"\"Load with default LLMChain.\"\"\"\n llm = OpenAI(temperature=0.5, best_of=10, n=3, max_tokens=50)\n return cls.from_llm(llm, objective, **kwargs)\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, objective: str, **kwargs: Any\n ) -> NatBotChain:\n \"\"\"Load from LLM.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=PROMPT)\n return cls(llm_chain=llm_chain, objective=objective, **kwargs)\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect url and browser content.\n :meta private:\n \"\"\"\n return [self.input_url_key, self.input_browser_content_key]\n 
@property\n def output_keys(self) -> List[str]:\n \"\"\"Return command.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/base.html"} +{"id": "cfa93e891e5b-2", "text": "_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n url = inputs[self.input_url_key]\n browser_content = inputs[self.input_browser_content_key]\n llm_cmd = self.llm_chain.predict(\n objective=self.objective,\n url=url[:100],\n previous_command=self.previous_command,\n browser_content=browser_content[:4500],\n callbacks=_run_manager.get_child(),\n )\n llm_cmd = llm_cmd.strip()\n self.previous_command = llm_cmd\n return {self.output_key: llm_cmd}\n[docs] def execute(self, url: str, browser_content: str) -> str:\n \"\"\"Figure out next browser command to run.\n Args:\n url: URL of the site currently on.\n browser_content: Content of the page as currently displayed by the browser.\n Returns:\n Next browser command to run.\n Example:\n .. 
code-block:: python\n browser_content = \"....\"\n llm_command = natbot.run(\"www.google.com\", browser_content)\n \"\"\"\n _inputs = {\n self.input_url_key: url,\n self.input_browser_content_key: browser_content,\n }\n return self(_inputs)[self.output_key]\n @property\n def _chain_type(self) -> str:\n return \"nat_bot_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/natbot/base.html"} +{"id": "d56991f75c9e-0", "text": "Source code for langchain.chains.graph_qa.base\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import ENTITY_EXTRACTION_PROMPT, PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.networkx_graph import NetworkxEntityGraph, get_entities\nfrom langchain.prompts.base import BasePromptTemplate\n[docs]class GraphQAChain(Chain):\n \"\"\"Chain for question-answering against a graph.\"\"\"\n graph: NetworkxEntityGraph = Field(exclude=True)\n entity_extraction_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n qa_prompt: BasePromptTemplate = PROMPT,\n entity_prompt: BasePromptTemplate = ENTITY_EXTRACTION_PROMPT,\n **kwargs: Any,\n ) -> GraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/base.html"} +{"id": "d56991f75c9e-1", "text": ") -> GraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n entity_chain = LLMChain(llm=llm, prompt=entity_prompt)\n return cls(\n qa_chain=qa_chain,\n entity_extraction_chain=entity_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Extract entities, look up info and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n entity_string = self.entity_extraction_chain.run(question)\n _run_manager.on_text(\"Entities Extracted:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n entity_string, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n entities = get_entities(entity_string)\n context = \"\"\n for entity in entities:\n triplets = self.graph.get_entity_knowledge(entity)\n context += \"\\n\".join(triplets)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(context, color=\"green\", end=\"\\n\", verbose=self.verbose)\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: result[self.qa_chain.output_key]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/base.html"} +{"id": "88ba74da2c25-0", "text": "Source code for langchain.chains.graph_qa.cypher\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nimport re\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts 
import CYPHER_GENERATION_PROMPT, CYPHER_QA_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.neo4j_graph import Neo4jGraph\nfrom langchain.prompts.base import BasePromptTemplate\nINTERMEDIATE_STEPS_KEY = \"intermediate_steps\"\ndef extract_cypher(text: str) -> str:\n \"\"\"\n Extract Cypher code from a text.\n Args:\n text: Text to extract Cypher code from.\n Returns:\n Cypher code extracted from the text.\n \"\"\"\n # The pattern to find Cypher code enclosed in triple backticks\n pattern = r\"```(.*?)```\"\n # Find all matches in the input text\n matches = re.findall(pattern, text, re.DOTALL)\n return matches[0] if matches else text\n[docs]class GraphCypherQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating Cypher statements.\"\"\"\n graph: Neo4jGraph = Field(exclude=True)\n cypher_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n top_k: int = 10\n \"\"\"Number of results to return from the query\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"} +{"id": "88ba74da2c25-1", "text": "\"\"\"Number of results to return from the query\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Whether or not to return the intermediate steps along with the final answer.\"\"\"\n return_direct: bool = False\n \"\"\"Whether or not to return the result of querying the graph directly.\"\"\"\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n @property\n def _chain_type(self) -> str:\n return \"graph_cypher_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = 
CYPHER_QA_PROMPT,\n cypher_prompt: BasePromptTemplate = CYPHER_GENERATION_PROMPT,\n **kwargs: Any,\n ) -> GraphCypherQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n cypher_generation_chain = LLMChain(llm=llm, prompt=cypher_prompt)\n return cls(\n qa_chain=qa_chain,\n cypher_generation_chain=cypher_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"} +{"id": "88ba74da2c25-2", "text": ") -> Dict[str, Any]:\n \"\"\"Generate Cypher statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n intermediate_steps: List = []\n generated_cypher = self.cypher_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n # Extract Cypher code if it is wrapped in backticks\n generated_cypher = extract_cypher(generated_cypher)\n _run_manager.on_text(\"Generated Cypher:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_cypher, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n intermediate_steps.append({\"query\": generated_cypher})\n # Retrieve and limit the number of results\n context = self.graph.query(generated_cypher)[: self.top_k]\n if self.return_direct:\n final_result = context\n else:\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n intermediate_steps.append({\"context\": context})\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )\n final_result = result[self.qa_chain.output_key]\n chain_result: Dict[str, 
Any] = {self.output_key: final_result}\n if self.return_intermediate_steps:\n chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps\n return chain_result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/cypher.html"} +{"id": "5e1b2d864262-0", "text": "Source code for langchain.chains.graph_qa.nebulagraph\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import CYPHER_QA_PROMPT, NGQL_GENERATION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.nebula_graph import NebulaGraph\nfrom langchain.prompts.base import BasePromptTemplate\n[docs]class NebulaGraphQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating nGQL statements.\"\"\"\n graph: NebulaGraph = Field(exclude=True)\n ngql_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = CYPHER_QA_PROMPT,\n ngql_prompt: BasePromptTemplate = NGQL_GENERATION_PROMPT,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/nebulagraph.html"} +{"id": "5e1b2d864262-1", "text": "**kwargs: Any,\n ) -> NebulaGraphQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, 
prompt=qa_prompt)\n ngql_generation_chain = LLMChain(llm=llm, prompt=ngql_prompt)\n return cls(\n qa_chain=qa_chain,\n ngql_generation_chain=ngql_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Generate nGQL statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n generated_ngql = self.ngql_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n _run_manager.on_text(\"Generated nGQL:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_ngql, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n context = self.graph.query(generated_ngql)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )\n return {self.output_key: result[self.qa_chain.output_key]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/nebulagraph.html"} +{"id": "6430c2eebd76-0", "text": "Source code for langchain.chains.graph_qa.kuzu\n\"\"\"Question answering over a graph.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.graph_qa.prompts import CYPHER_QA_PROMPT, KUZU_GENERATION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.graphs.kuzu_graph import KuzuGraph\nfrom langchain.prompts.base import 
BasePromptTemplate\n[docs]class KuzuQAChain(Chain):\n \"\"\"Chain for question-answering against a graph by generating Cypher statements for\n K\u00f9zu.\n \"\"\"\n graph: KuzuGraph = Field(exclude=True)\n cypher_generation_chain: LLMChain\n qa_chain: LLMChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n return _output_keys\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n *,\n qa_prompt: BasePromptTemplate = CYPHER_QA_PROMPT,\n cypher_prompt: BasePromptTemplate = KUZU_GENERATION_PROMPT,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/kuzu.html"} +{"id": "6430c2eebd76-1", "text": "cypher_prompt: BasePromptTemplate = KUZU_GENERATION_PROMPT,\n **kwargs: Any,\n ) -> KuzuQAChain:\n \"\"\"Initialize from LLM.\"\"\"\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n cypher_generation_chain = LLMChain(llm=llm, prompt=cypher_prompt)\n return cls(\n qa_chain=qa_chain,\n cypher_generation_chain=cypher_generation_chain,\n **kwargs,\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Generate Cypher statement, use it to look up in db and answer question.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n callbacks = _run_manager.get_child()\n question = inputs[self.input_key]\n generated_cypher = self.cypher_generation_chain.run(\n {\"question\": question, \"schema\": self.graph.get_schema}, callbacks=callbacks\n )\n _run_manager.on_text(\"Generated Cypher:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n generated_cypher, color=\"green\", end=\"\\n\", 
verbose=self.verbose\n )\n context = self.graph.query(generated_cypher)\n _run_manager.on_text(\"Full Context:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(context), color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n result = self.qa_chain(\n {\"question\": question, \"context\": context},\n callbacks=callbacks,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/kuzu.html"} +{"id": "6430c2eebd76-2", "text": "callbacks=callbacks,\n )\n return {self.output_key: result[self.qa_chain.output_key]}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/graph_qa/kuzu.html"} +{"id": "cba4b5754a59-0", "text": "Source code for langchain.chains.llm_bash.base\n\"\"\"Chain that interprets a prompt and executes bash code to perform bash operations.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_bash.prompt import PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import OutputParserException\nfrom langchain.utilities.bash import BashProcess\nlogger = logging.getLogger(__name__)\n[docs]class LLMBashChain(Chain):\n \"\"\"Chain that interprets a prompt and executes bash code to perform bash operations.\n Example:\n .. 
code-block:: python\n from langchain import LLMBashChain, OpenAI\n llm_bash = LLMBashChain.from_llm(OpenAI())\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n input_key: str = \"question\" #: :meta private:\n output_key: str = \"answer\" #: :meta private:\n prompt: BasePromptTemplate = PROMPT\n \"\"\"[Deprecated]\"\"\"\n bash_process: BashProcess = Field(default_factory=BashProcess) #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"} +{"id": "cba4b5754a59-1", "text": "def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMBashChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain or using the from_llm class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n prompt = values.get(\"prompt\", PROMPT)\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n @root_validator\n def validate_prompt(cls, values: Dict) -> Dict:\n if values[\"llm_chain\"].prompt.output_parser is None:\n raise ValueError(\n \"The prompt used by llm_chain is expected to have an output_parser.\"\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n 
_run_manager.on_text(inputs[self.input_key], verbose=self.verbose)\n t = self.llm_chain.predict(\n question=inputs[self.input_key], callbacks=_run_manager.get_child()\n )\n _run_manager.on_text(t, color=\"green\", verbose=self.verbose)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"} +{"id": "cba4b5754a59-2", "text": ")\n _run_manager.on_text(t, color=\"green\", verbose=self.verbose)\n t = t.strip()\n try:\n parser = self.llm_chain.prompt.output_parser\n command_list = parser.parse(t) # type: ignore[union-attr]\n except OutputParserException as e:\n _run_manager.on_chain_error(e, verbose=self.verbose)\n raise e\n if self.verbose:\n _run_manager.on_text(\"\\nCode: \", verbose=self.verbose)\n _run_manager.on_text(\n str(command_list), color=\"yellow\", verbose=self.verbose\n )\n output = self.bash_process.run(command_list)\n _run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n _run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n return {self.output_key: output}\n @property\n def _chain_type(self) -> str:\n return \"llm_bash_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = PROMPT,\n **kwargs: Any,\n ) -> LLMBashChain:\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html"} +{"id": "365129e9c75d-0", "text": "Source code for langchain.chains.retrieval_qa.base\n\"\"\"Chain for question-answering against a vector database.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base 
import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.question_answering import load_qa_chain\nfrom langchain.chains.question_answering.stuff_prompt import PROMPT_SELECTOR\nfrom langchain.prompts import PromptTemplate\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.base import VectorStore\nclass BaseRetrievalQA(Chain):\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine the documents.\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_source_documents: bool = False\n \"\"\"Return the source documents.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n allow_population_by_field_name = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the input keys.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} +{"id": "365129e9c75d-1", "text": "def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n return _output_keys\n @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[PromptTemplate] = None,\n **kwargs: Any,\n ) -> BaseRetrievalQA:\n \"\"\"Initialize from LLM.\"\"\"\n _prompt = prompt or PROMPT_SELECTOR.get_prompt(llm)\n llm_chain = LLMChain(llm=llm, prompt=_prompt)\n document_prompt = PromptTemplate(\n input_variables=[\"page_content\"], template=\"Context:\\n{page_content}\"\n )\n 
combine_documents_chain = StuffDocumentsChain(\n llm_chain=llm_chain,\n document_variable_name=\"context\",\n document_prompt=document_prompt,\n )\n return cls(combine_documents_chain=combine_documents_chain, **kwargs)\n @classmethod\n def from_chain_type(\n cls,\n llm: BaseLanguageModel,\n chain_type: str = \"stuff\",\n chain_type_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> BaseRetrievalQA:\n \"\"\"Load chain from chain type.\"\"\"\n _chain_type_kwargs = chain_type_kwargs or {}\n combine_documents_chain = load_qa_chain(\n llm, chain_type=chain_type, **_chain_type_kwargs\n )\n return cls(combine_documents_chain=combine_documents_chain, **kwargs)\n @abstractmethod\n def _get_docs(self, question: str) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} +{"id": "365129e9c75d-2", "text": "def _get_docs(self, question: str) -> List[Document]:\n \"\"\"Get documents to do question answering over.\"\"\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run get_relevant_text and llm on input query.\n If chain has 'return_source_documents' as 'True', returns\n the retrieved documents as well under the key 'source_documents'.\n Example:\n .. 
code-block:: python\n res = indexqa({'query': 'This is my query'})\n answer, docs = res['result'], res['source_documents']\n \"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n docs = self._get_docs(question)\n answer = self.combine_documents_chain.run(\n input_documents=docs, question=question, callbacks=_run_manager.get_child()\n )\n if self.return_source_documents:\n return {self.output_key: answer, \"source_documents\": docs}\n else:\n return {self.output_key: answer}\n @abstractmethod\n async def _aget_docs(self, question: str) -> List[Document]:\n \"\"\"Get documents to do question answering over.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run get_relevant_text and llm on input query.\n If chain has 'return_source_documents' as 'True', returns\n the retrieved documents as well under the key 'source_documents'.\n Example:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} +{"id": "365129e9c75d-3", "text": "the retrieved documents as well under the key 'source_documents'.\n Example:\n .. code-block:: python\n res = indexqa({'query': 'This is my query'})\n answer, docs = res['result'], res['source_documents']\n \"\"\"\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n docs = await self._aget_docs(question)\n answer = await self.combine_documents_chain.arun(\n input_documents=docs, question=question, callbacks=_run_manager.get_child()\n )\n if self.return_source_documents:\n return {self.output_key: answer, \"source_documents\": docs}\n else:\n return {self.output_key: answer}\n[docs]class RetrievalQA(BaseRetrievalQA):\n \"\"\"Chain for question-answering against an index.\n Example:\n .. 
code-block:: python\n from langchain.llms import OpenAI\n from langchain.chains import RetrievalQA\n from langchain.faiss import FAISS\n from langchain.vectorstores.base import VectorStoreRetriever\n retriever = VectorStoreRetriever(vectorstore=FAISS(...))\n retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)\n \"\"\"\n retriever: BaseRetriever = Field(exclude=True)\n def _get_docs(self, question: str) -> List[Document]:\n return self.retriever.get_relevant_documents(question)\n async def _aget_docs(self, question: str) -> List[Document]:\n return await self.retriever.aget_relevant_documents(question)\n @property\n def _chain_type(self) -> str:\n \"\"\"Return the chain type.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} +{"id": "365129e9c75d-4", "text": "def _chain_type(self) -> str:\n \"\"\"Return the chain type.\"\"\"\n return \"retrieval_qa\"\n[docs]class VectorDBQA(BaseRetrievalQA):\n \"\"\"Chain for question-answering against a vector database.\"\"\"\n vectorstore: VectorStore = Field(exclude=True, alias=\"vectorstore\")\n \"\"\"Vector Database to connect to.\"\"\"\n k: int = 4\n \"\"\"Number of documents to query for.\"\"\"\n search_type: str = \"similarity\"\n \"\"\"Search type to use over vectorstore. 
`similarity` or `mmr`.\"\"\"\n search_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Extra search args.\"\"\"\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n warnings.warn(\n \"`VectorDBQA` is deprecated - \"\n \"please use `from langchain.chains import RetrievalQA`\"\n )\n return values\n @root_validator()\n def validate_search_type(cls, values: Dict) -> Dict:\n \"\"\"Validate search type.\"\"\"\n if \"search_type\" in values:\n search_type = values[\"search_type\"]\n if search_type not in (\"similarity\", \"mmr\"):\n raise ValueError(f\"search_type of {search_type} not allowed.\")\n return values\n def _get_docs(self, question: str) -> List[Document]:\n if self.search_type == \"similarity\":\n docs = self.vectorstore.similarity_search(\n question, k=self.k, **self.search_kwargs\n )\n elif self.search_type == \"mmr\":\n docs = self.vectorstore.max_marginal_relevance_search(\n question, k=self.k, **self.search_kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} +{"id": "365129e9c75d-5", "text": "question, k=self.k, **self.search_kwargs\n )\n else:\n raise ValueError(f\"search_type of {self.search_type} not allowed.\")\n return docs\n async def _aget_docs(self, question: str) -> List[Document]:\n raise NotImplementedError(\"VectorDBQA does not support async\")\n @property\n def _chain_type(self) -> str:\n \"\"\"Return the chain type.\"\"\"\n return \"vector_db_qa\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/retrieval_qa/base.html"} +{"id": "22d30a360c1d-0", "text": "Source code for langchain.chains.llm_math.base\n\"\"\"Chain that interprets a prompt and executes python code to do math.\"\"\"\nfrom __future__ import annotations\nimport math\nimport re\nimport warnings\nfrom typing import Any, Dict, List, Optional\nimport numexpr\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import 
BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_math.prompt import PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\n[docs]class LLMMathChain(Chain):\n \"\"\"Chain that interprets a prompt and executes python code to do math.\n Example:\n .. code-block:: python\n from langchain import LLMMathChain, OpenAI\n llm_math = LLMMathChain.from_llm(OpenAI())\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n prompt: BasePromptTemplate = PROMPT\n \"\"\"[Deprecated] Prompt to use to translate to python if necessary.\"\"\"\n input_key: str = \"question\" #: :meta private:\n output_key: str = \"answer\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} +{"id": "22d30a360c1d-1", "text": "if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMMathChain with an llm is deprecated. 
\"\n \"Please instantiate with llm_chain argument or using the from_llm \"\n \"class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n prompt = values.get(\"prompt\", PROMPT)\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _evaluate_expression(self, expression: str) -> str:\n try:\n local_dict = {\"pi\": math.pi, \"e\": math.e}\n output = str(\n numexpr.evaluate(\n expression.strip(),\n global_dict={}, # restrict access to globals\n local_dict=local_dict, # add common mathematical functions\n )\n )\n except Exception as e:\n raise ValueError(\n f'LLMMathChain._evaluate(\"{expression}\") raised error: {e}.'\n \" Please try again with a valid numerical expression\"\n )\n # Remove any leading and trailing brackets from the output\n return re.sub(r\"^\\[|\\]$\", \"\", output)\n def _process_llm_result(\n self, llm_output: str, run_manager: CallbackManagerForChainRun\n ) -> Dict[str, str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} +{"id": "22d30a360c1d-2", "text": ") -> Dict[str, str]:\n run_manager.on_text(llm_output, color=\"green\", verbose=self.verbose)\n llm_output = llm_output.strip()\n text_match = re.search(r\"^```text(.*?)```\", llm_output, re.DOTALL)\n if text_match:\n expression = text_match.group(1)\n output = self._evaluate_expression(expression)\n run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n answer = \"Answer: \" + output\n elif llm_output.startswith(\"Answer:\"):\n answer = llm_output\n elif \"Answer:\" in llm_output:\n answer = \"Answer: \" + llm_output.split(\"Answer:\")[-1]\n 
else:\n raise ValueError(f\"unknown format from LLM: {llm_output}\")\n return {self.output_key: answer}\n async def _aprocess_llm_result(\n self,\n llm_output: str,\n run_manager: AsyncCallbackManagerForChainRun,\n ) -> Dict[str, str]:\n await run_manager.on_text(llm_output, color=\"green\", verbose=self.verbose)\n llm_output = llm_output.strip()\n text_match = re.search(r\"^```text(.*?)```\", llm_output, re.DOTALL)\n if text_match:\n expression = text_match.group(1)\n output = self._evaluate_expression(expression)\n await run_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n await run_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n answer = \"Answer: \" + output\n elif llm_output.startswith(\"Answer:\"):\n answer = llm_output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} +{"id": "22d30a360c1d-3", "text": "elif llm_output.startswith(\"Answer:\"):\n answer = llm_output\n elif \"Answer:\" in llm_output:\n answer = \"Answer: \" + llm_output.split(\"Answer:\")[-1]\n else:\n raise ValueError(f\"unknown format from LLM: {llm_output}\")\n return {self.output_key: answer}\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _run_manager.on_text(inputs[self.input_key])\n llm_output = self.llm_chain.predict(\n question=inputs[self.input_key],\n stop=[\"```output\"],\n callbacks=_run_manager.get_child(),\n )\n return self._process_llm_result(llm_output, _run_manager)\n async def _acall(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n await _run_manager.on_text(inputs[self.input_key])\n llm_output = await self.llm_chain.apredict(\n question=inputs[self.input_key],\n 
stop=[\"```output\"],\n callbacks=_run_manager.get_child(),\n )\n return await self._aprocess_llm_result(llm_output, _run_manager)\n @property\n def _chain_type(self) -> str:\n return \"llm_math_chain\"\n[docs] @classmethod\n def from_llm(\n cls,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} +{"id": "22d30a360c1d-4", "text": "[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: BasePromptTemplate = PROMPT,\n **kwargs: Any,\n ) -> LLMMathChain:\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html"} +{"id": "659a191b9cdb-0", "text": "Source code for langchain.chains.hyde.base\n\"\"\"Hypothetical Document Embeddings.\nhttps://arxiv.org/abs/2212.10496\n\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nimport numpy as np\nfrom pydantic import Extra\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.hyde.prompts import PROMPT_MAP\nfrom langchain.chains.llm import LLMChain\nfrom langchain.embeddings.base import Embeddings\n[docs]class HypotheticalDocumentEmbedder(Chain, Embeddings):\n \"\"\"Generate hypothetical document for query, and then embed that.\n Based on https://arxiv.org/abs/2212.10496\n \"\"\"\n base_embeddings: Embeddings\n llm_chain: LLMChain\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Input keys for Hyde's LLM chain.\"\"\"\n return self.llm_chain.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Output keys for Hyde's LLM chain.\"\"\"\n return self.llm_chain.output_keys\n[docs] def embed_documents(self, texts: 
List[str]) -> List[List[float]]:\n \"\"\"Call the base embeddings.\"\"\"\n return self.base_embeddings.embed_documents(texts)\n[docs] def combine_embeddings(self, embeddings: List[List[float]]) -> List[float]:\n \"\"\"Combine embeddings into final embeddings.\"\"\"\n return list(np.array(embeddings).mean(axis=0))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/hyde/base.html"} +{"id": "659a191b9cdb-1", "text": "return list(np.array(embeddings).mean(axis=0))\n[docs] def embed_query(self, text: str) -> List[float]:\n \"\"\"Generate a hypothetical document and embed it.\"\"\"\n var_name = self.llm_chain.input_keys[0]\n result = self.llm_chain.generate([{var_name: text}])\n documents = [generation.text for generation in result.generations[0]]\n embeddings = self.embed_documents(documents)\n return self.combine_embeddings(embeddings)\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n \"\"\"Call the internal llm chain.\"\"\"\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n return self.llm_chain(inputs, callbacks=_run_manager.get_child())\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n base_embeddings: Embeddings,\n prompt_key: str,\n **kwargs: Any,\n ) -> HypotheticalDocumentEmbedder:\n \"\"\"Load and use LLMChain for a specific prompt key.\"\"\"\n prompt = PROMPT_MAP[prompt_key]\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(base_embeddings=base_embeddings, llm_chain=llm_chain, **kwargs)\n @property\n def _chain_type(self) -> str:\n return \"hyde_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/hyde/base.html"} +{"id": "75ae6523907f-0", "text": "Source code for langchain.chains.llm_checker.base\n\"\"\"Chain for question-answering with self-verification.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, 
Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.llm_checker.prompt import (\n CHECK_ASSERTIONS_PROMPT,\n CREATE_DRAFT_ANSWER_PROMPT,\n LIST_ASSERTIONS_PROMPT,\n REVISED_ANSWER_PROMPT,\n)\nfrom langchain.chains.sequential import SequentialChain\nfrom langchain.prompts import PromptTemplate\ndef _load_question_to_checked_assertions_chain(\n llm: BaseLanguageModel,\n create_draft_answer_prompt: PromptTemplate,\n list_assertions_prompt: PromptTemplate,\n check_assertions_prompt: PromptTemplate,\n revised_answer_prompt: PromptTemplate,\n) -> SequentialChain:\n create_draft_answer_chain = LLMChain(\n llm=llm,\n prompt=create_draft_answer_prompt,\n output_key=\"statement\",\n )\n list_assertions_chain = LLMChain(\n llm=llm,\n prompt=list_assertions_prompt,\n output_key=\"assertions\",\n )\n check_assertions_chain = LLMChain(\n llm=llm,\n prompt=check_assertions_prompt,\n output_key=\"checked_assertions\",\n )\n revised_answer_chain = LLMChain(\n llm=llm,\n prompt=revised_answer_prompt,\n output_key=\"revised_statement\",\n )\n chains = [\n create_draft_answer_chain,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"} +{"id": "75ae6523907f-1", "text": ")\n chains = [\n create_draft_answer_chain,\n list_assertions_chain,\n check_assertions_chain,\n revised_answer_chain,\n ]\n question_to_checked_assertions_chain = SequentialChain(\n chains=chains,\n input_variables=[\"question\"],\n output_variables=[\"revised_statement\"],\n verbose=True,\n )\n return question_to_checked_assertions_chain\n[docs]class LLMCheckerChain(Chain):\n \"\"\"Chain for question-answering with self-verification.\n Example:\n .. 
code-block:: python\n from langchain import OpenAI, LLMCheckerChain\n llm = OpenAI(temperature=0.7)\n checker_chain = LLMCheckerChain.from_llm(llm)\n \"\"\"\n question_to_checked_assertions_chain: SequentialChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n create_draft_answer_prompt: PromptTemplate = CREATE_DRAFT_ANSWER_PROMPT\n \"\"\"[Deprecated]\"\"\"\n list_assertions_prompt: PromptTemplate = LIST_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n revised_answer_prompt: PromptTemplate = REVISED_ANSWER_PROMPT\n \"\"\"[Deprecated] Prompt to use when questioning the documents.\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"} +{"id": "75ae6523907f-2", "text": "if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMCheckerChain with an llm is deprecated. 
\"\n \"Please instantiate with question_to_checked_assertions_chain \"\n \"or using the from_llm class method.\"\n )\n if (\n \"question_to_checked_assertions_chain\" not in values\n and values[\"llm\"] is not None\n ):\n question_to_checked_assertions_chain = (\n _load_question_to_checked_assertions_chain(\n values[\"llm\"],\n values.get(\n \"create_draft_answer_prompt\", CREATE_DRAFT_ANSWER_PROMPT\n ),\n values.get(\"list_assertions_prompt\", LIST_ASSERTIONS_PROMPT),\n values.get(\"check_assertions_prompt\", CHECK_ASSERTIONS_PROMPT),\n values.get(\"revised_answer_prompt\", REVISED_ANSWER_PROMPT),\n )\n )\n values[\n \"question_to_checked_assertions_chain\"\n ] = question_to_checked_assertions_chain\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.input_key]\n output = self.question_to_checked_assertions_chain(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"} +{"id": "75ae6523907f-3", "text": "output = self.question_to_checked_assertions_chain(\n {\"question\": question}, callbacks=_run_manager.get_child()\n )\n return {self.output_key: output[\"revised_statement\"]}\n @property\n def _chain_type(self) -> str:\n return \"llm_checker_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n create_draft_answer_prompt: PromptTemplate = CREATE_DRAFT_ANSWER_PROMPT,\n list_assertions_prompt: PromptTemplate = LIST_ASSERTIONS_PROMPT,\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT,\n 
revised_answer_prompt: PromptTemplate = REVISED_ANSWER_PROMPT,\n **kwargs: Any,\n ) -> LLMCheckerChain:\n question_to_checked_assertions_chain = (\n _load_question_to_checked_assertions_chain(\n llm,\n create_draft_answer_prompt,\n list_assertions_prompt,\n check_assertions_prompt,\n revised_answer_prompt,\n )\n )\n return cls(\n question_to_checked_assertions_chain=question_to_checked_assertions_chain,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html"} +{"id": "b6ad8fd2bfee-0", "text": "Source code for langchain.chains.constitutional_ai.base\n\"\"\"Chain for applying constitutional principles to the outputs of another chain.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\nfrom langchain.chains.constitutional_ai.principles import PRINCIPLES\nfrom langchain.chains.constitutional_ai.prompts import CRITIQUE_PROMPT, REVISION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\n[docs]class ConstitutionalChain(Chain):\n \"\"\"Chain for applying constitutional principles.\n Example:\n .. 
code-block:: python\n from langchain.llms import OpenAI\n from langchain.chains import LLMChain, ConstitutionalChain\n from langchain.chains.constitutional_ai.models \\\n import ConstitutionalPrinciple\n llm = OpenAI()\n qa_prompt = PromptTemplate(\n template=\"Q: {question} A:\",\n input_variables=[\"question\"],\n )\n qa_chain = LLMChain(llm=llm, prompt=qa_prompt)\n constitutional_chain = ConstitutionalChain.from_llm(\n llm=llm,\n chain=qa_chain,\n constitutional_principles=[\n ConstitutionalPrinciple(\n critique_request=\"Tell if this answer is good.\",\n revision_request=\"Give a better answer.\",\n )\n ],\n )\n constitutional_chain.run(question=\"What is the meaning of life?\")\n \"\"\"\n chain: LLMChain\n constitutional_principles: List[ConstitutionalPrinciple]\n critique_chain: LLMChain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html"} +{"id": "b6ad8fd2bfee-1", "text": "critique_chain: LLMChain\n revision_chain: LLMChain\n return_intermediate_steps: bool = False\n[docs] @classmethod\n def get_principles(\n cls, names: Optional[List[str]] = None\n ) -> List[ConstitutionalPrinciple]:\n if names is None:\n return list(PRINCIPLES.values())\n else:\n return [PRINCIPLES[name] for name in names]\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n chain: LLMChain,\n critique_prompt: BasePromptTemplate = CRITIQUE_PROMPT,\n revision_prompt: BasePromptTemplate = REVISION_PROMPT,\n **kwargs: Any,\n ) -> \"ConstitutionalChain\":\n \"\"\"Create a chain from an LLM.\"\"\"\n critique_chain = LLMChain(llm=llm, prompt=critique_prompt)\n revision_chain = LLMChain(llm=llm, prompt=revision_prompt)\n return cls(\n chain=chain,\n critique_chain=critique_chain,\n revision_chain=revision_chain,\n **kwargs,\n )\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Defines the input keys.\"\"\"\n return self.chain.input_keys\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Defines the output 
keys.\"\"\"\n if self.return_intermediate_steps:\n return [\"output\", \"critiques_and_revisions\", \"initial_output\"]\n return [\"output\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html"} +{"id": "b6ad8fd2bfee-2", "text": ") -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n response = self.chain.run(\n **inputs,\n callbacks=_run_manager.get_child(\"original\"),\n )\n initial_response = response\n input_prompt = self.chain.prompt.format(**inputs)\n _run_manager.on_text(\n text=\"Initial response: \" + response + \"\\n\\n\",\n verbose=self.verbose,\n color=\"yellow\",\n )\n critiques_and_revisions = []\n for constitutional_principle in self.constitutional_principles:\n # Do critique\n raw_critique = self.critique_chain.run(\n input_prompt=input_prompt,\n output_from_model=response,\n critique_request=constitutional_principle.critique_request,\n callbacks=_run_manager.get_child(\"critique\"),\n )\n critique = self._parse_critique(\n output_string=raw_critique,\n ).strip()\n # if the critique contains \"No critique needed\", then we're done\n # in this case, initial_output is the same as output,\n # but we'll keep it for consistency\n if \"no critique needed\" in critique.lower():\n critiques_and_revisions.append((critique, \"\"))\n continue\n # Do revision\n revision = self.revision_chain.run(\n input_prompt=input_prompt,\n output_from_model=response,\n critique_request=constitutional_principle.critique_request,\n critique=critique,\n revision_request=constitutional_principle.revision_request,\n callbacks=_run_manager.get_child(\"revision\"),\n ).strip()\n response = revision\n critiques_and_revisions.append((critique, revision))\n _run_manager.on_text(", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html"} +{"id": "b6ad8fd2bfee-3", "text": "_run_manager.on_text(\n text=f\"Applying {constitutional_principle.name}...\" + \"\\n\\n\",\n verbose=self.verbose,\n color=\"green\",\n )\n _run_manager.on_text(\n text=\"Critique: \" + critique + \"\\n\\n\",\n verbose=self.verbose,\n color=\"blue\",\n )\n _run_manager.on_text(\n text=\"Updated response: \" + revision + \"\\n\\n\",\n verbose=self.verbose,\n color=\"yellow\",\n )\n final_output: Dict[str, Any] = {\"output\": response}\n if self.return_intermediate_steps:\n final_output[\"initial_output\"] = initial_response\n final_output[\"critiques_and_revisions\"] = critiques_and_revisions\n return final_output\n @staticmethod\n def _parse_critique(output_string: str) -> str:\n if \"Revision request:\" not in output_string:\n return output_string\n output_string = output_string.split(\"Revision request:\")[0]\n if \"\\n\\n\" in output_string:\n output_string = output_string.split(\"\\n\\n\")[0]\n return output_string", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html"} +{"id": "6e145a168885-0", "text": "Source code for langchain.chains.conversation.base\n\"\"\"Chain that carries on a conversation and calls an LLM.\"\"\"\nfrom typing import Dict, List\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.chains.conversation.prompt import PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.memory.buffer import ConversationBufferMemory\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import BaseMemory\n[docs]class ConversationChain(LLMChain):\n \"\"\"Chain to have a conversation and load context from memory.\n Example:\n .. 
code-block:: python\n from langchain import ConversationChain, OpenAI\n conversation = ConversationChain(llm=OpenAI())\n \"\"\"\n memory: BaseMemory = Field(default_factory=ConversationBufferMemory)\n \"\"\"Default memory store.\"\"\"\n prompt: BasePromptTemplate = PROMPT\n \"\"\"Default conversation prompt to use.\"\"\"\n input_key: str = \"input\" #: :meta private:\n output_key: str = \"response\" #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Use this since some prompt vars come from history.\"\"\"\n return [self.input_key]\n @root_validator()\n def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Validate that prompt input variables are consistent.\"\"\"\n memory_keys = values[\"memory\"].memory_variables\n input_key = values[\"input_key\"]\n if input_key in memory_keys:\n raise ValueError(\n f\"The input key {input_key} was also found in the memory keys \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversation/base.html"} +{"id": "6e145a168885-1", "text": "f\"The input key {input_key} was also found in the memory keys \"\n f\"({memory_keys}) - please provide keys that don't overlap.\"\n )\n prompt_variables = values[\"prompt\"].input_variables\n expected_keys = memory_keys + [input_key]\n if set(expected_keys) != set(prompt_variables):\n raise ValueError(\n \"Got unexpected prompt input variables. 
The prompt expects \"\n f\"{prompt_variables}, but got {memory_keys} as inputs from \"\n f\"memory, and {input_key} as the normal input key.\"\n )\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversation/base.html"} +{"id": "1194f1d1ec81-0", "text": "Source code for langchain.chains.qa_with_sources.retrieval\n\"\"\"Question-answering with sources over an index.\"\"\"\nfrom typing import Any, Dict, List\nfrom pydantic import Field\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.qa_with_sources.base import BaseQAWithSourcesChain\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class RetrievalQAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question-answering with sources over an index.\"\"\"\n retriever: BaseRetriever = Field(exclude=True)\n \"\"\"Index to connect to.\"\"\"\n reduce_k_below_max_tokens: bool = False\n \"\"\"Reduce the number of results to return from store based on tokens limit\"\"\"\n max_tokens_limit: int = 3375\n \"\"\"Restrict the docs to return from store based on tokens,\n enforced only for StuffDocumentsChain and if reduce_k_below_max_tokens is set to true\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)\n if self.reduce_k_below_max_tokens and isinstance(\n self.combine_documents_chain, StuffDocumentsChain\n ):\n tokens = [\n self.combine_documents_chain.llm_chain.llm.get_num_tokens(\n doc.page_content\n )\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n question = inputs[self.question_key]\n docs = self.retriever.get_relevant_documents(question)", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/retrieval.html"} +{"id": "1194f1d1ec81-1", "text": "docs = self.retriever.get_relevant_documents(question)\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n question = inputs[self.question_key]\n docs = await self.retriever.aget_relevant_documents(question)\n return self._reduce_tokens_below_limit(docs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/retrieval.html"} +{"id": "a70bdcbec8bc-0", "text": "Source code for langchain.chains.qa_with_sources.base\n\"\"\"Question answering with sources over documents.\"\"\"\nfrom __future__ import annotations\nimport re\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain\nfrom langchain.chains.qa_with_sources.map_reduce_prompt import (\n COMBINE_PROMPT,\n EXAMPLE_PROMPT,\n QUESTION_PROMPT,\n)\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nclass BaseQAWithSourcesChain(Chain, ABC):\n \"\"\"Question answering with sources over documents.\"\"\"\n combine_documents_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine documents.\"\"\"\n question_key: str = \"question\" #: :meta private:\n input_docs_key: str = \"docs\" #: :meta 
private:\n answer_key: str = \"answer\" #: :meta private:\n sources_answer_key: str = \"sources\" #: :meta private:\n return_source_documents: bool = False\n \"\"\"Return the source documents.\"\"\"\n @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n document_prompt: BasePromptTemplate = EXAMPLE_PROMPT,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} +{"id": "a70bdcbec8bc-1", "text": "document_prompt: BasePromptTemplate = EXAMPLE_PROMPT,\n question_prompt: BasePromptTemplate = QUESTION_PROMPT,\n combine_prompt: BasePromptTemplate = COMBINE_PROMPT,\n **kwargs: Any,\n ) -> BaseQAWithSourcesChain:\n \"\"\"Construct the chain from an LLM.\"\"\"\n llm_question_chain = LLMChain(llm=llm, prompt=question_prompt)\n llm_combine_chain = LLMChain(llm=llm, prompt=combine_prompt)\n combine_results_chain = StuffDocumentsChain(\n llm_chain=llm_combine_chain,\n document_prompt=document_prompt,\n document_variable_name=\"summaries\",\n )\n combine_document_chain = MapReduceDocumentsChain(\n llm_chain=llm_question_chain,\n combine_document_chain=combine_results_chain,\n document_variable_name=\"context\",\n )\n return cls(\n combine_documents_chain=combine_document_chain,\n **kwargs,\n )\n @classmethod\n def from_chain_type(\n cls,\n llm: BaseLanguageModel,\n chain_type: str = \"stuff\",\n chain_type_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> BaseQAWithSourcesChain:\n \"\"\"Load chain from chain type.\"\"\"\n _chain_kwargs = chain_type_kwargs or {}\n combine_document_chain = load_qa_with_sources_chain(\n llm, chain_type=chain_type, **_chain_kwargs\n )\n return cls(combine_documents_chain=combine_document_chain, **kwargs)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} +{"id": "a70bdcbec8bc-2", "text": "def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.question_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n _output_keys = [self.answer_key, self.sources_answer_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n return _output_keys\n @root_validator(pre=True)\n def validate_naming(cls, values: Dict) -> Dict:\n \"\"\"Fix backwards compatability in naming.\"\"\"\n if \"combine_document_chain\" in values:\n values[\"combine_documents_chain\"] = values.pop(\"combine_document_chain\")\n return values\n @abstractmethod\n def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n \"\"\"Get docs to run questioning over.\"\"\"\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n docs = self._get_docs(inputs)\n answer = self.combine_documents_chain.run(\n input_documents=docs, callbacks=_run_manager.get_child(), **inputs\n )\n if re.search(r\"SOURCES:\\s\", answer):\n answer, sources = re.split(r\"SOURCES:\\s\", answer)\n else:\n sources = \"\"\n result: Dict[str, Any] = {\n self.answer_key: answer,\n self.sources_answer_key: sources,\n }\n if self.return_source_documents:\n result[\"source_documents\"] = docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} +{"id": "a70bdcbec8bc-3", "text": "}\n if self.return_source_documents:\n result[\"source_documents\"] = docs\n return result\n @abstractmethod\n async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n \"\"\"Get docs to run questioning over.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, 
Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n docs = await self._aget_docs(inputs)\n answer = await self.combine_documents_chain.arun(\n input_documents=docs, callbacks=_run_manager.get_child(), **inputs\n )\n if re.search(r\"SOURCES:\\s\", answer):\n answer, sources = re.split(r\"SOURCES:\\s\", answer)\n else:\n sources = \"\"\n result: Dict[str, Any] = {\n self.answer_key: answer,\n self.sources_answer_key: sources,\n }\n if self.return_source_documents:\n result[\"source_documents\"] = docs\n return result\n[docs]class QAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question answering with sources over documents.\"\"\"\n input_docs_key: str = \"docs\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_docs_key, self.question_key]\n def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n return inputs.pop(self.input_docs_key)\n async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} +{"id": "a70bdcbec8bc-4", "text": "return inputs.pop(self.input_docs_key)\n @property\n def _chain_type(self) -> str:\n return \"qa_with_sources_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html"} +{"id": "2b7475a403f3-0", "text": "Source code for langchain.chains.qa_with_sources.vector_db\n\"\"\"Question-answering with sources over a vector database.\"\"\"\nimport warnings\nfrom typing import Any, Dict, List\nfrom pydantic import Field, root_validator\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.qa_with_sources.base import BaseQAWithSourcesChain\nfrom langchain.docstore.document import Document\nfrom 
langchain.vectorstores.base import VectorStore\n[docs]class VectorDBQAWithSourcesChain(BaseQAWithSourcesChain):\n \"\"\"Question-answering with sources over a vector database.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n \"\"\"Vector Database to connect to.\"\"\"\n k: int = 4\n \"\"\"Number of results to return from store\"\"\"\n reduce_k_below_max_tokens: bool = False\n \"\"\"Reduce the number of results to return from store based on tokens limit\"\"\"\n max_tokens_limit: int = 3375\n \"\"\"Restrict the docs to return from store based on tokens,\n enforced only for StuffDocumentsChain and if reduce_k_below_max_tokens is set to true\"\"\"\n search_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Extra search args.\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)\n if self.reduce_k_below_max_tokens and isinstance(\n self.combine_documents_chain, StuffDocumentsChain\n ):\n tokens = [\n self.combine_documents_chain.llm_chain.llm.get_num_tokens(\n doc.page_content\n )\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/vector_db.html"} +{"id": "2b7475a403f3-1", "text": "num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n question = inputs[self.question_key]\n docs = self.vectorstore.similarity_search(\n question, k=self.k, **self.search_kwargs\n )\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:\n raise NotImplementedError(\"VectorDBQAWithSourcesChain does not support async\")\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n warnings.warn(\n \"`VectorDBQAWithSourcesChain` is deprecated - \"\n \"please use `from 
langchain.chains import RetrievalQAWithSourcesChain`\"\n )\n return values\n @property\n def _chain_type(self) -> str:\n return \"vector_db_qa_with_sources_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/vector_db.html"} +{"id": "06a5c6601dd4-0", "text": "Source code for langchain.chains.openai_functions.extraction\nfrom typing import Any, List\nfrom pydantic import BaseModel\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.openai_functions.utils import (\n _convert_schema,\n _resolve_schema_references,\n get_llm_kwargs,\n)\nfrom langchain.output_parsers.openai_functions import (\n JsonKeyOutputFunctionsParser,\n PydanticAttrOutputFunctionsParser,\n)\nfrom langchain.prompts import ChatPromptTemplate\ndef _get_extraction_function(entity_schema: dict) -> dict:\n return {\n \"name\": \"information_extraction\",\n \"description\": \"Extracts the relevant information from the passage.\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"info\": {\"type\": \"array\", \"items\": _convert_schema(entity_schema)}\n },\n \"required\": [\"info\"],\n },\n }\n_EXTRACTION_TEMPLATE = \"\"\"Extract and save the relevant entities mentioned\\\n in the following passage together with their properties.\nPassage:\n{input}\n\"\"\"\n[docs]def create_extraction_chain(schema: dict, llm: BaseLanguageModel) -> Chain:\n \"\"\"Creates a chain that extracts information from a passage.\n Args:\n schema: The schema of the entities to extract.\n llm: The language model to use.\n Returns:\n Chain that can be used to extract information from a passage.\n \"\"\"\n function = _get_extraction_function(schema)\n prompt = ChatPromptTemplate.from_template(_EXTRACTION_TEMPLATE)\n output_parser = JsonKeyOutputFunctionsParser(key_name=\"info\")", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/extraction.html"} +{"id": "06a5c6601dd4-1", "text": "output_parser = JsonKeyOutputFunctionsParser(key_name=\"info\")\n llm_kwargs = get_llm_kwargs(function)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain\n[docs]def create_extraction_chain_pydantic(\n pydantic_schema: Any, llm: BaseLanguageModel\n) -> Chain:\n \"\"\"Creates a chain that extracts information from a passage using pydantic schema.\n Args:\n pydantic_schema: The pydantic schema of the entities to extract.\n llm: The language model to use.\n Returns:\n Chain that can be used to extract information from a passage.\n \"\"\"\n class PydanticSchema(BaseModel):\n info: List[pydantic_schema] # type: ignore\n openai_schema = PydanticSchema.schema()\n openai_schema = _resolve_schema_references(\n openai_schema, openai_schema[\"definitions\"]\n )\n function = _get_extraction_function(openai_schema)\n prompt = ChatPromptTemplate.from_template(_EXTRACTION_TEMPLATE)\n output_parser = PydanticAttrOutputFunctionsParser(\n pydantic_schema=PydanticSchema, attr_name=\"info\"\n )\n llm_kwargs = get_llm_kwargs(function)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/extraction.html"} +{"id": "fb8c7913912e-0", "text": "Source code for langchain.chains.openai_functions.qa_with_structure\nfrom typing import Any, List, Optional, Type, Union\nfrom pydantic import BaseModel, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.openai_functions.utils import get_llm_kwargs\nfrom langchain.output_parsers.openai_functions import (\n OutputFunctionsParser,\n PydanticOutputFunctionsParser,\n)\nfrom langchain.prompts import 
PromptTemplate\nfrom langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.schema import BaseLLMOutputParser, HumanMessage, SystemMessage\nclass AnswerWithSources(BaseModel):\n \"\"\"An answer to the question being asked, with sources.\"\"\"\n answer: str = Field(..., description=\"Answer to the question that was asked\")\n sources: List[str] = Field(\n ..., description=\"List of sources used to answer the question\"\n )\n[docs]def create_qa_with_structure_chain(\n llm: BaseLanguageModel,\n schema: Union[dict, Type[BaseModel]],\n output_parser: str = \"base\",\n prompt: Optional[Union[PromptTemplate, ChatPromptTemplate]] = None,\n) -> LLMChain:\n \"\"\"Create a question answering chain that returns an answer with sources.\n Args:\n llm: Language model to use for the chain.\n schema: Pydantic schema to use for the output.\n output_parser: Output parser to use. Should be one of `pydantic` or `base`.\n Defaults to `base`.\n prompt: Optional prompt to use for the chain.\n Returns:\n \"\"\"\n if output_parser == \"pydantic\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/qa_with_structure.html"} +{"id": "fb8c7913912e-1", "text": "Returns:\n \"\"\"\n if output_parser == \"pydantic\":\n if not (isinstance(schema, type) and issubclass(schema, BaseModel)):\n raise ValueError(\n \"Must provide a pydantic class for schema when output_parser is \"\n \"'pydantic'.\"\n )\n _output_parser: BaseLLMOutputParser = PydanticOutputFunctionsParser(\n pydantic_schema=schema\n )\n elif output_parser == \"base\":\n _output_parser = OutputFunctionsParser()\n else:\n raise ValueError(\n f\"Got unexpected output_parser: {output_parser}. 
\"\n f\"Should be one of `pydantic` or `base`.\"\n )\n if isinstance(schema, type) and issubclass(schema, BaseModel):\n schema_dict = schema.schema()\n else:\n schema_dict = schema\n function = {\n \"name\": schema_dict[\"title\"],\n \"description\": schema_dict[\"description\"],\n \"parameters\": schema_dict,\n }\n llm_kwargs = get_llm_kwargs(function)\n messages = [\n SystemMessage(\n content=(\n \"You are a world class algorithm to answer \"\n \"questions in a specific format.\"\n )\n ),\n HumanMessage(content=\"Answer question using the following context\"),\n HumanMessagePromptTemplate.from_template(\"{context}\"),\n HumanMessagePromptTemplate.from_template(\"Question: {question}\"),\n HumanMessage(content=\"Tips: Make sure to answer in the correct format\"),\n ]\n prompt = prompt or ChatPromptTemplate(messages=messages)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=_output_parser,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/qa_with_structure.html"} +{"id": "fb8c7913912e-2", "text": "output_parser=_output_parser,\n )\n return chain\n[docs]def create_qa_with_sources_chain(llm: BaseLanguageModel, **kwargs: Any) -> LLMChain:\n \"\"\"Create a question answering chain that returns an answer with sources.\n Args:\n llm: Language model to use for the chain.\n **kwargs: Keyword arguments to pass to `create_qa_with_structure_chain`.\n Returns:\n Chain (LLMChain) that can be used to answer questions with citations.\n \"\"\"\n return create_qa_with_structure_chain(llm, AnswerWithSources, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/qa_with_structure.html"} +{"id": "a7a0d25e47a9-0", "text": "Source code for langchain.chains.openai_functions.citation_fuzzy_match\nfrom typing import Iterator, List\nfrom pydantic import BaseModel, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom 
langchain.chains.llm import LLMChain\nfrom langchain.chains.openai_functions.utils import get_llm_kwargs\nfrom langchain.output_parsers.openai_functions import (\n PydanticOutputFunctionsParser,\n)\nfrom langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate\nfrom langchain.schema import HumanMessage, SystemMessage\nclass FactWithEvidence(BaseModel):\n \"\"\"Class representing a single statement.\n Each fact has a body and a list of sources.\n If there are multiple facts, make sure to break them apart\n such that each one only uses a set of sources that are relevant to it.\n \"\"\"\n fact: str = Field(..., description=\"Body of the sentence, as part of a response\")\n substring_quote: List[str] = Field(\n ...,\n description=(\n \"Each source should be a direct quote from the context, \"\n \"as a substring of the original content\"\n ),\n )\n def _get_span(self, quote: str, context: str, errs: int = 100) -> Iterator[str]:\n import regex\n minor = quote\n major = context\n errs_ = 0\n s = regex.search(f\"({minor}){{e<={errs_}}}\", major)\n while s is None and errs_ <= errs:\n errs_ += 1\n s = regex.search(f\"({minor}){{e<={errs_}}}\", major)\n if s is not None:\n yield from s.spans()\n def get_spans(self, context: str) -> Iterator[str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/citation_fuzzy_match.html"} +{"id": "a7a0d25e47a9-1", "text": "def get_spans(self, context: str) -> Iterator[str]:\n for quote in self.substring_quote:\n yield from self._get_span(quote, context)\nclass QuestionAnswer(BaseModel):\n \"\"\"A question and its answer as a list of facts; each one should have a source.\n Each sentence contains a body and a list of sources.\"\"\"\n question: str = Field(..., description=\"Question that was asked\")\n answer: List[FactWithEvidence] = Field(\n ...,\n description=(\n \"Body of the answer, each fact should be \"\n \"its separate object with a body and a list of sources\"\n ),\n 
)\n[docs]def create_citation_fuzzy_match_chain(llm: BaseLanguageModel) -> LLMChain:\n \"\"\"Create a citation fuzzy match chain.\n Args:\n llm: Language model to use for the chain.\n Returns:\n Chain (LLMChain) that can be used to answer questions with citations.\n \"\"\"\n output_parser = PydanticOutputFunctionsParser(pydantic_schema=QuestionAnswer)\n schema = QuestionAnswer.schema()\n function = {\n \"name\": schema[\"title\"],\n \"description\": schema[\"description\"],\n \"parameters\": schema,\n }\n llm_kwargs = get_llm_kwargs(function)\n messages = [\n SystemMessage(\n content=(\n \"You are a world class algorithm to answer \"\n \"questions with correct and exact citations.\"\n )\n ),\n HumanMessage(content=\"Answer question using the following context\"),\n HumanMessagePromptTemplate.from_template(\"{context}\"),\n HumanMessagePromptTemplate.from_template(\"Question: {question}\"),\n HumanMessage(\n content=(\n \"Tips: Make sure to cite your sources, \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/citation_fuzzy_match.html"} +{"id": "a7a0d25e47a9-2", "text": "content=(\n \"Tips: Make sure to cite your sources, \"\n \"and use the exact words from the context.\"\n )\n ),\n ]\n prompt = ChatPromptTemplate(messages=messages)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/citation_fuzzy_match.html"} +{"id": "1040804c62b5-0", "text": "Source code for langchain.chains.openai_functions.tagging\nfrom typing import Any\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.openai_functions.utils import _convert_schema, get_llm_kwargs\nfrom langchain.output_parsers.openai_functions import (\n JsonOutputFunctionsParser,\n 
PydanticOutputFunctionsParser,\n)\nfrom langchain.prompts import ChatPromptTemplate\ndef _get_tagging_function(schema: dict) -> dict:\n return {\n \"name\": \"information_extraction\",\n \"description\": \"Extracts the relevant information from the passage.\",\n \"parameters\": _convert_schema(schema),\n }\n_TAGGING_TEMPLATE = \"\"\"Extract the desired information from the following passage.\nPassage:\n{input}\n\"\"\"\n[docs]def create_tagging_chain(schema: dict, llm: BaseLanguageModel) -> Chain:\n \"\"\"Creates a chain that extracts information from a passage.\n Args:\n schema: The schema of the entities to extract.\n llm: The language model to use.\n Returns:\n Chain (LLMChain) that can be used to extract information from a passage.\n \"\"\"\n function = _get_tagging_function(schema)\n prompt = ChatPromptTemplate.from_template(_TAGGING_TEMPLATE)\n output_parser = JsonOutputFunctionsParser()\n llm_kwargs = get_llm_kwargs(function)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n output_parser=output_parser,\n )\n return chain\n[docs]def create_tagging_chain_pydantic(\n pydantic_schema: Any, llm: BaseLanguageModel", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/tagging.html"} +{"id": "1040804c62b5-1", "text": "pydantic_schema: Any, llm: BaseLanguageModel\n) -> Chain:\n \"\"\"Creates a chain that extracts information from a passage.\n Args:\n pydantic_schema: The pydantic schema of the entities to extract.\n llm: The language model to use.\n Returns:\n Chain (LLMChain) that can be used to extract information from a passage.\n \"\"\"\n openai_schema = pydantic_schema.schema()\n function = _get_tagging_function(openai_schema)\n prompt = ChatPromptTemplate.from_template(_TAGGING_TEMPLATE)\n output_parser = PydanticOutputFunctionsParser(pydantic_schema=pydantic_schema)\n llm_kwargs = get_llm_kwargs(function)\n chain = LLMChain(\n llm=llm,\n prompt=prompt,\n llm_kwargs=llm_kwargs,\n 
output_parser=output_parser,\n )\n return chain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/openai_functions/tagging.html"} +{"id": "a0934a367be4-0", "text": "Source code for langchain.chains.api.base\n\"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts import BasePromptTemplate\nfrom langchain.requests import TextRequestsWrapper\n[docs]class APIChain(Chain):\n \"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\n api_request_chain: LLMChain\n api_answer_chain: LLMChain\n requests_wrapper: TextRequestsWrapper = Field(exclude=True)\n api_docs: str\n question_key: str = \"question\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.question_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n @root_validator(pre=True)\n def validate_api_request_prompt(cls, values: Dict) -> Dict:\n \"\"\"Check that api request prompt expects the right variables.\"\"\"\n input_vars = values[\"api_request_chain\"].prompt.input_variables\n expected_vars = {\"question\", \"api_docs\"}\n if set(input_vars) != expected_vars:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"} +{"id": "a0934a367be4-1", "text": "if set(input_vars) != expected_vars:\n 
raise ValueError(\n f\"Input variables should be {expected_vars}, got {input_vars}\"\n )\n return values\n @root_validator(pre=True)\n def validate_api_answer_prompt(cls, values: Dict) -> Dict:\n \"\"\"Check that api answer prompt expects the right variables.\"\"\"\n input_vars = values[\"api_answer_chain\"].prompt.input_variables\n expected_vars = {\"question\", \"api_docs\", \"api_url\", \"api_response\"}\n if set(input_vars) != expected_vars:\n raise ValueError(\n f\"Input variables should be {expected_vars}, got {input_vars}\"\n )\n return values\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.question_key]\n api_url = self.api_request_chain.predict(\n question=question,\n api_docs=self.api_docs,\n callbacks=_run_manager.get_child(),\n )\n _run_manager.on_text(api_url, color=\"green\", end=\"\\n\", verbose=self.verbose)\n api_url = api_url.strip()\n api_response = self.requests_wrapper.get(api_url)\n _run_manager.on_text(\n api_response, color=\"yellow\", end=\"\\n\", verbose=self.verbose\n )\n answer = self.api_answer_chain.predict(\n question=question,\n api_docs=self.api_docs,\n api_url=api_url,\n api_response=api_response,\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: answer}\n async def _acall(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"} +{"id": "a0934a367be4-2", "text": "return {self.output_key: answer}\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[self.question_key]\n api_url = await self.api_request_chain.apredict(\n question=question,\n api_docs=self.api_docs,\n callbacks=_run_manager.get_child(),\n 
)\n await _run_manager.on_text(\n api_url, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n api_url = api_url.strip()\n api_response = await self.requests_wrapper.aget(api_url)\n await _run_manager.on_text(\n api_response, color=\"yellow\", end=\"\\n\", verbose=self.verbose\n )\n answer = await self.api_answer_chain.apredict(\n question=question,\n api_docs=self.api_docs,\n api_url=api_url,\n api_response=api_response,\n callbacks=_run_manager.get_child(),\n )\n return {self.output_key: answer}\n[docs] @classmethod\n def from_llm_and_api_docs(\n cls,\n llm: BaseLanguageModel,\n api_docs: str,\n headers: Optional[dict] = None,\n api_url_prompt: BasePromptTemplate = API_URL_PROMPT,\n api_response_prompt: BasePromptTemplate = API_RESPONSE_PROMPT,\n **kwargs: Any,\n ) -> APIChain:\n \"\"\"Load chain from just an LLM and the api docs.\"\"\"\n get_request_chain = LLMChain(llm=llm, prompt=api_url_prompt)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"} +{"id": "a0934a367be4-3", "text": "requests_wrapper = TextRequestsWrapper(headers=headers)\n get_answer_chain = LLMChain(llm=llm, prompt=api_response_prompt)\n return cls(\n api_request_chain=get_request_chain,\n api_answer_chain=get_answer_chain,\n requests_wrapper=requests_wrapper,\n api_docs=api_docs,\n **kwargs,\n )\n @property\n def _chain_type(self) -> str:\n return \"api_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/base.html"} +{"id": "f6a1398e90f8-0", "text": "Source code for langchain.chains.api.openapi.chain\n\"\"\"Chain that makes API calls and summarizes the responses to answer a question.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import Any, Dict, List, NamedTuple, Optional, cast\nfrom pydantic import BaseModel, Field\nfrom requests import Response\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun, 
Callbacks\nfrom langchain.chains.api.openapi.requests_chain import APIRequesterChain\nfrom langchain.chains.api.openapi.response_chain import APIResponderChain\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.requests import Requests\nfrom langchain.tools.openapi.utils.api_models import APIOperation\nclass _ParamMapping(NamedTuple):\n \"\"\"Mapping from parameter name to parameter value.\"\"\"\n query_params: List[str]\n body_params: List[str]\n path_params: List[str]\n[docs]class OpenAPIEndpointChain(Chain, BaseModel):\n \"\"\"Chain interacts with an OpenAPI endpoint using natural language.\"\"\"\n api_request_chain: LLMChain\n api_response_chain: Optional[LLMChain]\n api_operation: APIOperation\n requests: Requests = Field(exclude=True, default_factory=Requests)\n param_mapping: _ParamMapping = Field(alias=\"param_mapping\")\n return_intermediate_steps: bool = False\n instructions_key: str = \"instructions\" #: :meta private:\n output_key: str = \"output\" #: :meta private:\n max_text_length: Optional[int] = Field(ge=0) #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.instructions_key]\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} +{"id": "f6a1398e90f8-1", "text": "\"\"\"\n return [self.instructions_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, \"intermediate_steps\"]\n def _construct_path(self, args: Dict[str, str]) -> str:\n \"\"\"Construct the path from the deserialized input.\"\"\"\n path = self.api_operation.base_url + self.api_operation.path\n for param in self.param_mapping.path_params:\n path = path.replace(f\"{{{param}}}\", str(args.pop(param, \"\")))\n return path\n def 
_extract_query_params(self, args: Dict[str, str]) -> Dict[str, str]:\n \"\"\"Extract the query params from the deserialized input.\"\"\"\n query_params = {}\n for param in self.param_mapping.query_params:\n if param in args:\n query_params[param] = args.pop(param)\n return query_params\n def _extract_body_params(self, args: Dict[str, str]) -> Optional[Dict[str, str]]:\n \"\"\"Extract the request body params from the deserialized input.\"\"\"\n body_params = None\n if self.param_mapping.body_params:\n body_params = {}\n for param in self.param_mapping.body_params:\n if param in args:\n body_params[param] = args.pop(param)\n return body_params\n[docs] def deserialize_json_input(self, serialized_args: str) -> dict:\n \"\"\"Use the serialized typescript dictionary.\n Resolve the path, query params dict, and optional requestBody dict.\n \"\"\"\n args: dict = json.loads(serialized_args)\n path = self._construct_path(args)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} +{"id": "f6a1398e90f8-2", "text": "path = self._construct_path(args)\n body_params = self._extract_body_params(args)\n query_params = self._extract_query_params(args)\n return {\n \"url\": path,\n \"data\": body_params,\n \"params\": query_params,\n }\n def _get_output(self, output: str, intermediate_steps: dict) -> dict:\n \"\"\"Return the output from the API call.\"\"\"\n if self.return_intermediate_steps:\n return {\n self.output_key: output,\n \"intermediate_steps\": intermediate_steps,\n }\n else:\n return {self.output_key: output}\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n intermediate_steps = {}\n instructions = inputs[self.instructions_key]\n instructions = instructions[: self.max_text_length]\n _api_arguments = self.api_request_chain.predict_and_parse(\n 
instructions=instructions, callbacks=_run_manager.get_child()\n )\n api_arguments = cast(str, _api_arguments)\n intermediate_steps[\"request_args\"] = api_arguments\n _run_manager.on_text(\n api_arguments, color=\"green\", end=\"\\n\", verbose=self.verbose\n )\n if api_arguments.startswith(\"ERROR\"):\n return self._get_output(api_arguments, intermediate_steps)\n elif api_arguments.startswith(\"MESSAGE:\"):\n return self._get_output(\n api_arguments[len(\"MESSAGE:\") :], intermediate_steps\n )\n try:\n request_args = self.deserialize_json_input(api_arguments)\n method = getattr(self.requests, self.api_operation.method.value)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} +{"id": "f6a1398e90f8-3", "text": "method = getattr(self.requests, self.api_operation.method.value)\n api_response: Response = method(**request_args)\n if api_response.status_code != 200:\n method_str = str(self.api_operation.method.value)\n response_text = (\n f\"{api_response.status_code}: {api_response.reason}\"\n + f\"\\nFor {method_str.upper()} {request_args['url']}\\n\"\n + f\"Called with args: {request_args['params']}\"\n )\n else:\n response_text = api_response.text\n except Exception as e:\n response_text = f\"Error with message {str(e)}\"\n response_text = response_text[: self.max_text_length]\n intermediate_steps[\"response_text\"] = response_text\n _run_manager.on_text(\n response_text, color=\"blue\", end=\"\\n\", verbose=self.verbose\n )\n if self.api_response_chain is not None:\n _answer = self.api_response_chain.predict_and_parse(\n response=response_text,\n instructions=instructions,\n callbacks=_run_manager.get_child(),\n )\n answer = cast(str, _answer)\n _run_manager.on_text(answer, color=\"yellow\", end=\"\\n\", verbose=self.verbose)\n return self._get_output(answer, intermediate_steps)\n else:\n return self._get_output(response_text, intermediate_steps)\n[docs] @classmethod\n def from_url_and_method(\n cls,\n spec_url: 
str,\n path: str,\n method: str,\n llm: BaseLanguageModel,\n requests: Optional[Requests] = None,\n return_intermediate_steps: bool = False,\n **kwargs: Any\n # TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} +{"id": "f6a1398e90f8-4", "text": "# TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":\n \"\"\"Create an OpenAPIEndpoint from a spec at the specified url.\"\"\"\n operation = APIOperation.from_openapi_url(spec_url, path, method)\n return cls.from_api_operation(\n operation,\n requests=requests,\n llm=llm,\n return_intermediate_steps=return_intermediate_steps,\n **kwargs,\n )\n[docs] @classmethod\n def from_api_operation(\n cls,\n operation: APIOperation,\n llm: BaseLanguageModel,\n requests: Optional[Requests] = None,\n verbose: bool = False,\n return_intermediate_steps: bool = False,\n raw_response: bool = False,\n callbacks: Callbacks = None,\n **kwargs: Any\n # TODO: Handle async\n ) -> \"OpenAPIEndpointChain\":\n \"\"\"Create an OpenAPIEndpointChain from an operation and a spec.\"\"\"\n param_mapping = _ParamMapping(\n query_params=operation.query_params,\n body_params=operation.body_params,\n path_params=operation.path_params,\n )\n requests_chain = APIRequesterChain.from_llm_and_typescript(\n llm,\n typescript_definition=operation.to_typescript(),\n verbose=verbose,\n callbacks=callbacks,\n )\n if raw_response:\n response_chain = None\n else:\n response_chain = APIResponderChain.from_llm(\n llm, verbose=verbose, callbacks=callbacks\n )\n _requests = requests or Requests()\n return cls(\n api_request_chain=requests_chain,\n api_response_chain=response_chain,\n api_operation=operation,\n requests=_requests,\n param_mapping=param_mapping,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} +{"id": "f6a1398e90f8-5", "text": "requests=_requests,\n param_mapping=param_mapping,\n 
verbose=verbose,\n return_intermediate_steps=return_intermediate_steps,\n callbacks=callbacks,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html"} +{"id": "1ebd10c5982b-0", "text": "Source code for langchain.chains.combine_documents.base\n\"\"\"Base interface for chains combining documents.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter\ndef format_document(doc: Document, prompt: BasePromptTemplate) -> str:\n \"\"\"Format a document into a string based on a prompt template.\"\"\"\n base_info = {\"page_content\": doc.page_content}\n base_info.update(doc.metadata)\n missing_metadata = set(prompt.input_variables).difference(base_info)\n if len(missing_metadata) > 0:\n required_metadata = [\n iv for iv in prompt.input_variables if iv != \"page_content\"\n ]\n raise ValueError(\n f\"Document prompt requires documents to have metadata variables: \"\n f\"{required_metadata}. 
Received document with missing metadata: \"\n f\"{list(missing_metadata)}.\"\n )\n document_info = {k: base_info[k] for k in prompt.input_variables}\n return prompt.format(**document_info)\nclass BaseCombineDocumentsChain(Chain, ABC):\n \"\"\"Base interface for chains combining documents.\"\"\"\n input_key: str = \"input_documents\" #: :meta private:\n output_key: str = \"output_text\" #: :meta private:\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"} +{"id": "1ebd10c5982b-1", "text": "\"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def prompt_length(self, docs: List[Document], **kwargs: Any) -> Optional[int]:\n \"\"\"Return the prompt length given the documents passed in.\n Returns None if the method does not depend on the prompt length.\n \"\"\"\n return None\n @abstractmethod\n def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:\n \"\"\"Combine documents into a single string.\"\"\"\n @abstractmethod\n async def acombine_docs(\n self, docs: List[Document], **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents into a single string asynchronously.\"\"\"\n def _call(\n self,\n inputs: Dict[str, List[Document]],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n docs = inputs[self.input_key]\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n output, extra_return_dict = self.combine_docs(\n docs, callbacks=_run_manager.get_child(), **other_keys\n )\n extra_return_dict[self.output_key] = output\n return extra_return_dict\n 
async def _acall(\n self,\n inputs: Dict[str, List[Document]],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"} +{"id": "1ebd10c5982b-2", "text": ") -> Dict[str, str]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n docs = inputs[self.input_key]\n # Other keys are assumed to be needed for LLM prediction\n other_keys = {k: v for k, v in inputs.items() if k != self.input_key}\n output, extra_return_dict = await self.acombine_docs(\n docs, callbacks=_run_manager.get_child(), **other_keys\n )\n extra_return_dict[self.output_key] = output\n return extra_return_dict\n[docs]class AnalyzeDocumentChain(Chain):\n \"\"\"Chain that splits documents, then analyzes it in pieces.\"\"\"\n input_key: str = \"input_document\" #: :meta private:\n text_splitter: TextSplitter = Field(default_factory=RecursiveCharacterTextSplitter)\n combine_docs_chain: BaseCombineDocumentsChain\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return output key.\n :meta private:\n \"\"\"\n return self.combine_docs_chain.output_keys\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n document = inputs[self.input_key]\n docs = self.text_splitter.create_documents([document])\n # Other keys are assumed to be needed for LLM prediction\n other_keys: Dict = {k: v for k, v in inputs.items() if k != self.input_key}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"} +{"id": "1ebd10c5982b-3", "text": "other_keys[self.combine_docs_chain.input_key] = docs\n return 
self.combine_docs_chain(\n other_keys, return_only_outputs=True, callbacks=_run_manager.get_child()\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html"} +{"id": "0d7bc7e19e0a-0", "text": "Source code for langchain.chains.combine_documents.stuff\n\"\"\"Chain that combines documents by stuffing into context.\"\"\"\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import (\n BaseCombineDocumentsChain,\n format_document,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\ndef _get_default_document_prompt() -> PromptTemplate:\n return PromptTemplate(input_variables=[\"page_content\"], template=\"{page_content}\")\n[docs]class StuffDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Chain that combines documents by stuffing into context.\"\"\"\n llm_chain: LLMChain\n \"\"\"LLM wrapper to use after formatting documents.\"\"\"\n document_prompt: BasePromptTemplate = Field(\n default_factory=_get_default_document_prompt\n )\n \"\"\"Prompt to use to format each document.\"\"\"\n document_variable_name: str\n \"\"\"The variable name in the llm_chain to put the documents in.\n If only one variable in the llm_chain, this need not be provided.\"\"\"\n document_separator: str = \"\\n\\n\"\n \"\"\"The string with which to join the formatted documents\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def get_default_document_variable_name(cls, values: Dict) -> Dict:\n \"\"\"Get default document variable name, if not provided.\"\"\"\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if 
\"document_variable_name\" not in values:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/stuff.html"} +{"id": "0d7bc7e19e0a-1", "text": "if \"document_variable_name\" not in values:\n if len(llm_chain_variables) == 1:\n values[\"document_variable_name\"] = llm_chain_variables[0]\n else:\n raise ValueError(\n \"document_variable_name must be provided if there are \"\n \"multiple llm_chain_variables\"\n )\n else:\n if values[\"document_variable_name\"] not in llm_chain_variables:\n raise ValueError(\n f\"document_variable_name {values['document_variable_name']} was \"\n f\"not found in llm_chain input_variables: {llm_chain_variables}\"\n )\n return values\n def _get_inputs(self, docs: List[Document], **kwargs: Any) -> dict:\n # Format each document according to the prompt\n doc_strings = [format_document(doc, self.document_prompt) for doc in docs]\n # Join the documents together to put them in the prompt.\n inputs = {\n k: v\n for k, v in kwargs.items()\n if k in self.llm_chain.prompt.input_variables\n }\n inputs[self.document_variable_name] = self.document_separator.join(doc_strings)\n return inputs\n[docs] def prompt_length(self, docs: List[Document], **kwargs: Any) -> Optional[int]:\n \"\"\"Get the prompt length by formatting the prompt.\"\"\"\n inputs = self._get_inputs(docs, **kwargs)\n prompt = self.llm_chain.prompt.format(**inputs)\n return self.llm_chain.llm.get_num_tokens(prompt)\n[docs] def combine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Stuff all documents into one prompt and pass to LLM.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/stuff.html"} +{"id": "0d7bc7e19e0a-2", "text": "\"\"\"Stuff all documents into one prompt and pass to LLM.\"\"\"\n inputs = self._get_inputs(docs, **kwargs)\n # Call predict on the LLM.\n return self.llm_chain.predict(callbacks=callbacks, 
**inputs), {}\n[docs] async def acombine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Stuff all documents into one prompt and pass to LLM.\"\"\"\n inputs = self._get_inputs(docs, **kwargs)\n # Call predict on the LLM.\n return await self.llm_chain.apredict(callbacks=callbacks, **inputs), {}\n @property\n def _chain_type(self) -> str:\n return \"stuff_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/stuff.html"} +{"id": "e81db1ed4ae5-0", "text": "Source code for langchain.chains.combine_documents.map_reduce\n\"\"\"Combining documents by mapping a chain over them first, then combining results.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Callable, Dict, List, Optional, Protocol, Tuple\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nclass CombineDocsProtocol(Protocol):\n \"\"\"Interface for the combine_docs method.\"\"\"\n def __call__(self, docs: List[Document], **kwargs: Any) -> str:\n \"\"\"Interface for the combine_docs method.\"\"\"\ndef _split_list_of_docs(\n docs: List[Document], length_func: Callable, token_max: int, **kwargs: Any\n) -> List[List[Document]]:\n new_result_doc_list = []\n _sub_result_docs = []\n for doc in docs:\n _sub_result_docs.append(doc)\n _num_tokens = length_func(_sub_result_docs, **kwargs)\n if _num_tokens > token_max:\n if len(_sub_result_docs) == 1:\n raise ValueError(\n \"A single document was longer than the context length,\"\n \" we cannot handle this.\"\n )\n if len(_sub_result_docs) == 2:\n raise ValueError(\n \"A single document was so long it could not be combined \"\n \"with another document, we cannot handle this.\"\n )\n 
new_result_doc_list.append(_sub_result_docs[:-1])\n _sub_result_docs = _sub_result_docs[-1:]\n new_result_doc_list.append(_sub_result_docs)\n return new_result_doc_list\ndef _collapse_docs(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} +{"id": "e81db1ed4ae5-1", "text": "return new_result_doc_list\ndef _collapse_docs(\n docs: List[Document],\n combine_document_func: CombineDocsProtocol,\n **kwargs: Any,\n) -> Document:\n result = combine_document_func(docs, **kwargs)\n combined_metadata = {k: str(v) for k, v in docs[0].metadata.items()}\n for doc in docs[1:]:\n for k, v in doc.metadata.items():\n if k in combined_metadata:\n combined_metadata[k] += f\", {v}\"\n else:\n combined_metadata[k] = str(v)\n return Document(page_content=result, metadata=combined_metadata)\n[docs]class MapReduceDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Combining documents by mapping a chain over them, then combining results.\"\"\"\n llm_chain: LLMChain\n \"\"\"Chain to apply to each document individually.\"\"\"\n combine_document_chain: BaseCombineDocumentsChain\n \"\"\"Chain to use to combine results of applying llm_chain to documents.\"\"\"\n collapse_document_chain: Optional[BaseCombineDocumentsChain] = None\n \"\"\"Chain to use to collapse intermediary results if needed.\n If None, will use the combine_document_chain.\"\"\"\n document_variable_name: str\n \"\"\"The variable name in the llm_chain to put the documents in.\n If only one variable in the llm_chain, this need not be provided.\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Return the results of the map steps in the output.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n _output_keys = super().output_keys\n if self.return_intermediate_steps:\n _output_keys = _output_keys + [\"intermediate_steps\"]", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} +{"id": "e81db1ed4ae5-2", "text": "_output_keys = _output_keys + [\"intermediate_steps\"]\n return _output_keys\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def get_return_intermediate_steps(cls, values: Dict) -> Dict:\n \"\"\"For backwards compatibility.\"\"\"\n if \"return_map_steps\" in values:\n values[\"return_intermediate_steps\"] = values[\"return_map_steps\"]\n del values[\"return_map_steps\"]\n return values\n @root_validator(pre=True)\n def get_default_document_variable_name(cls, values: Dict) -> Dict:\n \"\"\"Get default document variable name, if not provided.\"\"\"\n if \"document_variable_name\" not in values:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if len(llm_chain_variables) == 1:\n values[\"document_variable_name\"] = llm_chain_variables[0]\n else:\n raise ValueError(\n \"document_variable_name must be provided if there are \"\n \"multiple llm_chain input_variables\"\n )\n else:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if values[\"document_variable_name\"] not in llm_chain_variables:\n raise ValueError(\n f\"document_variable_name {values['document_variable_name']} was \"\n f\"not found in llm_chain input_variables: {llm_chain_variables}\"\n )\n return values\n @property\n def _collapse_chain(self) -> BaseCombineDocumentsChain:\n if self.collapse_document_chain is not None:\n return self.collapse_document_chain\n else:\n return self.combine_document_chain\n[docs] def combine_docs(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} +{"id": "e81db1ed4ae5-3", "text": "return self.combine_document_chain\n[docs] def combine_docs(\n self,\n docs: List[Document],\n token_max: int = 3000,\n callbacks: 
Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents in a map reduce manner.\n Combine by mapping first chain over all documents, then reducing the results.\n This reducing can be done recursively if needed (if there are many documents).\n \"\"\"\n results = self.llm_chain.apply(\n # FYI - this is parallelized and so it is fast.\n [{self.document_variable_name: d.page_content, **kwargs} for d in docs],\n callbacks=callbacks,\n )\n return self._process_results(\n results, docs, token_max, callbacks=callbacks, **kwargs\n )\n[docs] async def acombine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents in a map reduce manner.\n Combine by mapping first chain over all documents, then reducing the results.\n This reducing can be done recursively if needed (if there are many documents).\n \"\"\"\n results = await self.llm_chain.aapply(\n # FYI - this is parallelized and so it is fast.\n [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs],\n callbacks=callbacks,\n )\n return await self._aprocess_results(\n results, docs, callbacks=callbacks, **kwargs\n )\n def _process_results_common(\n self,\n results: List[Dict],\n docs: List[Document],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} +{"id": "e81db1ed4ae5-4", "text": "self,\n results: List[Dict],\n docs: List[Document],\n token_max: int = 3000,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[List[Document], dict]:\n question_result_key = self.llm_chain.output_key\n result_docs = [\n Document(page_content=r[question_result_key], metadata=docs[i].metadata)\n # This uses metadata from the docs, and the textual results from `results`\n for i, r in enumerate(results)\n ]\n length_func = self.combine_document_chain.prompt_length\n num_tokens = length_func(result_docs, **kwargs)\n def _collapse_docs_func(docs: 
List[Document], **kwargs: Any) -> str:\n return self._collapse_chain.run(\n input_documents=docs, callbacks=callbacks, **kwargs\n )\n while num_tokens is not None and num_tokens > token_max:\n new_result_doc_list = _split_list_of_docs(\n result_docs, length_func, token_max, **kwargs\n )\n result_docs = []\n for docs in new_result_doc_list:\n new_doc = _collapse_docs(docs, _collapse_docs_func, **kwargs)\n result_docs.append(new_doc)\n num_tokens = length_func(result_docs, **kwargs)\n if self.return_intermediate_steps:\n _results = [r[self.llm_chain.output_key] for r in results]\n extra_return_dict = {\"intermediate_steps\": _results}\n else:\n extra_return_dict = {}\n return result_docs, extra_return_dict\n def _process_results(\n self,\n results: List[Dict],\n docs: List[Document],\n token_max: int = 3000,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} +{"id": "e81db1ed4ae5-5", "text": "docs: List[Document],\n token_max: int = 3000,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[str, dict]:\n result_docs, extra_return_dict = self._process_results_common(\n results, docs, token_max, callbacks=callbacks, **kwargs\n )\n output = self.combine_document_chain.run(\n input_documents=result_docs, callbacks=callbacks, **kwargs\n )\n return output, extra_return_dict\n async def _aprocess_results(\n self,\n results: List[Dict],\n docs: List[Document],\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Tuple[str, dict]:\n result_docs, extra_return_dict = self._process_results_common(\n results, docs, callbacks=callbacks, **kwargs\n )\n output = await self.combine_document_chain.arun(\n input_documents=result_docs, callbacks=callbacks, **kwargs\n )\n return output, extra_return_dict\n @property\n def _chain_type(self) -> str:\n return \"map_reduce_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_reduce.html"} +{"id": 
"d5e6c9dd5e49-0", "text": "Source code for langchain.chains.combine_documents.map_rerank\n\"\"\"Combining documents by mapping a chain over them first, then reranking results.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Union, cast\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom langchain.output_parsers.regex import RegexParser\n[docs]class MapRerankDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Combining documents by mapping a chain over them, then reranking results.\"\"\"\n llm_chain: LLMChain\n \"\"\"Chain to apply to each document individually.\"\"\"\n document_variable_name: str\n \"\"\"The variable name in the llm_chain to put the documents in.\n If only one variable in the llm_chain, this need not be provided.\"\"\"\n rank_key: str\n \"\"\"Key in output of llm_chain to rank on.\"\"\"\n answer_key: str\n \"\"\"Key in output of llm_chain to return as answer.\"\"\"\n metadata_keys: Optional[List[str]] = None\n return_intermediate_steps: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"\n _output_keys = super().output_keys\n if self.return_intermediate_steps:\n _output_keys = _output_keys + [\"intermediate_steps\"]\n if self.metadata_keys is not None:\n _output_keys += self.metadata_keys\n return _output_keys", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} +{"id": "d5e6c9dd5e49-1", "text": "_output_keys += self.metadata_keys\n return _output_keys\n @root_validator()\n def validate_llm_output(cls, values: Dict) -> Dict:\n 
\"\"\"Validate that the combine chain outputs a dictionary.\"\"\"\n output_parser = values[\"llm_chain\"].prompt.output_parser\n if not isinstance(output_parser, RegexParser):\n raise ValueError(\n \"Output parser of llm_chain should be a RegexParser,\"\n f\" got {output_parser}\"\n )\n output_keys = output_parser.output_keys\n if values[\"rank_key\"] not in output_keys:\n raise ValueError(\n f\"Got {values['rank_key']} as key to rank on, but did not find \"\n f\"it in the llm_chain output keys ({output_keys})\"\n )\n if values[\"answer_key\"] not in output_keys:\n raise ValueError(\n f\"Got {values['answer_key']} as key to return, but did not find \"\n f\"it in the llm_chain output keys ({output_keys})\"\n )\n return values\n @root_validator(pre=True)\n def get_default_document_variable_name(cls, values: Dict) -> Dict:\n \"\"\"Get default document variable name, if not provided.\"\"\"\n if \"document_variable_name\" not in values:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if len(llm_chain_variables) == 1:\n values[\"document_variable_name\"] = llm_chain_variables[0]\n else:\n raise ValueError(\n \"document_variable_name must be provided if there are \"\n \"multiple llm_chain input_variables\"\n )\n else:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} +{"id": "d5e6c9dd5e49-2", "text": "else:\n llm_chain_variables = values[\"llm_chain\"].prompt.input_variables\n if values[\"document_variable_name\"] not in llm_chain_variables:\n raise ValueError(\n f\"document_variable_name {values['document_variable_name']} was \"\n f\"not found in llm_chain input_variables: {llm_chain_variables}\"\n )\n return values\n[docs] def combine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents in a map rerank manner.\n Combine by mapping first 
chain over all documents, then reranking the results.\n \"\"\"\n results = self.llm_chain.apply_and_parse(\n # FYI - this is parallelized and so it is fast.\n [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs],\n callbacks=callbacks,\n )\n return self._process_results(docs, results)\n[docs] async def acombine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine documents in a map rerank manner.\n Combine by mapping first chain over all documents, then reranking the results.\n \"\"\"\n results = await self.llm_chain.aapply_and_parse(\n # FYI - this is parallelized and so it is fast.\n [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs],\n callbacks=callbacks,\n )\n return self._process_results(docs, results)\n def _process_results(\n self,\n docs: List[Document],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} +{"id": "d5e6c9dd5e49-3", "text": "def _process_results(\n self,\n docs: List[Document],\n results: Sequence[Union[str, List[str], Dict[str, str]]],\n ) -> Tuple[str, dict]:\n typed_results = cast(List[dict], results)\n sorted_res = sorted(\n zip(typed_results, docs), key=lambda x: -int(x[0][self.rank_key])\n )\n output, document = sorted_res[0]\n extra_info = {}\n if self.metadata_keys is not None:\n for key in self.metadata_keys:\n extra_info[key] = document.metadata[key]\n if self.return_intermediate_steps:\n extra_info[\"intermediate_steps\"] = results\n return output[self.answer_key], extra_info\n @property\n def _chain_type(self) -> str:\n return \"map_rerank_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/map_rerank.html"} +{"id": "07104dfa9808-0", "text": "Source code for langchain.chains.combine_documents.refine\n\"\"\"Combining documents by doing a first pass and then refining on more 
documents.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Tuple\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.combine_documents.base import (\n BaseCombineDocumentsChain,\n format_document,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.docstore.document import Document\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\ndef _get_default_document_prompt() -> PromptTemplate:\n return PromptTemplate(input_variables=[\"page_content\"], template=\"{page_content}\")\n[docs]class RefineDocumentsChain(BaseCombineDocumentsChain):\n \"\"\"Combine documents by doing a first pass and then refining on more documents.\"\"\"\n initial_llm_chain: LLMChain\n \"\"\"LLM chain to use on initial document.\"\"\"\n refine_llm_chain: LLMChain\n \"\"\"LLM chain to use when refining.\"\"\"\n document_variable_name: str\n \"\"\"The variable name in the initial_llm_chain to put the documents in.\n If only one variable in the initial_llm_chain, this need not be provided.\"\"\"\n initial_response_name: str\n \"\"\"The variable name to format the initial response in when refining.\"\"\"\n document_prompt: BasePromptTemplate = Field(\n default_factory=_get_default_document_prompt\n )\n \"\"\"Prompt to use to format each document.\"\"\"\n return_intermediate_steps: bool = False\n \"\"\"Return the results of the refine steps in the output.\"\"\"\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Expect input key.\n :meta private:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} +{"id": "07104dfa9808-1", "text": "\"\"\"Expect input key.\n :meta private:\n \"\"\"\n _output_keys = super().output_keys\n if self.return_intermediate_steps:\n _output_keys = _output_keys + [\"intermediate_steps\"]\n return _output_keys\n class Config:\n 
\"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def get_return_intermediate_steps(cls, values: Dict) -> Dict:\n \"\"\"For backwards compatibility.\"\"\"\n if \"return_refine_steps\" in values:\n values[\"return_intermediate_steps\"] = values[\"return_refine_steps\"]\n del values[\"return_refine_steps\"]\n return values\n @root_validator(pre=True)\n def get_default_document_variable_name(cls, values: Dict) -> Dict:\n \"\"\"Get default document variable name, if not provided.\"\"\"\n if \"document_variable_name\" not in values:\n llm_chain_variables = values[\"initial_llm_chain\"].prompt.input_variables\n if len(llm_chain_variables) == 1:\n values[\"document_variable_name\"] = llm_chain_variables[0]\n else:\n raise ValueError(\n \"document_variable_name must be provided if there are \"\n \"multiple llm_chain input_variables\"\n )\n else:\n llm_chain_variables = values[\"initial_llm_chain\"].prompt.input_variables\n if values[\"document_variable_name\"] not in llm_chain_variables:\n raise ValueError(\n f\"document_variable_name {values['document_variable_name']} was \"\n f\"not found in llm_chain input_variables: {llm_chain_variables}\"\n )\n return values\n[docs] def combine_docs(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} +{"id": "07104dfa9808-2", "text": ")\n return values\n[docs] def combine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine by mapping first chain over all, then stuffing into final chain.\"\"\"\n inputs = self._construct_initial_inputs(docs, **kwargs)\n res = self.initial_llm_chain.predict(callbacks=callbacks, **inputs)\n refine_steps = [res]\n for doc in docs[1:]:\n base_inputs = self._construct_refine_inputs(doc, res)\n inputs = {**base_inputs, **kwargs}\n res = self.refine_llm_chain.predict(callbacks=callbacks, 
**inputs)\n refine_steps.append(res)\n return self._construct_result(refine_steps, res)\n[docs] async def acombine_docs(\n self, docs: List[Document], callbacks: Callbacks = None, **kwargs: Any\n ) -> Tuple[str, dict]:\n \"\"\"Combine by mapping first chain over all, then stuffing into final chain.\"\"\"\n inputs = self._construct_initial_inputs(docs, **kwargs)\n res = await self.initial_llm_chain.apredict(callbacks=callbacks, **inputs)\n refine_steps = [res]\n for doc in docs[1:]:\n base_inputs = self._construct_refine_inputs(doc, res)\n inputs = {**base_inputs, **kwargs}\n res = await self.refine_llm_chain.apredict(callbacks=callbacks, **inputs)\n refine_steps.append(res)\n return self._construct_result(refine_steps, res)\n def _construct_result(self, refine_steps: List[str], res: str) -> Tuple[str, dict]:\n if self.return_intermediate_steps:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} +{"id": "07104dfa9808-3", "text": "if self.return_intermediate_steps:\n extra_return_dict = {\"intermediate_steps\": refine_steps}\n else:\n extra_return_dict = {}\n return res, extra_return_dict\n def _construct_refine_inputs(self, doc: Document, res: str) -> Dict[str, Any]:\n return {\n self.document_variable_name: format_document(doc, self.document_prompt),\n self.initial_response_name: res,\n }\n def _construct_initial_inputs(\n self, docs: List[Document], **kwargs: Any\n ) -> Dict[str, Any]:\n base_info = {\"page_content\": docs[0].page_content}\n base_info.update(docs[0].metadata)\n document_info = {k: base_info[k] for k in self.document_prompt.input_variables}\n base_inputs: dict = {\n self.document_variable_name: self.document_prompt.format(**document_info)\n }\n inputs = {**base_inputs, **kwargs}\n return inputs\n @property\n def _chain_type(self) -> str:\n return \"refine_documents_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/refine.html"} 
+{"id": "ec29731682d8-0", "text": "Source code for langchain.chains.pal.base\n\"\"\"Implements Program-Aided Language Models.\nAs in https://arxiv.org/pdf/2211.10435.pdf.\n\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.pal.colored_object_prompt import COLORED_OBJECT_PROMPT\nfrom langchain.chains.pal.math_prompt import MATH_PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.utilities import PythonREPL\n[docs]class PALChain(Chain):\n \"\"\"Implements Program-Aided Language Models.\"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated]\"\"\"\n prompt: BasePromptTemplate = MATH_PROMPT\n \"\"\"[Deprecated]\"\"\"\n stop: str = \"\\n\\n\"\n get_answer_expr: str = \"print(solution())\"\n python_globals: Optional[Dict[str, Any]] = None\n python_locals: Optional[Dict[str, Any]] = None\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an PALChain with an llm is deprecated. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"} +{"id": "ec29731682d8-1", "text": "\"Directly instantiating an PALChain with an llm is deprecated. 
\"\n \"Please instantiate with llm_chain argument or using the one of \"\n \"the class method constructors from_math_prompt, \"\n \"from_colored_object_prompt.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=MATH_PROMPT)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return self.prompt.input_variables\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, \"intermediate_steps\"]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n code = self.llm_chain.predict(\n stop=[self.stop], callbacks=_run_manager.get_child(), **inputs\n )\n _run_manager.on_text(code, color=\"green\", end=\"\\n\", verbose=self.verbose)\n repl = PythonREPL(_globals=self.python_globals, _locals=self.python_locals)\n res = repl.run(code + f\"\\n{self.get_answer_expr}\")\n output = {self.output_key: res.strip()}\n if self.return_intermediate_steps:\n output[\"intermediate_steps\"] = code", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"} +{"id": "ec29731682d8-2", "text": "if self.return_intermediate_steps:\n output[\"intermediate_steps\"] = code\n return output\n[docs] @classmethod\n def from_math_prompt(cls, llm: BaseLanguageModel, **kwargs: Any) -> PALChain:\n \"\"\"Load PAL from math prompt.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=MATH_PROMPT)\n return cls(\n llm_chain=llm_chain,\n stop=\"\\n\\n\",\n get_answer_expr=\"print(solution())\",\n **kwargs,\n )\n[docs] @classmethod\n def from_colored_object_prompt(\n cls, llm: BaseLanguageModel, **kwargs: Any\n ) -> 
PALChain:\n \"\"\"Load PAL from colored object prompt.\"\"\"\n llm_chain = LLMChain(llm=llm, prompt=COLORED_OBJECT_PROMPT)\n return cls(\n llm_chain=llm_chain,\n stop=\"\\n\\n\\n\",\n get_answer_expr=\"print(answer)\",\n **kwargs,\n )\n @property\n def _chain_type(self) -> str:\n return \"pal_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html"} +{"id": "c596050b2c94-0", "text": "Source code for langchain.chains.conversational_retrieval.base\n\"\"\"Chain for chatting with a vector database.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom abc import abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, Union\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForChainRun,\n CallbackManagerForChainRun,\n Callbacks,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.combine_documents.base import BaseCombineDocumentsChain\nfrom langchain.chains.combine_documents.stuff import StuffDocumentsChain\nfrom langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.question_answering import load_qa_chain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import BaseMessage, BaseRetriever, Document\nfrom langchain.vectorstores.base import VectorStore\n# Depending on the memory type and configuration, the chat history format may differ.\n# This needs to be consolidated.\nCHAT_TURN_TYPE = Union[Tuple[str, str], BaseMessage]\n_ROLE_MAP = {\"human\": \"Human: \", \"ai\": \"Assistant: \"}\ndef _get_chat_history(chat_history: List[CHAT_TURN_TYPE]) -> str:\n buffer = \"\"\n for dialogue_turn in chat_history:\n if isinstance(dialogue_turn, BaseMessage):\n role_prefix = _ROLE_MAP.get(dialogue_turn.type, 
f\"{dialogue_turn.type}: \")\n buffer += f\"\\n{role_prefix}{dialogue_turn.content}\"\n elif isinstance(dialogue_turn, tuple):\n human = \"Human: \" + dialogue_turn[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} +{"id": "c596050b2c94-1", "text": "human = \"Human: \" + dialogue_turn[0]\n ai = \"Assistant: \" + dialogue_turn[1]\n buffer += \"\\n\" + \"\\n\".join([human, ai])\n else:\n raise ValueError(\n f\"Unsupported chat history format: {type(dialogue_turn)}.\"\n f\" Full chat history: {chat_history} \"\n )\n return buffer\nclass BaseConversationalRetrievalChain(Chain):\n \"\"\"Chain for chatting with an index.\"\"\"\n combine_docs_chain: BaseCombineDocumentsChain\n question_generator: LLMChain\n output_key: str = \"answer\"\n return_source_documents: bool = False\n return_generated_question: bool = False\n get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None\n \"\"\"Return the source documents.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n allow_population_by_field_name = True\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Input keys.\"\"\"\n return [\"question\", \"chat_history\"]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the output keys.\n :meta private:\n \"\"\"\n _output_keys = [self.output_key]\n if self.return_source_documents:\n _output_keys = _output_keys + [\"source_documents\"]\n if self.return_generated_question:\n _output_keys = _output_keys + [\"generated_question\"]\n return _output_keys\n @abstractmethod\n def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n def _call(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} +{"id": "c596050b2c94-2", "text": "\"\"\"Get docs.\"\"\"\n def _call(\n self,\n inputs: Dict[str, 
Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n question = inputs[\"question\"]\n get_chat_history = self.get_chat_history or _get_chat_history\n chat_history_str = get_chat_history(inputs[\"chat_history\"])\n if chat_history_str:\n callbacks = _run_manager.get_child()\n new_question = self.question_generator.run(\n question=question, chat_history=chat_history_str, callbacks=callbacks\n )\n else:\n new_question = question\n docs = self._get_docs(new_question, inputs)\n new_inputs = inputs.copy()\n new_inputs[\"question\"] = new_question\n new_inputs[\"chat_history\"] = chat_history_str\n answer = self.combine_docs_chain.run(\n input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs\n )\n output: Dict[str, Any] = {self.output_key: answer}\n if self.return_source_documents:\n output[\"source_documents\"] = docs\n if self.return_generated_question:\n output[\"generated_question\"] = new_question\n return output\n @abstractmethod\n async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n \"\"\"Get docs.\"\"\"\n async def _acall(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()\n question = inputs[\"question\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} +{"id": "c596050b2c94-3", "text": "question = inputs[\"question\"]\n get_chat_history = self.get_chat_history or _get_chat_history\n chat_history_str = get_chat_history(inputs[\"chat_history\"])\n if chat_history_str:\n callbacks = _run_manager.get_child()\n new_question = await self.question_generator.arun(\n question=question, chat_history=chat_history_str, callbacks=callbacks\n )\n else:\n new_question = question\n docs = await 
self._aget_docs(new_question, inputs)\n new_inputs = inputs.copy()\n new_inputs[\"question\"] = new_question\n new_inputs[\"chat_history\"] = chat_history_str\n answer = await self.combine_docs_chain.arun(\n input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs\n )\n output: Dict[str, Any] = {self.output_key: answer}\n if self.return_source_documents:\n output[\"source_documents\"] = docs\n if self.return_generated_question:\n output[\"generated_question\"] = new_question\n return output\n def save(self, file_path: Union[Path, str]) -> None:\n if self.get_chat_history:\n raise ValueError(\"Chain not savable when `get_chat_history` is not None.\")\n super().save(file_path)\n[docs]class ConversationalRetrievalChain(BaseConversationalRetrievalChain):\n \"\"\"Chain for chatting with an index.\"\"\"\n retriever: BaseRetriever\n \"\"\"Index to connect to.\"\"\"\n max_tokens_limit: Optional[int] = None\n \"\"\"If set, restricts the docs to return from store based on tokens, enforced only\n for StuffDocumentChain\"\"\"\n def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:\n num_docs = len(docs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} +{"id": "c596050b2c94-4", "text": "num_docs = len(docs)\n if self.max_tokens_limit and isinstance(\n self.combine_docs_chain, StuffDocumentsChain\n ):\n tokens = [\n self.combine_docs_chain.llm_chain.llm.get_num_tokens(doc.page_content)\n for doc in docs\n ]\n token_count = sum(tokens[:num_docs])\n while token_count > self.max_tokens_limit:\n num_docs -= 1\n token_count -= tokens[num_docs]\n return docs[:num_docs]\n def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n docs = self.retriever.get_relevant_documents(question)\n return self._reduce_tokens_below_limit(docs)\n async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n docs = await 
self.retriever.aget_relevant_documents(question)\n return self._reduce_tokens_below_limit(docs)\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n retriever: BaseRetriever,\n condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,\n chain_type: str = \"stuff\",\n verbose: bool = False,\n condense_question_llm: Optional[BaseLanguageModel] = None,\n combine_docs_chain_kwargs: Optional[Dict] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> BaseConversationalRetrievalChain:\n \"\"\"Load chain from LLM.\"\"\"\n combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}\n doc_chain = load_qa_chain(\n llm,\n chain_type=chain_type,\n verbose=verbose,\n callbacks=callbacks,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} +{"id": "c596050b2c94-5", "text": "chain_type=chain_type,\n verbose=verbose,\n callbacks=callbacks,\n **combine_docs_chain_kwargs,\n )\n _llm = condense_question_llm or llm\n condense_question_chain = LLMChain(\n llm=_llm,\n prompt=condense_question_prompt,\n verbose=verbose,\n callbacks=callbacks,\n )\n return cls(\n retriever=retriever,\n combine_docs_chain=doc_chain,\n question_generator=condense_question_chain,\n callbacks=callbacks,\n **kwargs,\n )\n[docs]class ChatVectorDBChain(BaseConversationalRetrievalChain):\n \"\"\"Chain for chatting with a vector database.\"\"\"\n vectorstore: VectorStore = Field(alias=\"vectorstore\")\n top_k_docs_for_context: int = 4\n search_kwargs: dict = Field(default_factory=dict)\n @property\n def _chain_type(self) -> str:\n return \"chat-vector-db\"\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n warnings.warn(\n \"`ChatVectorDBChain` is deprecated - \"\n \"please use `from langchain.chains import ConversationalRetrievalChain`\"\n )\n return values\n def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:\n vectordbkwargs = 
inputs.get(\"vectordbkwargs\", {})\n full_kwargs = {**self.search_kwargs, **vectordbkwargs}\n return self.vectorstore.similarity_search(\n question, k=self.top_k_docs_for_context, **full_kwargs\n )\n async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} +{"id": "c596050b2c94-6", "text": "raise NotImplementedError(\"ChatVectorDBChain does not support async\")\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,\n chain_type: str = \"stuff\",\n combine_docs_chain_kwargs: Optional[Dict] = None,\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> BaseConversationalRetrievalChain:\n \"\"\"Load chain from LLM.\"\"\"\n combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}\n doc_chain = load_qa_chain(\n llm,\n chain_type=chain_type,\n callbacks=callbacks,\n **combine_docs_chain_kwargs,\n )\n condense_question_chain = LLMChain(\n llm=llm, prompt=condense_question_prompt, callbacks=callbacks\n )\n return cls(\n vectorstore=vectorstore,\n combine_docs_chain=doc_chain,\n question_generator=condense_question_chain,\n callbacks=callbacks,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html"} +{"id": "38c9150a2e43-0", "text": "Source code for langchain.chains.sql_database.base\n\"\"\"Chain for interacting with SQL Database.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.sql_database.prompt 
import DECIDER_PROMPT, PROMPT, SQL_PROMPTS\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.tools.sql_database.prompt import QUERY_CHECKER\nINTERMEDIATE_STEPS_KEY = \"intermediate_steps\"\n[docs]class SQLDatabaseChain(Chain):\n \"\"\"Chain for interacting with SQL Database.\n Example:\n .. code-block:: python\n from langchain import SQLDatabaseChain, OpenAI, SQLDatabase\n db = SQLDatabase(...)\n db_chain = SQLDatabaseChain.from_llm(OpenAI(), db)\n \"\"\"\n llm_chain: LLMChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n database: SQLDatabase = Field(exclude=True)\n \"\"\"SQL Database to connect to.\"\"\"\n prompt: Optional[BasePromptTemplate] = None\n \"\"\"[Deprecated] Prompt to use to translate natural language to SQL.\"\"\"\n top_k: int = 5\n \"\"\"Number of results to return from the query\"\"\"\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} +{"id": "38c9150a2e43-1", "text": "return_intermediate_steps: bool = False\n \"\"\"Whether or not to return the intermediate steps along with the final answer.\"\"\"\n return_direct: bool = False\n \"\"\"Whether or not to return the result of querying the SQL table directly.\"\"\"\n use_query_checker: bool = False\n \"\"\"Whether or not the query checker tool should be used to attempt \n to fix the initial SQL from the LLM.\"\"\"\n query_checker_prompt: Optional[BasePromptTemplate] = None\n \"\"\"The prompt template that should be used by the query checker\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> 
Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an SQLDatabaseChain with an llm is deprecated. \"\n \"Please instantiate with llm_chain argument or using the from_llm \"\n \"class method.\"\n )\n if \"llm_chain\" not in values and values[\"llm\"] is not None:\n database = values[\"database\"]\n prompt = values.get(\"prompt\") or SQL_PROMPTS.get(\n database.dialect, PROMPT\n )\n values[\"llm_chain\"] = LLMChain(llm=values[\"llm\"], prompt=prompt)\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} +{"id": "38c9150a2e43-2", "text": ":meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, INTERMEDIATE_STEPS_KEY]\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n input_text = f\"{inputs[self.input_key]}\\nSQLQuery:\"\n _run_manager.on_text(input_text, verbose=self.verbose)\n # If not present, then defaults to None which is all tables.\n table_names_to_use = inputs.get(\"table_names_to_use\")\n table_info = self.database.get_table_info(table_names=table_names_to_use)\n llm_inputs = {\n \"input\": input_text,\n \"top_k\": str(self.top_k),\n \"dialect\": self.database.dialect,\n \"table_info\": table_info,\n \"stop\": [\"\\nSQLResult:\"],\n }\n intermediate_steps: List = []\n try:\n intermediate_steps.append(llm_inputs) # input: sql generation\n sql_cmd = self.llm_chain.predict(\n callbacks=_run_manager.get_child(),\n **llm_inputs,\n ).strip()\n if not 
self.use_query_checker:\n _run_manager.on_text(sql_cmd, color=\"green\", verbose=self.verbose)\n intermediate_steps.append(\n sql_cmd\n ) # output: sql generation (no checker)\n intermediate_steps.append({\"sql_cmd\": sql_cmd}) # input: sql exec\n result = self.database.run(sql_cmd)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} +{"id": "38c9150a2e43-3", "text": "result = self.database.run(sql_cmd)\n intermediate_steps.append(str(result)) # output: sql exec\n else:\n query_checker_prompt = self.query_checker_prompt or PromptTemplate(\n template=QUERY_CHECKER, input_variables=[\"query\", \"dialect\"]\n )\n query_checker_chain = LLMChain(\n llm=self.llm_chain.llm, prompt=query_checker_prompt\n )\n query_checker_inputs = {\n \"query\": sql_cmd,\n \"dialect\": self.database.dialect,\n }\n checked_sql_command: str = query_checker_chain.predict(\n callbacks=_run_manager.get_child(), **query_checker_inputs\n ).strip()\n intermediate_steps.append(\n checked_sql_command\n ) # output: sql generation (checker)\n _run_manager.on_text(\n checked_sql_command, color=\"green\", verbose=self.verbose\n )\n intermediate_steps.append(\n {\"sql_cmd\": checked_sql_command}\n ) # input: sql exec\n result = self.database.run(checked_sql_command)\n intermediate_steps.append(str(result)) # output: sql exec\n sql_cmd = checked_sql_command\n _run_manager.on_text(\"\\nSQLResult: \", verbose=self.verbose)\n _run_manager.on_text(result, color=\"yellow\", verbose=self.verbose)\n # If return direct, we just set the final result equal to\n # the result of the sql query result, otherwise try to get a human readable\n # final answer\n if self.return_direct:\n final_result = result\n else:\n _run_manager.on_text(\"\\nAnswer:\", verbose=self.verbose)\n input_text += f\"{sql_cmd}\\nSQLResult: {result}\\nAnswer:\"\n llm_inputs[\"input\"] = input_text", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} +{"id": "38c9150a2e43-4", "text": "llm_inputs[\"input\"] = input_text\n intermediate_steps.append(llm_inputs) # input: final answer\n final_result = self.llm_chain.predict(\n callbacks=_run_manager.get_child(),\n **llm_inputs,\n ).strip()\n intermediate_steps.append(final_result) # output: final answer\n _run_manager.on_text(final_result, color=\"green\", verbose=self.verbose)\n chain_result: Dict[str, Any] = {self.output_key: final_result}\n if self.return_intermediate_steps:\n chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps\n return chain_result\n except Exception as exc:\n # Append intermediate steps to exception, to aid in logging and later\n # improvement of few shot prompt seeds\n exc.intermediate_steps = intermediate_steps # type: ignore\n raise exc\n @property\n def _chain_type(self) -> str:\n return \"sql_database_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n db: SQLDatabase,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any,\n ) -> SQLDatabaseChain:\n prompt = prompt or SQL_PROMPTS.get(db.dialect, PROMPT)\n llm_chain = LLMChain(llm=llm, prompt=prompt)\n return cls(llm_chain=llm_chain, database=db, **kwargs)\n[docs]class SQLDatabaseSequentialChain(Chain):\n \"\"\"Chain for querying SQL database that is a sequential chain.\n The chain is as follows:\n 1. Based on the query, determine which tables to use.\n 2. Based on those tables, call the normal SQL database chain.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} +{"id": "38c9150a2e43-5", "text": "2. 
Based on those tables, call the normal SQL database chain.\n This is useful in cases where the number of tables in the database is large.\n \"\"\"\n decider_chain: LLMChain\n sql_chain: SQLDatabaseChain\n input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n return_intermediate_steps: bool = False\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n database: SQLDatabase,\n query_prompt: BasePromptTemplate = PROMPT,\n decider_prompt: BasePromptTemplate = DECIDER_PROMPT,\n **kwargs: Any,\n ) -> SQLDatabaseSequentialChain:\n \"\"\"Load the necessary chains.\"\"\"\n sql_chain = SQLDatabaseChain.from_llm(\n llm, database, prompt=query_prompt, **kwargs\n )\n decider_chain = LLMChain(\n llm=llm, prompt=decider_prompt, output_key=\"table_names\"\n )\n return cls(sql_chain=sql_chain, decider_chain=decider_chain, **kwargs)\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n if not self.return_intermediate_steps:\n return [self.output_key]\n else:\n return [self.output_key, INTERMEDIATE_STEPS_KEY]\n def _call(\n self,\n inputs: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} +{"id": "38c9150a2e43-6", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n _table_names = self.sql_chain.database.get_usable_table_names()\n table_names = \", \".join(_table_names)\n llm_inputs = {\n \"query\": inputs[self.input_key],\n \"table_names\": table_names,\n }\n _lowercased_table_names = [name.lower() for name in _table_names]\n table_names_from_chain = 
self.decider_chain.predict_and_parse(**llm_inputs)\n table_names_to_use = [\n name\n for name in table_names_from_chain\n if name.lower() in _lowercased_table_names\n ]\n _run_manager.on_text(\"Table names to use:\", end=\"\\n\", verbose=self.verbose)\n _run_manager.on_text(\n str(table_names_to_use), color=\"yellow\", verbose=self.verbose\n )\n new_inputs = {\n self.sql_chain.input_key: inputs[self.input_key],\n \"table_names_to_use\": table_names_to_use,\n }\n return self.sql_chain(\n new_inputs, callbacks=_run_manager.get_child(), return_only_outputs=True\n )\n @property\n def _chain_type(self) -> str:\n return \"sql_database_sequential_chain\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html"} +{"id": "049f6d4f43a8-0", "text": "Source code for langchain.chains.qa_generation.base\nfrom __future__ import annotations\nimport json\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.qa_generation.prompt import PROMPT_SELECTOR\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter\n[docs]class QAGenerationChain(Chain):\n llm_chain: LLMChain\n text_splitter: TextSplitter = Field(\n default=RecursiveCharacterTextSplitter(chunk_overlap=500)\n )\n input_key: str = \"text\"\n output_key: str = \"questions\"\n k: Optional[int] = None\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any,\n ) -> QAGenerationChain:\n _prompt = prompt or PROMPT_SELECTOR.get_prompt(llm)\n chain = LLMChain(llm=llm, prompt=_prompt)\n return cls(llm_chain=chain, **kwargs)\n @property\n def _chain_type(self) -> str:\n raise 
NotImplementedError\n @property\n def input_keys(self) -> List[str]:\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_generation/base.html"} +{"id": "049f6d4f43a8-1", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, List]:\n docs = self.text_splitter.create_documents([inputs[self.input_key]])\n results = self.llm_chain.generate(\n [{\"text\": d.page_content} for d in docs], run_manager=run_manager\n )\n qa = [json.loads(res[0].text) for res in results.generations]\n return {self.output_key: qa}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/qa_generation/base.html"} +{"id": "78014d9e345b-0", "text": "Source code for langchain.chains.flare.base\nfrom __future__ import annotations\nimport re\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple\nimport numpy as np\nfrom pydantic import Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n CallbackManagerForChainRun,\n)\nfrom langchain.chains.base import Chain\nfrom langchain.chains.flare.prompts import (\n PROMPT,\n QUESTION_GENERATOR_PROMPT,\n FinishedOutputParser,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import BasePromptTemplate\nfrom langchain.schema import BaseRetriever, Generation\nclass _ResponseChain(LLMChain):\n prompt: BasePromptTemplate = PROMPT\n @property\n def input_keys(self) -> List[str]:\n return self.prompt.input_variables\n def generate_tokens_and_log_probs(\n self,\n _input: Dict[str, Any],\n *,\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Tuple[Sequence[str], Sequence[float]]:\n llm_result = self.generate([_input], 
run_manager=run_manager)\n return self._extract_tokens_and_log_probs(llm_result.generations[0])\n @abstractmethod\n def _extract_tokens_and_log_probs(\n self, generations: List[Generation]\n ) -> Tuple[Sequence[str], Sequence[float]]:\n \"\"\"Extract tokens and log probs from response.\"\"\"\nclass _OpenAIResponseChain(_ResponseChain):\n llm: OpenAI = Field(\n default_factory=lambda: OpenAI(\n max_tokens=32, model_kwargs={\"logprobs\": 1}, temperature=0\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} +{"id": "78014d9e345b-1", "text": ")\n )\n def _extract_tokens_and_log_probs(\n self, generations: List[Generation]\n ) -> Tuple[Sequence[str], Sequence[float]]:\n tokens = []\n log_probs = []\n for gen in generations:\n if gen.generation_info is None:\n raise ValueError\n tokens.extend(gen.generation_info[\"logprobs\"][\"tokens\"])\n log_probs.extend(gen.generation_info[\"logprobs\"][\"token_logprobs\"])\n return tokens, log_probs\nclass QuestionGeneratorChain(LLMChain):\n prompt: BasePromptTemplate = QUESTION_GENERATOR_PROMPT\n @property\n def input_keys(self) -> List[str]:\n return [\"user_input\", \"context\", \"response\"]\ndef _low_confidence_spans(\n tokens: Sequence[str],\n log_probs: Sequence[float],\n min_prob: float,\n min_token_gap: int,\n num_pad_tokens: int,\n) -> List[str]:\n _low_idx = np.where(np.exp(log_probs) < min_prob)[0]\n low_idx = [i for i in _low_idx if re.search(r\"\\w\", tokens[i])]\n if len(low_idx) == 0:\n return []\n spans = [[low_idx[0], low_idx[0] + num_pad_tokens + 1]]\n for i, idx in enumerate(low_idx[1:]):\n end = idx + num_pad_tokens + 1\n if idx - low_idx[i] < min_token_gap:\n spans[-1][1] = end\n else:\n spans.append([idx, end])\n return [\"\".join(tokens[start:end]) for start, end in spans]\n[docs]class FlareChain(Chain):\n question_generator_chain: QuestionGeneratorChain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} 
+{"id": "78014d9e345b-2", "text": "[docs]class FlareChain(Chain):\n question_generator_chain: QuestionGeneratorChain\n response_chain: _ResponseChain = Field(default_factory=_OpenAIResponseChain)\n output_parser: FinishedOutputParser = Field(default_factory=FinishedOutputParser)\n retriever: BaseRetriever\n min_prob: float = 0.2\n min_token_gap: int = 5\n num_pad_tokens: int = 2\n max_iter: int = 10\n start_with_retrieval: bool = True\n @property\n def input_keys(self) -> List[str]:\n return [\"user_input\"]\n @property\n def output_keys(self) -> List[str]:\n return [\"response\"]\n def _do_generation(\n self,\n questions: List[str],\n user_input: str,\n response: str,\n _run_manager: CallbackManagerForChainRun,\n ) -> Tuple[str, bool]:\n callbacks = _run_manager.get_child()\n docs = []\n for question in questions:\n docs.extend(self.retriever.get_relevant_documents(question))\n context = \"\\n\\n\".join(d.page_content for d in docs)\n result = self.response_chain.predict(\n user_input=user_input,\n context=context,\n response=response,\n callbacks=callbacks,\n )\n marginal, finished = self.output_parser.parse(result)\n return marginal, finished\n def _do_retrieval(\n self,\n low_confidence_spans: List[str],\n _run_manager: CallbackManagerForChainRun,\n user_input: str,\n response: str,\n initial_response: str,\n ) -> Tuple[str, bool]:\n question_gen_inputs = [\n {\n \"user_input\": user_input,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} +{"id": "78014d9e345b-3", "text": "question_gen_inputs = [\n {\n \"user_input\": user_input,\n \"current_response\": initial_response,\n \"uncertain_span\": span,\n }\n for span in low_confidence_spans\n ]\n callbacks = _run_manager.get_child()\n question_gen_outputs = self.question_generator_chain.apply(\n question_gen_inputs, callbacks=callbacks\n )\n questions = [\n output[self.question_generator_chain.output_keys[0]]\n for output in question_gen_outputs\n ]\n 
_run_manager.on_text(\n f\"Generated Questions: {questions}\", color=\"yellow\", end=\"\\n\"\n )\n return self._do_generation(questions, user_input, response, _run_manager)\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n user_input = inputs[self.input_keys[0]]\n response = \"\"\n for i in range(self.max_iter):\n _run_manager.on_text(\n f\"Current Response: {response}\", color=\"blue\", end=\"\\n\"\n )\n _input = {\"user_input\": user_input, \"context\": \"\", \"response\": response}\n tokens, log_probs = self.response_chain.generate_tokens_and_log_probs(\n _input, run_manager=_run_manager\n )\n low_confidence_spans = _low_confidence_spans(\n tokens,\n log_probs,\n self.min_prob,\n self.min_token_gap,\n self.num_pad_tokens,\n )\n initial_response = response.strip() + \" \" + \"\".join(tokens)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} +{"id": "78014d9e345b-4", "text": ")\n initial_response = response.strip() + \" \" + \"\".join(tokens)\n if not low_confidence_spans:\n response = initial_response\n final_response, finished = self.output_parser.parse(response)\n if finished:\n return {self.output_keys[0]: final_response}\n continue\n marginal, finished = self._do_retrieval(\n low_confidence_spans,\n _run_manager,\n user_input,\n response,\n initial_response,\n )\n response = response.strip() + \" \" + marginal\n if finished:\n break\n return {self.output_keys[0]: response}\n[docs] @classmethod\n def from_llm(\n cls, llm: BaseLanguageModel, max_generation_len: int = 32, **kwargs: Any\n ) -> FlareChain:\n question_gen_chain = QuestionGeneratorChain(llm=llm)\n response_llm = OpenAI(\n max_tokens=max_generation_len, model_kwargs={\"logprobs\": 1}, temperature=0\n )\n response_chain = _OpenAIResponseChain(llm=response_llm)\n return cls(\n 
question_generator_chain=question_gen_chain,\n response_chain=response_chain,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/flare/base.html"} +{"id": "d7b7a789c4af-0", "text": "Source code for langchain.chains.llm_summarization_checker.base\n\"\"\"Chain for summarization with self-verification.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chains.sequential import SequentialChain\nfrom langchain.prompts.prompt import PromptTemplate\nPROMPTS_DIR = Path(__file__).parent / \"prompts\"\nCREATE_ASSERTIONS_PROMPT = PromptTemplate.from_file(\n PROMPTS_DIR / \"create_facts.txt\", [\"summary\"]\n)\nCHECK_ASSERTIONS_PROMPT = PromptTemplate.from_file(\n PROMPTS_DIR / \"check_facts.txt\", [\"assertions\"]\n)\nREVISED_SUMMARY_PROMPT = PromptTemplate.from_file(\n PROMPTS_DIR / \"revise_summary.txt\", [\"checked_assertions\", \"summary\"]\n)\nARE_ALL_TRUE_PROMPT = PromptTemplate.from_file(\n PROMPTS_DIR / \"are_all_true_prompt.txt\", [\"checked_assertions\"]\n)\ndef _load_sequential_chain(\n llm: BaseLanguageModel,\n create_assertions_prompt: PromptTemplate,\n check_assertions_prompt: PromptTemplate,\n revised_summary_prompt: PromptTemplate,\n are_all_true_prompt: PromptTemplate,\n verbose: bool = False,\n) -> SequentialChain:\n chain = SequentialChain(\n chains=[\n LLMChain(\n llm=llm,\n prompt=create_assertions_prompt,\n output_key=\"assertions\",\n verbose=verbose,\n ),\n LLMChain(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"} +{"id": "d7b7a789c4af-1", "text": "verbose=verbose,\n ),\n LLMChain(\n llm=llm,\n 
prompt=check_assertions_prompt,\n output_key=\"checked_assertions\",\n verbose=verbose,\n ),\n LLMChain(\n llm=llm,\n prompt=revised_summary_prompt,\n output_key=\"revised_summary\",\n verbose=verbose,\n ),\n LLMChain(\n llm=llm,\n output_key=\"all_true\",\n prompt=are_all_true_prompt,\n verbose=verbose,\n ),\n ],\n input_variables=[\"summary\"],\n output_variables=[\"all_true\", \"revised_summary\"],\n verbose=verbose,\n )\n return chain\n[docs]class LLMSummarizationCheckerChain(Chain):\n \"\"\"Chain for summarization with self-verification.\n Example:\n .. code-block:: python\n from langchain import OpenAI, LLMSummarizationCheckerChain\n llm = OpenAI(temperature=0.0)\n checker_chain = LLMSummarizationCheckerChain.from_llm(llm)\n \"\"\"\n sequential_chain: SequentialChain\n llm: Optional[BaseLanguageModel] = None\n \"\"\"[Deprecated] LLM wrapper to use.\"\"\"\n create_assertions_prompt: PromptTemplate = CREATE_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT\n \"\"\"[Deprecated]\"\"\"\n revised_summary_prompt: PromptTemplate = REVISED_SUMMARY_PROMPT\n \"\"\"[Deprecated]\"\"\"\n are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT\n \"\"\"[Deprecated]\"\"\"\n input_key: str = \"query\" #: :meta private:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"} +{"id": "d7b7a789c4af-2", "text": "input_key: str = \"query\" #: :meta private:\n output_key: str = \"result\" #: :meta private:\n max_checks: int = 2\n \"\"\"Maximum number of times to check the assertions. Defaults to double-checking.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def raise_deprecation(cls, values: Dict) -> Dict:\n if \"llm\" in values:\n warnings.warn(\n \"Directly instantiating an LLMSummarizationCheckerChain with an llm is \"\n \"deprecated. 
Please instantiate with the\"\n \" sequential_chain argument or use the from_llm class method.\"\n )\n if \"sequential_chain\" not in values and values[\"llm\"] is not None:\n values[\"sequential_chain\"] = _load_sequential_chain(\n values[\"llm\"],\n values.get(\"create_assertions_prompt\", CREATE_ASSERTIONS_PROMPT),\n values.get(\"check_assertions_prompt\", CHECK_ASSERTIONS_PROMPT),\n values.get(\"revised_summary_prompt\", REVISED_SUMMARY_PROMPT),\n values.get(\"are_all_true_prompt\", ARE_ALL_TRUE_PROMPT),\n verbose=values.get(\"verbose\", False),\n )\n return values\n @property\n def input_keys(self) -> List[str]:\n \"\"\"Return the singular input key.\n :meta private:\n \"\"\"\n return [self.input_key]\n @property\n def output_keys(self) -> List[str]:\n \"\"\"Return the singular output key.\n :meta private:\n \"\"\"\n return [self.output_key]\n def _call(\n self,\n inputs: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"} +{"id": "d7b7a789c4af-3", "text": "def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, str]:\n _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n all_true = False\n count = 0\n output = None\n original_input = inputs[self.input_key]\n chain_input = original_input\n while not all_true and count < self.max_checks:\n output = self.sequential_chain(\n {\"summary\": chain_input}, callbacks=_run_manager.get_child()\n )\n count += 1\n if output[\"all_true\"].strip() == \"True\":\n break\n if self.verbose:\n print(output[\"revised_summary\"])\n chain_input = output[\"revised_summary\"]\n if not output:\n raise ValueError(\"No output from chain\")\n return {self.output_key: output[\"revised_summary\"].strip()}\n @property\n def _chain_type(self) -> str:\n return \"llm_summarization_checker_chain\"\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n 
create_assertions_prompt: PromptTemplate = CREATE_ASSERTIONS_PROMPT,\n check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT,\n revised_summary_prompt: PromptTemplate = REVISED_SUMMARY_PROMPT,\n are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT,\n verbose: bool = False,\n **kwargs: Any,\n ) -> LLMSummarizationCheckerChain:\n chain = _load_sequential_chain(\n llm,\n create_assertions_prompt,\n check_assertions_prompt,\n revised_summary_prompt,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"} +{"id": "d7b7a789c4af-4", "text": "create_assertions_prompt,\n check_assertions_prompt,\n revised_summary_prompt,\n are_all_true_prompt,\n verbose=verbose,\n )\n return cls(sequential_chain=chain, verbose=verbose, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm_summarization_checker/base.html"} +{"id": "4ec3fb483d2b-0", "text": "Source code for langchain.experimental.autonomous_agents.baby_agi.baby_agi\n\"\"\"BabyAGI agent.\"\"\"\nfrom collections import deque\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain.chains.base import Chain\nfrom langchain.experimental.autonomous_agents.baby_agi.task_creation import (\n TaskCreationChain,\n)\nfrom langchain.experimental.autonomous_agents.baby_agi.task_execution import (\n TaskExecutionChain,\n)\nfrom langchain.experimental.autonomous_agents.baby_agi.task_prioritization import (\n TaskPrioritizationChain,\n)\nfrom langchain.vectorstores.base import VectorStore\n[docs]class BabyAGI(Chain, BaseModel):\n \"\"\"Controller model for the BabyAGI agent.\"\"\"\n task_list: deque = Field(default_factory=deque)\n task_creation_chain: Chain = Field(...)\n task_prioritization_chain: Chain = Field(...)\n execution_chain: Chain = Field(...)\n 
task_id_counter: int = Field(1)\n vectorstore: VectorStore = Field(init=False)\n max_iterations: Optional[int] = None\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def add_task(self, task: Dict) -> None:\n self.task_list.append(task)\n def print_task_list(self) -> None:\n print(\"\\033[95m\\033[1m\" + \"\\n*****TASK LIST*****\\n\" + \"\\033[0m\\033[0m\")\n for t in self.task_list:\n print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} +{"id": "4ec3fb483d2b-1", "text": "print(str(t[\"task_id\"]) + \": \" + t[\"task_name\"])\n def print_next_task(self, task: Dict) -> None:\n print(\"\\033[92m\\033[1m\" + \"\\n*****NEXT TASK*****\\n\" + \"\\033[0m\\033[0m\")\n print(str(task[\"task_id\"]) + \": \" + task[\"task_name\"])\n def print_task_result(self, result: str) -> None:\n print(\"\\033[93m\\033[1m\" + \"\\n*****TASK RESULT*****\\n\" + \"\\033[0m\\033[0m\")\n print(result)\n @property\n def input_keys(self) -> List[str]:\n return [\"objective\"]\n @property\n def output_keys(self) -> List[str]:\n return []\n[docs] def get_next_task(\n self, result: str, task_description: str, objective: str\n ) -> List[Dict]:\n \"\"\"Get the next task.\"\"\"\n task_names = [t[\"task_name\"] for t in self.task_list]\n incomplete_tasks = \", \".join(task_names)\n response = self.task_creation_chain.run(\n result=result,\n task_description=task_description,\n incomplete_tasks=incomplete_tasks,\n objective=objective,\n )\n new_tasks = response.split(\"\\n\")\n return [\n {\"task_name\": task_name} for task_name in new_tasks if task_name.strip()\n ]\n[docs] def prioritize_tasks(self, this_task_id: int, objective: str) -> List[Dict]:\n \"\"\"Prioritize tasks.\"\"\"\n task_names = [t[\"task_name\"] for t in list(self.task_list)]\n next_task_id = int(this_task_id) + 1", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} +{"id": "4ec3fb483d2b-2", "text": "next_task_id = int(this_task_id) + 1\n response = self.task_prioritization_chain.run(\n task_names=\", \".join(task_names),\n next_task_id=str(next_task_id),\n objective=objective,\n )\n new_tasks = response.split(\"\\n\")\n prioritized_task_list = []\n for task_string in new_tasks:\n if not task_string.strip():\n continue\n task_parts = task_string.strip().split(\".\", 1)\n if len(task_parts) == 2:\n task_id = task_parts[0].strip()\n task_name = task_parts[1].strip()\n prioritized_task_list.append(\n {\"task_id\": task_id, \"task_name\": task_name}\n )\n return prioritized_task_list\n def _get_top_tasks(self, query: str, k: int) -> List[str]:\n \"\"\"Get the top k tasks based on the query.\"\"\"\n results = self.vectorstore.similarity_search(query, k=k)\n if not results:\n return []\n return [str(item.metadata[\"task\"]) for item in results]\n[docs] def execute_task(self, objective: str, task: str, k: int = 5) -> str:\n \"\"\"Execute a task.\"\"\"\n context = self._get_top_tasks(query=objective, k=k)\n return self.execution_chain.run(\n objective=objective, context=\"\\n\".join(context), task=task\n )\n def _call(\n self,\n inputs: Dict[str, Any],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run the agent.\"\"\"\n objective = inputs[\"objective\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} +{"id": "4ec3fb483d2b-3", "text": "\"\"\"Run the agent.\"\"\"\n objective = inputs[\"objective\"]\n first_task = inputs.get(\"first_task\", \"Make a todo list\")\n self.add_task({\"task_id\": 1, \"task_name\": first_task})\n num_iters = 0\n while True:\n if self.task_list:\n self.print_task_list()\n # Step 1: Pull the first task\n task = self.task_list.popleft()\n self.print_next_task(task)\n # 
Step 2: Execute the task\n result = self.execute_task(objective, task[\"task_name\"])\n this_task_id = int(task[\"task_id\"])\n self.print_task_result(result)\n # Step 3: Store the result in Pinecone\n result_id = f\"result_{task['task_id']}\"\n self.vectorstore.add_texts(\n texts=[result],\n metadatas=[{\"task\": task[\"task_name\"]}],\n ids=[result_id],\n )\n # Step 4: Create new tasks and reprioritize task list\n new_tasks = self.get_next_task(result, task[\"task_name\"], objective)\n for new_task in new_tasks:\n self.task_id_counter += 1\n new_task.update({\"task_id\": self.task_id_counter})\n self.add_task(new_task)\n self.task_list = deque(self.prioritize_tasks(this_task_id, objective))\n num_iters += 1\n if self.max_iterations is not None and num_iters == self.max_iterations:\n print(\n \"\\033[91m\\033[1m\" + \"\\n*****TASK ENDING*****\\n\" + \"\\033[0m\\033[0m\"\n )\n break\n return {}\n[docs] @classmethod\n def from_llm(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} +{"id": "4ec3fb483d2b-4", "text": "return {}\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n verbose: bool = False,\n task_execution_chain: Optional[Chain] = None,\n **kwargs: Dict[str, Any],\n ) -> \"BabyAGI\":\n \"\"\"Initialize the BabyAGI Controller.\"\"\"\n task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose)\n task_prioritization_chain = TaskPrioritizationChain.from_llm(\n llm, verbose=verbose\n )\n if task_execution_chain is None:\n execution_chain: Chain = TaskExecutionChain.from_llm(llm, verbose=verbose)\n else:\n execution_chain = task_execution_chain\n return cls(\n task_creation_chain=task_creation_chain,\n task_prioritization_chain=task_prioritization_chain,\n execution_chain=execution_chain,\n vectorstore=vectorstore,\n **kwargs,\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/baby_agi/baby_agi.html"} +{"id": "43cfbc207f33-0", "text": "Source code for langchain.experimental.autonomous_agents.autogpt.agent\nfrom __future__ import annotations\nfrom typing import List, Optional\nfrom pydantic import ValidationError\nfrom langchain.chains.llm import LLMChain\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.experimental.autonomous_agents.autogpt.output_parser import (\n AutoGPTOutputParser,\n BaseAutoGPTOutputParser,\n)\nfrom langchain.experimental.autonomous_agents.autogpt.prompt import AutoGPTPrompt\nfrom langchain.experimental.autonomous_agents.autogpt.prompt_generator import (\n FINISH_NAME,\n)\nfrom langchain.memory import ChatMessageHistory\nfrom langchain.schema import (\n AIMessage,\n BaseChatMessageHistory,\n Document,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.human.tool import HumanInputRun\nfrom langchain.vectorstores.base import VectorStoreRetriever\n[docs]class AutoGPT:\n \"\"\"Agent class for interacting with Auto-GPT.\"\"\"\n def __init__(\n self,\n ai_name: str,\n memory: VectorStoreRetriever,\n chain: LLMChain,\n output_parser: BaseAutoGPTOutputParser,\n tools: List[BaseTool],\n feedback_tool: Optional[HumanInputRun] = None,\n chat_history_memory: Optional[BaseChatMessageHistory] = None,\n ):\n self.ai_name = ai_name\n self.memory = memory\n self.next_action_count = 0\n self.chain = chain\n self.output_parser = output_parser\n self.tools = tools\n self.feedback_tool = feedback_tool\n self.chat_history_memory = chat_history_memory or ChatMessageHistory()\n @classmethod\n def from_llm_and_tools(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html"} +{"id": "43cfbc207f33-1", "text": "@classmethod\n def from_llm_and_tools(\n cls,\n ai_name: str,\n ai_role: str,\n memory: 
VectorStoreRetriever,\n tools: List[BaseTool],\n llm: BaseChatModel,\n human_in_the_loop: bool = False,\n output_parser: Optional[BaseAutoGPTOutputParser] = None,\n chat_history_memory: Optional[BaseChatMessageHistory] = None,\n ) -> AutoGPT:\n prompt = AutoGPTPrompt(\n ai_name=ai_name,\n ai_role=ai_role,\n tools=tools,\n input_variables=[\"memory\", \"messages\", \"goals\", \"user_input\"],\n token_counter=llm.get_num_tokens,\n )\n human_feedback_tool = HumanInputRun() if human_in_the_loop else None\n chain = LLMChain(llm=llm, prompt=prompt)\n return cls(\n ai_name,\n memory,\n chain,\n output_parser or AutoGPTOutputParser(),\n tools,\n feedback_tool=human_feedback_tool,\n chat_history_memory=chat_history_memory,\n )\n def run(self, goals: List[str]) -> str:\n user_input = (\n \"Determine which next command to use, \"\n \"and respond using the format specified above:\"\n )\n # Interaction Loop\n loop_count = 0\n while True:\n # Discontinue if continuous limit is reached\n loop_count += 1\n # Send message to AI, get response\n assistant_reply = self.chain.run(\n goals=goals,\n messages=self.chat_history_memory.messages,\n memory=self.memory,\n user_input=user_input,\n )\n # Print Assistant thoughts", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html"} +{"id": "43cfbc207f33-2", "text": "user_input=user_input,\n )\n # Print Assistant thoughts\n print(assistant_reply)\n self.chat_history_memory.add_message(HumanMessage(content=user_input))\n self.chat_history_memory.add_message(AIMessage(content=assistant_reply))\n # Get command name and arguments\n action = self.output_parser.parse(assistant_reply)\n tools = {t.name: t for t in self.tools}\n if action.name == FINISH_NAME:\n return action.args[\"response\"]\n if action.name in tools:\n tool = tools[action.name]\n try:\n observation = tool.run(action.args)\n except ValidationError as e:\n observation = (\n f\"Validation Error in args: {str(e)}, 
args: {action.args}\"\n )\n except Exception as e:\n observation = (\n f\"Error: {str(e)}, {type(e).__name__}, args: {action.args}\"\n )\n result = f\"Command {tool.name} returned: {observation}\"\n elif action.name == \"ERROR\":\n result = f\"Error: {action.args}. \"\n else:\n result = (\n f\"Unknown command '{action.name}'. \"\n f\"Please refer to the 'COMMANDS' list for available \"\n f\"commands and only respond in the specified JSON format.\"\n )\n memory_to_add = (\n f\"Assistant Reply: {assistant_reply} \" f\"\\nResult: {result} \"\n )\n if self.feedback_tool is not None:\n feedback = f\"\\n{self.feedback_tool.run('Input: ')}\"\n if feedback in {\"q\", \"stop\"}:\n print(\"EXITING\")\n return \"EXITING\"\n memory_to_add += feedback", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html"} +{"id": "43cfbc207f33-3", "text": "return \"EXITING\"\n memory_to_add += feedback\n self.memory.add_documents([Document(page_content=memory_to_add)])\n self.chat_history_memory.add_message(SystemMessage(content=result))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/autonomous_agents/autogpt/agent.html"} +{"id": "8fddb7ad3a73-0", "text": "Source code for langchain.experimental.generative_agents.memory\nimport logging\nimport re\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional\nfrom langchain import LLMChain\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.prompts import PromptTemplate\nfrom langchain.retrievers import TimeWeightedVectorStoreRetriever\nfrom langchain.schema import BaseMemory, Document\nfrom langchain.utils import mock_now\nlogger = logging.getLogger(__name__)\n[docs]class GenerativeAgentMemory(BaseMemory):\n llm: BaseLanguageModel\n \"\"\"The core language model.\"\"\"\n memory_retriever: TimeWeightedVectorStoreRetriever\n \"\"\"The retriever to fetch related memories.\"\"\"\n verbose: bool = False\n 
reflection_threshold: Optional[float] = None\n \"\"\"When aggregate_importance exceeds reflection_threshold, stop to reflect.\"\"\"\n current_plan: List[str] = []\n \"\"\"The current plan of the agent.\"\"\"\n # A weight of 0.15 makes this less important than it\n # would be otherwise, relative to salience and time\n importance_weight: float = 0.15\n \"\"\"How much weight to assign the memory importance.\"\"\"\n aggregate_importance: float = 0.0 # : :meta private:\n \"\"\"Track the sum of the 'importance' of recent memories.\n Triggers reflection when it reaches reflection_threshold.\"\"\"\n max_tokens_limit: int = 1200 # : :meta private:\n # input keys\n queries_key: str = \"queries\"\n most_recent_memories_token_key: str = \"recent_memories_token\"\n add_memory_key: str = \"add_memory\"\n # output keys\n relevant_memories_key: str = \"relevant_memories\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} +{"id": "8fddb7ad3a73-1", "text": "# output keys\n relevant_memories_key: str = \"relevant_memories\"\n relevant_memories_simple_key: str = \"relevant_memories_simple\"\n most_recent_memories_key: str = \"most_recent_memories\"\n now_key: str = \"now\"\n reflecting: bool = False\n def chain(self, prompt: PromptTemplate) -> LLMChain:\n return LLMChain(llm=self.llm, prompt=prompt, verbose=self.verbose)\n @staticmethod\n def _parse_list(text: str) -> List[str]:\n \"\"\"Parse a newline-separated string into a list of strings.\"\"\"\n lines = re.split(r\"\\n\", text.strip())\n lines = [line for line in lines if line.strip()] # remove empty lines\n return [re.sub(r\"^\\s*\\d+\\.\\s*\", \"\", line).strip() for line in lines]\n def _get_topics_of_reflection(self, last_k: int = 50) -> List[str]:\n \"\"\"Return the 3 most salient high-level questions about recent observations.\"\"\"\n prompt = PromptTemplate.from_template(\n \"{observations}\\n\\n\"\n \"Given only the information above, what are the 3 
most salient \"\n \"high-level questions we can answer about the subjects in the statements?\\n\"\n \"Provide each question on a new line.\"\n )\n observations = self.memory_retriever.memory_stream[-last_k:]\n observation_str = \"\\n\".join(\n [self._format_memory_detail(o) for o in observations]\n )\n result = self.chain(prompt).run(observations=observation_str)\n return self._parse_list(result)\n def _get_insights_on_topic(\n self, topic: str, now: Optional[datetime] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} +{"id": "8fddb7ad3a73-2", "text": "self, topic: str, now: Optional[datetime] = None\n ) -> List[str]:\n \"\"\"Generate 'insights' on a topic of reflection, based on pertinent memories.\"\"\"\n prompt = PromptTemplate.from_template(\n \"Statements relevant to: '{topic}'\\n\"\n \"---\\n\"\n \"{related_statements}\\n\"\n \"---\\n\"\n \"What 5 high-level novel insights can you infer from the above statements \"\n \"that are relevant for answering the following question?\\n\"\n \"Do not include any insights that are not relevant to the question.\\n\"\n \"Do not repeat any insights that have already been made.\\n\\n\"\n \"Question: {topic}\\n\\n\"\n \"(example format: insight (because of 1, 5, 3))\\n\"\n )\n related_memories = self.fetch_memories(topic, now=now)\n related_statements = \"\\n\".join(\n [\n self._format_memory_detail(memory, prefix=f\"{i+1}. 
\")\n for i, memory in enumerate(related_memories)\n ]\n )\n result = self.chain(prompt).run(\n topic=topic, related_statements=related_statements\n )\n # TODO: Parse the connections between memories and insights\n return self._parse_list(result)\n[docs] def pause_to_reflect(self, now: Optional[datetime] = None) -> List[str]:\n \"\"\"Reflect on recent observations and generate 'insights'.\"\"\"\n if self.verbose:\n logger.info(\"Character is reflecting\")\n new_insights = []\n topics = self._get_topics_of_reflection()\n for topic in topics:\n insights = self._get_insights_on_topic(topic, now=now)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} +{"id": "8fddb7ad3a73-3", "text": "insights = self._get_insights_on_topic(topic, now=now)\n for insight in insights:\n self.add_memory(insight, now=now)\n new_insights.extend(insights)\n return new_insights\n def _score_memory_importance(self, memory_content: str) -> float:\n \"\"\"Score the absolute importance of the given memory.\"\"\"\n prompt = PromptTemplate.from_template(\n \"On the scale of 1 to 10, where 1 is purely mundane\"\n + \" (e.g., brushing teeth, making bed) and 10 is\"\n + \" extremely poignant (e.g., a break up, college\"\n + \" acceptance), rate the likely poignancy of the\"\n + \" following piece of memory. 
Respond with a single integer.\"\n + \"\\nMemory: {memory_content}\"\n + \"\\nRating: \"\n )\n score = self.chain(prompt).run(memory_content=memory_content).strip()\n if self.verbose:\n logger.info(f\"Importance score: {score}\")\n match = re.search(r\"^\\D*(\\d+)\", score)\n if match:\n return (float(match.group(1)) / 10) * self.importance_weight\n else:\n return 0.0\n def _score_memories_importance(self, memory_content: str) -> List[float]:\n \"\"\"Score the absolute importance of the given memory.\"\"\"\n prompt = PromptTemplate.from_template(\n \"On the scale of 1 to 10, where 1 is purely mundane\"\n + \" (e.g., brushing teeth, making bed) and 10 is\"\n + \" extremely poignant (e.g., a break up, college\"\n + \" acceptance), rate the likely poignancy of the\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} +{"id": "8fddb7ad3a73-4", "text": "+ \" acceptance), rate the likely poignancy of the\"\n + \" following piece of memory. 
Always answer with only a list of numbers.\"\n + \" If given just one memory, still respond with a list.\"\n + \" Memories are separated by semicolons (;)\"\n + \"\\nMemories: {memory_content}\"\n + \"\\nRating: \"\n )\n scores = self.chain(prompt).run(memory_content=memory_content).strip()\n if self.verbose:\n logger.info(f\"Importance scores: {scores}\")\n # Split into list of strings and convert to floats\n scores_list = [float(x) for x in scores.split(\";\")]\n return scores_list\n[docs] def add_memories(\n self, memory_content: str, now: Optional[datetime] = None\n ) -> List[str]:\n \"\"\"Add observations or memories to the agent's memory.\"\"\"\n importance_scores = self._score_memories_importance(memory_content)\n self.aggregate_importance += max(importance_scores)\n memory_list = memory_content.split(\";\")\n documents = []\n for i in range(len(memory_list)):\n documents.append(\n Document(\n page_content=memory_list[i],\n metadata={\"importance\": importance_scores[i]},\n )\n )\n result = self.memory_retriever.add_documents(documents, current_time=now)\n # After an agent has processed a certain amount of memories (as measured by\n # aggregate importance), it is time to reflect on recent events to add\n # more synthesized memories to the agent's memory stream.\n if (\n self.reflection_threshold is not None\n and self.aggregate_importance > self.reflection_threshold\n and not self.reflecting\n ):\n self.reflecting = True", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"}
importance_score = self._score_memory_importance(memory_content)\n self.aggregate_importance += importance_score\n document = Document(\n page_content=memory_content, metadata={\"importance\": importance_score}\n )\n result = self.memory_retriever.add_documents([document], current_time=now)\n # After an agent has processed a certain amount of memories (as measured by\n # aggregate importance), it is time to reflect on recent events to add\n # more synthesized memories to the agent's memory stream.\n if (\n self.reflection_threshold is not None\n and self.aggregate_importance > self.reflection_threshold\n and not self.reflecting\n ):\n self.reflecting = True\n self.pause_to_reflect(now=now)\n # Hack to clear the importance from reflection\n self.aggregate_importance = 0.0\n self.reflecting = False\n return result\n[docs] def fetch_memories(\n self, observation: str, now: Optional[datetime] = None\n ) -> List[Document]:\n \"\"\"Fetch related memories.\"\"\"\n if now is not None:\n with mock_now(now):\n return self.memory_retriever.get_relevant_documents(observation)\n else:\n return self.memory_retriever.get_relevant_documents(observation)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} +{"id": "8fddb7ad3a73-6", "text": "else:\n return self.memory_retriever.get_relevant_documents(observation)\n def format_memories_detail(self, relevant_memories: List[Document]) -> str:\n content = []\n for mem in relevant_memories:\n content.append(self._format_memory_detail(mem, prefix=\"- \"))\n return \"\\n\".join([f\"{mem}\" for mem in content])\n def _format_memory_detail(self, memory: Document, prefix: str = \"\") -> str:\n created_time = memory.metadata[\"created_at\"].strftime(\"%B %d, %Y, %I:%M %p\")\n return f\"{prefix}[{created_time}] {memory.page_content.strip()}\"\n def format_memories_simple(self, relevant_memories: List[Document]) -> str:\n return \"; \".join([f\"{mem.page_content}\" for mem in 
relevant_memories])\n def _get_memories_until_limit(self, consumed_tokens: int) -> str:\n \"\"\"Reduce the number of tokens in the documents.\"\"\"\n result = []\n for doc in self.memory_retriever.memory_stream[::-1]:\n if consumed_tokens >= self.max_tokens_limit:\n break\n consumed_tokens += self.llm.get_num_tokens(doc.page_content)\n if consumed_tokens < self.max_tokens_limit:\n result.append(doc)\n return self.format_memories_simple(result)\n @property\n def memory_variables(self) -> List[str]:\n \"\"\"Input keys this memory class will load dynamically.\"\"\"\n return []\n[docs] def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:\n \"\"\"Return key-value pairs given the text input to the chain.\"\"\"\n queries = inputs.get(self.queries_key)\n now = inputs.get(self.now_key)\n if queries is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} +{"id": "8fddb7ad3a73-7", "text": "now = inputs.get(self.now_key)\n if queries is not None:\n relevant_memories = [\n mem for query in queries for mem in self.fetch_memories(query, now=now)\n ]\n return {\n self.relevant_memories_key: self.format_memories_detail(\n relevant_memories\n ),\n self.relevant_memories_simple_key: self.format_memories_simple(\n relevant_memories\n ),\n }\n most_recent_memories_token = inputs.get(self.most_recent_memories_token_key)\n if most_recent_memories_token is not None:\n return {\n self.most_recent_memories_key: self._get_memories_until_limit(\n most_recent_memories_token\n )\n }\n return {}\n[docs] def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:\n \"\"\"Save the context of this model run to memory.\"\"\"\n # TODO: fix the save memory key\n mem = outputs.get(self.add_memory_key)\n now = outputs.get(self.now_key)\n if mem:\n self.add_memory(mem, now=now)\n[docs] def clear(self) -> None:\n \"\"\"Clear memory contents.\"\"\"\n # TODO", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/memory.html"} +{"id": "09975923a7eb-0", "text": "Source code for langchain.experimental.generative_agents.generative_agent\nimport re\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom langchain import LLMChain\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.experimental.generative_agents.memory import GenerativeAgentMemory\nfrom langchain.prompts import PromptTemplate\n[docs]class GenerativeAgent(BaseModel):\n \"\"\"A character with memory and innate characteristics.\"\"\"\n name: str\n \"\"\"The character's name.\"\"\"\n age: Optional[int] = None\n \"\"\"The optional age of the character.\"\"\"\n traits: str = \"N/A\"\n \"\"\"Permanent traits to ascribe to the character.\"\"\"\n status: str\n \"\"\"The traits of the character you wish not to change.\"\"\"\n memory: GenerativeAgentMemory\n \"\"\"The memory object that combines relevance, recency, and 'importance'.\"\"\"\n llm: BaseLanguageModel\n \"\"\"The underlying language model.\"\"\"\n verbose: bool = False\n summary: str = \"\" #: :meta private:\n \"\"\"Stateful self-summary generated via reflection on the character's memory.\"\"\"\n summary_refresh_seconds: int = 3600 #: :meta private:\n \"\"\"How frequently to re-generate the summary.\"\"\"\n last_refreshed: datetime = Field(default_factory=datetime.now) # : :meta private:\n \"\"\"The last time the character's summary was regenerated.\"\"\"\n daily_summaries: List[str] = Field(default_factory=list) # : :meta private:\n \"\"\"Summary of the events in the plan that the agent took.\"\"\"\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n # LLM-related methods\n @staticmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} 
+{"id": "09975923a7eb-1", "text": "arbitrary_types_allowed = True\n # LLM-related methods\n @staticmethod\n def _parse_list(text: str) -> List[str]:\n \"\"\"Parse a newline-separated string into a list of strings.\"\"\"\n lines = re.split(r\"\\n\", text.strip())\n return [re.sub(r\"^\\s*\\d+\\.\\s*\", \"\", line).strip() for line in lines]\n def chain(self, prompt: PromptTemplate) -> LLMChain:\n return LLMChain(\n llm=self.llm, prompt=prompt, verbose=self.verbose, memory=self.memory\n )\n def _get_entity_from_observation(self, observation: str) -> str:\n prompt = PromptTemplate.from_template(\n \"What is the observed entity in the following observation? {observation}\"\n + \"\\nEntity=\"\n )\n return self.chain(prompt).run(observation=observation).strip()\n def _get_entity_action(self, observation: str, entity_name: str) -> str:\n prompt = PromptTemplate.from_template(\n \"What is the {entity} doing in the following observation? {observation}\"\n + \"\\nThe {entity} is\"\n )\n return (\n self.chain(prompt).run(entity=entity_name, observation=observation).strip()\n )\n[docs] def summarize_related_memories(self, observation: str) -> str:\n \"\"\"Summarize memories that are most relevant to an observation.\"\"\"\n prompt = PromptTemplate.from_template(\n \"\"\"\n{q1}?\nContext from memory:\n{relevant_memories}\nRelevant context: \n\"\"\"\n )\n entity_name = self._get_entity_from_observation(observation)\n entity_action = self._get_entity_action(observation, entity_name)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} +{"id": "09975923a7eb-2", "text": "entity_action = self._get_entity_action(observation, entity_name)\n q1 = f\"What is the relationship between {self.name} and {entity_name}\"\n q2 = f\"{entity_name} is {entity_action}\"\n return self.chain(prompt=prompt).run(q1=q1, queries=[q1, q2]).strip()\n def _generate_reaction(\n self, observation: str, suffix: str, now: 
Optional[datetime] = None\n ) -> str:\n \"\"\"React to a given observation or dialogue act.\"\"\"\n prompt = PromptTemplate.from_template(\n \"{agent_summary_description}\"\n + \"\\nIt is {current_time}.\"\n + \"\\n{agent_name}'s status: {agent_status}\"\n + \"\\nSummary of relevant context from {agent_name}'s memory:\"\n + \"\\n{relevant_memories}\"\n + \"\\nMost recent observations: {most_recent_memories}\"\n + \"\\nObservation: {observation}\"\n + \"\\n\\n\"\n + suffix\n )\n agent_summary_description = self.get_summary(now=now)\n relevant_memories_str = self.summarize_related_memories(observation)\n current_time_str = (\n datetime.now().strftime(\"%B %d, %Y, %I:%M %p\")\n if now is None\n else now.strftime(\"%B %d, %Y, %I:%M %p\")\n )\n kwargs: Dict[str, Any] = dict(\n agent_summary_description=agent_summary_description,\n current_time=current_time_str,\n relevant_memories=relevant_memories_str,\n agent_name=self.name,\n observation=observation,\n agent_status=self.status,\n )\n consumed_tokens = self.llm.get_num_tokens(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} +{"id": "09975923a7eb-3", "text": ")\n consumed_tokens = self.llm.get_num_tokens(\n prompt.format(most_recent_memories=\"\", **kwargs)\n )\n kwargs[self.memory.most_recent_memories_token_key] = consumed_tokens\n return self.chain(prompt=prompt).run(**kwargs).strip()\n def _clean_response(self, text: str) -> str:\n return re.sub(f\"^{self.name} \", \"\", text.strip()).strip()\n[docs] def generate_reaction(\n self, observation: str, now: Optional[datetime] = None\n ) -> Tuple[bool, str]:\n \"\"\"React to a given observation.\"\"\"\n call_to_action_template = (\n \"Should {agent_name} react to the observation, and if so,\"\n + \" what would be an appropriate reaction? 
Respond in one line.\"\n + ' If the action is to engage in dialogue, write:\\nSAY: \"what to say\"'\n + \"\\notherwise, write:\\nREACT: {agent_name}'s reaction (if anything).\"\n + \"\\nEither do nothing, react, or say something but not both.\\n\\n\"\n )\n full_result = self._generate_reaction(\n observation, call_to_action_template, now=now\n )\n result = full_result.strip().split(\"\\n\")[0]\n # AAA\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and reacted by {result}\",\n self.memory.now_key: now,\n },\n )\n if \"REACT:\" in result:\n reaction = self._clean_response(result.split(\"REACT:\")[-1])\n return False, f\"{self.name} {reaction}\"\n if \"SAY:\" in result:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} +{"id": "09975923a7eb-4", "text": "if \"SAY:\" in result:\n said_value = self._clean_response(result.split(\"SAY:\")[-1])\n return True, f\"{self.name} said {said_value}\"\n else:\n return False, result\n[docs] def generate_dialogue_response(\n self, observation: str, now: Optional[datetime] = None\n ) -> Tuple[bool, str]:\n \"\"\"React to a given observation.\"\"\"\n call_to_action_template = (\n \"What would {agent_name} say? To end the conversation, write:\"\n ' GOODBYE: \"what to say\". 
Otherwise to continue the conversation,'\n ' write: SAY: \"what to say next\"\\n\\n'\n )\n full_result = self._generate_reaction(\n observation, call_to_action_template, now=now\n )\n result = full_result.strip().split(\"\\n\")[0]\n if \"GOODBYE:\" in result:\n farewell = self._clean_response(result.split(\"GOODBYE:\")[-1])\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and said {farewell}\",\n self.memory.now_key: now,\n },\n )\n return False, f\"{self.name} said {farewell}\"\n if \"SAY:\" in result:\n response_text = self._clean_response(result.split(\"SAY:\")[-1])\n self.memory.save_context(\n {},\n {\n self.memory.add_memory_key: f\"{self.name} observed \"\n f\"{observation} and said {response_text}\",\n self.memory.now_key: now,\n },\n )\n return True, f\"{self.name} said {response_text}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} +{"id": "09975923a7eb-5", "text": ")\n return True, f\"{self.name} said {response_text}\"\n else:\n return False, result\n ######################################################\n # Agent stateful' summary methods. #\n # Each dialog or response prompt includes a header #\n # summarizing the agent's self-description. 
This is #\n # updated periodically through probing its memories #\n ######################################################\n def _compute_agent_summary(self) -> str:\n \"\"\"Summarize the agent's core characteristics from its memories.\"\"\"\n prompt = PromptTemplate.from_template(\n \"How would you summarize {name}'s core characteristics given the\"\n + \" following statements:\\n\"\n + \"{relevant_memories}\"\n + \"\\nDo not embellish.\"\n + \"\\n\\nSummary: \"\n )\n # The agent seeks to think about their core characteristics.\n return (\n self.chain(prompt)\n .run(name=self.name, queries=[f\"{self.name}'s core characteristics\"])\n .strip()\n )\n[docs] def get_summary(\n self, force_refresh: bool = False, now: Optional[datetime] = None\n ) -> str:\n \"\"\"Return a descriptive summary of the agent.\"\"\"\n current_time = datetime.now() if now is None else now\n since_refresh = (current_time - self.last_refreshed).seconds\n if (\n not self.summary\n or since_refresh >= self.summary_refresh_seconds\n or force_refresh\n ):\n self.summary = self._compute_agent_summary()\n self.last_refreshed = current_time\n age = self.age if self.age is not None else \"N/A\"\n return (\n f\"Name: {self.name} (age: {age})\"\n + f\"\\nInnate traits: {self.traits}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"}
"https://api.python.langchain.com/en/latest/_modules/langchain/experimental/generative_agents/generative_agent.html"} +{"id": "79eac95dbfb0-0", "text": "Source code for langchain.llms.anyscale\n\"\"\"Wrapper around Anyscale\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class Anyscale(LLM):\n \"\"\"Wrapper around Anyscale Services.\n To use, you should have the environment variable ``ANYSCALE_SERVICE_URL``,\n ``ANYSCALE_SERVICE_ROUTE`` and ``ANYSCALE_SERVICE_TOKEN`` set with your Anyscale\n Service, or pass it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import Anyscale\n anyscale = Anyscale(anyscale_service_url=\"SERVICE_URL\",\n anyscale_service_route=\"SERVICE_ROUTE\",\n anyscale_service_token=\"SERVICE_TOKEN\")\n # Use Ray for distributed processing\n import ray\n prompt_list=[]\n @ray.remote\n def send_query(llm, prompt):\n resp = llm(prompt)\n return resp\n futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]\n results = ray.get(futures)\n \"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model. 
Reserved for future use\"\"\"\n anyscale_service_url: Optional[str] = None\n anyscale_service_route: Optional[str] = None\n anyscale_service_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"} +{"id": "79eac95dbfb0-1", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n anyscale_service_url = get_from_dict_or_env(\n values, \"anyscale_service_url\", \"ANYSCALE_SERVICE_URL\"\n )\n anyscale_service_route = get_from_dict_or_env(\n values, \"anyscale_service_route\", \"ANYSCALE_SERVICE_ROUTE\"\n )\n anyscale_service_token = get_from_dict_or_env(\n values, \"anyscale_service_token\", \"ANYSCALE_SERVICE_TOKEN\"\n )\n if anyscale_service_url.endswith(\"/\"):\n anyscale_service_url = anyscale_service_url[:-1]\n if not anyscale_service_route.startswith(\"/\"):\n anyscale_service_route = \"/\" + anyscale_service_route\n try:\n anyscale_service_endpoint = f\"{anyscale_service_url}/-/routes\"\n headers = {\"Authorization\": f\"Bearer {anyscale_service_token}\"}\n requests.get(anyscale_service_endpoint, headers=headers)\n except requests.exceptions.RequestException as e:\n raise ValueError(e)\n values[\"anyscale_service_url\"] = anyscale_service_url\n values[\"anyscale_service_route\"] = anyscale_service_route\n values[\"anyscale_service_token\"] = anyscale_service_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"anyscale_service_url\": self.anyscale_service_url,\n \"anyscale_service_route\": self.anyscale_service_route,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"anyscale\"\n def _call(\n 
self,\n prompt: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"} +{"id": "79eac95dbfb0-2", "text": "def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Anyscale Service endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = anyscale(\"Tell me a joke.\")\n \"\"\"\n anyscale_service_endpoint = (\n f\"{self.anyscale_service_url}{self.anyscale_service_route}\"\n )\n headers = {\"Authorization\": f\"Bearer {self.anyscale_service_token}\"}\n body = {\"prompt\": prompt}\n resp = requests.post(anyscale_service_endpoint, headers=headers, json=body)\n if resp.status_code != 200:\n raise ValueError(\n f\"Error returned by service, status code {resp.status_code}\"\n )\n text = resp.text\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anyscale.html"} +{"id": "16d833370d2f-0", "text": "Source code for langchain.llms.bedrock\nimport json\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nclass LLMInputOutputAdapter:\n \"\"\"Adapter class to prepare the inputs from Langchain to a format\n that LLM model expects. 
Also provides a helper function to extract\n the generated text from the model response.\"\"\"\n @classmethod\n def prepare_input(\n cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]\n ) -> Dict[str, Any]:\n input_body = {**model_kwargs}\n if provider == \"anthropic\" or provider == \"ai21\":\n input_body[\"prompt\"] = prompt\n elif provider == \"amazon\":\n input_body = dict()\n input_body[\"inputText\"] = prompt\n input_body[\"textGenerationConfig\"] = {**model_kwargs}\n else:\n input_body[\"inputText\"] = prompt\n if provider == \"anthropic\" and \"max_tokens_to_sample\" not in input_body:\n input_body[\"max_tokens_to_sample\"] = 50\n return input_body\n @classmethod\n def prepare_output(cls, provider: str, response: Any) -> str:\n if provider == \"anthropic\":\n response_body = json.loads(response.get(\"body\").read().decode())\n return response_body.get(\"completion\")\n else:\n response_body = json.loads(response.get(\"body\").read())\n if provider == \"ai21\":\n return response_body.get(\"completions\")[0].get(\"data\").get(\"text\")\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"}
code-block:: python\n from bedrock_langchain.bedrock_llm import BedrockLLM\n llm = BedrockLLM(\n credentials_profile_name=\"default\", \n model_id=\"amazon.titan-tg1-large\"\n )\n \"\"\"\n client: Any #: :meta private:\n region_name: Optional[str] = None\n \"\"\"The aws region e.g., `us-west-2`. Fallsback to AWS_DEFAULT_REGION env variable\n or region specified in ~/.aws/config in case it is not provided here.\n \"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n model_id: str\n \"\"\"Id of the model to call, e.g., amazon.titan-tg1-large, this is", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"} +{"id": "16d833370d2f-2", "text": "equivalent to the modelId property in the list-foundation-models api\"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials to and python package exists in environment.\"\"\"\n # Skip creating new client if passed in constructor\n if values[\"client\"] is not None:\n return values\n try:\n import boto3\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(profile_name=values[\"credentials_profile_name\"])\n else:\n # use default credentials\n session = boto3.Session()\n client_params = {}\n if values[\"region_name\"]:\n client_params[\"region_name\"] = values[\"region_name\"]\n values[\"client\"] = session.client(\"bedrock\", **client_params)\n except ImportError:\n 
raise ModuleNotFoundError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"} +{"id": "16d833370d2f-3", "text": "\"\"\"Return type of llm.\"\"\"\n return \"amazon_bedrock\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Bedrock service model.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = se(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n provider = self.model_id.split(\".\")[0]\n params = {**_model_kwargs, **kwargs}\n input_body = LLMInputOutputAdapter.prepare_input(provider, prompt, params)\n body = json.dumps(input_body)\n accept = \"application/json\"\n contentType = \"application/json\"\n try:\n response = self.client.invoke_model(\n body=body, modelId=self.model_id, accept=accept, contentType=contentType\n )\n text = LLMInputOutputAdapter.prepare_output(provider, response)\n except Exception as e:\n raise ValueError(f\"Error raised by bedrock service: {e}\")\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bedrock.html"} +{"id": "c4697f5c0bf4-0", "text": "Source code for langchain.llms.self_hosted\n\"\"\"Run model inference on self-hosted remote hardware.\"\"\"\nimport importlib.util\nimport logging\nimport pickle\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nlogger = logging.getLogger(__name__)\ndef _generate_text(\n pipeline: Any,\n prompt: str,\n *args: Any,\n stop: Optional[List[str]] = None,\n **kwargs: Any,\n) -> str:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a pipeline callable (or, more likely,\n a key pointing to the model on the cluster's object store)\n and returns text predictions for each document\n in the batch.\n \"\"\"\n text = pipeline(prompt, *args, **kwargs)\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text\ndef _send_pipeline_to_device(pipeline: Any, device: int) -> Any:\n \"\"\"Send a pipeline to a device on the cluster.\"\"\"\n if isinstance(pipeline, str):\n with open(pipeline, \"rb\") as f:\n pipeline = 
pickle.load(f)\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )\n if device < 0 and cuda_device_count > 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} +{"id": "c4697f5c0bf4-1", "text": ")\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available\"\n \"GPUs for execution. deviceId is -1 for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n pipeline.device = torch.device(device)\n pipeline.model = pipeline.model.to(pipeline.device)\n return pipeline\n[docs]class SelfHostedPipeline(LLM):\n \"\"\"Run model inference on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another\n cloud like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Example for custom pipeline and inference functions:\n .. 
code-block:: python\n from langchain.llms import SelfHostedPipeline\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n import runhouse as rh\n def load_pipeline():\n tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\n model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n return pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer,\n max_new_tokens=10\n )\n def inference_fn(pipeline, prompt, stop = None):\n return pipeline(prompt)[0][\"generated_text\"]\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n llm = SelfHostedPipeline(\n model_load_fn=load_pipeline,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} +{"id": "c4697f5c0bf4-2", "text": "llm = SelfHostedPipeline(\n model_load_fn=load_pipeline,\n hardware=gpu,\n model_reqs=model_reqs, inference_fn=inference_fn\n )\n Example for <2GB model (can be serialized and sent directly to the server):\n .. code-block:: python\n from langchain.llms import SelfHostedPipeline\n import runhouse as rh\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n my_model = ...\n llm = SelfHostedPipeline.from_pipeline(\n pipeline=my_model,\n hardware=gpu,\n model_reqs=[\"./\", \"torch\", \"transformers\"],\n )\n Example passing model path for larger models:\n .. 
code-block:: python\n            from langchain.llms import SelfHostedPipeline\n            import runhouse as rh\n            import pickle\n            from transformers import pipeline\n            generator = pipeline(model=\"gpt2\")\n            rh.blob(pickle.dumps(generator), path=\"models/pipeline.pkl\"\n            ).save().to(gpu, path=\"models\")\n            llm = SelfHostedPipeline.from_pipeline(\n                pipeline=\"models/pipeline.pkl\",\n                hardware=gpu,\n                model_reqs=[\"./\", \"torch\", \"transformers\"],\n            )\n    \"\"\"\n    pipeline_ref: Any  #: :meta private:\n    client: Any  #: :meta private:\n    inference_fn: Callable = _generate_text  #: :meta private:\n    \"\"\"Inference function to send to the remote hardware.\"\"\"\n    hardware: Any\n    \"\"\"Remote hardware to send the inference function to.\"\"\"\n    model_load_fn: Callable\n    \"\"\"Function to load the model remotely on the server.\"\"\"\n    load_fn_kwargs: Optional[dict] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} +{"id": "c4697f5c0bf4-3", "text": "load_fn_kwargs: Optional[dict] = None\n    \"\"\"Keyword arguments to pass to the model load function.\"\"\"\n    model_reqs: List[str] = [\"./\", \"torch\"]\n    \"\"\"Requirements to install on hardware to run inference on the model.\"\"\"\n    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.forbid\n    def __init__(self, **kwargs: Any):\n        \"\"\"Init the pipeline with an auxiliary function.\n        The load function must be in global scope to be imported\n        and run on the server, i.e. in a module and not a REPL or closure.\n        Then, initialize the remote inference function.\n        \"\"\"\n        super().__init__(**kwargs)\n        try:\n            import runhouse as rh\n        except ImportError:\n            raise ImportError(\n                \"Could not import runhouse python package. 
\"\n                \"Please install it with `pip install runhouse`.\"\n            )\n        remote_load_fn = rh.function(fn=self.model_load_fn).to(\n            self.hardware, reqs=self.model_reqs\n        )\n        _load_fn_kwargs = self.load_fn_kwargs or {}\n        self.pipeline_ref = remote_load_fn.remote(**_load_fn_kwargs)\n        self.client = rh.function(fn=self.inference_fn).to(\n            self.hardware, reqs=self.model_reqs\n        )\n[docs]    @classmethod\n    def from_pipeline(\n        cls,\n        pipeline: Any,\n        hardware: Any,\n        model_reqs: Optional[List[str]] = None,\n        device: int = 0,\n        **kwargs: Any,\n    ) -> LLM:\n        \"\"\"Init the SelfHostedPipeline from a pipeline object or string.\"\"\"\n        if not isinstance(pipeline, str):\n            logger.warning(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} +{"id": "c4697f5c0bf4-4", "text": "if not isinstance(pipeline, str):\n            logger.warning(\n                \"Serializing pipeline to send to remote hardware. \"\n                \"Note, it can be quite slow \"\n                \"to serialize and send large models with each execution. 
\"\n                \"Consider sending the pipeline \"\n                \"to the cluster and passing the path to the pipeline instead.\"\n            )\n        load_fn_kwargs = {\"pipeline\": pipeline, \"device\": device}\n        return cls(\n            load_fn_kwargs=load_fn_kwargs,\n            model_load_fn=_send_pipeline_to_device,\n            hardware=hardware,\n            model_reqs=[\"transformers\", \"torch\"] + (model_reqs or []),\n            **kwargs,\n        )\n    @property\n    def _identifying_params(self) -> Mapping[str, Any]:\n        \"\"\"Get the identifying parameters.\"\"\"\n        return {\n            **{\"hardware\": self.hardware},\n        }\n    @property\n    def _llm_type(self) -> str:\n        return \"self_hosted_llm\"\n    def _call(\n        self,\n        prompt: str,\n        stop: Optional[List[str]] = None,\n        run_manager: Optional[CallbackManagerForLLMRun] = None,\n        **kwargs: Any,\n    ) -> str:\n        return self.client(\n            pipeline=self.pipeline_ref, prompt=prompt, stop=stop, **kwargs\n        )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html"} +{"id": "368561eaf35f-0", "text": "Source code for langchain.llms.aleph_alpha\n\"\"\"Wrapper around Aleph Alpha APIs.\"\"\"\nfrom typing import Any, Dict, List, Optional, Sequence\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AlephAlpha(LLM):\n    \"\"\"Wrapper around Aleph Alpha large language models.\n    To use, you should have the ``aleph_alpha_client`` python package installed, and the\n    environment variable ``ALEPH_ALPHA_API_KEY`` set with your API key, or pass\n    it as a named parameter to the constructor.\n    Parameters are explained more in depth here:\n    https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10\n    Example:\n    .. 
code-block:: python\n from langchain.llms import AlephAlpha\n aleph_alpha = AlephAlpha(aleph_alpha_api_key=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = \"luminous-base\"\n \"\"\"Model name to use.\"\"\"\n maximum_tokens: int = 64\n \"\"\"The maximum number of tokens to be generated.\"\"\"\n temperature: float = 0.0\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n top_k: int = 0\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n top_p: float = 0.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} +{"id": "368561eaf35f-1", "text": "\"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n presence_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens.\"\"\"\n frequency_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n repetition_penalties_include_prompt: Optional[bool] = False\n \"\"\"Flag deciding whether presence penalty or frequency penalty are\n updated from the prompt.\"\"\"\n use_multiplicative_presence_penalty: Optional[bool] = False\n \"\"\"Flag deciding whether presence penalty is applied\n multiplicatively (True) or additively (False).\"\"\"\n penalty_bias: Optional[str] = None\n \"\"\"Penalty bias for the completion.\"\"\"\n penalty_exceptions: Optional[List[str]] = None\n \"\"\"List of strings that may be generated without penalty,\n regardless of other penalty settings\"\"\"\n penalty_exceptions_include_stop_sequences: Optional[bool] = None\n \"\"\"Should stop_sequences be included in penalty_exceptions.\"\"\"\n best_of: Optional[int] = None\n \"\"\"returns the one with the \"best of\" results\n (highest log probability per token)\n \"\"\"\n n: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n logit_bias: Optional[Dict[int, float]] = None\n \"\"\"The logit bias 
allows you to influence the likelihood of generating tokens.\"\"\"\n    log_probs: Optional[int] = None\n    \"\"\"Number of top log probabilities to be returned for each generated token.\"\"\"\n    tokens: Optional[bool] = False\n    \"\"\"return tokens of completion.\"\"\"\n    disable_optimizations: Optional[bool] = False\n    minimum_tokens: Optional[int] = 0\n    \"\"\"Generate at least this number of tokens.\"\"\"\n    echo: bool = False\n    \"\"\"Echo the prompt in the completion.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} +{"id": "368561eaf35f-2", "text": "echo: bool = False\n    \"\"\"Echo the prompt in the completion.\"\"\"\n    use_multiplicative_frequency_penalty: bool = False\n    sequence_penalty: float = 0.0\n    sequence_penalty_min_length: int = 2\n    use_multiplicative_sequence_penalty: bool = False\n    completion_bias_inclusion: Optional[Sequence[str]] = None\n    completion_bias_inclusion_first_token_only: bool = False\n    completion_bias_exclusion: Optional[Sequence[str]] = None\n    completion_bias_exclusion_first_token_only: bool = False\n    \"\"\"Only consider the first token for the completion_bias_exclusion.\"\"\"\n    contextual_control_threshold: Optional[float] = None\n    \"\"\"If set to None, attention control parameters only apply to those tokens that have\n    explicitly been set in the request.\n    If set to a non-None value, control parameters are also applied to similar tokens.\n    \"\"\"\n    control_log_additive: Optional[bool] = True\n    \"\"\"True: apply control by adding the log(control_factor) to attention scores.\n    False: (attention_scores - attention_scores.min(-1)) * control_factor\n    \"\"\"\n    repetition_penalties_include_completion: bool = True\n    \"\"\"Flag deciding whether presence penalty or frequency penalty\n    are updated from the completion.\"\"\"\n    raw_completion: bool = False\n    \"\"\"Force the raw completion of the model to be returned.\"\"\"\n    aleph_alpha_api_key: Optional[str] = None\n    \"\"\"API key for Aleph Alpha 
API.\"\"\"\n stop_sequences: Optional[List[str]] = None\n \"\"\"Stop sequences to use.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} +{"id": "368561eaf35f-3", "text": "\"\"\"Validate that api key and python package exists in environment.\"\"\"\n aleph_alpha_api_key = get_from_dict_or_env(\n values, \"aleph_alpha_api_key\", \"ALEPH_ALPHA_API_KEY\"\n )\n try:\n import aleph_alpha_client\n values[\"client\"] = aleph_alpha_client.Client(token=aleph_alpha_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import aleph_alpha_client python package. \"\n \"Please install it with `pip install aleph_alpha_client`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling the Aleph Alpha API.\"\"\"\n return {\n \"maximum_tokens\": self.maximum_tokens,\n \"temperature\": self.temperature,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"presence_penalty\": self.presence_penalty,\n \"frequency_penalty\": self.frequency_penalty,\n \"n\": self.n,\n \"repetition_penalties_include_prompt\": self.repetition_penalties_include_prompt, # noqa: E501\n \"use_multiplicative_presence_penalty\": self.use_multiplicative_presence_penalty, # noqa: E501\n \"penalty_bias\": self.penalty_bias,\n \"penalty_exceptions\": self.penalty_exceptions,\n \"penalty_exceptions_include_stop_sequences\": self.penalty_exceptions_include_stop_sequences, # noqa: E501\n \"best_of\": self.best_of,\n \"logit_bias\": self.logit_bias,\n \"log_probs\": self.log_probs,\n \"tokens\": self.tokens,\n \"disable_optimizations\": self.disable_optimizations,\n \"minimum_tokens\": self.minimum_tokens,\n \"echo\": self.echo,", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} +{"id": "368561eaf35f-4", "text": "\"minimum_tokens\": self.minimum_tokens,\n \"echo\": self.echo,\n \"use_multiplicative_frequency_penalty\": self.use_multiplicative_frequency_penalty, # noqa: E501\n \"sequence_penalty\": self.sequence_penalty,\n \"sequence_penalty_min_length\": self.sequence_penalty_min_length,\n \"use_multiplicative_sequence_penalty\": self.use_multiplicative_sequence_penalty, # noqa: E501\n \"completion_bias_inclusion\": self.completion_bias_inclusion,\n \"completion_bias_inclusion_first_token_only\": self.completion_bias_inclusion_first_token_only, # noqa: E501\n \"completion_bias_exclusion\": self.completion_bias_exclusion,\n \"completion_bias_exclusion_first_token_only\": self.completion_bias_exclusion_first_token_only, # noqa: E501\n \"contextual_control_threshold\": self.contextual_control_threshold,\n \"control_log_additive\": self.control_log_additive,\n \"repetition_penalties_include_completion\": self.repetition_penalties_include_completion, # noqa: E501\n \"raw_completion\": self.raw_completion,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"aleph_alpha\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Aleph Alpha's completion endpoint.\n Args:\n prompt: The prompt to pass into the model.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} +{"id": "368561eaf35f-5", "text": "Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = aleph_alpha(\"Tell me a joke.\")\n \"\"\"\n from aleph_alpha_client import CompletionRequest, Prompt\n params = self._default_params\n if self.stop_sequences is not None and stop is not None:\n raise ValueError(\n \"stop sequences found in both the input and default params.\"\n )\n elif self.stop_sequences is not None:\n params[\"stop_sequences\"] = self.stop_sequences\n else:\n params[\"stop_sequences\"] = stop\n params = {**params, **kwargs}\n request = CompletionRequest(prompt=Prompt.from_text(prompt), **params)\n response = self.client.complete(model=self.model, request=request)\n text = response.completions[0].completion\n # If stop tokens are provided, Aleph Alpha's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop_sequences is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html"} +{"id": "3503349f7721-0", "text": "Source code for langchain.llms.baseten\n\"\"\"Wrapper around Baseten deployed model API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nlogger = logging.getLogger(__name__)\n[docs]class Baseten(LLM):\n \"\"\"Use your Baseten models in Langchain\n To use, you should have the ``baseten`` python package installed,\n and run ``baseten.login()`` with your Baseten API key.\n The required ``model`` param can be either a model id or model\n version id. 
Using a model version ID will result in\n slightly faster invocation.\n Any other model parameters can also\n be passed in with the format input={model_param: value, ...}\n The Baseten model must accept a dictionary of input with the key\n \"prompt\" and return a dictionary with a key \"data\" which maps\n to a list of response strings.\n Example:\n .. code-block:: python\n from langchain.llms import Baseten\n my_model = Baseten(model=\"MODEL_ID\")\n output = my_model(\"prompt\")\n \"\"\"\n model: str\n input: Dict[str, Any] = Field(default_factory=dict)\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of model.\"\"\"\n return \"baseten\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/baseten.html"} +{"id": "3503349f7721-1", "text": "\"\"\"Return type of model.\"\"\"\n return \"baseten\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Baseten deployed model endpoint.\"\"\"\n try:\n import baseten\n except ImportError as exc:\n raise ValueError(\n \"Could not import Baseten Python package. 
\"\n                \"Please install it with `pip install baseten`.\"\n            ) from exc\n        # get the model and version\n        try:\n            model = baseten.deployed_model_version_id(self.model)\n            response = model.predict({\"prompt\": prompt})\n        except baseten.common.core.ApiError:\n            model = baseten.deployed_model_id(self.model)\n            response = model.predict({\"prompt\": prompt})\n        return \"\".join(response)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/baseten.html"} +{"id": "8383012c00d0-0", "text": "Source code for langchain.llms.textgen\n\"\"\"Wrapper around text-generation-webui.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom pydantic import Field\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nlogger = logging.getLogger(__name__)\n[docs]class TextGen(LLM):\n    \"\"\"Wrapper around the text-generation-webui model.\n    To use, you should have the text-generation-webui installed, a model loaded,\n    and --api added as a command-line option.\n    Suggested installation, use one-click installer for your OS:\n    https://github.com/oobabooga/text-generation-webui#one-click-installers\n    Parameters below taken from text-generation-webui api example:\n    https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py\n    Example:\n    .. code-block:: python\n        from langchain.llms import TextGen\n        llm = TextGen(model_url=\"http://localhost:8500\")\n    \"\"\"\n    model_url: str\n    \"\"\"The full URL to the textgen webui including http[s]://host:port \"\"\"\n    max_new_tokens: Optional[int] = 250\n    \"\"\"The maximum number of tokens to generate.\"\"\"\n    do_sample: bool = Field(True, alias=\"do_sample\")\n    \"\"\"Do sample\"\"\"\n    temperature: Optional[float] = 1.3\n    \"\"\"Primary factor to control randomness of outputs. 0 = deterministic\n    (only the most likely token is used). 
Higher value = more randomness.\"\"\"\n top_p: Optional[float] = 0.1\n \"\"\"If not set to 1, select tokens with probabilities adding up to less than this\n number. Higher value = higher range of possible random results.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} +{"id": "8383012c00d0-1", "text": "number. Higher value = higher range of possible random results.\"\"\"\n typical_p: Optional[float] = 1\n \"\"\"If not set to 1, select only tokens that are at least this much more likely to\n appear than random tokens, given the prior text.\"\"\"\n epsilon_cutoff: Optional[float] = 0 # In units of 1e-4\n \"\"\"Epsilon cutoff\"\"\"\n eta_cutoff: Optional[float] = 0 # In units of 1e-4\n \"\"\"ETA cutoff\"\"\"\n repetition_penalty: Optional[float] = 1.18\n \"\"\"Exponential penalty factor for repeating prior tokens. 1 means no penalty,\n higher value = less repetition, lower value = more repetition.\"\"\"\n top_k: Optional[float] = 40\n \"\"\"Similar to top_p, but select instead only the top_k most likely tokens.\n Higher value = higher range of possible random results.\"\"\"\n min_length: Optional[int] = 0\n \"\"\"Minimum generation length in tokens.\"\"\"\n no_repeat_ngram_size: Optional[int] = 0\n \"\"\"If not set to 0, specifies the length of token sets that are completely blocked\n from repeating at all. 
Higher values = blocks larger phrases,\n lower values = blocks words or letters from repeating.\n Only 0 or high values are a good idea in most cases.\"\"\"\n num_beams: Optional[int] = 1\n \"\"\"Number of beams\"\"\"\n penalty_alpha: Optional[float] = 0\n \"\"\"Penalty Alpha\"\"\"\n length_penalty: Optional[float] = 1\n \"\"\"Length Penalty\"\"\"\n early_stopping: bool = Field(False, alias=\"early_stopping\")\n \"\"\"Early stopping\"\"\"\n seed: int = Field(-1, alias=\"seed\")\n \"\"\"Seed (-1 for random)\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} +{"id": "8383012c00d0-2", "text": "\"\"\"Seed (-1 for random)\"\"\"\n add_bos_token: bool = Field(True, alias=\"add_bos_token\")\n \"\"\"Add the bos_token to the beginning of prompts.\n Disabling this can make the replies more creative.\"\"\"\n truncation_length: Optional[int] = 2048\n \"\"\"Truncate the prompt up to this length. The leftmost tokens are removed if\n the prompt exceeds this length. Most models require this to be at most 2048.\"\"\"\n ban_eos_token: bool = Field(False, alias=\"ban_eos_token\")\n \"\"\"Ban the eos_token. Forces the model to never end the generation prematurely.\"\"\"\n skip_special_tokens: bool = Field(True, alias=\"skip_special_tokens\")\n \"\"\"Skip special tokens. 
Some specific models need this unset.\"\"\"\n    stopping_strings: Optional[List[str]] = []\n    \"\"\"A list of strings to stop generation when encountered.\"\"\"\n    streaming: bool = False\n    \"\"\"Whether to stream the results, token by token (currently unimplemented).\"\"\"\n    @property\n    def _default_params(self) -> Dict[str, Any]:\n        \"\"\"Get the default parameters for calling textgen.\"\"\"\n        return {\n            \"max_new_tokens\": self.max_new_tokens,\n            \"do_sample\": self.do_sample,\n            \"temperature\": self.temperature,\n            \"top_p\": self.top_p,\n            \"typical_p\": self.typical_p,\n            \"epsilon_cutoff\": self.epsilon_cutoff,\n            \"eta_cutoff\": self.eta_cutoff,\n            \"repetition_penalty\": self.repetition_penalty,\n            \"top_k\": self.top_k,\n            \"min_length\": self.min_length,\n            \"no_repeat_ngram_size\": self.no_repeat_ngram_size,\n            \"num_beams\": self.num_beams,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} +{"id": "8383012c00d0-3", "text": "\"num_beams\": self.num_beams,\n            \"penalty_alpha\": self.penalty_alpha,\n            \"length_penalty\": self.length_penalty,\n            \"early_stopping\": self.early_stopping,\n            \"seed\": self.seed,\n            \"add_bos_token\": self.add_bos_token,\n            \"truncation_length\": self.truncation_length,\n            \"ban_eos_token\": self.ban_eos_token,\n            \"skip_special_tokens\": self.skip_special_tokens,\n            \"stopping_strings\": self.stopping_strings,\n        }\n    @property\n    def _identifying_params(self) -> Dict[str, Any]:\n        \"\"\"Get the identifying parameters.\"\"\"\n        return {**{\"model_url\": self.model_url}, **self._default_params}\n    @property\n    def _llm_type(self) -> str:\n        \"\"\"Return type of llm.\"\"\"\n        return \"textgen\"\n    def _get_parameters(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:\n        \"\"\"\n        Performs sanity check, preparing parameters in format needed by textgen.\n        Args:\n            stop (Optional[List[str]]): List of stop sequences for textgen.\n        Returns:\n            Dictionary containing the combined parameters.\n        \"\"\"\n        # Raise error if 
stop sequences are in both input and default params\n # if self.stop and stop is not None:\n if self.stopping_strings and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params = self._default_params\n # then sets it as configured, or default to an empty list:\n params[\"stop\"] = self.stopping_strings or stop or []\n return params\n def _call(\n self,\n prompt: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} +{"id": "8383012c00d0-4", "text": "return params\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call the textgen web API and return the output.\n Args:\n prompt: The prompt to use for generation.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The generated text.\n Example:\n .. code-block:: python\n from langchain.llms import TextGen\n llm = TextGen(model_url=\"http://localhost:5000\")\n llm(\"Write a story about llamas.\")\n \"\"\"\n if self.streaming:\n raise ValueError(\"`streaming` option currently unsupported.\")\n url = f\"{self.model_url}/api/v1/generate\"\n params = self._get_parameters(stop)\n request = params.copy()\n request[\"prompt\"] = prompt\n response = requests.post(url, json=request)\n if response.status_code == 200:\n result = response.json()[\"results\"][0][\"text\"]\n print(prompt + result)\n else:\n print(f\"ERROR: response: {response}\")\n result = \"\"\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/textgen.html"} +{"id": "70edbd616d16-0", "text": "Source code for langchain.llms.gooseai\n\"\"\"Wrapper around GooseAI API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import 
LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class GooseAI(LLM):\n    \"\"\"Wrapper around GooseAI large language models.\n    To use, you should have the ``openai`` python package installed, and the\n    environment variable ``GOOSEAI_API_KEY`` set with your API key.\n    Any parameters that are valid to be passed to the openai.create call can be passed\n    in, even if not explicitly saved on this class.\n    Example:\n    .. code-block:: python\n        from langchain.llms import GooseAI\n        gooseai = GooseAI(model_name=\"gpt-neo-20b\")\n    \"\"\"\n    client: Any\n    model_name: str = \"gpt-neo-20b\"\n    \"\"\"Model name to use\"\"\"\n    temperature: float = 0.7\n    \"\"\"What sampling temperature to use\"\"\"\n    max_tokens: int = 256\n    \"\"\"The maximum number of tokens to generate in the completion.\n    -1 returns as many tokens as possible given the prompt and\n    the model's maximal context size.\"\"\"\n    top_p: float = 1\n    \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n    min_tokens: int = 1\n    \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n    frequency_penalty: float = 0\n    \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n    presence_penalty: float = 0\n    \"\"\"Penalizes repeated tokens.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"} +{"id": "70edbd616d16-1", "text": "presence_penalty: float = 0\n    \"\"\"Penalizes repeated tokens.\"\"\"\n    n: int = 1\n    \"\"\"How many completions to generate for each prompt.\"\"\"\n    model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n    \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n    logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)\n    \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n    gooseai_api_key: Optional[str] = None\n    class Config:\n        \"\"\"Configuration for this pydantic object.\"\"\"\n        extra = Extra.ignore\n    
@root_validator(pre=True)\n    def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n        all_required_field_names = {field.alias for field in cls.__fields__.values()}\n        extra = values.get(\"model_kwargs\", {})\n        for field_name in list(values):\n            if field_name not in all_required_field_names:\n                if field_name in extra:\n                    raise ValueError(f\"Found {field_name} supplied twice.\")\n                logger.warning(\n                    f\"\"\"WARNING! {field_name} is not a default parameter.\n                    {field_name} was transferred to model_kwargs.\n                    Please confirm that {field_name} is what you intended.\"\"\"\n                )\n                extra[field_name] = values.pop(field_name)\n        values[\"model_kwargs\"] = extra\n        return values\n    @root_validator()\n    def validate_environment(cls, values: Dict) -> Dict:\n        \"\"\"Validate that api key and python package exists in environment.\"\"\"\n        gooseai_api_key = get_from_dict_or_env(\n            values, \"gooseai_api_key\", \"GOOSEAI_API_KEY\"\n        )\n        try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"}
\"\n \"Please install it with `pip install openai`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling GooseAI API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_tokens\": self.max_tokens,\n \"top_p\": self.top_p,\n \"min_tokens\": self.min_tokens,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"n\": self.n,\n \"logit_bias\": self.logit_bias,\n }\n return {**normal_params, **self.model_kwargs}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"gooseai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call the GooseAI API.\"\"\"\n params = self._default_params\n if stop is not None:\n if \"stop\" in params:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"} +{"id": "70edbd616d16-3", "text": "if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n params = {**params, **kwargs}\n response = self.client.create(engine=self.model_name, prompt=prompt, **params)\n text = response.choices[0].text\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gooseai.html"} +{"id": "e98fd257b856-0", "text": "Source code for langchain.llms.rwkv\n\"\"\"Wrapper for the RWKV model.\nBased on https://github.com/saharNooby/rwkv.cpp/blob/master/rwkv/chat_with_bot.py\n https://github.com/BlinkDL/ChatRWKV/blob/main/v2/chat.py\n\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional, Set\nfrom pydantic import BaseModel, Extra, 
root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\n[docs]class RWKV(LLM, BaseModel):\n    r\"\"\"Wrapper around RWKV language models.\n    To use, you should have the ``rwkv`` python package installed, the\n    pre-trained model file, and the model's config information.\n    Example:\n    .. code-block:: python\n        from langchain.llms import RWKV\n        model = RWKV(model=\"./models/rwkv-3b-fp16.bin\", strategy=\"cpu fp32\")\n        # Simplest invocation\n        response = model(\"Once upon a time, \")\n    \"\"\"\n    model: str\n    \"\"\"Path to the pre-trained RWKV model file.\"\"\"\n    tokens_path: str\n    \"\"\"Path to the RWKV tokens file.\"\"\"\n    strategy: str = \"cpu fp32\"\n    \"\"\"The strategy to use for loading the model, e.g. cpu fp32.\"\"\"\n    rwkv_verbose: bool = True\n    \"\"\"Print debug information.\"\"\"\n    temperature: float = 1.0\n    \"\"\"The temperature to use for sampling.\"\"\"\n    top_p: float = 0.5\n    \"\"\"The top-p value to use for sampling.\"\"\"\n    penalty_alpha_frequency: float = 0.4\n    \"\"\"Positive values penalize new tokens based on their existing frequency", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} +{"id": "e98fd257b856-1", "text": "\"\"\"Positive values penalize new tokens based on their existing frequency\n    in the text so far, decreasing the model's likelihood to repeat the same\n    line verbatim.\"\"\"\n    penalty_alpha_presence: float = 0.4\n    \"\"\"Positive values penalize new tokens based on whether they appear\n    in the text so far, increasing the model's likelihood to talk about\n    new topics.\"\"\"\n    CHUNK_LEN: int = 256\n    \"\"\"Batch size for prompt processing.\"\"\"\n    max_tokens_per_generation: int = 256\n    \"\"\"Maximum number of tokens to generate.\"\"\"\n    client: Any = None  #: :meta private:\n    tokenizer: Any = None  #: :meta private:\n    pipeline: Any = None  #: :meta private:\n    model_tokens: Any = None  #: :meta private:\n    model_state: Any = None  #: 
:meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"verbose\": self.verbose,\n \"top_p\": self.top_p,\n \"temperature\": self.temperature,\n \"penalty_alpha_frequency\": self.penalty_alpha_frequency,\n \"penalty_alpha_presence\": self.penalty_alpha_presence,\n \"CHUNK_LEN\": self.CHUNK_LEN,\n \"max_tokens_per_generation\": self.max_tokens_per_generation,\n }\n @staticmethod\n def _rwkv_param_names() -> Set[str]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"verbose\",\n }\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} +{"id": "e98fd257b856-2", "text": "\"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n import tokenizers\n except ImportError:\n raise ImportError(\n \"Could not import tokenizers python package. \"\n \"Please install it with `pip install tokenizers`.\"\n )\n try:\n from rwkv.model import RWKV as RWKVMODEL\n from rwkv.utils import PIPELINE\n values[\"tokenizer\"] = tokenizers.Tokenizer.from_file(values[\"tokens_path\"])\n rwkv_keys = cls._rwkv_param_names()\n model_kwargs = {k: v for k, v in values.items() if k in rwkv_keys}\n model_kwargs[\"verbose\"] = values[\"rwkv_verbose\"]\n values[\"client\"] = RWKVMODEL(\n values[\"model\"], strategy=values[\"strategy\"], **model_kwargs\n )\n values[\"pipeline\"] = PIPELINE(values[\"client\"], values[\"tokens_path\"])\n except ImportError:\n raise ValueError(\n \"Could not import rwkv python package. 
\"\n \"Please install it with `pip install rwkv`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n **self._default_params,\n **{k: v for k, v in self.__dict__.items() if k in RWKV._rwkv_param_names()},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return the type of llm.\"\"\"\n return \"rwkv\"\n def run_rnn(self, _tokens: List[str], newline_adj: int = 0) -> Any:\n AVOID_REPEAT_TOKENS = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} +{"id": "e98fd257b856-3", "text": "AVOID_REPEAT_TOKENS = []\n AVOID_REPEAT = \"\uff0c\uff1a\uff1f\uff01\"\n for i in AVOID_REPEAT:\n dd = self.pipeline.encode(i)\n assert len(dd) == 1\n AVOID_REPEAT_TOKENS += dd\n tokens = [int(x) for x in _tokens]\n self.model_tokens += tokens\n out: Any = None\n while len(tokens) > 0:\n out, self.model_state = self.client.forward(\n tokens[: self.CHUNK_LEN], self.model_state\n )\n tokens = tokens[self.CHUNK_LEN :]\n END_OF_LINE = 187\n out[END_OF_LINE] += newline_adj # adjust \\n probability\n if self.model_tokens[-1] in AVOID_REPEAT_TOKENS:\n out[self.model_tokens[-1]] = -999999999\n return out\n def rwkv_generate(self, prompt: str) -> str:\n self.model_state = None\n self.model_tokens = []\n logits = self.run_rnn(self.tokenizer.encode(prompt).ids)\n begin = len(self.model_tokens)\n out_last = begin\n occurrence: Dict = {}\n decoded = \"\"\n for i in range(self.max_tokens_per_generation):\n for n in occurrence:\n logits[n] -= (\n self.penalty_alpha_presence\n + occurrence[n] * self.penalty_alpha_frequency\n )\n token = self.pipeline.sample_logits(\n logits, temperature=self.temperature, top_p=self.top_p\n )\n END_OF_TEXT = 0\n if token == END_OF_TEXT:\n break\n if token not in occurrence:\n occurrence[token] = 1\n else:\n occurrence[token] += 1\n logits = self.run_rnn([token])", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} +{"id": "e98fd257b856-4", "text": "occurrence[token] += 1\n logits = self.run_rnn([token])\n xxx = self.tokenizer.decode(self.model_tokens[out_last:])\n if \"\\ufffd\" not in xxx: # avoid utf-8 display issues\n decoded += xxx\n out_last = begin + i + 1\n if i >= self.max_tokens_per_generation - 100:\n break\n return decoded\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n r\"\"\"RWKV generation\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n prompt = \"Once upon a time, \"\n response = model(prompt, n_predict=55)\n \"\"\"\n text = self.rwkv_generate(prompt)\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/rwkv.html"} +{"id": "dbcbb4ae085d-0", "text": "Source code for langchain.llms.ctransformers\n\"\"\"Wrapper around the C Transformers library.\"\"\"\nfrom typing import Any, Dict, Optional, Sequence\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n[docs]class CTransformers(LLM):\n \"\"\"Wrapper around the C Transformers LLM interface.\n To use, you should have the ``ctransformers`` python package installed.\n See https://github.com/marella/ctransformers\n Example:\n .. 
code-block:: python\n from langchain.llms import CTransformers\n llm = CTransformers(model=\"/path/to/ggml-gpt-2.bin\", model_type=\"gpt2\")\n \"\"\"\n client: Any #: :meta private:\n model: str\n \"\"\"The path to a model file or directory or the name of a Hugging Face Hub\n model repo.\"\"\"\n model_type: Optional[str] = None\n \"\"\"The model type.\"\"\"\n model_file: Optional[str] = None\n \"\"\"The name of the model file in repo or directory.\"\"\"\n config: Optional[Dict[str, Any]] = None\n \"\"\"The config parameters.\n See https://github.com/marella/ctransformers#config\"\"\"\n lib: Optional[str] = None\n \"\"\"The path to a shared library or one of `avx2`, `avx`, `basic`.\"\"\"\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n \"model_type\": self.model_type,\n \"model_file\": self.model_file,\n \"config\": self.config,\n }\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"} +{"id": "dbcbb4ae085d-1", "text": "\"config\": self.config,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"ctransformers\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that ``ctransformers`` package is installed.\"\"\"\n try:\n from ctransformers import AutoModelForCausalLM\n except ImportError:\n raise ImportError(\n \"Could not import `ctransformers` package. 
\"\n \"Please install it with `pip install ctransformers`\"\n )\n config = values[\"config\"] or {}\n values[\"client\"] = AutoModelForCausalLM.from_pretrained(\n values[\"model\"],\n model_type=values[\"model_type\"],\n model_file=values[\"model_file\"],\n lib=values[\"lib\"],\n **config,\n )\n return values\n def _call(\n self,\n prompt: str,\n stop: Optional[Sequence[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Generate text from a prompt.\n Args:\n prompt: The prompt to generate text from.\n stop: A list of sequences to stop generation when encountered.\n Returns:\n The generated text.\n Example:\n .. code-block:: python\n response = llm(\"Tell me a joke.\")\n \"\"\"\n text = []\n _run_manager = run_manager or CallbackManagerForLLMRun.get_noop_manager()\n for chunk in self.client(prompt, stop=stop, stream=True):\n text.append(chunk)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"} +{"id": "dbcbb4ae085d-2", "text": "text.append(chunk)\n _run_manager.on_llm_new_token(chunk, verbose=self.verbose)\n return \"\".join(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ctransformers.html"} +{"id": "953950b19c41-0", "text": "Source code for langchain.llms.huggingface_endpoint\n\"\"\"Wrapper around HuggingFace APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\n[docs]class HuggingFaceEndpoint(LLM):\n \"\"\"Wrapper around HuggingFaceHub Inference Endpoints.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable 
``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation` and `text2text-generation` for now.\n Example:\n .. code-block:: python\n from langchain.llms import HuggingFaceEndpoint\n endpoint_url = (\n \"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud\"\n )\n hf = HuggingFaceEndpoint(\n endpoint_url=endpoint_url,\n huggingfacehub_api_token=\"my-api-key\"\n )\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"Endpoint URL to use.\"\"\"\n task: Optional[str] = None\n \"\"\"Task to call the model with.\n Should be a task that returns `generated_text` or `summary_text`.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"} +{"id": "953950b19c41-1", "text": "huggingfacehub_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingfacehub_api_token = get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.hf_api import HfApi\n try:\n HfApi(\n endpoint=\"https://huggingface.co\", # Can be a Private Hub endpoint.\n token=huggingfacehub_api_token,\n ).whoami()\n except Exception as e:\n raise ValueError(\n \"Could not authenticate with huggingface_hub. \"\n \"Please check your API token.\"\n ) from e\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. 
\"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n values[\"huggingfacehub_api_token\"] = huggingfacehub_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_url\": self.endpoint_url, \"task\": self.task},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"huggingface_endpoint\"\n def _call(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"} +{"id": "953950b19c41-2", "text": "return \"huggingface_endpoint\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to HuggingFace Hub's inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = hf(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n # payload samples\n params = {**_model_kwargs, **kwargs}\n parameter_payload = {\"inputs\": prompt, \"parameters\": params}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"Bearer {self.huggingfacehub_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n response = requests.post(\n self.endpoint_url, headers=headers, json=parameter_payload\n )\n except requests.exceptions.RequestException as e: # This is the correct syntax\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n generated_text = response.json()\n if \"error\" in generated_text:\n raise ValueError(\n f\"Error raised by inference API: {generated_text['error']}\"\n )\n if self.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = generated_text[0][\"generated_text\"][len(prompt) :]\n elif self.task == \"text2text-generation\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"} +{"id": "953950b19c41-3", "text": "elif self.task == \"text2text-generation\":\n text = generated_text[0][\"generated_text\"]\n elif self.task == \"summarization\":\n text = generated_text[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html"} +{"id": "eccfa8283ddc-0", "text": "Source code for langchain.llms.aviary\n\"\"\"Wrapper around Aviary\"\"\"\nimport dataclasses\nimport os\nfrom typing import Any, Dict, List, Mapping, Optional, Union, cast\nimport requests\nfrom 
pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nTIMEOUT = 60\n@dataclasses.dataclass\nclass AviaryBackend:\n backend_url: str\n bearer: str\n def __post_init__(self) -> None:\n self.header = {\"Authorization\": self.bearer}\n @classmethod\n def from_env(cls) -> \"AviaryBackend\":\n aviary_url = os.getenv(\"AVIARY_URL\")\n assert aviary_url, \"AVIARY_URL must be set\"\n aviary_token = os.getenv(\"AVIARY_TOKEN\", \"\")\n bearer = f\"Bearer {aviary_token}\" if aviary_token else \"\"\n aviary_url += \"/\" if not aviary_url.endswith(\"/\") else \"\"\n return cls(aviary_url, bearer)\ndef get_models() -> List[str]:\n \"\"\"List available models\"\"\"\n backend = AviaryBackend.from_env()\n request_url = backend.backend_url + \"-/routes\"\n response = requests.get(request_url, headers=backend.header, timeout=TIMEOUT)\n try:\n result = response.json()\n except requests.JSONDecodeError as e:\n raise RuntimeError(\n f\"Error decoding JSON from {request_url}. 
Text response: {response.text}\"\n ) from e\n result = sorted(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"} +{"id": "eccfa8283ddc-1", "text": ") from e\n result = sorted(\n [k.lstrip(\"/\").replace(\"--\", \"/\") for k in result.keys() if \"--\" in k]\n )\n return result\ndef get_completions(\n model: str,\n prompt: str,\n use_prompt_format: bool = True,\n version: str = \"\",\n) -> Dict[str, Union[str, float, int]]:\n \"\"\"Get completions from Aviary models.\"\"\"\n backend = AviaryBackend.from_env()\n url = backend.backend_url + model.replace(\"/\", \"--\") + \"/\" + version + \"query\"\n response = requests.post(\n url,\n headers=backend.header,\n json={\"prompt\": prompt, \"use_prompt_format\": use_prompt_format},\n timeout=TIMEOUT,\n )\n try:\n return response.json()\n except requests.JSONDecodeError as e:\n raise RuntimeError(\n f\"Error decoding JSON from {url}. Text response: {response.text}\"\n ) from e\n[docs]class Aviary(LLM):\n \"\"\"Allows you to use an Aviary.\n Aviary is a backend for hosted models. You can\n find out more about aviary at\n http://github.com/ray-project/aviary\n To get a list of the models supported on an\n aviary, follow the instructions on the web site to\n install the aviary CLI and then use:\n `aviary models`\n The AVIARY_URL and AVIARY_TOKEN environment variables must be set.\n Example:\n .. 
code-block:: python\n from langchain.llms import Aviary\n os.environ[\"AVIARY_URL\"] = \"\"\n os.environ[\"AVIARY_TOKEN\"] = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"} +{"id": "eccfa8283ddc-2", "text": "os.environ[\"AVIARY_TOKEN\"] = \"\"\n light = Aviary(model='amazon/LightGPT')\n output = light('How do you make fried rice?')\n \"\"\"\n model: str = \"amazon/LightGPT\"\n aviary_url: Optional[str] = None\n aviary_token: Optional[str] = None\n # If True the prompt template for the model will be ignored.\n use_prompt_format: bool = True\n # API version to use for Aviary\n version: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n aviary_url = get_from_dict_or_env(values, \"aviary_url\", \"AVIARY_URL\")\n aviary_token = get_from_dict_or_env(values, \"aviary_token\", \"AVIARY_TOKEN\")\n # Set env variables for aviary sdk\n os.environ[\"AVIARY_URL\"] = aviary_url\n os.environ[\"AVIARY_TOKEN\"] = aviary_token\n try:\n aviary_models = get_models()\n except requests.exceptions.RequestException as e:\n raise ValueError(e)\n model = values.get(\"model\")\n if model and model not in aviary_models:\n raise ValueError(f\"{aviary_url} does not support model {values['model']}.\")\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_name\": self.model,\n \"aviary_url\": self.aviary_url,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"} +{"id": "eccfa8283ddc-3", "text": "\"aviary_url\": self.aviary_url,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return f\"aviary-{self.model.replace('/', '-')}\"\n def _call(\n self,\n prompt: 
str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Aviary\n Args:\n prompt: The prompt to pass into the model.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = aviary(\"Tell me a joke.\")\n \"\"\"\n kwargs = {\"use_prompt_format\": self.use_prompt_format}\n if self.version:\n kwargs[\"version\"] = self.version\n output = get_completions(\n model=self.model,\n prompt=prompt,\n **kwargs,\n )\n text = cast(str, output[\"generated_text\"])\n if stop:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/aviary.html"} +{"id": "bf2cf988e4d3-0", "text": "Source code for langchain.llms.writer\n\"\"\"Wrapper around Writer APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class Writer(LLM):\n \"\"\"Wrapper around Writer large language models.\n To use, you should have the environment variable ``WRITER_API_KEY`` and\n ``WRITER_ORG_ID`` set with your API key and organization ID respectively.\n Example:\n .. 
code-block:: python\n from langchain import Writer\n writer = Writer(model_id=\"palmyra-base\")\n \"\"\"\n writer_org_id: Optional[str] = None\n \"\"\"Writer organization ID.\"\"\"\n model_id: str = \"palmyra-instruct\"\n \"\"\"Model name to use.\"\"\"\n min_tokens: Optional[int] = None\n \"\"\"Minimum number of tokens to generate.\"\"\"\n max_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to generate.\"\"\"\n temperature: Optional[float] = None\n \"\"\"What sampling temperature to use.\"\"\"\n top_p: Optional[float] = None\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n stop: Optional[List[str]] = None\n \"\"\"Sequences when completion generation will stop.\"\"\"\n presence_penalty: Optional[float] = None\n \"\"\"Penalizes repeated tokens regardless of frequency.\"\"\"\n repetition_penalty: Optional[float] = None\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n best_of: Optional[int] = None\n \"\"\"Generates this many completions server-side and returns the \"best\".\"\"\"\n logprobs: bool = False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/writer.html"} +{"id": "bf2cf988e4d3-1", "text": "logprobs: bool = False\n \"\"\"Whether to return log probabilities.\"\"\"\n n: Optional[int] = None\n \"\"\"How many completions to generate.\"\"\"\n writer_api_key: Optional[str] = None\n \"\"\"Writer API key.\"\"\"\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and organization id exist in environment.\"\"\"\n writer_api_key = get_from_dict_or_env(\n values, \"writer_api_key\", \"WRITER_API_KEY\"\n )\n values[\"writer_api_key\"] = writer_api_key\n writer_org_id = get_from_dict_or_env(values, \"writer_org_id\", \"WRITER_ORG_ID\")\n 
values[\"writer_org_id\"] = writer_org_id\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling Writer API.\"\"\"\n return {\n \"minTokens\": self.min_tokens,\n \"maxTokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"topP\": self.top_p,\n \"stop\": self.stop,\n \"presencePenalty\": self.presence_penalty,\n \"repetitionPenalty\": self.repetition_penalty,\n \"bestOf\": self.best_of,\n \"logprobs\": self.logprobs,\n \"n\": self.n,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/writer.html"} +{"id": "bf2cf988e4d3-2", "text": "\"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id, \"writer_org_id\": self.writer_org_id},\n **self._default_params,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"writer\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Writer's completions endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = writer(\"Tell me a joke.\")\n \"\"\"\n if self.base_url is not None:\n base_url = self.base_url\n else:\n base_url = (\n \"https://enterprise-api.writer.com/llm\"\n f\"/organization/{self.writer_org_id}\"\n f\"/model/{self.model_id}/completions\"\n )\n params = {**self._default_params, **kwargs}\n response = requests.post(\n url=base_url,\n headers={\n \"Authorization\": f\"{self.writer_api_key}\",\n \"Content-Type\": \"application/json\",\n \"Accept\": \"application/json\",\n },\n json={\"prompt\": prompt, **params},\n )\n text = response.text\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/writer.html"} +{"id": "bf2cf988e4d3-3", "text": "# are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/writer.html"} +{"id": "4828e0b4e329-0", "text": "Source code for langchain.llms.ai21\n\"\"\"Wrapper around AI21 APIs.\"\"\"\nfrom typing import Any, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nclass AI21PenaltyData(BaseModel):\n \"\"\"Parameters for AI21 penalty data.\"\"\"\n scale: int = 0\n applyToWhitespaces: bool = True\n applyToPunctuations: bool = True\n applyToNumbers: bool = True\n applyToStopwords: bool = True\n applyToEmojis: bool = True\n[docs]class AI21(LLM):\n \"\"\"Wrapper around AI21 large language models.\n To use, you should have the environment variable ``AI21_API_KEY``\n set with your API key.\n Example:\n .. 
code-block:: python\n from langchain.llms import AI21\n ai21 = AI21(model=\"j2-jumbo-instruct\")\n \"\"\"\n model: str = \"j2-jumbo-instruct\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n maxTokens: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n minTokens: int = 0\n \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n topP: float = 1.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n presencePenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens.\"\"\"\n countPenalty: AI21PenaltyData = AI21PenaltyData()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"} +{"id": "4828e0b4e329-1", "text": "countPenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens according to count.\"\"\"\n frequencyPenalty: AI21PenaltyData = AI21PenaltyData()\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n numResults: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n logitBias: Optional[Dict[str, float]] = None\n \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n ai21_api_key: Optional[str] = None\n stop: Optional[List[str]] = None\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n ai21_api_key = get_from_dict_or_env(values, \"ai21_api_key\", \"AI21_API_KEY\")\n values[\"ai21_api_key\"] = ai21_api_key\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling AI21 API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"maxTokens\": self.maxTokens,\n 
\"minTokens\": self.minTokens,\n \"topP\": self.topP,\n \"presencePenalty\": self.presencePenalty.dict(),\n \"countPenalty\": self.countPenalty.dict(),\n \"frequencyPenalty\": self.frequencyPenalty.dict(),\n \"numResults\": self.numResults,\n \"logitBias\": self.logitBias,\n }\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"} +{"id": "4828e0b4e329-2", "text": "\"logitBias\": self.logitBias,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"ai21\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to AI21's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = ai21(\"Tell me a joke.\")\n \"\"\"\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n stop = self.stop\n elif stop is None:\n stop = []\n if self.base_url is not None:\n base_url = self.base_url\n else:\n if self.model in (\"j1-grande-instruct\",):\n base_url = \"https://api.ai21.com/studio/v1/experimental\"\n else:\n base_url = \"https://api.ai21.com/studio/v1\"\n params = {**self._default_params, **kwargs}\n response = requests.post(\n url=f\"{base_url}/{self.model}/complete\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"} +{"id": "4828e0b4e329-3", "text": "response = requests.post(\n url=f\"{base_url}/{self.model}/complete\",\n headers={\"Authorization\": f\"Bearer {self.ai21_api_key}\"},\n json={\"prompt\": prompt, \"stopSequences\": stop, **params},\n )\n if response.status_code != 200:\n optional_detail = response.json().get(\"error\")\n raise ValueError(\n f\"AI21 /complete call failed with status code {response.status_code}.\"\n f\" Details: {optional_detail}\"\n )\n response_json = response.json()\n return response_json[\"completions\"][0][\"data\"][\"text\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/ai21.html"} +{"id": "52d648e51df5-0", "text": "Source code for langchain.llms.clarifai\n\"\"\"Wrapper around Clarifai's APIs.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Clarifai(LLM):\n \"\"\"Wrapper around Clarifai's large language models.\n To use, you should have an account on the Clarifai platform, \n the ``clarifai`` 
python package installed, and the\n environment variable ``CLARIFAI_PAT_KEY`` set with your PAT key, \n or pass it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import Clarifai\n clarifai_llm = Clarifai(clarifai_pat_key=CLARIFAI_PAT_KEY, \\\n user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)\n \"\"\"\n stub: Any #: :meta private:\n metadata: Any\n userDataObject: Any\n model_id: Optional[str] = None\n \"\"\"Model id to use.\"\"\"\n model_version_id: Optional[str] = None\n \"\"\"Model version id to use.\"\"\"\n app_id: Optional[str] = None\n \"\"\"Clarifai application id to use.\"\"\"\n user_id: Optional[str] = None\n \"\"\"Clarifai user id to use.\"\"\"\n clarifai_pat_key: Optional[str] = None\n api_base: str = \"https://api.clarifai.com\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/clarifai.html"} +{"id": "52d648e51df5-1", "text": "api_base: str = \"https://api.clarifai.com\"\n stop: Optional[List[str]] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that we have all required info to access Clarifai\n platform and python package exists in environment.\"\"\"\n values[\"clarifai_pat_key\"] = get_from_dict_or_env(\n values, \"clarifai_pat_key\", \"CLARIFAI_PAT_KEY\"\n )\n user_id = values.get(\"user_id\")\n app_id = values.get(\"app_id\")\n model_id = values.get(\"model_id\")\n if values[\"clarifai_pat_key\"] is None:\n raise ValueError(\"Please provide a clarifai_pat_key.\")\n if user_id is None:\n raise ValueError(\"Please provide a user_id.\")\n if app_id is None:\n raise ValueError(\"Please provide an app_id.\")\n if model_id is None:\n raise ValueError(\"Please provide a model_id.\")\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling the Clarifai API.\"\"\"\n return 
{}\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_id\": self.model_id}}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"clarifai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/clarifai.html"} +{"id": "52d648e51df5-2", "text": "prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any\n ) -> str:\n \"\"\"Call out to Clarifai's PostModelOutputs endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = clarifai_llm(\"Tell me a joke.\")\n \"\"\"\n try:\n from clarifai.auth.helper import ClarifaiAuthHelper\n from clarifai.client import create_stub\n from clarifai_grpc.grpc.api import (\n resources_pb2,\n service_pb2,\n )\n from clarifai_grpc.grpc.api.status import status_code_pb2\n except ImportError:\n raise ImportError(\n \"Could not import clarifai python package. 
\"\n \"Please install it with `pip install clarifai`.\"\n )\n auth = ClarifaiAuthHelper(\n user_id=self.user_id,\n app_id=self.app_id,\n pat=self.clarifai_pat_key,\n base=self.api_base,\n )\n self.userDataObject = auth.get_user_app_id_proto()\n self.stub = create_stub(auth)\n params = self._default_params\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n params[\"stop_sequences\"] = self.stop\n else:\n params[\"stop_sequences\"] = stop\n # The userDataObject is created in the overview and", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/clarifai.html"} +{"id": "52d648e51df5-3", "text": "# The userDataObject is created in the overview and\n # is required when using a PAT\n # If version_id None, Defaults to the latest model version\n post_model_outputs_request = service_pb2.PostModelOutputsRequest(\n user_app_id=self.userDataObject,\n model_id=self.model_id,\n version_id=self.model_version_id,\n inputs=[\n resources_pb2.Input(\n data=resources_pb2.Data(text=resources_pb2.Text(raw=prompt))\n )\n ],\n )\n post_model_outputs_response = self.stub.PostModelOutputs(\n post_model_outputs_request\n )\n if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:\n logger.error(post_model_outputs_response.status)\n raise Exception(\n \"Post model outputs failed, status: \"\n + post_model_outputs_response.status.description\n )\n text = post_model_outputs_response.outputs[0].data.text.raw\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/clarifai.html"} +{"id": "ec84d0b710fb-0", "text": "Source code for langchain.llms.human\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import 
Field\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\ndef _display_prompt(prompt: str) -> None:\n \"\"\"Displays the given prompt to the user.\"\"\"\n print(f\"\\n{prompt}\")\ndef _collect_user_input(\n separator: Optional[str] = None, stop: Optional[List[str]] = None\n) -> str:\n \"\"\"Collects and returns user input as a single string.\"\"\"\n separator = separator or \"\\n\"\n lines = []\n while True:\n line = input()\n if not line:\n break\n lines.append(line)\n if stop and any(seq in line for seq in stop):\n break\n # Combine all lines into a single string\n multi_line_input = separator.join(lines)\n return multi_line_input\n[docs]class HumanInputLLM(LLM):\n \"\"\"\n An LLM wrapper which returns user input as the response.\n \"\"\"\n input_func: Callable = Field(default_factory=lambda: _collect_user_input)\n prompt_func: Callable[[str], None] = Field(default_factory=lambda: _display_prompt)\n separator: str = \"\\n\"\n input_kwargs: Mapping[str, Any] = {}\n prompt_kwargs: Mapping[str, Any] = {}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"\n Returns an empty dictionary as there are no identifying parameters.\n \"\"\"\n return {}\n @property\n def _llm_type(self) -> str:\n \"\"\"Returns the type of LLM.\"\"\"\n return \"human-input\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/human.html"} +{"id": "ec84d0b710fb-1", "text": "\"\"\"Returns the type of LLM.\"\"\"\n return \"human-input\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"\n Displays the prompt to the user and returns their input as a response.\n Args:\n prompt (str): The prompt to be displayed to the user.\n stop (Optional[List[str]]): A list of stop strings.\n run_manager (Optional[CallbackManagerForLLMRun]): 
Currently not used.\n Returns:\n str: The user's input as a response.\n \"\"\"\n self.prompt_func(prompt, **self.prompt_kwargs)\n user_input = self.input_func(\n separator=self.separator, stop=stop, **self.input_kwargs\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the human themselves\n user_input = enforce_stop_tokens(user_input, stop)\n return user_input", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/human.html"} +{"id": "4e6910fbc4ba-0", "text": "Source code for langchain.llms.replicate\n\"\"\"Wrapper around Replicate API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Replicate(LLM):\n \"\"\"Wrapper around Replicate models.\n To use, you should have the ``replicate`` python package installed,\n and the environment variable ``REPLICATE_API_TOKEN`` set with your API token.\n You can find your token here: https://replicate.com/account\n The model param is required, but any other model parameters can also\n be passed in with the format input={model_param: value, ...}\n Example:\n .. 
code-block:: python\n from langchain.llms import Replicate\n replicate = Replicate(model=\"stability-ai/stable-diffusion: \\\n 27b93a2413e7f36cd83da926f365628\\\n 0b2931564ff050bf9575f1fdf9bcd7478\",\n input={\"image_dimensions\": \"512x512\"})\n \"\"\"\n model: str\n input: Dict[str, Any] = Field(default_factory=dict)\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n replicate_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"} +{"id": "4e6910fbc4ba-1", "text": "\"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n replicate_api_token = get_from_dict_or_env(\n values, \"REPLICATE_API_TOKEN\", \"REPLICATE_API_TOKEN\"\n )\n values[\"replicate_api_token\"] = replicate_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n 
\"\"\"Return type of model.\"\"\"\n return \"replicate\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to replicate endpoint.\"\"\"\n try:\n import replicate as replicate_python\n except ImportError:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"} +{"id": "4e6910fbc4ba-2", "text": "try:\n import replicate as replicate_python\n except ImportError:\n raise ImportError(\n \"Could not import replicate python package. \"\n \"Please install it with `pip install replicate`.\"\n )\n # get the model and version\n model_str, version_str = self.model.split(\":\")\n model = replicate_python.models.get(model_str)\n version = model.versions.get(version_str)\n # sort through the openapi schema to get the name of the first input\n input_properties = sorted(\n version.openapi_schema[\"components\"][\"schemas\"][\"Input\"][\n \"properties\"\n ].items(),\n key=lambda item: item[1].get(\"x-order\", 0),\n )\n first_input_name = input_properties[0][0]\n inputs = {first_input_name: prompt, **self.input}\n iterator = replicate_python.run(self.model, input={**inputs, **kwargs})\n return \"\".join([output for output in iterator])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/replicate.html"} +{"id": "afe64ccd5679-0", "text": "Source code for langchain.llms.fake\n\"\"\"Fake LLM wrapper for testing purposes.\"\"\"\nfrom typing import Any, List, Mapping, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\n[docs]class FakeListLLM(LLM):\n \"\"\"Fake LLM wrapper for testing purposes.\"\"\"\n responses: List\n i: int = 0\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"fake-list\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n 
run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Return next response\"\"\"\n response = self.responses[self.i]\n self.i += 1\n return response\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Return next response\"\"\"\n response = self.responses[self.i]\n self.i += 1\n return response\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\"responses\": self.responses}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/fake.html"} +{"id": "b46a9402f8f9-0", "text": "Source code for langchain.llms.stochasticai\n\"\"\"Wrapper around StochasticAI APIs.\"\"\"\nimport logging\nimport time\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class StochasticAI(LLM):\n \"\"\"Wrapper around StochasticAI large language models.\n To use, you should have the environment variable ``STOCHASTICAI_API_KEY``\n set with your API key.\n Example:\n .. 
code-block:: python\n from langchain.llms import StochasticAI\n stochasticai = StochasticAI(api_url=\"\")\n \"\"\"\n api_url: str = \"\"\n \"\"\"Model endpoint to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n stochasticai_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"} +{"id": "b46a9402f8f9-1", "text": "raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n stochasticai_api_key = get_from_dict_or_env(\n values, \"stochasticai_api_key\", \"STOCHASTICAI_API_KEY\"\n )\n values[\"stochasticai_api_key\"] = stochasticai_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.api_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"stochasticai\"\n def _call(\n 
self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to StochasticAI's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = StochasticAI(\"Tell me a joke.\")\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"} +{"id": "b46a9402f8f9-2", "text": "response = StochasticAI(\"Tell me a joke.\")\n \"\"\"\n params = self.model_kwargs or {}\n params = {**params, **kwargs}\n response_post = requests.post(\n url=self.api_url,\n json={\"prompt\": prompt, \"params\": params},\n headers={\n \"apiKey\": f\"{self.stochasticai_api_key}\",\n \"Accept\": \"application/json\",\n \"Content-Type\": \"application/json\",\n },\n )\n response_post.raise_for_status()\n response_post_json = response_post.json()\n completed = False\n while not completed:\n response_get = requests.get(\n url=response_post_json[\"data\"][\"responseUrl\"],\n headers={\n \"apiKey\": f\"{self.stochasticai_api_key}\",\n \"Accept\": \"application/json\",\n \"Content-Type\": \"application/json\",\n },\n )\n response_get.raise_for_status()\n response_get_json = response_get.json()[\"data\"]\n text = response_get_json.get(\"completion\")\n completed = text is not None\n time.sleep(0.5)\n text = text[0]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/stochasticai.html"} +{"id": "89a879b6d8d7-0", "text": "Source code for langchain.llms.gpt4all\n\"\"\"Wrapper for the GPT4All model.\"\"\"\nfrom functools import partial\nfrom typing import Any, Dict, List, Mapping, Optional, Set\nfrom 
pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\n[docs]class GPT4All(LLM):\n r\"\"\"Wrapper around GPT4All language models.\n To use, you should have the ``gpt4all`` python package installed, the\n pre-trained model file, and the model's config information.\n Example:\n .. code-block:: python\n from langchain.llms import GPT4All\n model = GPT4All(model=\"./models/gpt4all-model.bin\", n_ctx=512, n_threads=8)\n # Simplest invocation\n response = model(\"Once upon a time, \")\n \"\"\"\n model: str\n \"\"\"Path to the pre-trained GPT4All model file.\"\"\"\n backend: Optional[str] = Field(None, alias=\"backend\")\n n_ctx: int = Field(512, alias=\"n_ctx\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into. \n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(0, alias=\"seed\")\n \"\"\"Seed. 
If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(False, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} +{"id": "89a879b6d8d7-1", "text": "logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n embedding: bool = Field(False, alias=\"embedding\")\n \"\"\"Use embedding mode only.\"\"\"\n n_threads: Optional[int] = Field(4, alias=\"n_threads\")\n \"\"\"Number of threads to use.\"\"\"\n n_predict: Optional[int] = 256\n \"\"\"The maximum number of tokens to generate.\"\"\"\n temp: Optional[float] = 0.8\n \"\"\"The temperature to use for sampling.\"\"\"\n top_p: Optional[float] = 0.95\n \"\"\"The top-p value to use for sampling.\"\"\"\n top_k: Optional[int] = 40\n \"\"\"The top-k value to use for sampling.\"\"\"\n echo: Optional[bool] = False\n \"\"\"Whether to echo the prompt.\"\"\"\n stop: Optional[List[str]] = []\n \"\"\"A list of strings to stop generation when encountered.\"\"\"\n repeat_last_n: Optional[int] = 64\n \"Last n tokens to penalize\"\n repeat_penalty: Optional[float] = 1.3\n \"\"\"The penalty to apply to repeated tokens.\"\"\"\n n_batch: int = Field(1, alias=\"n_batch\")\n \"\"\"Batch size for prompt processing.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n context_erase: float = 0.5\n \"\"\"Leave (n_ctx * context_erase) tokens\n starting from beginning if the context has run out.\"\"\"\n allow_download: bool = False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} +{"id": "89a879b6d8d7-2", "text": "starting from 
beginning if the context has run out.\"\"\"\n allow_download: bool = False\n \"\"\"If model does not exist in ~/.cache/gpt4all/, download it.\"\"\"\n client: Any = None #: :meta private:\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @staticmethod\n def _model_param_names() -> Set[str]:\n return {\n \"n_ctx\",\n \"n_predict\",\n \"top_k\",\n \"top_p\",\n \"temp\",\n \"n_batch\",\n \"repeat_penalty\",\n \"repeat_last_n\",\n \"context_erase\",\n }\n def _default_params(self) -> Dict[str, Any]:\n return {\n \"n_ctx\": self.n_ctx,\n \"n_predict\": self.n_predict,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"temp\": self.temp,\n \"n_batch\": self.n_batch,\n \"repeat_penalty\": self.repeat_penalty,\n \"repeat_last_n\": self.repeat_last_n,\n \"context_erase\": self.context_erase,\n }\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in the environment.\"\"\"\n try:\n from gpt4all import GPT4All as GPT4AllModel\n except ImportError:\n raise ImportError(\n \"Could not import gpt4all python package. 
\"\n \"Please install it with `pip install gpt4all`.\"\n )\n full_path = values[\"model\"]\n model_path, delimiter, model_name = full_path.rpartition(\"/\")\n model_path += delimiter", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} +{"id": "89a879b6d8d7-3", "text": "model_path += delimiter\n values[\"client\"] = GPT4AllModel(\n model_name,\n model_path=model_path or None,\n model_type=values[\"backend\"],\n allow_download=values[\"allow_download\"],\n )\n if values[\"n_threads\"] is not None:\n # set n_threads\n values[\"client\"].model.set_thread_count(values[\"n_threads\"])\n try:\n values[\"backend\"] = values[\"client\"].model_type\n except AttributeError:\n # The below is for compatibility with GPT4All Python bindings <= 0.2.3.\n values[\"backend\"] = values[\"client\"].model.model_type\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model\": self.model,\n **self._default_params(),\n **{\n k: v for k, v in self.__dict__.items() if k in self._model_param_names()\n },\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return the type of llm.\"\"\"\n return \"gpt4all\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n r\"\"\"Call out to GPT4All's generate method.\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} +{"id": "89a879b6d8d7-4", "text": "The string generated by the model.\n Example:\n .. 
code-block:: python\n prompt = \"Once upon a time, \"\n response = model(prompt, n_predict=55)\n \"\"\"\n text_callback = None\n if run_manager:\n text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose)\n text = \"\"\n params = {**self._default_params(), **kwargs}\n for token in self.client.generate(prompt, **params):\n if text_callback:\n text_callback(token)\n text += token\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/gpt4all.html"} +{"id": "9152b5755288-0", "text": "Source code for langchain.llms.cerebriumai\n\"\"\"Wrapper around CerebriumAI API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class CerebriumAI(LLM):\n \"\"\"Wrapper around CerebriumAI large language models.\n To use, you should have the ``cerebrium`` python package installed, and the\n environment variable ``CEREBRIUMAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
code-block:: python\n from langchain.llms import CerebriumAI\n cerebrium = CerebriumAI(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n cerebriumai_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"} +{"id": "9152b5755288-1", "text": "all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cerebriumai_api_key = get_from_dict_or_env(\n values, \"cerebriumai_api_key\", \"CEREBRIUMAI_API_KEY\"\n )\n values[\"cerebriumai_api_key\"] = cerebriumai_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of 
llm.\"\"\"\n return \"cerebriumai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to CerebriumAI endpoint.\"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"} +{"id": "9152b5755288-2", "text": "\"\"\"Call to CerebriumAI endpoint.\"\"\"\n try:\n from cerebrium import model_api_request\n except ImportError:\n raise ValueError(\n \"Could not import cerebrium python package. \"\n \"Please install it with `pip install cerebrium`.\"\n )\n params = self.model_kwargs or {}\n response = model_api_request(\n self.endpoint_url,\n {\"prompt\": prompt, **params, **kwargs},\n self.cerebriumai_api_key,\n )\n text = response[\"data\"][\"result\"]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html"} +{"id": "28cd4760119c-0", "text": "Source code for langchain.llms.huggingface_pipeline\n\"\"\"Wrapper around HuggingFace Pipeline APIs.\"\"\"\nimport importlib.util\nimport logging\nfrom typing import Any, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nDEFAULT_MODEL_ID = \"gpt2\"\nDEFAULT_TASK = \"text-generation\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\nlogger = logging.getLogger(__name__)\n[docs]class HuggingFacePipeline(LLM):\n \"\"\"Wrapper around HuggingFace Pipeline API.\n To use, you should have the ``transformers`` python package installed.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example using from_model_id:\n .. 
code-block:: python\n from langchain.llms import HuggingFacePipeline\n hf = HuggingFacePipeline.from_model_id(\n model_id=\"gpt2\",\n task=\"text-generation\",\n pipeline_kwargs={\"max_new_tokens\": 10},\n )\n Example passing pipeline in directly:\n .. code-block:: python\n from langchain.llms import HuggingFacePipeline\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n model_id = \"gpt2\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n pipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10\n )\n hf = HuggingFacePipeline(pipeline=pipe)\n \"\"\"\n pipeline: Any #: :meta private:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"} +{"id": "28cd4760119c-1", "text": "\"\"\"\n pipeline: Any #: :meta private:\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Model name to use.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments passed to the model.\"\"\"\n pipeline_kwargs: Optional[dict] = None\n \"\"\"Key word arguments passed to the pipeline.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n[docs] @classmethod\n def from_model_id(\n cls,\n model_id: str,\n task: str,\n device: int = -1,\n model_kwargs: Optional[dict] = None,\n pipeline_kwargs: Optional[dict] = None,\n **kwargs: Any,\n ) -> LLM:\n \"\"\"Construct the pipeline object from model_id and task.\"\"\"\n try:\n from transformers import (\n AutoModelForCausalLM,\n AutoModelForSeq2SeqLM,\n AutoTokenizer,\n )\n from transformers import pipeline as hf_pipeline\n except ImportError:\n raise ValueError(\n \"Could not import transformers python package. 
\"\n \"Please install it with `pip install transformers`.\"\n )\n _model_kwargs = model_kwargs or {}\n tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)\n try:\n if task == \"text-generation\":\n model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)\n elif task in (\"text2text-generation\", \"summarization\"):\n model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)\n else:\n raise ValueError(\n f\"Got invalid task {task}, \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"} +{"id": "28cd4760119c-2", "text": "else:\n raise ValueError(\n f\"Got invalid task {task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n except ImportError as e:\n raise ValueError(\n f\"Could not load the {task} model due to missing dependencies.\"\n ) from e\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available \"\n \"GPUs for execution. 
deviceId is -1 (default) for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n if \"trust_remote_code\" in _model_kwargs:\n _model_kwargs = {\n k: v for k, v in _model_kwargs.items() if k != \"trust_remote_code\"\n }\n _pipeline_kwargs = pipeline_kwargs or {}\n pipeline = hf_pipeline(\n task=task,\n model=model,\n tokenizer=tokenizer,\n device=device,\n model_kwargs=_model_kwargs,\n **_pipeline_kwargs,\n )\n if pipeline.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n return cls(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"} +{"id": "28cd4760119c-3", "text": ")\n return cls(\n pipeline=pipeline,\n model_id=model_id,\n model_kwargs=_model_kwargs,\n pipeline_kwargs=_pipeline_kwargs,\n **kwargs,\n )\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_id\": self.model_id,\n \"model_kwargs\": self.model_kwargs,\n \"pipeline_kwargs\": self.pipeline_kwargs,\n }\n @property\n def _llm_type(self) -> str:\n return \"huggingface_pipeline\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n response = self.pipeline(prompt)\n if self.pipeline.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = response[0][\"generated_text\"][len(prompt) :]\n elif self.pipeline.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif self.pipeline.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # 
stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html"} +{"id": "b4f4da8343ad-0", "text": "Source code for langchain.llms.openai\n\"\"\"Wrapper around OpenAI APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport sys\nimport warnings\nfrom typing import (\n AbstractSet,\n Any,\n Callable,\n Collection,\n Dict,\n Generator,\n List,\n Literal,\n Mapping,\n Optional,\n Set,\n Tuple,\n Union,\n)\nfrom pydantic import Field, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import BaseLLM\nfrom langchain.schema import Generation, LLMResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef update_token_usage(\n keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]\n) -> None:\n \"\"\"Update token usage.\"\"\"\n _keys_to_use = keys.intersection(response[\"usage\"])\n for _key in _keys_to_use:\n if _key not in token_usage:\n token_usage[_key] = response[\"usage\"][_key]\n else:\n token_usage[_key] += response[\"usage\"][_key]\ndef _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:\n \"\"\"Update response from the stream response.\"\"\"\n response[\"choices\"][0][\"text\"] += stream_response[\"choices\"][0][\"text\"]\n response[\"choices\"][0][\"finish_reason\"] = stream_response[\"choices\"][0][\n \"finish_reason\"\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-1", "text": "\"finish_reason\"\n ]\n response[\"choices\"][0][\"logprobs\"] = stream_response[\"choices\"][0][\"logprobs\"]\ndef _streaming_response_template() -> 
Dict[str, Any]:\n return {\n \"choices\": [\n {\n \"text\": \"\",\n \"finish_reason\": None,\n \"logprobs\": None,\n }\n ]\n }\ndef _create_retry_decorator(llm: Union[BaseOpenAI, OpenAIChat]) -> Callable[[Any], Any]:\n import openai\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(llm.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return llm.client.create(**kwargs)\n return _completion_with_retry(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-2", "text": "return llm.client.create(**kwargs)\n return _completion_with_retry(**kwargs)\nasync def acompletion_with_retry(\n llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any\n) -> Any:\n \"\"\"Use tenacity to retry the async completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n async def _completion_with_retry(**kwargs: Any) -> Any:\n # Use OpenAI's async api https://github.com/openai/openai-python#async-api\n return await llm.client.acreate(**kwargs)\n return await _completion_with_retry(**kwargs)\nclass BaseOpenAI(BaseLLM):\n \"\"\"Wrapper around OpenAI large language models.\"\"\"\n 
@property\n def lc_secrets(self) -> Dict[str, str]:\n return {\"openai_api_key\": \"OPENAI_API_KEY\"}\n @property\n def lc_serializable(self) -> bool:\n return True\n client: Any #: :meta private:\n model_name: str = Field(\"text-davinci-003\", alias=\"model\")\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n max_tokens: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\n -1 returns as many tokens as possible given the prompt and\n the models maximal context size.\"\"\"\n top_p: float = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n frequency_penalty: float = 0\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n presence_penalty: float = 0\n \"\"\"Penalizes repeated tokens.\"\"\"\n n: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-3", "text": "\"\"\"How many completions to generate for each prompt.\"\"\"\n best_of: int = 1\n \"\"\"Generates best_of completions server-side and returns the \"best\".\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n openai_api_key: Optional[str] = None\n openai_api_base: Optional[str] = None\n openai_organization: Optional[str] = None\n # to support explicit proxy for OpenAI\n openai_proxy: Optional[str] = None\n batch_size: int = 20\n \"\"\"Batch size to use when passing multiple documents to generate.\"\"\"\n request_timeout: Optional[Union[float, Tuple[float, float]]] = None\n \"\"\"Timeout for requests to OpenAI completion API. 
Default is 600 seconds.\"\"\"\n logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)\n \"\"\"Adjust the probability of specific tokens being generated.\"\"\"\n max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set()\n \"\"\"Set of special tokens that are allowed.\"\"\"\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\"\n \"\"\"Set of special tokens that are not allowed.\"\"\"\n tiktoken_model_name: Optional[str] = None\n \"\"\"The model name to pass to tiktoken when using this class. \n Tiktoken is used to count the number of tokens in documents to constrain \n them to be under a certain limit. By default, when set to None, this will", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-4", "text": "be the same as the embedding model name. However, there are some cases \n where you may want to use this Embedding class with a model name not \n supported by tiktoken. This can include when using Azure embeddings or \n when using one of the many model providers that expose an OpenAI-like \n API but with different models. In those cases, in order to avoid erroring \n when tiktoken is called, you can specify a model name to use here.\"\"\"\n def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]: # type: ignore\n \"\"\"Initialize the OpenAI object.\"\"\"\n model_name = data.get(\"model_name\", \"\")\n if model_name.startswith(\"gpt-3.5-turbo\") or model_name.startswith(\"gpt-4\"):\n warnings.warn(\n \"You are trying to use a chat model. This way of initializing it is \"\n \"no longer supported. 
Instead, please use: \"\n \"`from langchain.chat_models import ChatOpenAI`\"\n )\n return OpenAIChat(**data)\n return super().__new__(cls)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n allow_population_by_field_name = True\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = cls.all_required_field_names()\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n if field_name not in all_required_field_names:\n logger.warning(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-5", "text": "if field_name not in all_required_field_names:\n logger.warning(\n f\"\"\"WARNING! {field_name} is not default parameter.\n {field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n invalid_model_kwargs = all_required_field_names.intersection(extra.keys())\n if invalid_model_kwargs:\n raise ValueError(\n f\"Parameters {invalid_model_kwargs} should be specified explicitly. 
\"\n f\"Instead they were passed in as part of `model_kwargs` parameter.\"\n )\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n try:\n import openai\n values[\"client\"] = openai.Completion\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-6", "text": "\"Please install it with `pip install openai`.\"\n )\n if values[\"streaming\"] and values[\"n\"] > 1:\n raise ValueError(\"Cannot stream results when n > 1.\")\n if values[\"streaming\"] and values[\"best_of\"] > 1:\n raise ValueError(\"Cannot stream results when best_of > 1.\")\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_tokens\": self.max_tokens,\n \"top_p\": self.top_p,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"n\": self.n,\n \"request_timeout\": self.request_timeout,\n \"logit_bias\": self.logit_bias,\n }\n # Azure gpt-35-turbo doesn't support best_of\n # don't specify best_of if it is 1\n if self.best_of > 1:\n normal_params[\"best_of\"] = 
self.best_of\n return {**normal_params, **self.model_kwargs}\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Call out to OpenAI's endpoint with k unique prompts.\n Args:\n prompts: The prompts to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The full LLM output.\n Example:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-7", "text": "The full LLM output.\n Example:\n .. code-block:: python\n response = openai.generate([\"Tell me a joke.\"])\n \"\"\"\n # TODO: write a unit test for this\n params = self._invocation_params\n params = {**params, **kwargs}\n sub_prompts = self.get_sub_prompts(params, prompts, stop)\n choices = []\n token_usage: Dict[str, int] = {}\n # Get the token usage from the response.\n # Includes prompt, completion, and total tokens used.\n _keys = {\"completion_tokens\", \"prompt_tokens\", \"total_tokens\"}\n for _prompts in sub_prompts:\n if self.streaming:\n if len(_prompts) > 1:\n raise ValueError(\"Cannot stream results with multiple prompts.\")\n params[\"stream\"] = True\n response = _streaming_response_template()\n for stream_resp in completion_with_retry(\n self, prompt=_prompts, **params\n ):\n if run_manager:\n run_manager.on_llm_new_token(\n stream_resp[\"choices\"][0][\"text\"],\n verbose=self.verbose,\n logprobs=stream_resp[\"choices\"][0][\"logprobs\"],\n )\n _update_response(response, stream_resp)\n choices.extend(response[\"choices\"])\n else:\n response = completion_with_retry(self, prompt=_prompts, **params)\n choices.extend(response[\"choices\"])\n if not self.streaming:\n # Can't update token usage if streaming\n update_token_usage(_keys, response, token_usage)\n return self.create_llm_result(choices, prompts, token_usage)\n async def _agenerate(\n self,\n 
prompts: List[str],\n stop: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-8", "text": "prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Call out to OpenAI's endpoint async with k unique prompts.\"\"\"\n params = self._invocation_params\n params = {**params, **kwargs}\n sub_prompts = self.get_sub_prompts(params, prompts, stop)\n choices = []\n token_usage: Dict[str, int] = {}\n # Get the token usage from the response.\n # Includes prompt, completion, and total tokens used.\n _keys = {\"completion_tokens\", \"prompt_tokens\", \"total_tokens\"}\n for _prompts in sub_prompts:\n if self.streaming:\n if len(_prompts) > 1:\n raise ValueError(\"Cannot stream results with multiple prompts.\")\n params[\"stream\"] = True\n response = _streaming_response_template()\n async for stream_resp in await acompletion_with_retry(\n self, prompt=_prompts, **params\n ):\n if run_manager:\n await run_manager.on_llm_new_token(\n stream_resp[\"choices\"][0][\"text\"],\n verbose=self.verbose,\n logprobs=stream_resp[\"choices\"][0][\"logprobs\"],\n )\n _update_response(response, stream_resp)\n choices.extend(response[\"choices\"])\n else:\n response = await acompletion_with_retry(self, prompt=_prompts, **params)\n choices.extend(response[\"choices\"])\n if not self.streaming:\n # Can't update token usage if streaming\n update_token_usage(_keys, response, token_usage)\n return self.create_llm_result(choices, prompts, token_usage)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-9", "text": "return self.create_llm_result(choices, prompts, token_usage)\n def get_sub_prompts(\n self,\n params: Dict[str, Any],\n prompts: List[str],\n stop: Optional[List[str]] = None,\n ) -> List[List[str]]:\n \"\"\"Get the sub prompts for 
llm call.\"\"\"\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n if params[\"max_tokens\"] == -1:\n if len(prompts) != 1:\n raise ValueError(\n \"max_tokens set to -1 not supported for multiple inputs.\"\n )\n params[\"max_tokens\"] = self.max_tokens_for_prompt(prompts[0])\n sub_prompts = [\n prompts[i : i + self.batch_size]\n for i in range(0, len(prompts), self.batch_size)\n ]\n return sub_prompts\n def create_llm_result(\n self, choices: Any, prompts: List[str], token_usage: Dict[str, int]\n ) -> LLMResult:\n \"\"\"Create the LLMResult from the choices and prompts.\"\"\"\n generations = []\n for i, _ in enumerate(prompts):\n sub_choices = choices[i * self.n : (i + 1) * self.n]\n generations.append(\n [\n Generation(\n text=choice[\"text\"],\n generation_info=dict(\n finish_reason=choice.get(\"finish_reason\"),\n logprobs=choice.get(\"logprobs\"),\n ),\n )\n for choice in sub_choices\n ]\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-10", "text": "),\n )\n for choice in sub_choices\n ]\n )\n llm_output = {\"token_usage\": token_usage, \"model_name\": self.model_name}\n return LLMResult(generations=generations, llm_output=llm_output)\n def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:\n \"\"\"Call OpenAI with streaming flag and return the resulting generator.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n Args:\n prompt: The prompts to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens from OpenAI.\n Example:\n .. 
code-block:: python\n generator = openai.stream(\"Tell me a joke.\")\n for token in generator:\n yield token\n \"\"\"\n params = self.prep_streaming_params(stop)\n generator = self.client.create(prompt=prompt, **params)\n return generator\n def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:\n \"\"\"Prepare the params for streaming.\"\"\"\n params = self._invocation_params\n if \"best_of\" in params and params[\"best_of\"] != 1:\n raise ValueError(\"OpenAI only supports best_of == 1 for streaming\")\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n params[\"stream\"] = True\n return params\n @property\n def _invocation_params(self) -> Dict[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-11", "text": "@property\n def _invocation_params(self) -> Dict[str, Any]:\n \"\"\"Get the parameters used to invoke the model.\"\"\"\n openai_creds: Dict[str, Any] = {\n \"api_key\": self.openai_api_key,\n \"api_base\": self.openai_api_base,\n \"organization\": self.openai_organization,\n }\n if self.openai_proxy:\n import openai\n openai.proxy = {\"http\": self.openai_proxy, \"https\": self.openai_proxy} # type: ignore[assignment] # noqa: E501\n return {**openai_creds, **self._default_params}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"openai\"\n def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the token IDs using the tiktoken package.\"\"\"\n # tiktoken NOT supported for Python < 3.8\n if sys.version_info[1] < 8:\n return super().get_num_tokens(text)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import 
tiktoken python package. \"\n \"This is needed in order to calculate get_num_tokens. \"\n \"Please install it with `pip install tiktoken`.\"\n )\n model_name = self.tiktoken_model_name or self.model_name\n try:\n enc = tiktoken.encoding_for_model(model_name)\n except KeyError:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-12", "text": "enc = tiktoken.encoding_for_model(model_name)\n except KeyError:\n logger.warning(\"Warning: model not found. Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n enc = tiktoken.get_encoding(model)\n return enc.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )\n @staticmethod\n def modelname_to_contextsize(modelname: str) -> int:\n \"\"\"Calculate the maximum number of tokens possible to generate for a model.\n Args:\n modelname: The modelname we want to know the context size for.\n Returns:\n The maximum context size\n Example:\n .. 
code-block:: python\n max_tokens = openai.modelname_to_contextsize(\"text-davinci-003\")\n \"\"\"\n model_token_mapping = {\n \"gpt-4\": 8192,\n \"gpt-4-0314\": 8192,\n \"gpt-4-0613\": 8192,\n \"gpt-4-32k\": 32768,\n \"gpt-4-32k-0314\": 32768,\n \"gpt-4-32k-0613\": 32768,\n \"gpt-3.5-turbo\": 4096,\n \"gpt-3.5-turbo-0301\": 4096,\n \"gpt-3.5-turbo-0613\": 4096,\n \"gpt-3.5-turbo-16k\": 16385,\n \"gpt-3.5-turbo-16k-0613\": 16385,\n \"text-ada-001\": 2049,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-13", "text": "\"text-ada-001\": 2049,\n \"ada\": 2049,\n \"text-babbage-001\": 2049,\n \"babbage\": 2049,\n \"text-curie-001\": 2049,\n \"curie\": 2049,\n \"davinci\": 2049,\n \"text-davinci-003\": 4097,\n \"text-davinci-002\": 4097,\n \"code-davinci-002\": 8001,\n \"code-davinci-001\": 8001,\n \"code-cushman-002\": 2048,\n \"code-cushman-001\": 2048,\n }\n # handling finetuned models\n if \"ft-\" in modelname:\n modelname = modelname.split(\":\")[0]\n context_size = model_token_mapping.get(modelname, None)\n if context_size is None:\n raise ValueError(\n f\"Unknown model: {modelname}. Please provide a valid OpenAI model name. \"\n \"Known models are: \" + \", \".join(model_token_mapping.keys())\n )\n return context_size\n @property\n def max_context_size(self) -> int:\n \"\"\"Get max context size for this model.\"\"\"\n return self.modelname_to_contextsize(self.model_name)\n def max_tokens_for_prompt(self, prompt: str) -> int:\n \"\"\"Calculate the maximum number of tokens possible to generate for a prompt.\n Args:\n prompt: The prompt to pass into the model.\n Returns:\n The maximum number of tokens to generate for a prompt.\n Example:\n .. 
code-block:: python\n max_tokens = openai.max_token_for_prompt(\"Tell me a joke.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-14", "text": "max_tokens = openai.max_token_for_prompt(\"Tell me a joke.\")\n \"\"\"\n num_tokens = self.get_num_tokens(prompt)\n return self.max_context_size - num_tokens\n[docs]class OpenAI(BaseOpenAI):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import OpenAI\n openai = OpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n return {**{\"model\": self.model_name}, **super()._invocation_params}\n[docs]class AzureOpenAI(BaseOpenAI):\n \"\"\"Wrapper around Azure-specific OpenAI large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
code-block:: python\n from langchain.llms import AzureOpenAI\n openai = AzureOpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n deployment_name: str = \"\"\n \"\"\"Deployment name to use.\"\"\"\n openai_api_type: str = \"azure\"\n openai_api_version: str = \"\"\n @root_validator()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-15", "text": "openai_api_version: str = \"\"\n @root_validator()\n def validate_azure_settings(cls, values: Dict) -> Dict:\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n \"OPENAI_API_VERSION\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\n **{\"deployment_name\": self.deployment_name},\n **super()._identifying_params,\n }\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n openai_params = {\n \"engine\": self.deployment_name,\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n return {**openai_params, **super()._invocation_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"azure\"\n[docs]class OpenAIChat(BaseLLM):\n \"\"\"Wrapper around OpenAI Chat large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import OpenAIChat", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-16", "text": ".. 
code-block:: python\n from langchain.llms import OpenAIChat\n openaichat = OpenAIChat(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n client: Any #: :meta private:\n model_name: str = \"gpt-3.5-turbo\"\n \"\"\"Model name to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n openai_api_key: Optional[str] = None\n openai_api_base: Optional[str] = None\n # to support explicit proxy for OpenAI\n openai_proxy: Optional[str] = None\n max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n prefix_messages: List = Field(default_factory=list)\n \"\"\"Series of messages for Chat input.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n allowed_special: Union[Literal[\"all\"], AbstractSet[str]] = set()\n \"\"\"Set of special tokens that are allowed.\"\"\"\n disallowed_special: Union[Literal[\"all\"], Collection[str]] = \"all\"\n \"\"\"Set of special tokens that are not allowed.\"\"\"\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-17", "text": "raise ValueError(f\"Found {field_name} supplied twice.\")\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n openai_api_key = 
get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n openai_api_base = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n openai_proxy = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n openai_organization = get_from_dict_or_env(\n values, \"openai_organization\", \"OPENAI_ORGANIZATION\", default=\"\"\n )\n try:\n import openai\n openai.api_key = openai_api_key\n if openai_api_base:\n openai.api_base = openai_api_base\n if openai_organization:\n openai.organization = openai_organization\n if openai_proxy:\n openai.proxy = {\"http\": openai_proxy, \"https\": openai_proxy} # type: ignore[assignment] # noqa: E501\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-18", "text": "\"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n warnings.warn(\n \"You are trying to use a chat model. This way of initializing it is \"\n \"no longer supported. 
Instead, please use: \"\n \"`from langchain.chat_models import ChatOpenAI`\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n return self.model_kwargs\n def _get_chat_params(\n self, prompts: List[str], stop: Optional[List[str]] = None\n ) -> Tuple:\n if len(prompts) > 1:\n raise ValueError(\n f\"OpenAIChat currently only supports single prompt, got {prompts}\"\n )\n messages = self.prefix_messages + [{\"role\": \"user\", \"content\": prompts[0]}]\n params: Dict[str, Any] = {**{\"model\": self.model_name}, **self._default_params}\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n if params.get(\"max_tokens\") == -1:\n # for ChatGPT api, omitting max_tokens is equivalent to having no limit\n del params[\"max_tokens\"]\n return messages, params\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-19", "text": "run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n messages, params = self._get_chat_params(prompts, stop)\n params = {**params, **kwargs}\n if self.streaming:\n response = \"\"\n params[\"stream\"] = True\n for stream_resp in completion_with_retry(self, messages=messages, **params):\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n response += token\n if run_manager:\n run_manager.on_llm_new_token(\n token,\n )\n return LLMResult(\n generations=[[Generation(text=response)]],\n )\n else:\n full_response = completion_with_retry(self, messages=messages, **params)\n llm_output = {\n \"token_usage\": full_response[\"usage\"],\n \"model_name\": self.model_name,\n }\n return LLMResult(\n generations=[\n 
[Generation(text=full_response[\"choices\"][0][\"message\"][\"content\"])]\n ],\n llm_output=llm_output,\n )\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n messages, params = self._get_chat_params(prompts, stop)\n params = {**params, **kwargs}\n if self.streaming:\n response = \"\"\n params[\"stream\"] = True\n async for stream_resp in await acompletion_with_retry(\n self, messages=messages, **params\n ):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-20", "text": "self, messages=messages, **params\n ):\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n response += token\n if run_manager:\n await run_manager.on_llm_new_token(\n token,\n )\n return LLMResult(\n generations=[[Generation(text=response)]],\n )\n else:\n full_response = await acompletion_with_retry(\n self, messages=messages, **params\n )\n llm_output = {\n \"token_usage\": full_response[\"usage\"],\n \"model_name\": self.model_name,\n }\n return LLMResult(\n generations=[\n [Generation(text=full_response[\"choices\"][0][\"message\"][\"content\"])]\n ],\n llm_output=llm_output,\n )\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"openai-chat\"\n[docs] def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the token IDs using the tiktoken package.\"\"\"\n # tiktoken NOT supported for Python < 3.8\n if sys.version_info[1] < 8:\n return super().get_token_ids(text)\n try:\n import tiktoken\n except ImportError:\n raise ImportError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate get_num_tokens. 
\"\n \"Please install it with `pip install tiktoken`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "b4f4da8343ad-21", "text": "\"Please install it with `pip install tiktoken`.\"\n )\n enc = tiktoken.encoding_for_model(self.model_name)\n return enc.encode(\n text,\n allowed_special=self.allowed_special,\n disallowed_special=self.disallowed_special,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openai.html"} +{"id": "beb764fe74ad-0", "text": "Source code for langchain.llms.databricks\nimport os\nfrom abc import ABC, abstractmethod\nfrom typing import Any, Callable, Dict, List, Optional\nimport requests\nfrom pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n__all__ = [\"Databricks\"]\nclass _DatabricksClientBase(BaseModel, ABC):\n \"\"\"A base JSON API client that talks to Databricks.\"\"\"\n api_url: str\n api_token: str\n def post_raw(self, request: Any) -> Any:\n headers = {\"Authorization\": f\"Bearer {self.api_token}\"}\n response = requests.post(self.api_url, headers=headers, json=request)\n # TODO: error handling and automatic retries\n if not response.ok:\n raise ValueError(f\"HTTP {response.status_code} error: {response.text}\")\n return response.json()\n @abstractmethod\n def post(self, request: Any) -> Any:\n ...\nclass _DatabricksServingEndpointClient(_DatabricksClientBase):\n \"\"\"An API client that talks to a Databricks serving endpoint.\"\"\"\n host: str\n endpoint_name: str\n @root_validator(pre=True)\n def set_api_url(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"api_url\" not in values:\n host = values[\"host\"]\n endpoint_name = values[\"endpoint_name\"]\n api_url = f\"https://{host}/serving-endpoints/{endpoint_name}/invocations\"\n values[\"api_url\"] = api_url\n return values\n def post(self, request: Any) 
-> Any:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} +{"id": "beb764fe74ad-1", "text": "return values\n def post(self, request: Any) -> Any:\n # See https://docs.databricks.com/machine-learning/model-serving/score-model-serving-endpoints.html\n wrapped_request = {\"dataframe_records\": [request]}\n response = self.post_raw(wrapped_request)[\"predictions\"]\n # For a single-record query, the result is not a list.\n if isinstance(response, list):\n response = response[0]\n return response\nclass _DatabricksClusterDriverProxyClient(_DatabricksClientBase):\n \"\"\"An API client that talks to a Databricks cluster driver proxy app.\"\"\"\n host: str\n cluster_id: str\n cluster_driver_port: str\n @root_validator(pre=True)\n def set_api_url(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"api_url\" not in values:\n host = values[\"host\"]\n cluster_id = values[\"cluster_id\"]\n port = values[\"cluster_driver_port\"]\n api_url = f\"https://{host}/driver-proxy-api/o/0/{cluster_id}/{port}\"\n values[\"api_url\"] = api_url\n return values\n def post(self, request: Any) -> Any:\n return self.post_raw(request)\ndef get_repl_context() -> Any:\n \"\"\"Gets the notebook REPL context if running inside a Databricks notebook.\n Returns None otherwise.\n \"\"\"\n try:\n from dbruntime.databricks_repl_context import get_context\n return get_context()\n except ImportError:\n raise ValueError(\n \"Cannot access dbruntime, not running inside a Databricks notebook.\"\n )\ndef get_default_host() -> str:\n \"\"\"Gets the default Databricks workspace hostname.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} +{"id": "beb764fe74ad-2", "text": "\"\"\"Gets the default Databricks workspace hostname.\n Raises an error if the hostname cannot be automatically determined.\n \"\"\"\n host = os.getenv(\"DATABRICKS_HOST\")\n if not host:\n try:\n host = get_repl_context().browserHostName\n if not 
host:\n raise ValueError(\"context doesn't contain browserHostName.\")\n except Exception as e:\n raise ValueError(\n \"host was not set and cannot be automatically inferred. Set \"\n f\"environment variable 'DATABRICKS_HOST'. Received error: {e}\"\n )\n # TODO: support Databricks CLI profile\n host = host.lstrip(\"https://\").lstrip(\"http://\").rstrip(\"/\")\n return host\ndef get_default_api_token() -> str:\n \"\"\"Gets the default Databricks personal access token.\n Raises an error if the token cannot be automatically determined.\n \"\"\"\n if api_token := os.getenv(\"DATABRICKS_TOKEN\"):\n return api_token\n try:\n api_token = get_repl_context().apiToken\n if not api_token:\n raise ValueError(\"context doesn't contain apiToken.\")\n except Exception as e:\n raise ValueError(\n \"api_token was not set and cannot be automatically inferred. Set \"\n f\"environment variable 'DATABRICKS_TOKEN'. Received error: {e}\"\n )\n # TODO: support Databricks CLI profile\n return api_token\n[docs]class Databricks(LLM):\n \"\"\"LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.\n It supports two endpoint types:\n * **Serving endpoint** (recommended for both production and development).\n We assume that an LLM was registered and deployed to a serving endpoint.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} +{"id": "beb764fe74ad-3", "text": "We assume that an LLM was registered and deployed to a serving endpoint.\n To wrap it as an LLM you must have \"Can Query\" permission to the endpoint.\n Set ``endpoint_name`` accordingly and do not set ``cluster_id`` and\n ``cluster_driver_port``.\n The expected model signature is:\n * inputs::\n [{\"name\": \"prompt\", \"type\": \"string\"},\n {\"name\": \"stop\", \"type\": \"list[string]\"}]\n * outputs: ``[{\"type\": \"string\"}]``\n * **Cluster driver proxy app** (recommended for interactive development).\n One can load an LLM on a Databricks interactive 
cluster and start a local HTTP\n server on the driver node to serve the model at ``/`` using HTTP POST method\n with JSON input/output.\n Please use a port number between ``[3000, 8000]`` and let the server listen to\n the driver IP address or simply ``0.0.0.0`` instead of localhost only.\n To wrap it as an LLM you must have \"Can Attach To\" permission to the cluster.\n Set ``cluster_id`` and ``cluster_driver_port`` and do not set ``endpoint_name``.\n The expected server schema (using JSON schema) is:\n * inputs::\n {\"type\": \"object\",\n \"properties\": {\n \"prompt\": {\"type\": \"string\"},\n \"stop\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}},\n \"required\": [\"prompt\"]}`\n * outputs: ``{\"type\": \"string\"}``\n If the endpoint model signature is different or you want to set extra params,\n you can use `transform_input_fn` and `transform_output_fn` to apply necessary", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} +{"id": "beb764fe74ad-4", "text": "you can use `transform_input_fn` and `transform_output_fn` to apply necessary\n transformations before and after the query.\n \"\"\"\n host: str = Field(default_factory=get_default_host)\n \"\"\"Databricks workspace hostname.\n If not provided, the default value is determined by\n * the ``DATABRICKS_HOST`` environment variable if present, or\n * the hostname of the current Databricks workspace if running inside\n a Databricks notebook attached to an interactive cluster in \"single user\"\n or \"no isolation shared\" mode.\n \"\"\"\n api_token: str = Field(default_factory=get_default_api_token)\n \"\"\"Databricks personal access token.\n If not provided, the default value is determined by\n * the ``DATABRICKS_TOKEN`` environment variable if present, or\n * an automatically generated temporary token if running inside a Databricks\n notebook attached to an interactive cluster in \"single user\" or\n \"no isolation shared\" mode.\n \"\"\"\n 
endpoint_name: Optional[str] = None\n \"\"\"Name of the model serving endpoint.\n You must specify the endpoint name to connect to a model serving endpoint.\n You must not set both ``endpoint_name`` and ``cluster_id``.\n \"\"\"\n cluster_id: Optional[str] = None\n \"\"\"ID of the cluster if connecting to a cluster driver proxy app.\n If neither ``endpoint_name`` nor ``cluster_id`` is provided and the code runs\n inside a Databricks notebook attached to an interactive cluster in \"single user\"\n or \"no isolation shared\" mode, the current cluster ID is used as default.\n You must not set both ``endpoint_name`` and ``cluster_id``.\n \"\"\"\n cluster_driver_port: Optional[str] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} +{"id": "beb764fe74ad-5", "text": "\"\"\"\n cluster_driver_port: Optional[str] = None\n \"\"\"The port number used by the HTTP server running on the cluster driver node.\n The server should listen on the driver IP address or simply ``0.0.0.0`` to connect.\n We recommend the server use a port number between ``[3000, 8000]``.\n \"\"\"\n model_kwargs: Optional[Dict[str, Any]] = None\n \"\"\"Extra parameters to pass to the endpoint.\"\"\"\n transform_input_fn: Optional[Callable] = None\n \"\"\"A function that transforms ``{prompt, stop, **kwargs}`` into a JSON-compatible\n request object that the endpoint accepts.\n For example, you can apply a prompt template to the input prompt.\n \"\"\"\n transform_output_fn: Optional[Callable[..., str]] = None\n \"\"\"A function that transforms the output from the endpoint to the generated text.\n \"\"\"\n _client: _DatabricksClientBase = PrivateAttr()\n class Config:\n extra = Extra.forbid\n underscore_attrs_are_private = True\n @validator(\"cluster_id\", always=True)\n def set_cluster_id(cls, v: Any, values: Dict[str, Any]) -> Optional[str]:\n if v and values[\"endpoint_name\"]:\n raise ValueError(\"Cannot set both endpoint_name and cluster_id.\")\n 
elif values[\"endpoint_name\"]:\n return None\n elif v:\n return v\n else:\n try:\n if v := get_repl_context().clusterId:\n return v\n raise ValueError(\"Context doesn't contain clusterId.\")\n except Exception as e:\n raise ValueError(\n \"Neither endpoint_name nor cluster_id was set. \"\n \"And the cluster_id cannot be automatically determined. Received\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} +{"id": "beb764fe74ad-6", "text": "\"And the cluster_id cannot be automatically determined. Received\"\n f\" error: {e}\"\n )\n @validator(\"cluster_driver_port\", always=True)\n def set_cluster_driver_port(cls, v: Any, values: Dict[str, Any]) -> Optional[str]:\n if v and values[\"endpoint_name\"]:\n raise ValueError(\"Cannot set both endpoint_name and cluster_driver_port.\")\n elif values[\"endpoint_name\"]:\n return None\n elif v is None:\n raise ValueError(\n \"Must set cluster_driver_port to connect to a cluster driver.\"\n )\n elif int(v) <= 0:\n raise ValueError(f\"Invalid cluster_driver_port: {v}\")\n else:\n return v\n @validator(\"model_kwargs\", always=True)\n def set_model_kwargs(cls, v: Optional[Dict[str, Any]]) -> Optional[Dict[str, Any]]:\n if v:\n assert \"prompt\" not in v, \"model_kwargs must not contain key 'prompt'\"\n assert \"stop\" not in v, \"model_kwargs must not contain key 'stop'\"\n return v\n def __init__(self, **data: Any):\n super().__init__(**data)\n if self.endpoint_name:\n self._client = _DatabricksServingEndpointClient(\n host=self.host,\n api_token=self.api_token,\n endpoint_name=self.endpoint_name,\n )\n elif self.cluster_id and self.cluster_driver_port:\n self._client = _DatabricksClusterDriverProxyClient(\n host=self.host,\n api_token=self.api_token,\n cluster_id=self.cluster_id,\n cluster_driver_port=self.cluster_driver_port,\n )\n else:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} +{"id": 
"beb764fe74ad-7", "text": ")\n else:\n raise ValueError(\n \"Must specify either endpoint_name or cluster_id/cluster_driver_port.\"\n )\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"databricks\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Queries the LLM endpoint with the given prompt and stop sequence.\"\"\"\n # TODO: support callbacks\n request = {\"prompt\": prompt, \"stop\": stop}\n request.update(kwargs)\n if self.model_kwargs:\n request.update(self.model_kwargs)\n if self.transform_input_fn:\n request = self.transform_input_fn(**request)\n response = self._client.post(request)\n if self.transform_output_fn:\n response = self.transform_output_fn(response)\n return response", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/databricks.html"} +{"id": "cbb70596ee90-0", "text": "Source code for langchain.llms.petals\n\"\"\"Wrapper around Petals API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Petals(LLM):\n \"\"\"Wrapper around Petals Bloom models.\n To use, you should have the ``petals`` python package installed, and the\n environment variable ``HUGGINGFACE_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
code-block:: python\n from langchain.llms import Petals\n petals = Petals()\n \"\"\"\n client: Any\n \"\"\"The client to use for the API calls.\"\"\"\n tokenizer: Any\n \"\"\"The tokenizer to use for the API calls.\"\"\"\n model_name: str = \"bigscience/bloom-petals\"\n \"\"\"The model to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use\"\"\"\n max_new_tokens: int = 256\n \"\"\"The maximum number of new tokens to generate in the completion.\"\"\"\n top_p: float = 0.9\n \"\"\"The cumulative probability for top-p sampling.\"\"\"\n top_k: Optional[int] = None\n \"\"\"The number of highest probability vocabulary tokens\n to keep for top-k-filtering.\"\"\"\n do_sample: bool = True\n \"\"\"Whether or not to use sampling; use greedy decoding otherwise.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/petals.html"} +{"id": "cbb70596ee90-1", "text": "\"\"\"Whether or not to use sampling; use greedy decoding otherwise.\"\"\"\n max_length: Optional[int] = None\n \"\"\"The maximum length of the sequence to be generated.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call\n not explicitly specified.\"\"\"\n huggingface_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"WARNING! 
{field_name} is not a default parameter.\n {field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingface_api_key = get_from_dict_or_env(\n values, \"huggingface_api_key\", \"HUGGINGFACE_API_KEY\"\n )\n try:\n from petals import DistributedBloomForCausalLM\n from transformers import BloomTokenizerFast", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/petals.html"} +{"id": "cbb70596ee90-2", "text": "from petals import DistributedBloomForCausalLM\n from transformers import BloomTokenizerFast\n model_name = values[\"model_name\"]\n values[\"tokenizer\"] = BloomTokenizerFast.from_pretrained(model_name)\n values[\"client\"] = DistributedBloomForCausalLM.from_pretrained(model_name)\n values[\"huggingface_api_key\"] = huggingface_api_key\n except ImportError:\n raise ValueError(\n \"Could not import transformers or petals python package. \"\n \"Please install with `pip install -U transformers petals`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling Petals API.\"\"\"\n normal_params = {\n \"temperature\": self.temperature,\n \"max_new_tokens\": self.max_new_tokens,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"do_sample\": self.do_sample,\n \"max_length\": self.max_length,\n }\n return {**normal_params, **self.model_kwargs}\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"petals\"\n def _call(\n self,\n prompt: str,\n stop: 
Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call the Petals API.\"\"\"\n params = self._default_params", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/petals.html"} +{"id": "cbb70596ee90-3", "text": "\"\"\"Call the Petals API.\"\"\"\n params = self._default_params\n params = {**params, **kwargs}\n inputs = self.tokenizer(prompt, return_tensors=\"pt\")[\"input_ids\"]\n outputs = self.client.generate(inputs, **params)\n text = self.tokenizer.decode(outputs[0])\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/petals.html"} +{"id": "aa28b4b5edff-0", "text": "Source code for langchain.llms.openllm\n\"\"\"Wrapper around OpenLLM APIs.\"\"\"\nfrom __future__ import annotations\nimport copy\nimport json\nimport logging\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Dict,\n List,\n Literal,\n Optional,\n TypedDict,\n Union,\n overload,\n)\nfrom pydantic import PrivateAttr\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\nif TYPE_CHECKING:\n import openllm\nServerType = Literal[\"http\", \"grpc\"]\nclass IdentifyingParams(TypedDict):\n model_name: str\n model_id: Optional[str]\n server_url: Optional[str]\n server_type: Optional[ServerType]\n embedded: bool\n llm_kwargs: Dict[str, Any]\nlogger = logging.getLogger(__name__)\n[docs]class OpenLLM(LLM):\n \"\"\"Wrapper for accessing OpenLLM, supporting both in-process model\n instance and remote OpenLLM servers.\n To use, you should have the openllm library installed:\n .. 
code-block:: bash\n pip install openllm\n Learn more at: https://github.com/bentoml/openllm\n Example running an LLM model locally managed by OpenLLM:\n .. code-block:: python\n from langchain.llms import OpenLLM\n llm = OpenLLM(\n model_name='flan-t5',\n model_id='google/flan-t5-large',\n )\n llm(\"What is the difference between a duck and a goose?\")\n For all available supported models, you can run 'openllm models'.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} +{"id": "aa28b4b5edff-1", "text": "For all available supported models, you can run 'openllm models'.\n If you have a OpenLLM server running, you can also use it remotely:\n .. code-block:: python\n from langchain.llms import OpenLLM\n llm = OpenLLM(server_url='http://localhost:3000')\n llm(\"What is the difference between a duck and a goose?\")\n \"\"\"\n model_name: Optional[str] = None\n \"\"\"Model name to use. See 'openllm models' for all available models.\"\"\"\n model_id: Optional[str] = None\n \"\"\"Model Id to use. If not provided, will use the default model for the model name.\n See 'openllm models' for all available model variants.\"\"\"\n server_url: Optional[str] = None\n \"\"\"Optional server URL that currently runs a LLMServer with 'openllm start'.\"\"\"\n server_type: ServerType = \"http\"\n \"\"\"Optional server type. Either 'http' or 'grpc'.\"\"\"\n embedded: bool = True\n \"\"\"Initialize this LLM instance in current process by default. 
Should\n only be set to False when using in conjunction with BentoML Service.\"\"\"\n llm_kwargs: Dict[str, Any]\n \"\"\"Key word arguments to be passed to openllm.LLM\"\"\"\n _runner: Optional[openllm.LLMRunner] = PrivateAttr(default=None)\n _client: Union[\n openllm.client.HTTPClient, openllm.client.GrpcClient, None\n ] = PrivateAttr(default=None)\n class Config:\n extra = \"forbid\"\n @overload\n def __init__(\n self,\n model_name: Optional[str] = ...,\n *,\n model_id: Optional[str] = ...,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} +{"id": "aa28b4b5edff-2", "text": "*,\n model_id: Optional[str] = ...,\n embedded: Literal[True, False] = ...,\n **llm_kwargs: Any,\n ) -> None:\n ...\n @overload\n def __init__(\n self,\n *,\n server_url: str = ...,\n server_type: Literal[\"grpc\", \"http\"] = ...,\n **llm_kwargs: Any,\n ) -> None:\n ...\n def __init__(\n self,\n model_name: Optional[str] = None,\n *,\n model_id: Optional[str] = None,\n server_url: Optional[str] = None,\n server_type: Literal[\"grpc\", \"http\"] = \"http\",\n embedded: bool = True,\n **llm_kwargs: Any,\n ):\n try:\n import openllm\n except ImportError as e:\n raise ImportError(\n \"Could not import openllm. 
Make sure to install it with \"\n \"'pip install openllm.'\"\n ) from e\n llm_kwargs = llm_kwargs or {}\n if server_url is not None:\n logger.debug(\"'server_url' is provided, returning a openllm.Client\")\n assert (\n model_id is None and model_name is None\n ), \"'server_url' and {'model_id', 'model_name'} are mutually exclusive\"\n client_cls = (\n openllm.client.HTTPClient\n if server_type == \"http\"\n else openllm.client.GrpcClient\n )\n client = client_cls(server_url)\n super().__init__(\n **{\n \"server_url\": server_url,\n \"server_type\": server_type,\n \"llm_kwargs\": llm_kwargs,\n }\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} +{"id": "aa28b4b5edff-3", "text": "\"llm_kwargs\": llm_kwargs,\n }\n )\n self._runner = None # type: ignore\n self._client = client\n else:\n assert model_name is not None, \"Must provide 'model_name' or 'server_url'\"\n # since the LLM are relatively huge, we don't actually want to convert the\n # Runner with embedded when running the server. Instead, we will only set\n # the init_local here so that LangChain users can still use the LLM\n # in-process. Wrt to BentoML users, setting embedded=False is the expected\n # behaviour to invoke the runners remotely\n runner = openllm.Runner(\n model_name=model_name,\n model_id=model_id,\n init_local=embedded,\n **llm_kwargs,\n )\n super().__init__(\n **{\n \"model_name\": model_name,\n \"model_id\": model_id,\n \"embedded\": embedded,\n \"llm_kwargs\": llm_kwargs,\n }\n )\n self._client = None # type: ignore\n self._runner = runner\n @property\n def runner(self) -> openllm.LLMRunner:\n \"\"\"\n Get the underlying openllm.LLMRunner instance for integration with BentoML.\n Example:\n .. 
code-block:: python\n llm = OpenLLM(\n model_name='flan-t5',\n model_id='google/flan-t5-large',\n embedded=False,\n )\n tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n agent = initialize_agent(\n tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} +{"id": "aa28b4b5edff-4", "text": "tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION\n )\n svc = bentoml.Service(\"langchain-openllm\", runners=[llm.runner])\n @svc.api(input=Text(), output=Text())\n def chat(input_text: str):\n return agent.run(input_text)\n \"\"\"\n if self._runner is None:\n raise ValueError(\"OpenLLM must be initialized locally with 'model_name'\")\n return self._runner\n @property\n def _identifying_params(self) -> IdentifyingParams:\n \"\"\"Get the identifying parameters.\"\"\"\n if self._client is not None:\n self.llm_kwargs.update(self._client.configuration)\n model_name = self._client.model_name\n model_id = self._client.model_id\n else:\n if self._runner is None:\n raise ValueError(\"Runner must be initialized.\")\n model_name = self.model_name\n model_id = self.model_id\n try:\n self.llm_kwargs.update(\n json.loads(self._runner.identifying_params[\"configuration\"])\n )\n except (TypeError, json.JSONDecodeError):\n pass\n return IdentifyingParams(\n server_url=self.server_url,\n server_type=self.server_type,\n embedded=self.embedded,\n llm_kwargs=self.llm_kwargs,\n model_name=model_name,\n model_id=model_id,\n )\n @property\n def _llm_type(self) -> str:\n return \"openllm_client\" if self._client else \"openllm\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: CallbackManagerForLLMRun | None = None,\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} +{"id": "aa28b4b5edff-5", "text": "**kwargs: Any,\n ) -> str:\n try:\n import openllm\n except ImportError as e:\n raise 
ImportError(\n \"Could not import openllm. Make sure to install it with \"\n \"'pip install openllm'.\"\n ) from e\n copied = copy.deepcopy(self.llm_kwargs)\n copied.update(kwargs)\n config = openllm.AutoConfig.for_model(\n self._identifying_params[\"model_name\"], **copied\n )\n if self._client:\n return self._client.query(prompt, **config.model_dump(flatten=True))\n else:\n assert self._runner is not None\n return self._runner(prompt, **config.model_dump(flatten=True))\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n try:\n import openllm\n except ImportError as e:\n raise ImportError(\n \"Could not import openllm. Make sure to install it with \"\n \"'pip install openllm'.\"\n ) from e\n copied = copy.deepcopy(self.llm_kwargs)\n copied.update(kwargs)\n config = openllm.AutoConfig.for_model(\n self._identifying_params[\"model_name\"], **copied\n )\n if self._client:\n return await self._client.acall(\n \"generate\", prompt, **config.model_dump(flatten=True)\n )\n else:\n assert self._runner is not None\n (\n prompt,\n generate_kwargs,\n postprocess_kwargs,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} +{"id": "aa28b4b5edff-6", "text": "(\n prompt,\n generate_kwargs,\n postprocess_kwargs,\n ) = self._runner.llm.sanitize_parameters(prompt, **kwargs)\n generated_result = await self._runner.generate.async_run(\n prompt, **generate_kwargs\n )\n return self._runner.llm.postprocess_generate(\n prompt, generated_result, **postprocess_kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openllm.html"} +{"id": "9ad2e0a02605-0", "text": "Source code for langchain.llms.huggingface_hub\n\"\"\"Wrapper around HuggingFace APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import 
CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_REPO_ID = \"gpt2\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\n[docs]class HuggingFaceHub(LLM):\n \"\"\"Wrapper around HuggingFaceHub models.\n To use, you should have the ``huggingface_hub`` python package installed, and the\n environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example:\n .. code-block:: python\n from langchain.llms import HuggingFaceHub\n hf = HuggingFaceHub(repo_id=\"gpt2\", huggingfacehub_api_token=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n repo_id: str = DEFAULT_REPO_ID\n \"\"\"Model name to use.\"\"\"\n task: Optional[str] = None\n \"\"\"Task to call the model with.\n Should be a task that returns `generated_text` or `summary_text`.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n huggingfacehub_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"} +{"id": "9ad2e0a02605-1", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n huggingfacehub_api_token = get_from_dict_or_env(\n values, \"huggingfacehub_api_token\", \"HUGGINGFACEHUB_API_TOKEN\"\n )\n try:\n from huggingface_hub.inference_api import InferenceApi\n repo_id = values[\"repo_id\"]\n client = InferenceApi(\n repo_id=repo_id,\n token=huggingfacehub_api_token,\n task=values.get(\"task\"),\n )\n 
if client.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n values[\"client\"] = client\n except ImportError:\n raise ValueError(\n \"Could not import huggingface_hub python package. \"\n \"Please install it with `pip install huggingface_hub`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"repo_id\": self.repo_id, \"task\": self.task},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"huggingface_hub\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"} +{"id": "9ad2e0a02605-2", "text": "prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to HuggingFace Hub's inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = hf(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n params = {**_model_kwargs, **kwargs}\n response = self.client(inputs=prompt, params=params)\n if \"error\" in response:\n raise ValueError(f\"Error raised by inference API: {response['error']}\")\n if self.client.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = response[0][\"generated_text\"][len(prompt) :]\n elif self.client.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif self.client.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {self.client.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to huggingface_hub.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_hub.html"} +{"id": "aeaeafa6f0b8-0", "text": "Source code for langchain.llms.forefrontai\n\"\"\"Wrapper around ForefrontAI APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\n[docs]class ForefrontAI(LLM):\n \"\"\"Wrapper around ForefrontAI large language models.\n To use, you should have the environment variable ``FOREFRONTAI_API_KEY``\n set with your API key.\n Example:\n .. 
code-block:: python\n from langchain.llms import ForefrontAI\n forefrontai = ForefrontAI(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n length: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n top_p: float = 1.0\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n top_k: int = 40\n \"\"\"The number of highest probability vocabulary tokens to\n keep for top-k-filtering.\"\"\"\n repetition_penalty: int = 1\n \"\"\"Penalizes repeated tokens according to frequency.\"\"\"\n forefrontai_api_key: Optional[str] = None\n base_url: Optional[str] = None\n \"\"\"Base url to use, if None decides based on model name.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"} +{"id": "aeaeafa6f0b8-1", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key exists in environment.\"\"\"\n forefrontai_api_key = get_from_dict_or_env(\n values, \"forefrontai_api_key\", \"FOREFRONTAI_API_KEY\"\n )\n values[\"forefrontai_api_key\"] = forefrontai_api_key\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling ForefrontAI API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"length\": self.length,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"repetition_penalty\": self.repetition_penalty,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"endpoint_url\": self.endpoint_url}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return 
\"forefrontai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to ForefrontAI's complete endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = ForefrontAI(\"Tell me a joke.\")\n \"\"\"\n response = requests.post(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"} +{"id": "aeaeafa6f0b8-2", "text": "\"\"\"\n response = requests.post(\n url=self.endpoint_url,\n headers={\n \"Authorization\": f\"Bearer {self.forefrontai_api_key}\",\n \"Content-Type\": \"application/json\",\n },\n json={\"text\": prompt, **self._default_params, **kwargs},\n )\n response_json = response.json()\n text = response_json[\"result\"][0][\"completion\"]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html"} +{"id": "b05c516b7cf6-0", "text": "Source code for langchain.llms.bananadev\n\"\"\"Wrapper around Banana API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class Banana(LLM):\n \"\"\"Wrapper around Banana large language models.\n To use, you should have the ``banana-dev`` python package installed,\n and the environment variable ``BANANA_API_KEY`` set with your API key.\n Any parameters that are valid to be 
passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python\n from langchain.llms import Banana\n banana = Banana(model_key=\"\")\n \"\"\"\n model_key: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n banana_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic config.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"} +{"id": "b05c516b7cf6-1", "text": "if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n banana_api_key = get_from_dict_or_env(\n values, \"banana_api_key\", \"BANANA_API_KEY\"\n )\n values[\"banana_api_key\"] = banana_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_key\": self.model_key},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type 
of llm.\"\"\"\n return \"bananadev\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Banana endpoint.\"\"\"\n try:\n import banana_dev as banana\n except ImportError:\n raise ImportError(\n \"Could not import banana-dev python package. \"\n \"Please install it with `pip install banana-dev`.\"\n )\n params = self.model_kwargs or {}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"} +{"id": "b05c516b7cf6-2", "text": ")\n params = self.model_kwargs or {}\n params = {**params, **kwargs}\n api_key = self.banana_api_key\n model_key = self.model_key\n model_inputs = {\n # a json specific to your model.\n \"prompt\": prompt,\n **params,\n }\n response = banana.run(api_key, model_key, model_inputs)\n try:\n text = response[\"modelOutputs\"][0][\"output\"]\n except (KeyError, TypeError):\n returned = response[\"modelOutputs\"][0]\n raise ValueError(\n \"Response should be of schema: {'output': 'text'}.\"\n f\"\\nResponse was: {returned}\"\n \"\\nTo fix this:\"\n \"\\n- fork the source repo of the Banana model\"\n \"\\n- modify app.py to return the above schema\"\n \"\\n- deploy that as a custom repo\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/bananadev.html"} +{"id": "91a9c2c0e066-0", "text": "Source code for langchain.llms.anthropic\n\"\"\"Wrapper around Anthropic APIs.\"\"\"\nimport re\nimport warnings\nfrom typing import Any, Callable, Dict, Generator, List, Mapping, Optional, Tuple, Union\nfrom pydantic import BaseModel, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\nfrom 
langchain.utils import get_from_dict_or_env\nclass _AnthropicCommon(BaseModel):\n client: Any = None #: :meta private:\n model: str = \"claude-v1\"\n \"\"\"Model name to use.\"\"\"\n max_tokens_to_sample: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: Optional[float] = None\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n top_k: Optional[int] = None\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n top_p: Optional[float] = None\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results.\"\"\"\n default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None\n \"\"\"Timeout for requests to Anthropic Completion API. Default is 600 seconds.\"\"\"\n anthropic_api_url: Optional[str] = None\n anthropic_api_key: Optional[str] = None\n HUMAN_PROMPT: Optional[str] = None\n AI_PROMPT: Optional[str] = None\n count_tokens: Optional[Callable[[str], int]] = None\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} +{"id": "91a9c2c0e066-1", "text": "\"\"\"Validate that api key and python package exists in environment.\"\"\"\n anthropic_api_key = get_from_dict_or_env(\n values, \"anthropic_api_key\", \"ANTHROPIC_API_KEY\"\n )\n \"\"\"Get custom api url from environment.\"\"\"\n anthropic_api_url = get_from_dict_or_env(\n values,\n \"anthropic_api_url\",\n \"ANTHROPIC_API_URL\",\n default=\"https://api.anthropic.com\",\n )\n try:\n import anthropic\n values[\"client\"] = anthropic.Client(\n api_url=anthropic_api_url,\n api_key=anthropic_api_key,\n default_request_timeout=values[\"default_request_timeout\"],\n )\n values[\"HUMAN_PROMPT\"] = anthropic.HUMAN_PROMPT\n values[\"AI_PROMPT\"] = 
anthropic.AI_PROMPT\n values[\"count_tokens\"] = anthropic.count_tokens\n except ImportError:\n raise ImportError(\n \"Could not import anthropic python package. \"\n \"Please install it with `pip install anthropic`.\"\n )\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling Anthropic API.\"\"\"\n d = {\n \"max_tokens_to_sample\": self.max_tokens_to_sample,\n \"model\": self.model,\n }\n if self.temperature is not None:\n d[\"temperature\"] = self.temperature\n if self.top_k is not None:\n d[\"top_k\"] = self.top_k\n if self.top_p is not None:\n d[\"top_p\"] = self.top_p\n return d\n @property\n def _identifying_params(self) -> Mapping[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} +{"id": "91a9c2c0e066-2", "text": "@property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{}, **self._default_params}\n def _get_anthropic_stop(self, stop: Optional[List[str]] = None) -> List[str]:\n if not self.HUMAN_PROMPT or not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if stop is None:\n stop = []\n # Never want model to invent new turns of Human / Assistant dialog.\n stop.extend([self.HUMAN_PROMPT])\n return stop\n[docs]class Anthropic(LLM, _AnthropicCommon):\n r\"\"\"Wrapper around Anthropic's large language models.\n To use, you should have the ``anthropic`` python package installed, and the\n environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n import anthropic\n from langchain.llms import Anthropic\n model = Anthropic(model=\"\", anthropic_api_key=\"my-api-key\")\n # Simplest invocation, automatically wrapped with HUMAN_PROMPT\n # and AI_PROMPT.\n response = model(\"What are the biggest risks facing humanity?\")\n # Or if you want to use the chat mode, build a few-shot-prompt, or\n # put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:\n raw_prompt = \"What are the biggest risks facing humanity?\"\n prompt = f\"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}\"\n response = model(prompt)\n \"\"\"\n @root_validator()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} +{"id": "91a9c2c0e066-3", "text": "response = model(prompt)\n \"\"\"\n @root_validator()\n def raise_warning(cls, values: Dict) -> Dict:\n \"\"\"Raise warning that this class is deprecated.\"\"\"\n warnings.warn(\n \"This Anthropic LLM is deprecated. \"\n \"Please use `from langchain.chat_models import ChatAnthropic` instead\"\n )\n return values\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"anthropic-llm\"\n def _wrap_prompt(self, prompt: str) -> str:\n if not self.HUMAN_PROMPT or not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if prompt.startswith(self.HUMAN_PROMPT):\n return prompt # Already wrapped.\n # Guard against common errors in specifying wrong number of newlines.\n corrected_prompt, n_subs = re.subn(r\"^\\n*Human:\", self.HUMAN_PROMPT, prompt)\n if n_subs == 1:\n return corrected_prompt\n # As a last resort, wrap the prompt ourselves to emulate instruct-style.\n return f\"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT} Sure, here you go:\\n\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n r\"\"\"Call out to Anthropic's completion endpoint.\n Args:\n 
prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} +{"id": "91a9c2c0e066-4", "text": "Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n prompt = \"What are the biggest risks facing humanity?\"\n prompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\n response = model(prompt)\n \"\"\"\n stop = self._get_anthropic_stop(stop)\n params = {**self._default_params, **kwargs}\n if self.streaming:\n stream_resp = self.client.completion_stream(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **params,\n )\n current_completion = \"\"\n for data in stream_resp:\n delta = data[\"completion\"][len(current_completion) :]\n current_completion = data[\"completion\"]\n if run_manager:\n run_manager.on_llm_new_token(delta, **data)\n return current_completion\n response = self.client.completion(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **params,\n )\n return response[\"completion\"]\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Anthropic's completion endpoint asynchronously.\"\"\"\n stop = self._get_anthropic_stop(stop)\n params = {**self._default_params, **kwargs}\n if self.streaming:\n stream_resp = await self.client.acompletion_stream(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **params,\n )\n current_completion = \"\"\n async for data in stream_resp:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} +{"id": "91a9c2c0e066-5", "text": ")\n current_completion = \"\"\n async for data in stream_resp:\n delta = data[\"completion\"][len(current_completion) :]\n current_completion = data[\"completion\"]\n if 
run_manager:\n await run_manager.on_llm_new_token(delta, **data)\n return current_completion\n response = await self.client.acompletion(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **params,\n )\n return response[\"completion\"]\n[docs] def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:\n r\"\"\"Call Anthropic completion_stream and return the resulting generator.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens from Anthropic.\n Example:\n .. code-block:: python\n prompt = \"Write a poem about a stream.\"\n prompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\n generator = anthropic.stream(prompt)\n for token in generator:\n yield token\n \"\"\"\n stop = self._get_anthropic_stop(stop)\n return self.client.completion_stream(\n prompt=self._wrap_prompt(prompt),\n stop_sequences=stop,\n **self._default_params,\n )\n[docs] def get_num_tokens(self, text: str) -> int:\n \"\"\"Calculate number of tokens.\"\"\"\n if not self.count_tokens:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n return self.count_tokens(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html"} +{"id": "7393076f3330-0", "text": "Source code for langchain.llms.google_palm\n\"\"\"Wrapper around Google's PaLM Text APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms import BaseLLM\nfrom 
langchain.schema import Generation, LLMResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n \"\"\"Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions\"\"\"\n try:\n import google.api_core.exceptions\n except ImportError:\n raise ImportError(\n \"Could not import google-api-core python package. \"\n \"Please install it with `pip install google-api-core`.\"\n )\n multiplier = 2\n min_seconds = 1\n max_seconds = 60\n max_retries = 10\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)\n | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)\n | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"} +{"id": "7393076f3330-1", "text": "),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef generate_with_retry(llm: GooglePalm, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n def _generate_with_retry(**kwargs: Any) -> Any:\n return llm.client.generate_text(**kwargs)\n return _generate_with_retry(**kwargs)\ndef _strip_erroneous_leading_spaces(text: str) -> str:\n \"\"\"Strip erroneous leading spaces from text.\n The PaLM API will sometimes erroneously return a single leading space in all\n lines > 1. 
This function strips that space.\n \"\"\"\n has_leading_space = all(not line or line[0] == \" \" for line in text.split(\"\\n\")[1:])\n if has_leading_space:\n return text.replace(\"\\n \", \"\\n\")\n else:\n return text\n[docs]class GooglePalm(BaseLLM, BaseModel):\n client: Any #: :meta private:\n google_api_key: Optional[str]\n model_name: str = \"models/text-bison-001\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"Run inference with this temperature. Must be in the closed interval\n [0.0, 1.0].\"\"\"\n top_p: Optional[float] = None\n \"\"\"Decode using nucleus sampling: consider the smallest set of tokens whose\n probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\"\"\"\n top_k: Optional[int] = None\n \"\"\"Decode using top-k sampling: consider the set of top_k most probable tokens.\n Must be positive.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"} +{"id": "7393076f3330-2", "text": "Must be positive.\"\"\"\n max_output_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to include in a candidate. Must be greater than zero.\n If unset, will default to 64.\"\"\"\n n: int = 1\n \"\"\"Number of chat completions to generate for each prompt. Note that the API may\n not return the full n completions if duplicates are generated.\"\"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate api key, python package exists.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n try:\n import google.generativeai as genai\n genai.configure(api_key=google_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import google-generativeai python package. 
\"\n \"Please install it with `pip install google-generativeai`.\"\n )\n values[\"client\"] = genai\n if values[\"temperature\"] is not None and not 0 <= values[\"temperature\"] <= 1:\n raise ValueError(\"temperature must be in the range [0.0, 1.0]\")\n if values[\"top_p\"] is not None and not 0 <= values[\"top_p\"] <= 1:\n raise ValueError(\"top_p must be in the range [0.0, 1.0]\")\n if values[\"top_k\"] is not None and values[\"top_k\"] <= 0:\n raise ValueError(\"top_k must be positive\")\n if values[\"max_output_tokens\"] is not None and values[\"max_output_tokens\"] <= 0:\n raise ValueError(\"max_output_tokens must be greater than zero\")\n return values\n def _generate(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"} +{"id": "7393076f3330-3", "text": "return values\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n generations = []\n for prompt in prompts:\n completion = generate_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,\n stop_sequences=stop,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n max_output_tokens=self.max_output_tokens,\n candidate_count=self.n,\n **kwargs,\n )\n prompt_generations = []\n for candidate in completion.candidates:\n raw_text = candidate[\"output\"]\n stripped_text = _strip_erroneous_leading_spaces(raw_text)\n prompt_generations.append(Generation(text=stripped_text))\n generations.append(prompt_generations)\n return LLMResult(generations=generations)\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n raise NotImplementedError()\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"google_palm\"", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html"} +{"id": "7bc0b3b6d851-0", "text": "Source code for langchain.llms.deepinfra\n\"\"\"Wrapper around DeepInfra APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nDEFAULT_MODEL_ID = \"google/flan-t5-xl\"\n[docs]class DeepInfra(LLM):\n \"\"\"Wrapper around DeepInfra deployed models.\n To use, you should have the ``requests`` python package installed, and the\n environment variable ``DEEPINFRA_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Only supports `text-generation` and `text2text-generation` for now.\n Example:\n .. code-block:: python\n from langchain.llms import DeepInfra\n di = DeepInfra(model_id=\"google/flan-t5-xl\",\n deepinfra_api_token=\"my-api-key\")\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n model_kwargs: Optional[dict] = None\n deepinfra_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n deepinfra_api_token = get_from_dict_or_env(\n values, \"deepinfra_api_token\", \"DEEPINFRA_API_TOKEN\"\n )\n values[\"deepinfra_api_token\"] = deepinfra_api_token\n return values\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"} +{"id": "7bc0b3b6d851-1", "text": "return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id},\n **{\"model_kwargs\": self.model_kwargs},\n 
}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"deepinfra\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to DeepInfra's inference API endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = di(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n _model_kwargs = {**_model_kwargs, **kwargs}\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"bearer {self.deepinfra_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n try:\n res = requests.post(\n f\"https://api.deepinfra.com/v1/inference/{self.model_id}\",\n headers=headers,\n json={\"input\": prompt, **_model_kwargs},\n )\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n if res.status_code != 200:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"} +{"id": "7bc0b3b6d851-2", "text": "if res.status_code != 200:\n raise ValueError(\n \"Error raised by inference API HTTP code: %s, %s\"\n % (res.status_code, res.text)\n )\n try:\n t = res.json()\n text = t[\"results\"][0][\"generated_text\"]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {res.text}\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html"} +{"id": "466233e04674-0", "text": "Source code for langchain.llms.predictionguard\n\"\"\"Wrapper around Prediction 
Guard APIs.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class PredictionGuard(LLM):\n \"\"\"Wrapper around Prediction Guard large language models.\n To use, you should have the ``predictionguard`` python package installed, and the\n environment variable ``PREDICTIONGUARD_TOKEN`` set with your access token, or pass\n it as a named parameter to the constructor. To use Prediction Guard's API along\n with OpenAI models, set the environment variable ``OPENAI_API_KEY`` with your\n OpenAI API key as well.\n Example:\n .. code-block:: python\n pgllm = PredictionGuard(model=\"MPT-7B-Instruct\",\n token=\"my-access-token\",\n output={\n \"type\": \"boolean\"\n })\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = \"MPT-7B-Instruct\"\n \"\"\"Model name to use.\"\"\"\n output: Optional[Dict[str, Any]] = None\n \"\"\"The output type or structure for controlling the LLM output.\"\"\"\n max_tokens: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: float = 0.75\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n token: Optional[str] = None\n \"\"\"Your Prediction Guard access token.\"\"\"\n stop: Optional[List[str]] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"} +{"id": "466233e04674-1", "text": "\"\"\"Your Prediction Guard access token.\"\"\"\n stop: Optional[List[str]] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the access token and python package exists in 
environment.\"\"\"\n token = get_from_dict_or_env(values, \"token\", \"PREDICTIONGUARD_TOKEN\")\n try:\n import predictionguard as pg\n values[\"client\"] = pg.Client(token=token)\n except ImportError:\n raise ImportError(\n \"Could not import predictionguard python package. \"\n \"Please install it with `pip install predictionguard`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling the Prediction Guard API.\"\"\"\n return {\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"predictionguard\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Prediction Guard's model API.\n Args:\n prompt: The prompt to pass into the model.\n Returns:\n The string generated by the model.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"} +{"id": "466233e04674-2", "text": "Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = pgllm(\"Tell me a joke.\")\n \"\"\"\n import predictionguard as pg\n params = self._default_params\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n params[\"stop_sequences\"] = self.stop\n else:\n params[\"stop_sequences\"] = stop\n response = pg.Completion.create(\n model=self.model,\n prompt=prompt,\n output=self.output,\n temperature=params[\"temperature\"],\n max_tokens=params[\"max_tokens\"],\n **kwargs,\n )\n text = response[\"choices\"][0][\"text\"]\n # If stop tokens are provided, Prediction Guard's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/predictionguard.html"} +{"id": "7f185d3d5063-0", "text": "Source code for langchain.llms.openlm\nfrom typing import Any, Dict\nfrom pydantic import root_validator\nfrom langchain.llms.openai import BaseOpenAI\n[docs]class OpenLM(BaseOpenAI):\n @property\n def _invocation_params(self) -> Dict[str, Any]:\n return {**{\"model\": self.model_name}, **super()._invocation_params}\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n try:\n import openlm\n values[\"client\"] = openlm.Completion\n except ImportError:\n raise ValueError(\n \"Could not import openlm python package. 
\"\n \"Please install it with `pip install openlm`.\"\n )\n if values[\"streaming\"]:\n raise ValueError(\"Streaming not supported with openlm\")\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/openlm.html"} +{"id": "c5eee539fba7-0", "text": "Source code for langchain.llms.nlpcloud\n\"\"\"Wrapper around NLPCloud APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\n[docs]class NLPCloud(LLM):\n \"\"\"Wrapper around NLPCloud large language models.\n To use, you should have the ``nlpcloud`` python package installed, and the\n environment variable ``NLPCLOUD_API_KEY`` set with your API key.\n Example:\n .. code-block:: python\n from langchain.llms import NLPCloud\n nlpcloud = NLPCloud(model=\"gpt-neox-20b\")\n \"\"\"\n client: Any #: :meta private:\n model_name: str = \"finetuned-gpt-neox-20b\"\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n min_length: int = 1\n \"\"\"The minimum number of tokens to generate in the completion.\"\"\"\n max_length: int = 256\n \"\"\"The maximum number of tokens to generate in the completion.\"\"\"\n length_no_input: bool = True\n \"\"\"Whether min_length and max_length should include the length of the input.\"\"\"\n remove_input: bool = True\n \"\"\"Remove input text from API response\"\"\"\n remove_end_sequence: bool = True\n \"\"\"Whether or not to remove the end sequence token.\"\"\"\n bad_words: List[str] = []\n \"\"\"List of tokens not allowed to be generated.\"\"\"\n top_p: int = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"} +{"id": "c5eee539fba7-1", "text": "\"\"\"Total probability mass of 
tokens to consider at each step.\"\"\"\n top_k: int = 50\n \"\"\"The number of highest probability tokens to keep for top-k filtering.\"\"\"\n repetition_penalty: float = 1.0\n \"\"\"Penalizes repeated tokens. 1.0 means no penalty.\"\"\"\n length_penalty: float = 1.0\n \"\"\"Exponential penalty to the length.\"\"\"\n do_sample: bool = True\n \"\"\"Whether to use sampling (True) or greedy decoding.\"\"\"\n num_beams: int = 1\n \"\"\"Number of beams for beam search.\"\"\"\n early_stopping: bool = False\n \"\"\"Whether to stop beam search at num_beams sentences.\"\"\"\n num_return_sequences: int = 1\n \"\"\"How many completions to generate for each prompt.\"\"\"\n nlpcloud_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n nlpcloud_api_key = get_from_dict_or_env(\n values, \"nlpcloud_api_key\", \"NLPCLOUD_API_KEY\"\n )\n try:\n import nlpcloud\n values[\"client\"] = nlpcloud.Client(\n values[\"model_name\"], nlpcloud_api_key, gpu=True, lang=\"en\"\n )\n except ImportError:\n raise ImportError(\n \"Could not import nlpcloud python package. 
\"\n \"Please install it with `pip install nlpcloud`.\"\n )\n return values\n @property\n def _default_params(self) -> Mapping[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"} +{"id": "c5eee539fba7-2", "text": "@property\n def _default_params(self) -> Mapping[str, Any]:\n \"\"\"Get the default parameters for calling NLPCloud API.\"\"\"\n return {\n \"temperature\": self.temperature,\n \"min_length\": self.min_length,\n \"max_length\": self.max_length,\n \"length_no_input\": self.length_no_input,\n \"remove_input\": self.remove_input,\n \"remove_end_sequence\": self.remove_end_sequence,\n \"bad_words\": self.bad_words,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"repetition_penalty\": self.repetition_penalty,\n \"length_penalty\": self.length_penalty,\n \"do_sample\": self.do_sample,\n \"num_beams\": self.num_beams,\n \"early_stopping\": self.early_stopping,\n \"num_return_sequences\": self.num_return_sequences,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"nlpcloud\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to NLPCloud's create endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Not supported by this interface (pass in init method)\n Returns:\n The string generated by the model.\n Example:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"} +{"id": "c5eee539fba7-3", "text": "Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = nlpcloud(\"Tell me a joke.\")\n \"\"\"\n if stop and len(stop) > 1:\n raise ValueError(\n \"NLPCloud only supports a single stop sequence per generation. \"\n \"Pass in a list of length 1.\"\n )\n elif stop and len(stop) == 1:\n end_sequence = stop[0]\n else:\n end_sequence = None\n params = {**self._default_params, **kwargs}\n response = self.client.generation(prompt, end_sequence=end_sequence, **params)\n return response[\"generated_text\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/nlpcloud.html"} +{"id": "b24ad666a9e0-0", "text": "Source code for langchain.llms.cohere\n\"\"\"Wrapper around Cohere APIs.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Callable, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\ndef _create_retry_decorator(llm: Cohere) -> Callable[[Any], Any]:\n import cohere\n min_seconds = 4\n max_seconds = 10\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(llm.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(retry_if_exception_type(cohere.error.CohereError)),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef completion_with_retry(llm: Cohere, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return 
llm.client.generate(**kwargs)\n return _completion_with_retry(**kwargs)\n[docs]class Cohere(LLM):\n \"\"\"Wrapper around Cohere large language models.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"} +{"id": "b24ad666a9e0-1", "text": "\"\"\"Wrapper around Cohere large language models.\n To use, you should have the ``cohere`` python package installed, and the\n environment variable ``COHERE_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import Cohere\n cohere = Cohere(model=\"gptd-instruct-tft\", cohere_api_key=\"my-api-key\")\n \"\"\"\n client: Any #: :meta private:\n model: Optional[str] = None\n \"\"\"Model name to use.\"\"\"\n max_tokens: int = 256\n \"\"\"Denotes the number of tokens to predict per generation.\"\"\"\n temperature: float = 0.75\n \"\"\"A non-negative float that tunes the degree of randomness in generation.\"\"\"\n k: int = 0\n \"\"\"Number of most likely tokens to consider at each step.\"\"\"\n p: int = 1\n \"\"\"Total probability mass of tokens to consider at each step.\"\"\"\n frequency_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens according to frequency. Between 0 and 1.\"\"\"\n presence_penalty: float = 0.0\n \"\"\"Penalizes repeated tokens. 
Between 0 and 1.\"\"\"\n truncate: Optional[str] = None\n \"\"\"Specify how the client handles inputs longer than the maximum token\n length: Truncate from START, END or NONE\"\"\"\n max_retries: int = 10\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n cohere_api_key: Optional[str] = None\n stop: Optional[List[str]] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"} +{"id": "b24ad666a9e0-2", "text": "extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import cohere python package. 
\"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling Cohere API.\"\"\"\n return {\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"k\": self.k,\n \"p\": self.p,\n \"frequency_penalty\": self.frequency_penalty,\n \"presence_penalty\": self.presence_penalty,\n \"truncate\": self.truncate,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model\": self.model}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"cohere\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Cohere's generate endpoint.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"} +{"id": "b24ad666a9e0-3", "text": "\"\"\"Call out to Cohere's generate endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = cohere(\"Tell me a joke.\")\n \"\"\"\n params = self._default_params\n if self.stop is not None and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n elif self.stop is not None:\n params[\"stop_sequences\"] = self.stop\n else:\n params[\"stop_sequences\"] = stop\n params = {**params, **kwargs}\n response = completion_with_retry(\n self, model=self.model, prompt=prompt, **params\n )\n text = response.generations[0].text\n # If stop tokens are provided, Cohere's endpoint returns them.\n # In order to make this consistent with other endpoints, we strip them.\n if stop is not None or self.stop is not None:\n text = enforce_stop_tokens(text, params[\"stop_sequences\"])\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/cohere.html"} +{"id": "bc94350dcf5b-0", "text": "Source code for langchain.llms.promptlayer_openai\n\"\"\"PromptLayer wrapper.\"\"\"\nimport datetime\nfrom typing import Any, List, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms import OpenAI, OpenAIChat\nfrom langchain.schema import LLMResult\n[docs]class PromptLayerOpenAI(OpenAI):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` and ``promptlayer`` python\n package installed, and the environment variable ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your openAI API key and\n promptlayer key respectively.\n All parameters that can be passed to the OpenAI LLM can also\n be passed here. The PromptLayerOpenAI LLM adds two optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. 
code-block:: python\n from langchain.llms import PromptLayerOpenAI\n openai = PromptLayerOpenAI(model_name=\"text-davinci-003\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} +{"id": "bc94350dcf5b-1", "text": "\"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n params = {**self._identifying_params, **kwargs}\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerOpenAI\",\n \"langchain\",\n [prompt],\n params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n from promptlayer.utils import get_api_key, promptlayer_api_request_async\n request_start_time = 
datetime.datetime.now().timestamp()\n generated_responses = await super()._agenerate(prompts, stop, run_manager)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} +{"id": "bc94350dcf5b-2", "text": "generated_responses = await super()._agenerate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n params = {**self._identifying_params, **kwargs}\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerOpenAI.async\",\n \"langchain\",\n [prompt],\n params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n[docs]class PromptLayerOpenAIChat(OpenAIChat):\n \"\"\"Wrapper around OpenAI large language models.\n To use, you should have the ``openai`` and ``promptlayer`` python\n package installed, and the environment variable ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your openAI API key and\n promptlayer key respectively.\n All parameters that can be passed to the OpenAIChat LLM can also\n be passed here. 
The PromptLayerOpenAIChat adds two optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} +{"id": "bc94350dcf5b-3", "text": "parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. code-block:: python\n from langchain.llms import PromptLayerOpenAIChat\n openaichat = PromptLayerOpenAIChat(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n \"\"\"Call OpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n params = {**self._identifying_params, **kwargs}\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerOpenAIChat\",\n \"langchain\",\n [prompt],\n params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} +{"id": "bc94350dcf5b-4", "text": "resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n 
generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n prompts: List[str],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> LLMResult:\n from promptlayer.utils import get_api_key, promptlayer_api_request_async\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = await super()._agenerate(prompts, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n for i in range(len(prompts)):\n prompt = prompts[i]\n generation = generated_responses.generations[i][0]\n resp = {\n \"text\": generation.text,\n \"llm_output\": generated_responses.llm_output,\n }\n params = {**self._identifying_params, **kwargs}\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerOpenAIChat.async\",\n \"langchain\",\n [prompt],\n params,\n self.pl_tags,\n resp,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} +{"id": "bc94350dcf5b-5", "text": "generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/promptlayer_openai.html"} +{"id": "68ae42b97792-0", "text": "Source code for langchain.llms.vertexai\n\"\"\"Wrapper around Google VertexAI models.\"\"\"\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional\nfrom pydantic import BaseModel, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import 
LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utilities.vertexai import (\n init_vertexai,\n raise_vertex_import_error,\n)\nif TYPE_CHECKING:\n from vertexai.language_models._language_models import _LanguageModel\ndef is_codey_model(model_name: str) -> bool:\n return \"code\" in model_name\nclass _VertexAICommon(BaseModel):\n client: \"_LanguageModel\" = None #: :meta private:\n model_name: str\n \"Model name to use.\"\n temperature: float = 0.0\n \"Sampling temperature, it controls the degree of randomness in token selection.\"\n max_output_tokens: int = 128\n \"Token limit determines the maximum amount of text output from one prompt.\"\n top_p: float = 0.95\n \"Tokens are selected from most probable to least until the sum of their \"\n \"probabilities equals the top-p value. Top-p is ignored for Codey models.\"\n top_k: int = 40\n \"How the model selects tokens for output, the next token is selected from \"\n \"among the top-k most probable tokens. Top-k is ignored for Codey models.\"\n stop: Optional[List[str]] = None\n \"Optional list of stop words to use when generating.\"\n project: Optional[str] = None\n \"The default GCP project to use when making Vertex API calls.\"\n location: str = \"us-central1\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} +{"id": "68ae42b97792-1", "text": "location: str = \"us-central1\"\n \"The default location to use when making API calls.\"\n credentials: Any = None\n \"The default custom credentials (google.auth.credentials.Credentials) to use \"\n \"when making API calls. 
If not provided, credentials will be ascertained from \"\n \"the environment.\"\n @property\n def is_codey_model(self) -> bool:\n return is_codey_model(self.model_name)\n @property\n def _default_params(self) -> Dict[str, Any]:\n if self.is_codey_model:\n return {\n \"temperature\": self.temperature,\n \"max_output_tokens\": self.max_output_tokens,\n }\n else:\n return {\n \"temperature\": self.temperature,\n \"max_output_tokens\": self.max_output_tokens,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n }\n def _predict(\n self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any\n ) -> str:\n params = {**self._default_params, **kwargs}\n res = self.client.predict(prompt, **params)\n return self._enforce_stop_words(res.text, stop)\n def _enforce_stop_words(self, text: str, stop: Optional[List[str]] = None) -> str:\n if stop is None and self.stop is not None:\n stop = self.stop\n if stop:\n return enforce_stop_tokens(text, stop)\n return text\n @property\n def _llm_type(self) -> str:\n return \"vertexai\"\n @classmethod\n def _try_init_vertexai(cls, values: Dict) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} +{"id": "68ae42b97792-2", "text": "def _try_init_vertexai(cls, values: Dict) -> None:\n allowed_params = [\"project\", \"location\", \"credentials\"]\n params = {k: v for k, v in values.items() if k in allowed_params}\n init_vertexai(**params)\n return None\n[docs]class VertexAI(_VertexAICommon, LLM):\n \"\"\"Wrapper around Google Vertex AI large language models.\"\"\"\n model_name: str = \"text-bison\"\n \"The name of the Vertex AI large language model.\"\n tuned_model_name: Optional[str] = None\n \"The name of a tuned model. 
If provided, model_name is ignored.\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n cls._try_init_vertexai(values)\n tuned_model_name = values.get(\"tuned_model_name\")\n model_name = values[\"model_name\"]\n try:\n if tuned_model_name or not is_codey_model(model_name):\n from vertexai.preview.language_models import TextGenerationModel\n if tuned_model_name:\n values[\"client\"] = TextGenerationModel.get_tuned_model(\n tuned_model_name\n )\n else:\n values[\"client\"] = TextGenerationModel.from_pretrained(model_name)\n else:\n from vertexai.preview.language_models import CodeGenerationModel\n values[\"client\"] = CodeGenerationModel.from_pretrained(model_name)\n except ImportError:\n raise_vertex_import_error()\n return values\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} +{"id": "68ae42b97792-3", "text": "**kwargs: Any,\n ) -> str:\n \"\"\"Call Vertex model to get predictions based on the prompt.\n Args:\n prompt: The prompt to pass into the model.\n stop: A list of stop words (optional).\n run_manager: A Callbackmanager for LLM run, optional.\n Returns:\n The string generated by the model.\n \"\"\"\n return self._predict(prompt, stop, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/vertexai.html"} +{"id": "52495bd4bb90-0", "text": "Source code for langchain.llms.mosaicml\n\"\"\"Wrapper around MosaicML APIs.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import 
get_from_dict_or_env\nINSTRUCTION_KEY = \"### Instruction:\"\nRESPONSE_KEY = \"### Response:\"\nINTRO_BLURB = (\n \"Below is an instruction that describes a task. \"\n \"Write a response that appropriately completes the request.\"\n)\nPROMPT_FOR_GENERATION_FORMAT = \"\"\"{intro}\n{instruction_key}\n{instruction}\n{response_key}\n\"\"\".format(\n intro=INTRO_BLURB,\n instruction_key=INSTRUCTION_KEY,\n instruction=\"{instruction}\",\n response_key=RESPONSE_KEY,\n)\n[docs]class MosaicML(LLM):\n \"\"\"Wrapper around MosaicML's LLM inference service.\n To use, you should have the\n environment variable ``MOSAICML_API_TOKEN`` set with your API token, or pass\n it as a named parameter to the constructor.\n Example:\n .. code-block:: python\n from langchain.llms import MosaicML\n endpoint_url = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n )\n mosaic_llm = MosaicML(\n endpoint_url=endpoint_url,\n mosaicml_api_token=\"my-api-key\"\n )\n \"\"\"\n endpoint_url: str = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} +{"id": "52495bd4bb90-1", "text": ")\n \"\"\"\n endpoint_url: str = (\n \"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict\"\n )\n \"\"\"Endpoint URL to use.\"\"\"\n inject_instruction_format: bool = False\n \"\"\"Whether to inject the instruction format into the prompt.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n retry_sleep: float = 1.0\n \"\"\"How long to try sleeping for if a rate limit is encountered\"\"\"\n mosaicml_api_token: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n mosaicml_api_token = get_from_dict_or_env(\n values, \"mosaicml_api_token\", \"MOSAICML_API_TOKEN\"\n )\n 
values[\"mosaicml_api_token\"] = mosaicml_api_token\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"mosaic\"\n def _transform_prompt(self, prompt: str) -> str:\n \"\"\"Transform prompt.\"\"\"\n if self.inject_instruction_format:\n prompt = PROMPT_FOR_GENERATION_FORMAT.format(\n instruction=prompt,\n )\n return prompt", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} +{"id": "52495bd4bb90-2", "text": "instruction=prompt,\n )\n return prompt\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n is_retry: bool = False,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to a MosaicML LLM inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = mosaic_llm(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n prompt = self._transform_prompt(prompt)\n payload = {\"input_strings\": [prompt]}\n payload.update(_model_kwargs)\n payload.update(kwargs)\n # HTTP headers for authorization\n headers = {\n \"Authorization\": f\"{self.mosaicml_api_token}\",\n \"Content-Type\": \"application/json\",\n }\n # send request\n try:\n response = requests.post(self.endpoint_url, headers=headers, json=payload)\n except requests.exceptions.RequestException as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n try:\n parsed_response = response.json()\n if \"error\" in parsed_response:\n # if we get rate limited, try sleeping for 1 second\n if (\n not is_retry\n and \"rate limit exceeded\" in parsed_response[\"error\"].lower()\n ):\n import time\n time.sleep(self.retry_sleep)\n return self._call(prompt, stop, run_manager, is_retry=True)\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} +{"id": "52495bd4bb90-3", "text": "raise ValueError(\n f\"Error raised by inference API: {parsed_response['error']}\"\n )\n # The inference API has changed a couple of times, so we add some handling\n # to be robust to multiple response formats.\n if isinstance(parsed_response, dict):\n if \"data\" in parsed_response:\n output_item = parsed_response[\"data\"]\n elif \"output\" in parsed_response:\n output_item = parsed_response[\"output\"]\n else:\n raise ValueError(\n f\"No key data or output in response: {parsed_response}\"\n )\n if isinstance(output_item, list):\n text = output_item[0]\n else:\n text = output_item\n elif isinstance(parsed_response, list):\n first_item = parsed_response[0]\n if isinstance(first_item, str):\n text = first_item\n elif isinstance(first_item, dict):\n if \"output\" in parsed_response:\n text = first_item[\"output\"]\n else:\n raise ValueError(\n f\"No key data or output in 
response: {parsed_response}\"\n )\n else:\n raise ValueError(f\"Unexpected response format: {parsed_response}\")\n else:\n raise ValueError(f\"Unexpected response type: {parsed_response}\")\n text = text[len(prompt) :]\n except requests.exceptions.JSONDecodeError as e:\n raise ValueError(\n f\"Error raised by inference API: {e}.\\nResponse: {response.text}\"\n )\n # TODO: replace when MosaicML supports custom stop tokens natively\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/mosaicml.html"} +{"id": "3185287c4654-0", "text": "Source code for langchain.llms.modal\n\"\"\"Wrapper around Modal API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nlogger = logging.getLogger(__name__)\n[docs]class Modal(LLM):\n \"\"\"Wrapper around Modal large language models.\n To use, you should have the ``modal-client`` python package installed.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. 
code-block:: python\n from langchain.llms import Modal\n modal = Modal(endpoint_url=\"\")\n \"\"\"\n endpoint_url: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/modal.html"} +{"id": "3185287c4654-1", "text": "logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"endpoint_url\": self.endpoint_url},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"modal\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Modal endpoint.\"\"\"\n params = self.model_kwargs or {}\n params = {**params, **kwargs}\n response = requests.post(\n url=self.endpoint_url,\n headers={\n \"Content-Type\": \"application/json\",\n },\n 
json={\"prompt\": prompt, **params},\n )\n try:\n if prompt in response.json()[\"prompt\"]:\n response_json = response.json()\n except KeyError:\n raise ValueError(\"LangChain requires 'prompt' key in response.\")\n text = response_json[\"prompt\"]\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the model parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/modal.html"} +{"id": "b2f20384c6b5-0", "text": "Source code for langchain.llms.self_hosted_hugging_face\n\"\"\"Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.\"\"\"\nimport importlib.util\nimport logging\nfrom typing import Any, Callable, List, Mapping, Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.self_hosted import SelfHostedPipeline\nfrom langchain.llms.utils import enforce_stop_tokens\nDEFAULT_MODEL_ID = \"gpt2\"\nDEFAULT_TASK = \"text-generation\"\nVALID_TASKS = (\"text2text-generation\", \"text-generation\", \"summarization\")\nlogger = logging.getLogger(__name__)\ndef _generate_text(\n pipeline: Any,\n prompt: str,\n *args: Any,\n stop: Optional[List[str]] = None,\n **kwargs: Any,\n) -> str:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a Hugging Face pipeline (or more likely,\n a key pointing to such a pipeline on the cluster's object store)\n and returns generated text.\n \"\"\"\n response = pipeline(prompt, *args, **kwargs)\n if pipeline.task == \"text-generation\":\n # Text generation return includes the starter text.\n text = response[0][\"generated_text\"][len(prompt) :]\n elif pipeline.task == \"text2text-generation\":\n text = response[0][\"generated_text\"]\n elif pipeline.task == \"summarization\":\n text = response[0][\"summary_text\"]\n else:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently 
only {VALID_TASKS} are supported\"\n )\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} +{"id": "b2f20384c6b5-1", "text": "text = enforce_stop_tokens(text, stop)\n return text\ndef _load_transformer(\n model_id: str = DEFAULT_MODEL_ID,\n task: str = DEFAULT_TASK,\n device: int = 0,\n model_kwargs: Optional[dict] = None,\n) -> Any:\n \"\"\"Inference function to send to the remote hardware.\n Accepts a huggingface model_id and returns a pipeline for the task.\n \"\"\"\n from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer\n from transformers import pipeline as hf_pipeline\n _model_kwargs = model_kwargs or {}\n tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)\n try:\n if task == \"text-generation\":\n model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)\n elif task in (\"text2text-generation\", \"summarization\"):\n model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)\n else:\n raise ValueError(\n f\"Got invalid task {task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n except ImportError as e:\n raise ValueError(\n f\"Could not load the {task} model due to missing dependencies.\"\n ) from e\n if importlib.util.find_spec(\"torch\") is not None:\n import torch\n cuda_device_count = torch.cuda.device_count()\n if device < -1 or (device >= cuda_device_count):\n raise ValueError(\n f\"Got device=={device}, \"\n f\"device is required to be within [-1, {cuda_device_count})\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} +{"id": "b2f20384c6b5-2", "text": ")\n if device < 0 and cuda_device_count > 0:\n logger.warning(\n \"Device has %d GPUs available. \"\n \"Provide device={deviceId} to `from_model_id` to use available \"\n \"GPUs for execution. 
deviceId is -1 for CPU and \"\n \"can be a positive integer associated with CUDA device id.\",\n cuda_device_count,\n )\n pipeline = hf_pipeline(\n task=task,\n model=model,\n tokenizer=tokenizer,\n device=device,\n model_kwargs=_model_kwargs,\n )\n if pipeline.task not in VALID_TASKS:\n raise ValueError(\n f\"Got invalid task {pipeline.task}, \"\n f\"currently only {VALID_TASKS} are supported\"\n )\n return pipeline\n[docs]class SelfHostedHuggingFaceLLM(SelfHostedPipeline):\n \"\"\"Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.\n Supported hardware includes auto-launched instances on AWS, GCP, Azure,\n and Lambda, as well as servers specified\n by IP address and SSH credentials (such as on-prem, or another cloud\n like Paperspace, Coreweave, etc.).\n To use, you should have the ``runhouse`` python package installed.\n Only supports `text-generation`, `text2text-generation` and `summarization` for now.\n Example using from_model_id:\n .. code-block:: python\n from langchain.llms import SelfHostedHuggingFaceLLM\n import runhouse as rh\n gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\")\n hf = SelfHostedHuggingFaceLLM(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} +{"id": "b2f20384c6b5-3", "text": "hf = SelfHostedHuggingFaceLLM(\n model_id=\"google/flan-t5-large\", task=\"text2text-generation\",\n hardware=gpu\n )\n Example passing fn that generates a pipeline (bc the pipeline is not serializable):\n .. 
code-block:: python\n from langchain.llms import SelfHostedHuggingFaceLLM\n from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n import runhouse as rh\n def get_pipeline():\n model_id = \"gpt2\"\n tokenizer = AutoTokenizer.from_pretrained(model_id)\n model = AutoModelForCausalLM.from_pretrained(model_id)\n pipe = pipeline(\n \"text-generation\", model=model, tokenizer=tokenizer\n )\n return pipe\n hf = SelfHostedHuggingFaceLLM(\n model_load_fn=get_pipeline, model_id=\"gpt2\", hardware=gpu)\n \"\"\"\n model_id: str = DEFAULT_MODEL_ID\n \"\"\"Hugging Face model_id to load the model.\"\"\"\n task: str = DEFAULT_TASK\n \"\"\"Hugging Face task (\"text-generation\", \"text2text-generation\" or\n \"summarization\").\"\"\"\n device: int = 0\n \"\"\"Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Key word arguments to pass to the model.\"\"\"\n hardware: Any\n \"\"\"Remote hardware to send the inference function to.\"\"\"\n model_reqs: List[str] = [\"./\", \"transformers\", \"torch\"]\n \"\"\"Requirements to install on hardware to inference the model.\"\"\"\n model_load_fn: Callable = _load_transformer\n \"\"\"Function to load the model remotely on the server.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} +{"id": "b2f20384c6b5-4", "text": "\"\"\"Function to load the model remotely on the server.\"\"\"\n inference_fn: Callable = _generate_text #: :meta private:\n \"\"\"Inference function to send to the remote hardware.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n def __init__(self, **kwargs: Any):\n \"\"\"Construct the pipeline remotely using an auxiliary function.\n The load function needs to be importable to be imported\n and run on the server, i.e. 
in a module and not a REPL or closure.\n Then, initialize the remote inference function.\n \"\"\"\n load_fn_kwargs = {\n \"model_id\": kwargs.get(\"model_id\", DEFAULT_MODEL_ID),\n \"task\": kwargs.get(\"task\", DEFAULT_TASK),\n \"device\": kwargs.get(\"device\", 0),\n \"model_kwargs\": kwargs.get(\"model_kwargs\", None),\n }\n super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"model_id\": self.model_id},\n **{\"model_kwargs\": self.model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n return \"selfhosted_huggingface_pipeline\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n return self.client(\n pipeline=self.pipeline_ref, prompt=prompt, stop=stop, **kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/self_hosted_hugging_face.html"} +{"id": "3fb41742b28b-0", "text": "Source code for langchain.llms.huggingface_text_gen_inference\n\"\"\"Wrapper around Huggingface text generation inference API.\"\"\"\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.llms.base import LLM\n[docs]class HuggingFaceTextGenInference(LLM):\n \"\"\"\n HuggingFace text generation inference API.\n This class is a wrapper around the HuggingFace text generation inference API.\n It is used to generate text from a given prompt.\n Attributes:\n - max_new_tokens: The maximum number of tokens to generate.\n - top_k: The number of top-k tokens to consider when generating text.\n - top_p: The cumulative probability threshold for generating text.\n - typical_p: The typical probability threshold for generating 
text.\n - temperature: The temperature to use when generating text.\n - repetition_penalty: The repetition penalty to use when generating text.\n - stop_sequences: A list of stop sequences to use when generating text.\n - seed: The seed to use when generating text.\n - inference_server_url: The URL of the inference server to use.\n - timeout: The timeout value in seconds to use while connecting to inference server.\n - server_kwargs: The keyword arguments to pass to the inference server.\n - client: The client object used to communicate with the inference server.\n - async_client: The async client object used to communicate with the server.\n Methods:\n - _call: Generates text based on a given prompt and stop sequences.\n - _acall: Async generates text based on a given prompt and stop sequences.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} +{"id": "3fb41742b28b-1", "text": "- _acall: Async generates text based on a given prompt and stop sequences.\n - _llm_type: Returns the type of LLM.\n \"\"\"\n \"\"\"\n Example:\n .. 
code-block:: python\n # Basic Example (no streaming)\n llm = HuggingFaceTextGenInference(\n inference_server_url = \"http://localhost:8010/\",\n max_new_tokens = 512,\n top_k = 10,\n top_p = 0.95,\n typical_p = 0.95,\n temperature = 0.01,\n repetition_penalty = 1.03,\n )\n print(llm(\"What is Deep Learning?\"))\n \n # Streaming response example\n from langchain.callbacks import streaming_stdout\n \n callbacks = [streaming_stdout.StreamingStdOutCallbackHandler()]\n llm = HuggingFaceTextGenInference(\n inference_server_url = \"http://localhost:8010/\",\n max_new_tokens = 512,\n top_k = 10,\n top_p = 0.95,\n typical_p = 0.95,\n temperature = 0.01,\n repetition_penalty = 1.03,\n callbacks = callbacks,\n stream = True\n )\n print(llm(\"What is Deep Learning?\"))\n \n \"\"\"\n max_new_tokens: int = 512\n top_k: Optional[int] = None\n top_p: Optional[float] = 0.95\n typical_p: Optional[float] = 0.95\n temperature: float = 0.8\n repetition_penalty: Optional[float] = None\n stop_sequences: List[str] = Field(default_factory=list)\n seed: Optional[int] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} +{"id": "3fb41742b28b-2", "text": "seed: Optional[int] = None\n inference_server_url: str = \"\"\n timeout: int = 120\n server_kwargs: Dict[str, Any] = Field(default_factory=dict)\n stream: bool = False\n client: Any\n async_client: Any\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n import text_generation\n values[\"client\"] = text_generation.Client(\n values[\"inference_server_url\"],\n timeout=values[\"timeout\"],\n **values[\"server_kwargs\"],\n )\n values[\"async_client\"] = text_generation.AsyncClient(\n values[\"inference_server_url\"],\n timeout=values[\"timeout\"],\n **values[\"server_kwargs\"],\n )\n 
except ImportError:\n raise ImportError(\n \"Could not import text_generation python package. \"\n \"Please install it with `pip install text_generation`.\"\n )\n return values\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"huggingface_textgen_inference\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n if stop is None:\n stop = self.stop_sequences\n else:\n stop += self.stop_sequences\n if not self.stream:\n res = self.client.generate(\n prompt,\n stop_sequences=stop,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} +{"id": "3fb41742b28b-3", "text": "res = self.client.generate(\n prompt,\n stop_sequences=stop,\n max_new_tokens=self.max_new_tokens,\n top_k=self.top_k,\n top_p=self.top_p,\n typical_p=self.typical_p,\n temperature=self.temperature,\n repetition_penalty=self.repetition_penalty,\n seed=self.seed,\n **kwargs,\n )\n # remove stop sequences from the end of the generated text\n for stop_seq in stop:\n if stop_seq in res.generated_text:\n res.generated_text = res.generated_text[\n : res.generated_text.index(stop_seq)\n ]\n text = res.generated_text\n else:\n text_callback = None\n if run_manager:\n text_callback = partial(\n run_manager.on_llm_new_token, verbose=self.verbose\n )\n params = {\n \"stop_sequences\": stop,\n \"max_new_tokens\": self.max_new_tokens,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"typical_p\": self.typical_p,\n \"temperature\": self.temperature,\n \"repetition_penalty\": self.repetition_penalty,\n \"seed\": self.seed,\n }\n text = \"\"\n for res in self.client.generate_stream(prompt, **params):\n token = res.token\n is_stop = False\n for stop_seq in stop:\n if stop_seq in token.text:\n is_stop = True\n break\n if is_stop:\n break\n if not token.special:\n if text_callback:\n text_callback(token.text)\n text 
+= token.text\n return text\n async def _acall(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} +{"id": "3fb41742b28b-4", "text": "prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n if stop is None:\n stop = self.stop_sequences\n else:\n stop += self.stop_sequences\n if not self.stream:\n res = await self.async_client.generate(\n prompt,\n stop_sequences=stop,\n max_new_tokens=self.max_new_tokens,\n top_k=self.top_k,\n top_p=self.top_p,\n typical_p=self.typical_p,\n temperature=self.temperature,\n repetition_penalty=self.repetition_penalty,\n seed=self.seed,\n **kwargs,\n )\n # remove stop sequences from the end of the generated text\n for stop_seq in stop:\n if stop_seq in res.generated_text:\n res.generated_text = res.generated_text[\n : res.generated_text.index(stop_seq)\n ]\n text: str = res.generated_text\n else:\n text_callback = None\n if run_manager:\n text_callback = partial(\n run_manager.on_llm_new_token, verbose=self.verbose\n )\n params = {\n **{\n \"stop_sequences\": stop,\n \"max_new_tokens\": self.max_new_tokens,\n \"top_k\": self.top_k,\n \"top_p\": self.top_p,\n \"typical_p\": self.typical_p,\n \"temperature\": self.temperature,\n \"repetition_penalty\": self.repetition_penalty,\n \"seed\": self.seed,\n },\n **kwargs,\n }\n text = \"\"\n async for res in self.async_client.generate_stream(prompt, **params):\n token = res.token\n is_stop = False", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} +{"id": "3fb41742b28b-5", "text": "token = res.token\n is_stop = False\n for stop_seq in stop:\n if stop_seq in token.text:\n is_stop = True\n break\n if is_stop:\n break\n if not token.special:\n if text_callback:\n await text_callback(token.text)\n return text", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html"} +{"id": "f8cd799a2197-0", "text": "Source code for langchain.llms.manifest\n\"\"\"Wrapper around HazyResearch's Manifest library.\"\"\"\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\n[docs]class ManifestWrapper(LLM):\n \"\"\"Wrapper around HazyResearch's Manifest library.\"\"\"\n client: Any #: :meta private:\n llm_kwargs: Optional[Dict] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that python package exists in environment.\"\"\"\n try:\n from manifest import Manifest\n if not isinstance(values[\"client\"], Manifest):\n raise ValueError\n except ImportError:\n raise ValueError(\n \"Could not import manifest python package. 
\"\n \"Please install it with `pip install manifest-ml`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n kwargs = self.llm_kwargs or {}\n return {**self.client.client.get_model_params(), **kwargs}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"manifest\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to LLM through Manifest.\"\"\"\n if stop is not None and len(stop) != 1:\n raise NotImplementedError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/manifest.html"} +{"id": "f8cd799a2197-1", "text": "if stop is not None and len(stop) != 1:\n raise NotImplementedError(\n f\"Manifest currently only supports a single stop token, got {stop}\"\n )\n params = self.llm_kwargs or {}\n params = {**params, **kwargs}\n if stop is not None:\n params[\"stop_token\"] = stop\n return self.client.run(prompt, **params)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/manifest.html"} +{"id": "ae490ba62939-0", "text": "Source code for langchain.llms.pipelineai\n\"\"\"Wrapper around Pipeline Cloud API.\"\"\"\nimport logging\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class PipelineAI(LLM, BaseModel):\n \"\"\"Wrapper around PipelineAI large language models.\n To use, you should have the ``pipeline-ai`` python package installed,\n and the environment variable ``PIPELINE_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the call can be passed\n in, even if not explicitly 
saved on this class.\n Example:\n .. code-block:: python\n from langchain import PipelineAI\n pipeline = PipelineAI(pipeline_key=\"\")\n \"\"\"\n pipeline_key: str = \"\"\n \"\"\"The id or tag of the target pipeline\"\"\"\n pipeline_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any pipeline parameters valid for `create` call not\n explicitly specified.\"\"\"\n pipeline_api_key: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"pipeline_kwargs\", {})\n for field_name in list(values):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"} +{"id": "ae490ba62939-1", "text": "extra = values.get(\"pipeline_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to pipeline_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"pipeline_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exist in environment.\"\"\"\n pipeline_api_key = get_from_dict_or_env(\n values, \"pipeline_api_key\", \"PIPELINE_API_KEY\"\n )\n values[\"pipeline_api_key\"] = pipeline_api_key\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n **{\"pipeline_key\": self.pipeline_key},\n **{\"pipeline_kwargs\": self.pipeline_kwargs},\n }\n @property\n def 
_llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"pipeline_ai\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Pipeline Cloud endpoint.\"\"\"\n try:\n from pipeline import PipelineCloud\n except ImportError:\n raise ValueError(\n \"Could not import pipeline-ai python package. \"\n \"Please install it with `pip install pipeline-ai`.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"} +{"id": "ae490ba62939-2", "text": "\"Please install it with `pip install pipeline-ai`.\"\n )\n client = PipelineCloud(token=self.pipeline_api_key)\n params = self.pipeline_kwargs or {}\n params = {**params, **kwargs}\n run = client.run_pipeline(self.pipeline_key, [prompt, params])\n try:\n text = run.result_preview[0][0]\n except AttributeError:\n raise AttributeError(\n f\"A pipeline run should have a `result_preview` attribute.\"\n f\"Run was: {run}\"\n )\n if stop is not None:\n # I believe this is required since the stop tokens\n # are not enforced by the pipeline parameters\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/pipelineai.html"} +{"id": "f38ca04ac557-0", "text": "Source code for langchain.llms.sagemaker_endpoint\n\"\"\"Wrapper around Sagemaker InvokeEndpoint API.\"\"\"\nfrom abc import abstractmethod\nfrom typing import Any, Dict, Generic, List, Mapping, Optional, TypeVar, Union\nfrom pydantic import Extra, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nINPUT_TYPE = TypeVar(\"INPUT_TYPE\", bound=Union[str, List[str]])\nOUTPUT_TYPE = TypeVar(\"OUTPUT_TYPE\", bound=Union[str, List[List[float]]])\nclass ContentHandlerBase(Generic[INPUT_TYPE, OUTPUT_TYPE]):\n \"\"\"A handler 
class to transform input from LLM to a\n format that SageMaker endpoint expects. Similarly,\n the class also handles transforming output from the\n SageMaker endpoint to a format that LLM class expects.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n class ContentHandler(ContentHandlerBase):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"prompt\": prompt, **model_kwargs})\n return input_str.encode('utf-8')\n \n def transform_output(self, output: bytes) -> str:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[0][\"generated_text\"]\n \"\"\"\n content_type: Optional[str] = \"text/plain\"\n \"\"\"The MIME type of the input data passed to endpoint\"\"\"\n @abstractmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} +{"id": "f38ca04ac557-1", "text": "\"\"\"The MIME type of the response data returned from endpoint\"\"\"\n @abstractmethod\n def transform_input(self, prompt: INPUT_TYPE, model_kwargs: Dict) -> bytes:\n \"\"\"Transforms the input to a format that model can accept\n as the request Body. 
Should return bytes or seekable file\n like object in the format specified in the content_type\n request header.\n \"\"\"\n @abstractmethod\n def transform_output(self, output: bytes) -> OUTPUT_TYPE:\n \"\"\"Transforms the output from the model to string that\n the LLM class expects.\n \"\"\"\nclass LLMContentHandler(ContentHandlerBase[str, str]):\n \"\"\"Content handler for LLM class.\"\"\"\n[docs]class SagemakerEndpoint(LLM):\n \"\"\"Wrapper around custom Sagemaker Inference Endpoints.\n To use, you must supply the endpoint name from your deployed\n Sagemaker model & the region where it is deployed.\n To authenticate, the AWS client uses the following methods to\n automatically load credentials:\n https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n If a specific credential profile should be used, you must pass\n the name of the profile from the ~/.aws/credentials file that is to be used.\n Make sure the credentials / roles used have the required policies to\n access the Sagemaker endpoint.\n See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain import SagemakerEndpoint\n endpoint_name = (\n \"my-endpoint-name\"\n )\n region_name = (\n \"us-west-2\"\n )\n credentials_profile_name = (\n \"default\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} +{"id": "f38ca04ac557-2", "text": ")\n credentials_profile_name = (\n \"default\"\n )\n se = SagemakerEndpoint(\n endpoint_name=endpoint_name,\n region_name=region_name,\n credentials_profile_name=credentials_profile_name\n )\n \"\"\"\n client: Any #: :meta private:\n endpoint_name: str = \"\"\n \"\"\"The name of the endpoint from the deployed Sagemaker model.\n Must be unique within an AWS Region.\"\"\"\n region_name: str = \"\"\n \"\"\"The aws region where the Sagemaker model is deployed, eg. 
`us-west-2`.\"\"\"\n credentials_profile_name: Optional[str] = None\n \"\"\"The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which\n has either access keys or role information specified.\n If not specified, the default credential profile or, if on an EC2 instance,\n credentials from IMDS will be used.\n See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html\n \"\"\"\n content_handler: LLMContentHandler\n \"\"\"The content handler class that provides an input and\n output transform functions to handle formats between LLM\n and the endpoint.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n from langchain.llms.sagemaker_endpoint import LLMContentHandler\n class ContentHandler(LLMContentHandler):\n content_type = \"application/json\"\n accepts = \"application/json\"\n def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"prompt\": prompt, **model_kwargs})\n return input_str.encode('utf-8')\n \n def transform_output(self, output: bytes) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} +{"id": "f38ca04ac557-3", "text": "def transform_output(self, output: bytes) -> str:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[0][\"generated_text\"]\n \"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n endpoint_kwargs: Optional[Dict] = None\n \"\"\"Optional attributes passed to the invoke_endpoint\n function. See `boto3`_ docs for more info.\n .. 
_boto3: \n \"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that AWS credentials to and python package exists in environment.\"\"\"\n try:\n import boto3\n try:\n if values[\"credentials_profile_name\"] is not None:\n session = boto3.Session(\n profile_name=values[\"credentials_profile_name\"]\n )\n else:\n # use default credentials\n session = boto3.Session()\n values[\"client\"] = session.client(\n \"sagemaker-runtime\", region_name=values[\"region_name\"]\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. \"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n except ImportError:\n raise ImportError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"\n )\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} +{"id": "f38ca04ac557-4", "text": "@property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_name\": self.endpoint_name},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"sagemaker_endpoint\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Sagemaker inference endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = se(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n _model_kwargs = {**_model_kwargs, **kwargs}\n _endpoint_kwargs = self.endpoint_kwargs or {}\n body = self.content_handler.transform_input(prompt, _model_kwargs)\n content_type = self.content_handler.content_type\n accepts = self.content_handler.accepts\n # send request\n try:\n response = self.client.invoke_endpoint(\n EndpointName=self.endpoint_name,\n Body=body,\n ContentType=content_type,\n Accept=accepts,\n **_endpoint_kwargs,\n )\n except Exception as e:\n raise ValueError(f\"Error raised by inference endpoint: {e}\")\n text = self.content_handler.transform_output(response[\"Body\"])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} +{"id": "f38ca04ac557-5", "text": "text = self.content_handler.transform_output(response[\"Body\"])\n if stop is not None:\n # This is a bit hacky, but I can't figure out a better way to enforce\n # stop tokens when making calls to the sagemaker endpoint.\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html"} +{"id": "e2037d2aa88a-0", "text": "Source code for langchain.llms.llamacpp\n\"\"\"Wrapper around llama.cpp.\"\"\"\nimport logging\nfrom typing import Any, Dict, Generator, List, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nlogger = logging.getLogger(__name__)\n[docs]class LlamaCpp(LLM):\n \"\"\"Wrapper around the llama.cpp model.\n To use, you should have the llama-cpp-python library installed, and provide the\n path to the Llama model as a named parameter to the constructor.\n Check out: https://github.com/abetlen/llama-cpp-python\n Example:\n .. 
code-block:: python\n from langchain.llms import LlamaCpp\n llm = LlamaCpp(model_path=\"/path/to/llama/model\")\n \"\"\"\n client: Any #: :meta private:\n model_path: str\n \"\"\"The path to the Llama model file.\"\"\"\n lora_base: Optional[str] = None\n \"\"\"The path to the Llama LoRA base model.\"\"\"\n lora_path: Optional[str] = None\n \"\"\"The path to the Llama LoRA. If None, no LoRA is loaded.\"\"\"\n n_ctx: int = Field(512, alias=\"n_ctx\")\n \"\"\"Token context window.\"\"\"\n n_parts: int = Field(-1, alias=\"n_parts\")\n \"\"\"Number of parts to split the model into.\n If -1, the number of parts is automatically determined.\"\"\"\n seed: int = Field(-1, alias=\"seed\")\n \"\"\"Seed. If -1, a random seed is used.\"\"\"\n f16_kv: bool = Field(True, alias=\"f16_kv\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} +{"id": "e2037d2aa88a-1", "text": "f16_kv: bool = Field(True, alias=\"f16_kv\")\n \"\"\"Use half-precision for key/value cache.\"\"\"\n logits_all: bool = Field(False, alias=\"logits_all\")\n \"\"\"Return logits for all tokens, not just the last token.\"\"\"\n vocab_only: bool = Field(False, alias=\"vocab_only\")\n \"\"\"Only load the vocabulary, no weights.\"\"\"\n use_mlock: bool = Field(False, alias=\"use_mlock\")\n \"\"\"Force system to keep model in RAM.\"\"\"\n n_threads: Optional[int] = Field(None, alias=\"n_threads\")\n \"\"\"Number of threads to use.\n If None, the number of threads is automatically determined.\"\"\"\n n_batch: Optional[int] = Field(8, alias=\"n_batch\")\n \"\"\"Number of tokens to process in parallel.\n Should be a number between 1 and n_ctx.\"\"\"\n n_gpu_layers: Optional[int] = Field(None, alias=\"n_gpu_layers\")\n \"\"\"Number of layers to be loaded into gpu memory. Default None.\"\"\"\n suffix: Optional[str] = Field(None)\n \"\"\"A suffix to append to the generated text. 
If None, no suffix is appended.\"\"\"\n max_tokens: Optional[int] = 256\n \"\"\"The maximum number of tokens to generate.\"\"\"\n temperature: Optional[float] = 0.8\n \"\"\"The temperature to use for sampling.\"\"\"\n top_p: Optional[float] = 0.95\n \"\"\"The top-p value to use for sampling.\"\"\"\n logprobs: Optional[int] = Field(None)\n \"\"\"The number of logprobs to return. If None, no logprobs are returned.\"\"\"\n echo: Optional[bool] = False\n \"\"\"Whether to echo the prompt.\"\"\"\n stop: Optional[List[str]] = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} +{"id": "e2037d2aa88a-2", "text": "\"\"\"Whether to echo the prompt.\"\"\"\n stop: Optional[List[str]] = []\n \"\"\"A list of strings to stop generation when encountered.\"\"\"\n repeat_penalty: Optional[float] = 1.1\n \"\"\"The penalty to apply to repeated tokens.\"\"\"\n top_k: Optional[int] = 40\n \"\"\"The top-k value to use for sampling.\"\"\"\n last_n_tokens_size: Optional[int] = 64\n \"\"\"The number of tokens to look back when applying the repeat_penalty.\"\"\"\n use_mmap: Optional[bool] = True\n \"\"\"Whether to keep the model loaded in RAM\"\"\"\n streaming: bool = True\n \"\"\"Whether to stream the results, token by token.\"\"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that llama-cpp-python library is installed.\"\"\"\n model_path = values[\"model_path\"]\n model_param_names = [\n \"lora_path\",\n \"lora_base\",\n \"n_ctx\",\n \"n_parts\",\n \"seed\",\n \"f16_kv\",\n \"logits_all\",\n \"vocab_only\",\n \"use_mlock\",\n \"n_threads\",\n \"n_batch\",\n \"use_mmap\",\n \"last_n_tokens_size\",\n ]\n model_params = {k: values[k] for k in model_param_names}\n # For backwards compatibility, only include if non-null.\n if values[\"n_gpu_layers\"] is not None:\n model_params[\"n_gpu_layers\"] = values[\"n_gpu_layers\"]\n try:\n from llama_cpp import Llama\n values[\"client\"] = Llama(model_path, 
**model_params)\n except ImportError:\n raise ModuleNotFoundError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} +{"id": "e2037d2aa88a-3", "text": "except ImportError:\n raise ModuleNotFoundError(\n \"Could not import llama-cpp-python library. \"\n \"Please install the llama-cpp-python library to \"\n \"use this model: pip install llama-cpp-python\"\n )\n except Exception as e:\n raise ValueError(\n f\"Could not load Llama model from path: {model_path}. \"\n f\"Received error {e}\"\n )\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling llama_cpp.\"\"\"\n return {\n \"suffix\": self.suffix,\n \"max_tokens\": self.max_tokens,\n \"temperature\": self.temperature,\n \"top_p\": self.top_p,\n \"logprobs\": self.logprobs,\n \"echo\": self.echo,\n \"stop_sequences\": self.stop, # key here is convention among LLM classes\n \"repeat_penalty\": self.repeat_penalty,\n \"top_k\": self.top_k,\n }\n @property\n def _identifying_params(self) -> Dict[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_path\": self.model_path}, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"llamacpp\"\n def _get_parameters(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:\n \"\"\"\n Performs sanity check, preparing parameters in format needed by llama_cpp.\n Args:\n stop (Optional[List[str]]): List of stop sequences for llama_cpp.\n Returns:\n Dictionary containing the combined parameters.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} +{"id": "e2037d2aa88a-4", "text": "Returns:\n Dictionary containing the combined parameters.\n \"\"\"\n # Raise error if stop sequences are in both input and default params\n if self.stop and stop is not None:\n raise ValueError(\"`stop` found in both the input and default params.\")\n 
params = self._default_params\n # llama_cpp expects the \"stop\" key not this, so we remove it:\n params.pop(\"stop_sequences\")\n # then set it as configured, or default to an empty list:\n params[\"stop\"] = self.stop or stop or []\n return params\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call the Llama model and return the output.\n Args:\n prompt: The prompt to use for generation.\n stop: A list of strings to stop generation when encountered.\n Returns:\n The generated text.\n Example:\n .. code-block:: python\n from langchain.llms import LlamaCpp\n llm = LlamaCpp(model_path=\"/path/to/local/llama/model.bin\")\n llm(\"This is a prompt.\")\n \"\"\"\n if self.streaming:\n # If streaming is enabled, we use the stream\n # method that yields as they are generated\n # and return the combined strings from the first choice's text:\n combined_text_output = \"\"\n for token in self.stream(prompt=prompt, stop=stop, run_manager=run_manager):\n combined_text_output += token[\"choices\"][0][\"text\"]\n return combined_text_output\n else:\n params = self._get_parameters(stop)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} +{"id": "e2037d2aa88a-5", "text": "return combined_text_output\n else:\n params = self._get_parameters(stop)\n params = {**params, **kwargs}\n result = self.client(prompt=prompt, **params)\n return result[\"choices\"][0][\"text\"]\n[docs] def stream(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n ) -> Generator[Dict, None, None]:\n \"\"\"Yields result objects as they are generated in real time.\n BETA: this is a beta feature while we figure out the right abstraction.\n Once that happens, this interface could change.\n It also calls the callback manager's on_llm_new_token event with\n similar parameters to the 
OpenAI LLM class method of the same name.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n A generator representing the stream of tokens being generated.\n Yields:\n Dictionary-like objects containing a string token and metadata.\n See llama-cpp-python docs and below for more.\n Example:\n .. code-block:: python\n from langchain.llms import LlamaCpp\n llm = LlamaCpp(\n model_path=\"/path/to/local/model.bin\",\n temperature = 0.5\n )\n for chunk in llm.stream(\"Ask 'Hi, how are you?' like a pirate:'\",\n stop=[\"'\",\"\\n\"]):\n result = chunk[\"choices\"][0]\n print(result[\"text\"], end='', flush=True)\n \"\"\"\n params = self._get_parameters(stop)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} +{"id": "e2037d2aa88a-6", "text": "\"\"\"\n params = self._get_parameters(stop)\n result = self.client(prompt=prompt, stream=True, **params)\n for chunk in result:\n token = chunk[\"choices\"][0][\"text\"]\n log_probs = chunk[\"choices\"][0].get(\"logprobs\", None)\n if run_manager:\n run_manager.on_llm_new_token(\n token=token, verbose=self.verbose, log_probs=log_probs\n )\n yield chunk\n[docs] def get_num_tokens(self, text: str) -> int:\n tokenized_text = self.client.tokenize(text.encode(\"utf-8\"))\n return len(tokenized_text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html"} +{"id": "d1f654c4abbb-0", "text": "Source code for langchain.llms.azureml_endpoint\n\"\"\"Wrapper around AzureML Managed Online Endpoint API.\"\"\"\nimport json\nimport urllib.request\nfrom abc import abstractmethod\nfrom typing import Any, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nclass AzureMLEndpointClient(object):\n \"\"\"Wrapper around 
AzureML Managed Online Endpoint Client.\"\"\"\n def __init__(\n self, endpoint_url: str, endpoint_api_key: str, deployment_name: str\n ) -> None:\n \"\"\"Initialize the class.\"\"\"\n if not endpoint_api_key:\n raise ValueError(\"A key should be provided to invoke the endpoint\")\n self.endpoint_url = endpoint_url\n self.endpoint_api_key = endpoint_api_key\n self.deployment_name = deployment_name\n def call(self, body: bytes) -> bytes:\n \"\"\"call.\"\"\"\n # The azureml-model-deployment header will force the request to go to a\n # specific deployment. Remove this header to have the request observe the\n # endpoint traffic rules.\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": (\"Bearer \" + self.endpoint_api_key),\n \"azureml-model-deployment\": self.deployment_name,\n }\n req = urllib.request.Request(self.endpoint_url, body, headers)\n response = urllib.request.urlopen(req, timeout=50)\n result = response.read()\n return result\nclass ContentFormatterBase:\n \"\"\"A handler class to transform the request and response of\n an AzureML endpoint to match the required schema.\n \"\"\"\n \"\"\"\n Example:\n .. code-block:: python\n \n class ContentFormatter(ContentFormatterBase):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} +{"id": "d1f654c4abbb-1", "text": ".. 
code-block:: python\n \n class ContentFormatter(ContentFormatterBase):\n content_type = \"application/json\"\n accepts = \"application/json\"\n \n def format_request_payload(\n self, \n prompt: str, \n model_kwargs: Dict\n ) -> bytes:\n input_str = json.dumps(\n {\n \"inputs\": {\"input_string\": [prompt]}, \n \"parameters\": model_kwargs,\n }\n )\n return str.encode(input_str)\n \n def format_response_payload(self, output: str) -> str:\n response_json = json.loads(output)\n return response_json[0][\"0\"]\n \"\"\"\n content_type: Optional[str] = \"application/json\"\n \"\"\"The MIME type of the input data passed to the endpoint\"\"\"\n accepts: Optional[str] = \"application/json\"\n \"\"\"The MIME type of the response data returned from the endpoint\"\"\"\n @abstractmethod\n def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n \"\"\"Formats the request body according to the input schema of\n the model. Returns bytes or a seekable file-like object in the\n format specified in the content_type request header.\n \"\"\"\n @abstractmethod\n def format_response_payload(self, output: bytes) -> str:\n \"\"\"Formats the response body according to the output\n schema of the model. 
Returns the data type that is\n received from the response.\n \"\"\"\nclass OSSContentFormatter(ContentFormatterBase):\n \"\"\"Content handler for LLMs from the OSS catalog.\"\"\"\n def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps(\n {\"inputs\": {\"input_string\": [prompt]}, \"parameters\": model_kwargs}\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} +{"id": "d1f654c4abbb-2", "text": ")\n return str.encode(input_str)\n def format_response_payload(self, output: bytes) -> str:\n response_json = json.loads(output)\n return response_json[0][\"0\"]\nclass HFContentFormatter(ContentFormatterBase):\n \"\"\"Content handler for LLMs from the HuggingFace catalog.\"\"\"\n def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({\"inputs\": [prompt], \"parameters\": model_kwargs})\n return str.encode(input_str)\n def format_response_payload(self, output: bytes) -> str:\n response_json = json.loads(output)\n return response_json[0][0][\"generated_text\"]\nclass DollyContentFormatter(ContentFormatterBase):\n \"\"\"Content handler for the Dolly-v2-12b model\"\"\"\n def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps(\n {\"input_data\": {\"input_string\": [prompt]}, \"parameters\": model_kwargs}\n )\n return str.encode(input_str)\n def format_response_payload(self, output: bytes) -> str:\n response_json = json.loads(output)\n return response_json[0]\n[docs]class AzureMLOnlineEndpoint(LLM, BaseModel):\n \"\"\"Wrapper around Azure ML Hosted models using Managed Online Endpoints.\n Example:\n .. 
code-block:: python\n azure_llm = AzureMLOnlineEndpoint(\n endpoint_url=\"https://..inference.ml.azure.com/score\",\n endpoint_api_key=\"my-api-key\",\n deployment_name=\"my-deployment-name\",\n content_formatter=content_formatter,\n )\n \"\"\" # noqa: E501\n endpoint_url: str = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} +{"id": "d1f654c4abbb-3", "text": ")\n \"\"\" # noqa: E501\n endpoint_url: str = \"\"\n \"\"\"URL of pre-existing Endpoint. Should be passed to constructor or specified as \n env var `AZUREML_ENDPOINT_URL`.\"\"\"\n endpoint_api_key: str = \"\"\n \"\"\"Authentication Key for Endpoint. Should be passed to constructor or specified as\n env var `AZUREML_ENDPOINT_API_KEY`.\"\"\"\n deployment_name: str = \"\"\n \"\"\"Deployment Name for Endpoint. Should be passed to constructor or specified as\n env var `AZUREML_DEPLOYMENT_NAME`.\"\"\"\n http_client: Any = None #: :meta private:\n content_formatter: Any = None\n \"\"\"The content formatter that provides an input and output\n transform function to handle formats between the LLM and\n the endpoint\"\"\"\n model_kwargs: Optional[dict] = None\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n @validator(\"http_client\", always=True, allow_reuse=True)\n @classmethod\n def validate_client(cls, field_value: Any, values: Dict) -> AzureMLEndpointClient:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n endpoint_key = get_from_dict_or_env(\n values, \"endpoint_api_key\", \"AZUREML_ENDPOINT_API_KEY\"\n )\n endpoint_url = get_from_dict_or_env(\n values, \"endpoint_url\", \"AZUREML_ENDPOINT_URL\"\n )\n deployment_name = get_from_dict_or_env(\n values, \"deployment_name\", \"AZUREML_DEPLOYMENT_NAME\"\n )\n http_client = AzureMLEndpointClient(endpoint_url, endpoint_key, deployment_name)\n return http_client\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} +{"id": "d1f654c4abbb-4", "text": "\"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"deployment_name\": self.deployment_name},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"azureml_endpoint\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any\n ) -> str:\n \"\"\"Call out to an AzureML Managed Online endpoint.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. code-block:: python\n response = azureml_model(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n body = self.content_formatter.format_request_payload(prompt, _model_kwargs)\n endpoint_response = self.http_client.call(body)\n response = self.content_formatter.format_response_payload(endpoint_response)\n return response", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/azureml_endpoint.html"} +{"id": "28d1eb6acef6-0", "text": "Source code for langchain.llms.amazon_api_gateway\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.llms.utils import enforce_stop_tokens\nclass ContentHandlerAmazonAPIGateway:\n \"\"\"Adapter class to prepare the inputs from Langchain to a format\n that LLM model expects. 
Also provides a helper function to extract\n the generated text from the model response.\"\"\"\n @classmethod\n def transform_input(\n cls, prompt: str, model_kwargs: Dict[str, Any]\n ) -> Dict[str, Any]:\n return {\"inputs\": prompt, \"parameters\": model_kwargs}\n @classmethod\n def transform_output(cls, response: Any) -> str:\n return response.json()[0][\"generated_text\"]\n[docs]class AmazonAPIGateway(LLM):\n \"\"\"Wrapper around custom Amazon API Gateway\"\"\"\n api_url: str\n \"\"\"API Gateway URL\"\"\"\n model_kwargs: Optional[Dict] = None\n \"\"\"Keyword arguments to pass to the model.\"\"\"\n content_handler: ContentHandlerAmazonAPIGateway = ContentHandlerAmazonAPIGateway()\n \"\"\"The content handler class that provides input and\n output transform functions to handle formats between LLM\n and the endpoint.\n \"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n _model_kwargs = self.model_kwargs or {}\n return {\n **{\"endpoint_name\": self.api_url},\n **{\"model_kwargs\": _model_kwargs},\n }\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/amazon_api_gateway.html"} +{"id": "28d1eb6acef6-1", "text": "**{\"model_kwargs\": _model_kwargs},\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"amazon_api_gateway\"\n def _call(\n self,\n prompt: str,\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call out to Amazon API Gateway model.\n Args:\n prompt: The prompt to pass into the model.\n stop: Optional list of stop words to use when generating.\n Returns:\n The string generated by the model.\n Example:\n .. 
code-block:: python\n response = llm(\"Tell me a joke.\")\n \"\"\"\n _model_kwargs = self.model_kwargs or {}\n payload = self.content_handler.transform_input(prompt, _model_kwargs)\n try:\n response = requests.post(\n self.api_url,\n json=payload,\n )\n text = self.content_handler.transform_output(response)\n except Exception as error:\n raise ValueError(f\"Error raised by the service: {error}\")\n if stop is not None:\n text = enforce_stop_tokens(text, stop)\n return text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/amazon_api_gateway.html"} +{"id": "a7214af167b7-0", "text": "Source code for langchain.llms.beam\n\"\"\"Wrapper around Beam API.\"\"\"\nimport base64\nimport json\nimport logging\nimport subprocess\nimport textwrap\nimport time\nfrom typing import Any, Dict, List, Mapping, Optional\nimport requests\nfrom pydantic import Extra, Field, root_validator\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.llms.base import LLM\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\nDEFAULT_NUM_TRIES = 10\nDEFAULT_SLEEP_TIME = 4\n[docs]class Beam(LLM):\n \"\"\"Wrapper around Beam API for gpt2 large language model.\n To use, you should have the ``beam-sdk`` python package installed,\n and the environment variable ``BEAM_CLIENT_ID`` set with your client id\n and ``BEAM_CLIENT_SECRET`` set with your client secret. Information on how\n to get these is available here: https://docs.beam.cloud/account/api-keys.\n The wrapper can then be called as follows, where the name, cpu, memory, gpu,\n python version, and python packages can be updated accordingly. Once deployed,\n the instance can be called.\n Example:\n .. 
code-block:: python\n llm = Beam(model_name=\"gpt2\",\n name=\"langchain-gpt2\",\n cpu=8,\n memory=\"32Gi\",\n gpu=\"A10G\",\n python_version=\"python3.8\",\n python_packages=[\n \"diffusers[torch]>=0.10\",\n \"transformers\",\n \"torch\",\n \"pillow\",\n \"accelerate\",\n \"safetensors\",\n \"xformers\",],\n max_length=50)\n llm._deploy()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} +{"id": "a7214af167b7-1", "text": "max_length=50)\n llm._deploy()\n call_result = llm._call(input)\n \"\"\"\n model_name: str = \"\"\n name: str = \"\"\n cpu: str = \"\"\n memory: str = \"\"\n gpu: str = \"\"\n python_version: str = \"\"\n python_packages: List[str] = []\n max_length: str = \"\"\n url: str = \"\"\n \"\"\"model endpoint to use\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not\n explicitly specified.\"\"\"\n beam_client_id: str = \"\"\n beam_client_secret: str = \"\"\n app_id: Optional[str] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = {field.alias for field in cls.__fields__.values()}\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name not in all_required_field_names:\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n logger.warning(\n f\"\"\"{field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} +{"id": 
"a7214af167b7-2", "text": "@root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n beam_client_id = get_from_dict_or_env(\n values, \"beam_client_id\", \"BEAM_CLIENT_ID\"\n )\n beam_client_secret = get_from_dict_or_env(\n values, \"beam_client_secret\", \"BEAM_CLIENT_SECRET\"\n )\n values[\"beam_client_id\"] = beam_client_id\n values[\"beam_client_secret\"] = beam_client_secret\n return values\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_name\": self.model_name,\n \"name\": self.name,\n \"cpu\": self.cpu,\n \"memory\": self.memory,\n \"gpu\": self.gpu,\n \"python_version\": self.python_version,\n \"python_packages\": self.python_packages,\n \"max_length\": self.max_length,\n \"model_kwargs\": self.model_kwargs,\n }\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of llm.\"\"\"\n return \"beam\"\n[docs] def app_creation(self) -> None:\n \"\"\"Creates a Python file which will contain your Beam app definition.\"\"\"\n script = textwrap.dedent(\n \"\"\"\\\n import beam\n # The environment your code will run on\n app = beam.App(\n name=\"{name}\",\n cpu={cpu},\n memory=\"{memory}\",\n gpu=\"{gpu}\",\n python_version=\"{python_version}\",\n python_packages={python_packages},\n )\n app.Trigger.RestAPI(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} +{"id": "a7214af167b7-3", "text": "python_packages={python_packages},\n )\n app.Trigger.RestAPI(\n inputs={{\"prompt\": beam.Types.String(), \"max_length\": beam.Types.String()}},\n outputs={{\"text\": beam.Types.String()}},\n handler=\"run.py:beam_langchain\",\n )\n \"\"\"\n )\n script_name = \"app.py\"\n with open(script_name, \"w\") as file:\n file.write(\n script.format(\n name=self.name,\n cpu=self.cpu,\n memory=self.memory,\n gpu=self.gpu,\n python_version=self.python_version,\n 
python_packages=self.python_packages,\n )\n )\n[docs] def run_creation(self) -> None:\n \"\"\"Creates a Python file which will be deployed on beam.\"\"\"\n script = textwrap.dedent(\n \"\"\"\n import os\n import transformers\n from transformers import GPT2LMHeadModel, GPT2Tokenizer\n model_name = \"{model_name}\"\n def beam_langchain(**inputs):\n prompt = inputs[\"prompt\"]\n length = inputs[\"max_length\"]\n tokenizer = GPT2Tokenizer.from_pretrained(model_name)\n model = GPT2LMHeadModel.from_pretrained(model_name)\n encodedPrompt = tokenizer.encode(prompt, return_tensors='pt')\n outputs = model.generate(encodedPrompt, max_length=int(length),\n do_sample=True, pad_token_id=tokenizer.eos_token_id)\n output = tokenizer.decode(outputs[0], skip_special_tokens=True)\n print(output)\n return {{\"text\": output}}\n \"\"\"\n )\n script_name = \"run.py\"\n with open(script_name, \"w\") as file:\n file.write(script.format(model_name=self.model_name))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} +{"id": "a7214af167b7-4", "text": "file.write(script.format(model_name=self.model_name))\n def _deploy(self) -> str:\n \"\"\"Call to Beam.\"\"\"\n try:\n import beam # type: ignore\n if beam.__path__ == \"\":\n raise ImportError\n except ImportError:\n raise ImportError(\n \"Could not import beam python package. 
\"\n \"Please install it with `curl \"\n \"https://raw.githubusercontent.com/slai-labs\"\n \"/get-beam/main/get-beam.sh -sSfL | sh`.\"\n )\n self.app_creation()\n self.run_creation()\n process = subprocess.run(\n \"beam deploy app.py\", shell=True, capture_output=True, text=True\n )\n if process.returncode == 0:\n output = process.stdout\n logger.info(output)\n lines = output.split(\"\\n\")\n for line in lines:\n if line.startswith(\" i Send requests to: https://apps.beam.cloud/\"):\n self.app_id = line.split(\"/\")[-1]\n self.url = line.split(\":\")[1].strip()\n return self.app_id\n raise ValueError(\n f\"\"\"Failed to retrieve the appID from the deployment output.\n Deployment output: {output}\"\"\"\n )\n else:\n raise ValueError(f\"Deployment failed. Error: {process.stderr}\")\n @property\n def authorization(self) -> str:\n if self.beam_client_id:\n credential_str = self.beam_client_id + \":\" + self.beam_client_secret\n else:\n credential_str = self.beam_client_secret\n return base64.b64encode(credential_str.encode()).decode()\n def _call(\n self,\n prompt: str,\n stop: Optional[list] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} +{"id": "a7214af167b7-5", "text": "self,\n prompt: str,\n stop: Optional[list] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Call to Beam.\"\"\"\n url = \"https://apps.beam.cloud/\" + self.app_id if self.app_id else self.url\n payload = {\"prompt\": prompt, \"max_length\": self.max_length}\n payload.update(kwargs)\n headers = {\n \"Accept\": \"*/*\",\n \"Accept-Encoding\": \"gzip, deflate\",\n \"Authorization\": \"Basic \" + self.authorization,\n \"Connection\": \"keep-alive\",\n \"Content-Type\": \"application/json\",\n }\n for _ in range(DEFAULT_NUM_TRIES):\n request = requests.post(url, headers=headers, data=json.dumps(payload))\n if request.status_code == 200:\n return request.json()[\"text\"]\n 
time.sleep(DEFAULT_SLEEP_TIME)\n logger.warning(\"Unable to successfully call model.\")\n return \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/llms/beam.html"} +{"id": "5fe6e6d8f65c-0", "text": "Source code for langchain.callbacks.clearml_callback\nimport tempfile\nfrom copy import deepcopy\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n hash_string,\n import_pandas,\n import_spacy,\n import_textstat,\n load_json,\n)\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\ndef import_clearml() -> Any:\n \"\"\"Import the clearml python package and raise an error if it is not installed.\"\"\"\n try:\n import clearml # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the clearml callback manager you need to have the `clearml` python \"\n \"package installed. 
Please install it with `pip install clearml`\"\n )\n return clearml\n[docs]class ClearMLCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs to ClearML.\n Parameters:\n task_type (str): The type of clearml task such as \"inference\", \"testing\" or \"qc\"\n project_name (str): The clearml project name\n tags (list): Tags to add to the task\n task_name (str): Name of the clearml task\n visualize (bool): Whether to visualize the run.\n complexity_metrics (bool): Whether to log complexity metrics\n stream_logs (bool): Whether to stream callback actions to ClearML\n This handler will utilize the associated callback method and formats\n the input of each callback function with metadata regarding the state of LLM run,\n and adds the response to the list of records for both the {method}_records and", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-1", "text": "and adds the response to the list of records for both the {method}_records and\n action. It then logs the response to the ClearML console.\n \"\"\"\n def __init__(\n self,\n task_type: Optional[str] = \"inference\",\n project_name: Optional[str] = \"langchain_callback_demo\",\n tags: Optional[Sequence] = None,\n task_name: Optional[str] = None,\n visualize: bool = False,\n complexity_metrics: bool = False,\n stream_logs: bool = False,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n clearml = import_clearml()\n spacy = import_spacy()\n super().__init__()\n self.task_type = task_type\n self.project_name = project_name\n self.tags = tags\n self.task_name = task_name\n self.visualize = visualize\n self.complexity_metrics = complexity_metrics\n self.stream_logs = stream_logs\n self.temp_dir = tempfile.TemporaryDirectory()\n # Check if ClearML task already exists (e.g. 
in pipeline)\n if clearml.Task.current_task():\n self.task = clearml.Task.current_task()\n else:\n self.task = clearml.Task.init( # type: ignore\n task_type=self.task_type,\n project_name=self.project_name,\n tags=self.tags,\n task_name=self.task_name,\n output_uri=True,\n )\n self.logger = self.task.get_logger()\n warning = (\n \"The clearml callback is currently in beta and is subject to change \"\n \"based on updates to `langchain`. Please report any issues to \"\n \"https://github.com/allegroai/clearml/issues with the tag `langchain`.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-2", "text": ")\n self.logger.report_text(warning, level=30, print_console=True)\n self.callback_columns: list = []\n self.action_records: list = []\n self.complexity_metrics = complexity_metrics\n self.visualize = visualize\n self.nlp = spacy.load(\"en_core_web_sm\")\n def _init_resp(self) -> Dict:\n return {k: None for k in self.callback_columns}\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.step += 1\n self.llm_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n for prompt in prompts:\n prompt_resp = deepcopy(resp)\n prompt_resp[\"prompts\"] = prompt\n self.on_llm_start_records.append(prompt_resp)\n self.action_records.append(prompt_resp)\n if self.stream_logs:\n self.logger.report_text(prompt_resp)\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.step += 1\n self.llm_streams += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_new_token\", \"token\": token})\n resp.update(self.get_custom_callback_meta())\n self.on_llm_token_records.append(resp)\n 
self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-3", "text": "if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.step += 1\n self.llm_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_end\"})\n resp.update(flatten_dict(response.llm_output or {}))\n resp.update(self.get_custom_callback_meta())\n for generations in response.generations:\n for generation in generations:\n generation_resp = deepcopy(resp)\n generation_resp.update(flatten_dict(generation.dict()))\n generation_resp.update(self.analyze_text(generation.text))\n self.on_llm_end_records.append(generation_resp)\n self.action_records.append(generation_resp)\n if self.stream_logs:\n self.logger.report_text(generation_resp)\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n chain_input = inputs[\"input\"]\n if isinstance(chain_input, str):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-4", "text": "chain_input = inputs[\"input\"]\n if isinstance(chain_input, str):\n input_resp = deepcopy(resp)\n input_resp[\"input\"] = chain_input\n self.on_chain_start_records.append(input_resp)\n 
self.action_records.append(input_resp)\n if self.stream_logs:\n self.logger.report_text(input_resp)\n elif isinstance(chain_input, list):\n for inp in chain_input:\n input_resp = deepcopy(resp)\n input_resp.update(inp)\n self.on_chain_start_records.append(input_resp)\n self.action_records.append(input_resp)\n if self.stream_logs:\n self.logger.report_text(input_resp)\n else:\n raise ValueError(\"Unexpected data format provided!\")\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_end\", \"outputs\": outputs[\"output\"]})\n resp.update(self.get_custom_callback_meta())\n self.on_chain_end_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.step += 1\n self.tool_starts += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-5", "text": "self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_start\", \"input_str\": input_str})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n self.on_tool_start_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.step += 1\n self.tool_ends += 1\n self.ends += 1\n resp = self._init_resp()\n 
resp.update({\"action\": \"on_tool_end\", \"output\": output})\n resp.update(self.get_custom_callback_meta())\n self.on_tool_end_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_text\", \"text\": text})\n resp.update(self.get_custom_callback_meta())\n self.on_text_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-6", "text": "if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n self.step += 1\n self.agent_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update(\n {\n \"action\": \"on_agent_finish\",\n \"output\": finish.return_values[\"output\"],\n \"log\": finish.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.on_agent_finish_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update(\n {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n \"tool_input\": action.tool_input,\n \"log\": action.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.on_agent_action_records.append(resp)\n 
self.action_records.append(resp)\n if self.stream_logs:\n self.logger.report_text(resp)\n[docs] def analyze_text(self, text: str) -> dict:\n \"\"\"Analyze text using textstat and spacy.\n Parameters:\n text (str): The text to analyze.\n Returns:\n (dict): A dictionary containing the complexity metrics.\n \"\"\"\n resp = {}\n textstat = import_textstat()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-7", "text": "\"\"\"\n resp = {}\n textstat = import_textstat()\n spacy = import_spacy()\n if self.complexity_metrics:\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),\n \"automated_readability_index\": textstat.automated_readability_index(\n text\n ),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(\n text\n ),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n \"text_standard\": textstat.text_standard(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n resp.update(text_complexity_metrics)\n if self.visualize and self.nlp and self.temp_dir.name is not None:\n doc = self.nlp(text)\n dep_out = spacy.displacy.render( # type: ignore", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-8", "text": "dep_out = spacy.displacy.render( # type: ignore\n doc, style=\"dep\", jupyter=False, page=True\n )\n 
dep_output_path = Path(\n self.temp_dir.name, hash_string(f\"dep-{text}\") + \".html\"\n )\n dep_output_path.open(\"w\", encoding=\"utf-8\").write(dep_out)\n ent_out = spacy.displacy.render( # type: ignore\n doc, style=\"ent\", jupyter=False, page=True\n )\n ent_output_path = Path(\n self.temp_dir.name, hash_string(f\"ent-{text}\") + \".html\"\n )\n ent_output_path.open(\"w\", encoding=\"utf-8\").write(ent_out)\n self.logger.report_media(\n \"Dependencies Plot\", text, local_path=dep_output_path\n )\n self.logger.report_media(\"Entities Plot\", text, local_path=ent_output_path)\n return resp\n def _create_session_analysis_df(self) -> Any:\n \"\"\"Create a dataframe with all the information from the session.\"\"\"\n pd = import_pandas()\n on_llm_start_records_df = pd.DataFrame(self.on_llm_start_records)\n on_llm_end_records_df = pd.DataFrame(self.on_llm_end_records)\n llm_input_prompts_df = (\n on_llm_start_records_df[[\"step\", \"prompts\", \"name\"]]\n .dropna(axis=1)\n .rename({\"step\": \"prompt_step\"}, axis=1)\n )\n complexity_metrics_columns = []\n visualizations_columns: List = []\n if self.complexity_metrics:\n complexity_metrics_columns = [\n \"flesch_reading_ease\",\n \"flesch_kincaid_grade\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-9", "text": "\"flesch_kincaid_grade\",\n \"smog_index\",\n \"coleman_liau_index\",\n \"automated_readability_index\",\n \"dale_chall_readability_score\",\n \"difficult_words\",\n \"linsear_write_formula\",\n \"gunning_fog\",\n \"text_standard\",\n \"fernandez_huerta\",\n \"szigriszt_pazos\",\n \"gutierrez_polini\",\n \"crawford\",\n \"gulpease_index\",\n \"osman\",\n ]\n llm_outputs_df = (\n on_llm_end_records_df[\n [\n \"step\",\n \"text\",\n \"token_usage_total_tokens\",\n \"token_usage_prompt_tokens\",\n \"token_usage_completion_tokens\",\n ]\n + complexity_metrics_columns\n + visualizations_columns\n ]\n .dropna(axis=1)\n 
.rename({\"step\": \"output_step\", \"text\": \"output\"}, axis=1)\n )\n session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)\n # session_analysis_df[\"chat_html\"] = session_analysis_df[\n # [\"prompts\", \"output\"]\n # ].apply(\n # lambda row: construct_html_from_prompt_and_generation(\n # row[\"prompts\"], row[\"output\"]\n # ),\n # axis=1,\n # )\n return session_analysis_df\n[docs] def flush_tracker(\n self,\n name: Optional[str] = None,\n langchain_asset: Any = None,\n finish: bool = False,\n ) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-10", "text": "finish: bool = False,\n ) -> None:\n \"\"\"Flush the tracker and setup the session.\n Everything after this will be a new table.\n Args:\n name: Name of the preformed session so far so it is identifyable\n langchain_asset: The langchain asset to save.\n finish: Whether to finish the run.\n Returns:\n None\n \"\"\"\n pd = import_pandas()\n clearml = import_clearml()\n # Log the action records\n self.logger.report_table(\n \"Action Records\", name, table_plot=pd.DataFrame(self.action_records)\n )\n # Session analysis\n session_analysis_df = self._create_session_analysis_df()\n self.logger.report_table(\n \"Session Analysis\", name, table_plot=session_analysis_df\n )\n if self.stream_logs:\n self.logger.report_text(\n {\n \"action_records\": pd.DataFrame(self.action_records),\n \"session_analysis\": session_analysis_df,\n }\n )\n if langchain_asset:\n langchain_asset_path = Path(self.temp_dir.name, \"model.json\")\n try:\n langchain_asset.save(langchain_asset_path)\n # Create output model and connect it to the task\n output_model = clearml.OutputModel(\n task=self.task, config_text=load_json(langchain_asset_path)\n )\n output_model.update_weights(\n weights_filename=str(langchain_asset_path),\n auto_delete_file=False,\n target_filename=name,\n )\n except ValueError:\n 
langchain_asset.save_agent(langchain_asset_path)\n output_model = clearml.OutputModel(\n task=self.task, config_text=load_json(langchain_asset_path)\n )\n output_model.update_weights(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "5fe6e6d8f65c-11", "text": ")\n output_model.update_weights(\n weights_filename=str(langchain_asset_path),\n auto_delete_file=False,\n target_filename=name,\n )\n except NotImplementedError as e:\n print(\"Could not save model.\")\n print(repr(e))\n pass\n # Cleanup after adding everything to ClearML\n self.task.flush(wait_for_uploads=True)\n self.temp_dir.cleanup()\n self.temp_dir = tempfile.TemporaryDirectory()\n self.reset_callback_meta()\n if finish:\n self.task.close()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/clearml_callback.html"} +{"id": "beaf52c24564-0", "text": "Source code for langchain.callbacks.manager\nfrom __future__ import annotations\nimport asyncio\nimport functools\nimport logging\nimport os\nimport warnings\nfrom contextlib import asynccontextmanager, contextmanager\nfrom contextvars import ContextVar\nfrom typing import (\n Any,\n AsyncGenerator,\n Dict,\n Generator,\n List,\n Optional,\n Type,\n TypeVar,\n Union,\n cast,\n)\nfrom uuid import UUID, uuid4\nimport langchain\nfrom langchain.callbacks.base import (\n BaseCallbackHandler,\n BaseCallbackManager,\n ChainManagerMixin,\n LLMManagerMixin,\n RunManagerMixin,\n ToolManagerMixin,\n)\nfrom langchain.callbacks.openai_info import OpenAICallbackHandler\nfrom langchain.callbacks.stdout import StdOutCallbackHandler\nfrom langchain.callbacks.tracers.langchain import LangChainTracer\nfrom langchain.callbacks.tracers.langchain_v1 import LangChainTracerV1, TracerSessionV1\nfrom langchain.callbacks.tracers.stdout import ConsoleCallbackHandler\nfrom langchain.callbacks.tracers.wandb import WandbTracer\nfrom langchain.schema import (\n AgentAction,\n AgentFinish,\n 
BaseMessage,\n LLMResult,\n get_buffer_string,\n)\nlogger = logging.getLogger(__name__)\nCallbacks = Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]\nopenai_callback_var: ContextVar[Optional[OpenAICallbackHandler]] = ContextVar(\n \"openai_callback\", default=None\n)\ntracing_callback_var: ContextVar[\n Optional[LangChainTracerV1]\n] = ContextVar( # noqa: E501\n \"tracing_callback\", default=None\n)\nwandb_tracing_callback_var: ContextVar[\n Optional[WandbTracer]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-1", "text": "wandb_tracing_callback_var: ContextVar[\n Optional[WandbTracer]\n] = ContextVar( # noqa: E501\n \"tracing_wandb_callback\", default=None\n)\ntracing_v2_callback_var: ContextVar[\n Optional[LangChainTracer]\n] = ContextVar( # noqa: E501\n \"tracing_callback_v2\", default=None\n)\ndef _get_debug() -> bool:\n return langchain.debug\n[docs]@contextmanager\ndef get_openai_callback() -> Generator[OpenAICallbackHandler, None, None]:\n \"\"\"Get the OpenAI callback handler in a context manager.\n which conveniently exposes token and cost information.\n Returns:\n OpenAICallbackHandler: The OpenAI callback handler.\n Example:\n >>> with get_openai_callback() as cb:\n ... # Use the OpenAI callback handler\n \"\"\"\n cb = OpenAICallbackHandler()\n openai_callback_var.set(cb)\n yield cb\n openai_callback_var.set(None)\n[docs]@contextmanager\ndef tracing_enabled(\n session_name: str = \"default\",\n) -> Generator[TracerSessionV1, None, None]:\n \"\"\"Get the Deprecated LangChainTracer in a context manager.\n Args:\n session_name (str, optional): The name of the session.\n Defaults to \"default\".\n Returns:\n TracerSessionV1: The LangChainTracer session.\n Example:\n >>> with tracing_enabled() as session:\n ... 
# Use the LangChainTracer session\n \"\"\"\n cb = LangChainTracerV1()\n session = cast(TracerSessionV1, cb.load_session(session_name))\n tracing_callback_var.set(cb)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-2", "text": "tracing_callback_var.set(cb)\n yield session\n tracing_callback_var.set(None)\n[docs]@contextmanager\ndef wandb_tracing_enabled(\n session_name: str = \"default\",\n) -> Generator[None, None, None]:\n \"\"\"Get the WandbTracer in a context manager.\n Args:\n session_name (str, optional): The name of the session.\n Defaults to \"default\".\n Returns:\n None\n Example:\n >>> with wandb_tracing_enabled() as session:\n ... # Use the WandbTracer session\n \"\"\"\n cb = WandbTracer()\n wandb_tracing_callback_var.set(cb)\n yield None\n wandb_tracing_callback_var.set(None)\n@contextmanager\ndef tracing_v2_enabled(\n project_name: Optional[str] = None,\n *,\n example_id: Optional[Union[str, UUID]] = None,\n) -> Generator[None, None, None]:\n \"\"\"Instruct LangChain to log all runs in context to LangSmith.\n Args:\n project_name (str, optional): The name of the project.\n Defaults to \"default\".\n example_id (str or UUID, optional): The ID of the example.\n Defaults to None.\n Returns:\n None\n Example:\n >>> with tracing_v2_enabled():\n ... # LangChain code will automatically be traced\n \"\"\"\n # Issue a warning that this is experimental\n warnings.warn(\n \"The tracing v2 API is in development. 
\"\n \"This is not yet stable and may change in the future.\"\n )\n if isinstance(example_id, str):\n example_id = UUID(example_id)\n cb = LangChainTracer(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-3", "text": "example_id = UUID(example_id)\n cb = LangChainTracer(\n example_id=example_id,\n project_name=project_name,\n )\n tracing_v2_callback_var.set(cb)\n yield\n tracing_v2_callback_var.set(None)\n@contextmanager\ndef trace_as_chain_group(\n group_name: str,\n *,\n project_name: Optional[str] = None,\n example_id: Optional[Union[str, UUID]] = None,\n tags: Optional[List[str]] = None,\n) -> Generator[CallbackManager, None, None]:\n \"\"\"Get a callback manager for a chain group in a context manager.\n Useful for grouping different calls together as a single run even if\n they aren't composed in a single chain.\n Args:\n group_name (str): The name of the chain group.\n project_name (str, optional): The name of the project.\n Defaults to None.\n example_id (str or UUID, optional): The ID of the example.\n Defaults to None.\n tags (List[str], optional): The inheritable tags to apply to all runs.\n Defaults to None.\n Returns:\n CallbackManager: The callback manager for the chain group.\n Example:\n >>> with trace_as_chain_group(\"group_name\") as manager:\n ... # Use the callback manager for the chain group\n ... 
llm.predict(\"Foo\", callbacks=manager)\n \"\"\"\n cb = LangChainTracer(\n project_name=project_name,\n example_id=example_id,\n )\n cm = CallbackManager.configure(\n inheritable_callbacks=[cb],\n inheritable_tags=tags,\n )\n run_manager = cm.on_chain_start({\"name\": group_name}, {})", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-4", "text": ")\n run_manager = cm.on_chain_start({\"name\": group_name}, {})\n yield run_manager.get_child()\n run_manager.on_chain_end({})\n@asynccontextmanager\nasync def atrace_as_chain_group(\n group_name: str,\n *,\n project_name: Optional[str] = None,\n example_id: Optional[Union[str, UUID]] = None,\n tags: Optional[List[str]] = None,\n) -> AsyncGenerator[AsyncCallbackManager, None]:\n \"\"\"Get an async callback manager for a chain group in a context manager.\n Useful for grouping different async calls together as a single run even if\n they aren't composed in a single chain.\n Args:\n group_name (str): The name of the chain group.\n project_name (str, optional): The name of the project.\n Defaults to None.\n example_id (str or UUID, optional): The ID of the example.\n Defaults to None.\n tags (List[str], optional): The inheritable tags to apply to all runs.\n Defaults to None.\n Returns:\n AsyncCallbackManager: The async callback manager for the chain group.\n Example:\n >>> async with atrace_as_chain_group(\"group_name\") as manager:\n ... # Use the async callback manager for the chain group\n ... 
await llm.apredict(\"Foo\", callbacks=manager)\n \"\"\"\n cb = LangChainTracer(\n project_name=project_name,\n example_id=example_id,\n )\n cm = AsyncCallbackManager.configure(\n inheritable_callbacks=[cb], inheritable_tags=tags\n )\n run_manager = await cm.on_chain_start({\"name\": group_name}, {})\n try:\n yield run_manager.get_child()\n finally:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-5", "text": "try:\n yield run_manager.get_child()\n finally:\n await run_manager.on_chain_end({})\ndef _handle_event(\n handlers: List[BaseCallbackHandler],\n event_name: str,\n ignore_condition_name: Optional[str],\n *args: Any,\n **kwargs: Any,\n) -> None:\n \"\"\"Generic event handler for CallbackManager.\"\"\"\n message_strings: Optional[List[str]] = None\n for handler in handlers:\n try:\n if ignore_condition_name is None or not getattr(\n handler, ignore_condition_name\n ):\n getattr(handler, event_name)(*args, **kwargs)\n except NotImplementedError as e:\n if event_name == \"on_chat_model_start\":\n if message_strings is None:\n message_strings = [get_buffer_string(m) for m in args[1]]\n _handle_event(\n [handler],\n \"on_llm_start\",\n \"ignore_llm\",\n args[0],\n message_strings,\n *args[2:],\n **kwargs,\n )\n else:\n logger.warning(\n f\"Error in {handler.__class__.__name__}.{event_name} callback: {e}\"\n )\n except Exception as e:\n logger.warning(\n f\"Error in {handler.__class__.__name__}.{event_name} callback: {e}\"\n )\n if handler.raise_error:\n raise e\nasync def _ahandle_event_for_handler(\n handler: BaseCallbackHandler,\n event_name: str,\n ignore_condition_name: Optional[str],\n *args: Any,\n **kwargs: Any,\n) -> None:\n try:\n if ignore_condition_name is None or not getattr(handler, ignore_condition_name):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-6", "text": "if ignore_condition_name is None or not 
getattr(handler, ignore_condition_name):\n event = getattr(handler, event_name)\n if asyncio.iscoroutinefunction(event):\n await event(*args, **kwargs)\n else:\n if handler.run_inline:\n event(*args, **kwargs)\n else:\n await asyncio.get_event_loop().run_in_executor(\n None, functools.partial(event, *args, **kwargs)\n )\n except NotImplementedError as e:\n if event_name == \"on_chat_model_start\":\n message_strings = [get_buffer_string(m) for m in args[1]]\n await _ahandle_event_for_handler(\n handler,\n \"on_llm_start\",\n \"ignore_llm\",\n args[0],\n message_strings,\n *args[2:],\n **kwargs,\n )\n else:\n logger.warning(\n f\"Error in {handler.__class__.__name__}.{event_name} callback: {e}\"\n )\n except Exception as e:\n logger.warning(\n f\"Error in {handler.__class__.__name__}.{event_name} callback: {e}\"\n )\n if handler.raise_error:\n raise e\nasync def _ahandle_event(\n handlers: List[BaseCallbackHandler],\n event_name: str,\n ignore_condition_name: Optional[str],\n *args: Any,\n **kwargs: Any,\n) -> None:\n \"\"\"Generic event handler for AsyncCallbackManager.\"\"\"\n for handler in [h for h in handlers if h.run_inline]:\n await _ahandle_event_for_handler(\n handler, event_name, ignore_condition_name, *args, **kwargs\n )\n await asyncio.gather(\n *(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-7", "text": ")\n await asyncio.gather(\n *(\n _ahandle_event_for_handler(\n handler, event_name, ignore_condition_name, *args, **kwargs\n )\n for handler in handlers\n if not handler.run_inline\n )\n )\nBRM = TypeVar(\"BRM\", bound=\"BaseRunManager\")\nclass BaseRunManager(RunManagerMixin):\n \"\"\"Base class for run manager (a bound callback manager).\"\"\"\n def __init__(\n self,\n *,\n run_id: UUID,\n handlers: List[BaseCallbackHandler],\n inheritable_handlers: List[BaseCallbackHandler],\n parent_run_id: Optional[UUID] = None,\n tags: List[str],\n inheritable_tags: List[str],\n ) -> 
None:\n \"\"\"Initialize the run manager.\n Args:\n run_id (UUID): The ID of the run.\n handlers (List[BaseCallbackHandler]): The list of handlers.\n inheritable_handlers (List[BaseCallbackHandler]):\n The list of inheritable handlers.\n parent_run_id (UUID, optional): The ID of the parent run.\n Defaults to None.\n tags (List[str]): The list of tags.\n inheritable_tags (List[str]): The list of inheritable tags.\n \"\"\"\n self.run_id = run_id\n self.handlers = handlers\n self.inheritable_handlers = inheritable_handlers\n self.tags = tags\n self.inheritable_tags = inheritable_tags\n self.parent_run_id = parent_run_id\n @classmethod\n def get_noop_manager(cls: Type[BRM]) -> BRM:\n \"\"\"Return a manager that doesn't perform any operations.\n Returns:\n BaseRunManager: The noop manager.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-8", "text": "Returns:\n BaseRunManager: The noop manager.\n \"\"\"\n return cls(\n run_id=uuid4(),\n handlers=[],\n inheritable_handlers=[],\n tags=[],\n inheritable_tags=[],\n )\nclass RunManager(BaseRunManager):\n \"\"\"Sync Run Manager.\"\"\"\n def on_text(\n self,\n text: str,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when text is received.\n Args:\n text (str): The received text.\n Returns:\n Any: The result of the callback.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_text\",\n None,\n text,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\nclass AsyncRunManager(BaseRunManager):\n \"\"\"Async Run Manager.\"\"\"\n async def on_text(\n self,\n text: str,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when text is received.\n Args:\n text (str): The received text.\n Returns:\n Any: The result of the callback.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_text\",\n None,\n text,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\nclass CallbackManagerForLLMRun(RunManager, LLMManagerMixin):\n 
\"\"\"Callback manager for LLM run.\"\"\"\n def on_llm_new_token(\n self,\n token: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM generates a new token.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-9", "text": ") -> None:\n \"\"\"Run when LLM generates a new token.\n Args:\n token (str): The new token.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_llm_new_token\",\n \"ignore_llm\",\n token=token,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\n Args:\n response (LLMResult): The LLM result.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_llm_end\",\n \"ignore_llm\",\n response,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n def on_llm_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_llm_error\",\n \"ignore_llm\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\nclass AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):\n \"\"\"Async callback manager for LLM run.\"\"\"\n async def on_llm_new_token(\n self,\n token: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM generates a new token.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-10", "text": "\"\"\"Run when LLM generates a new token.\n Args:\n token (str): The new token.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_llm_new_token\",\n \"ignore_llm\",\n token,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\n 
Args:\n response (LLMResult): The LLM result.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_llm_end\",\n \"ignore_llm\",\n response,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n async def on_llm_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when LLM errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_llm_error\",\n \"ignore_llm\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\nclass CallbackManagerForChainRun(RunManager, ChainManagerMixin):\n \"\"\"Callback manager for chain run.\"\"\"\n def get_child(self, tag: Optional[str] = None) -> CallbackManager:\n \"\"\"Get a child callback manager.\n Args:\n tag (str, optional): The tag for the child callback manager.\n Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-11", "text": "Defaults to None.\n Returns:\n CallbackManager: The child callback manager.\n \"\"\"\n manager = CallbackManager(handlers=[], parent_run_id=self.run_id)\n manager.set_handlers(self.inheritable_handlers)\n manager.add_tags(self.inheritable_tags)\n if tag is not None:\n manager.add_tags([tag], False)\n return manager\n def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\n Args:\n outputs (Dict[str, Any]): The outputs of the chain.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_chain_end\",\n \"ignore_chain\",\n outputs,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n def on_chain_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when chain errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_chain_error\",\n \"ignore_chain\",\n error,\n 
run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run when agent action is received.\n Args:\n action (AgentAction): The agent action.\n Returns:\n Any: The result of the callback.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_agent_action\",\n \"ignore_agent\",\n action,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-12", "text": "\"on_agent_action\",\n \"ignore_agent\",\n action,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:\n \"\"\"Run when agent finish is received.\n Args:\n finish (AgentFinish): The agent finish.\n Returns:\n Any: The result of the callback.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_agent_finish\",\n \"ignore_agent\",\n finish,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\nclass AsyncCallbackManagerForChainRun(AsyncRunManager, ChainManagerMixin):\n \"\"\"Async callback manager for chain run.\"\"\"\n def get_child(self, tag: Optional[str] = None) -> AsyncCallbackManager:\n \"\"\"Get a child callback manager.\n Args:\n tag (str, optional): The tag for the child callback manager.\n Defaults to None.\n Returns:\n AsyncCallbackManager: The child callback manager.\n \"\"\"\n manager = AsyncCallbackManager(handlers=[], parent_run_id=self.run_id)\n manager.set_handlers(self.inheritable_handlers)\n manager.add_tags(self.inheritable_tags)\n if tag is not None:\n manager.add_tags([tag], False)\n return manager\n async def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\n Args:\n outputs (Dict[str, Any]): The outputs of the chain.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_chain_end\",\n \"ignore_chain\",", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-13", "text": "self.handlers,\n \"on_chain_end\",\n \"ignore_chain\",\n outputs,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n async def on_chain_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when chain errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_chain_error\",\n \"ignore_chain\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n async def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run when agent action is received.\n Args:\n action (AgentAction): The agent action.\n Returns:\n Any: The result of the callback.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_agent_action\",\n \"ignore_agent\",\n action,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n async def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:\n \"\"\"Run when agent finish is received.\n Args:\n finish (AgentFinish): The agent finish.\n Returns:\n Any: The result of the callback.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_agent_finish\",\n \"ignore_agent\",\n finish,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-14", "text": "run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\nclass CallbackManagerForToolRun(RunManager, ToolManagerMixin):\n \"\"\"Callback manager for tool run.\"\"\"\n def get_child(self, tag: Optional[str] = None) -> CallbackManager:\n \"\"\"Get a child callback manager.\n Args:\n tag (str, optional): The tag for the child callback manager.\n Defaults to None.\n Returns:\n CallbackManager: The child callback manager.\n \"\"\"\n 
manager = CallbackManager(handlers=[], parent_run_id=self.run_id)\n manager.set_handlers(self.inheritable_handlers)\n manager.add_tags(self.inheritable_tags)\n if tag is not None:\n manager.add_tags([tag], False)\n return manager\n def on_tool_end(\n self,\n output: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool ends running.\n Args:\n output (str): The output of the tool.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_tool_end\",\n \"ignore_agent\",\n output,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n def on_tool_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n _handle_event(\n self.handlers,\n \"on_tool_error\",\n \"ignore_agent\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-15", "text": "run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\nclass AsyncCallbackManagerForToolRun(AsyncRunManager, ToolManagerMixin):\n \"\"\"Async callback manager for tool run.\"\"\"\n def get_child(self, tag: Optional[str] = None) -> AsyncCallbackManager:\n \"\"\"Get a child callback manager.\n Args:\n tag (str, optional): The tag to add to the child\n callback manager. 
Defaults to None.\n Returns:\n AsyncCallbackManager: The child callback manager.\n \"\"\"\n manager = AsyncCallbackManager(handlers=[], parent_run_id=self.run_id)\n manager.set_handlers(self.inheritable_handlers)\n manager.add_tags(self.inheritable_tags)\n if tag is not None:\n manager.add_tags([tag], False)\n return manager\n async def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\n Args:\n output (str): The output of the tool.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_tool_end\",\n \"ignore_agent\",\n output,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\n async def on_tool_error(\n self,\n error: Union[Exception, KeyboardInterrupt],\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when tool errors.\n Args:\n error (Exception or KeyboardInterrupt): The error.\n \"\"\"\n await _ahandle_event(\n self.handlers,\n \"on_tool_error\",\n \"ignore_agent\",\n error,\n run_id=self.run_id,\n parent_run_id=self.parent_run_id,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-16", "text": "run_id=self.run_id,\n parent_run_id=self.parent_run_id,\n **kwargs,\n )\nclass CallbackManager(BaseCallbackManager):\n \"\"\"Callback manager that can be used to handle callbacks from langchain.\"\"\"\n def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n **kwargs: Any,\n ) -> List[CallbackManagerForLLMRun]:\n \"\"\"Run when LLM starts running.\n Args:\n serialized (Dict[str, Any]): The serialized LLM.\n prompts (List[str]): The list of prompts.\n run_id (UUID, optional): The ID of the run. 
Defaults to None.\n Returns:\n List[CallbackManagerForLLMRun]: A callback manager for each\n prompt as an LLM run.\n \"\"\"\n managers = []\n for prompt in prompts:\n run_id_ = uuid4()\n _handle_event(\n self.handlers,\n \"on_llm_start\",\n \"ignore_llm\",\n serialized,\n [prompt],\n run_id=run_id_,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n managers.append(\n CallbackManagerForLLMRun(\n run_id=run_id_,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n )\n )\n return managers\n def on_chat_model_start(\n self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n **kwargs: Any,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-17", "text": "messages: List[List[BaseMessage]],\n **kwargs: Any,\n ) -> List[CallbackManagerForLLMRun]:\n \"\"\"Run when LLM starts running.\n Args:\n serialized (Dict[str, Any]): The serialized LLM.\n messages (List[List[BaseMessage]]): The list of messages.\n run_id (UUID, optional): The ID of the run. 
Defaults to None.\n Returns:\n List[CallbackManagerForLLMRun]: A callback manager for each\n list of messages as an LLM run.\n \"\"\"\n managers = []\n for message_list in messages:\n run_id_ = uuid4()\n _handle_event(\n self.handlers,\n \"on_chat_model_start\",\n \"ignore_chat_model\",\n serialized,\n [message_list],\n run_id=run_id_,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n managers.append(\n CallbackManagerForLLMRun(\n run_id=run_id_,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n )\n )\n return managers\n def on_chain_start(\n self,\n serialized: Dict[str, Any],\n inputs: Dict[str, Any],\n run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> CallbackManagerForChainRun:\n \"\"\"Run when chain starts running.\n Args:\n serialized (Dict[str, Any]): The serialized chain.\n inputs (Dict[str, Any]): The inputs to the chain.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-18", "text": "inputs (Dict[str, Any]): The inputs to the chain.\n run_id (UUID, optional): The ID of the run. 
Defaults to None.\n Returns:\n CallbackManagerForChainRun: The callback manager for the chain run.\n \"\"\"\n if run_id is None:\n run_id = uuid4()\n _handle_event(\n self.handlers,\n \"on_chain_start\",\n \"ignore_chain\",\n serialized,\n inputs,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n return CallbackManagerForChainRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n )\n def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n run_id: Optional[UUID] = None,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> CallbackManagerForToolRun:\n \"\"\"Run when tool starts running.\n Args:\n serialized (Dict[str, Any]): The serialized tool.\n input_str (str): The input to the tool.\n run_id (UUID, optional): The ID of the run. Defaults to None.\n parent_run_id (UUID, optional): The ID of the parent run. 
Defaults to None.\n Returns:\n CallbackManagerForToolRun: The callback manager for the tool run.\n \"\"\"\n if run_id is None:\n run_id = uuid4()\n _handle_event(\n self.handlers,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-19", "text": "run_id = uuid4()\n _handle_event(\n self.handlers,\n \"on_tool_start\",\n \"ignore_agent\",\n serialized,\n input_str,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n return CallbackManagerForToolRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n )\n @classmethod\n def configure(\n cls,\n inheritable_callbacks: Callbacks = None,\n local_callbacks: Callbacks = None,\n verbose: bool = False,\n inheritable_tags: Optional[List[str]] = None,\n local_tags: Optional[List[str]] = None,\n ) -> CallbackManager:\n \"\"\"Configure the callback manager.\n Args:\n inheritable_callbacks (Optional[Callbacks], optional): The inheritable\n callbacks. Defaults to None.\n local_callbacks (Optional[Callbacks], optional): The local callbacks.\n Defaults to None.\n verbose (bool, optional): Whether to enable verbose mode. 
Defaults to False.\n inheritable_tags (Optional[List[str]], optional): The inheritable tags.\n Defaults to None.\n local_tags (Optional[List[str]], optional): The local tags.\n Defaults to None.\n Returns:\n CallbackManager: The configured callback manager.\n \"\"\"\n return _configure(\n cls,\n inheritable_callbacks,\n local_callbacks,\n verbose,\n inheritable_tags,\n local_tags,\n )\nclass AsyncCallbackManager(BaseCallbackManager):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-20", "text": "local_tags,\n )\nclass AsyncCallbackManager(BaseCallbackManager):\n \"\"\"Async callback manager that can be used to handle callbacks from LangChain.\"\"\"\n @property\n def is_async(self) -> bool:\n \"\"\"Return whether the handler is async.\"\"\"\n return True\n async def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n **kwargs: Any,\n ) -> List[AsyncCallbackManagerForLLMRun]:\n \"\"\"Run when LLM starts running.\n Args:\n serialized (Dict[str, Any]): The serialized LLM.\n prompts (List[str]): The list of prompts.\n run_id (UUID, optional): The ID of the run. 
Defaults to None.\n Returns:\n List[AsyncCallbackManagerForLLMRun]: The list of async\n callback managers, one for each LLM Run corresponding\n to each prompt.\n \"\"\"\n tasks = []\n managers = []\n for prompt in prompts:\n run_id_ = uuid4()\n tasks.append(\n _ahandle_event(\n self.handlers,\n \"on_llm_start\",\n \"ignore_llm\",\n serialized,\n [prompt],\n run_id=run_id_,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n )\n managers.append(\n AsyncCallbackManagerForLLMRun(\n run_id=run_id_,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n )\n )\n await asyncio.gather(*tasks)\n return managers", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-21", "text": ")\n )\n await asyncio.gather(*tasks)\n return managers\n async def on_chat_model_start(\n self,\n serialized: Dict[str, Any],\n messages: List[List[BaseMessage]],\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run when LLM starts running.\n Args:\n serialized (Dict[str, Any]): The serialized LLM.\n messages (List[List[BaseMessage]]): The list of messages.\n run_id (UUID, optional): The ID of the run. 
Defaults to None.\n Returns:\n List[AsyncCallbackManagerForLLMRun]: The list of\n async callback managers, one for each LLM Run\n corresponding to each inner message list.\n \"\"\"\n tasks = []\n managers = []\n for message_list in messages:\n run_id_ = uuid4()\n tasks.append(\n _ahandle_event(\n self.handlers,\n \"on_chat_model_start\",\n \"ignore_chat_model\",\n serialized,\n [message_list],\n run_id=run_id_,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n )\n managers.append(\n AsyncCallbackManagerForLLMRun(\n run_id=run_id_,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n )\n )\n await asyncio.gather(*tasks)\n return managers\n async def on_chain_start(\n self,\n serialized: Dict[str, Any],\n inputs: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-22", "text": "serialized: Dict[str, Any],\n inputs: Dict[str, Any],\n run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> AsyncCallbackManagerForChainRun:\n \"\"\"Run when chain starts running.\n Args:\n serialized (Dict[str, Any]): The serialized chain.\n inputs (Dict[str, Any]): The inputs to the chain.\n run_id (UUID, optional): The ID of the run. 
Defaults to None.\n Returns:\n AsyncCallbackManagerForChainRun: The async callback manager\n for the chain run.\n \"\"\"\n if run_id is None:\n run_id = uuid4()\n await _ahandle_event(\n self.handlers,\n \"on_chain_start\",\n \"ignore_chain\",\n serialized,\n inputs,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n return AsyncCallbackManagerForChainRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n )\n async def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n run_id: Optional[UUID] = None,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> AsyncCallbackManagerForToolRun:\n \"\"\"Run when tool starts running.\n Args:\n serialized (Dict[str, Any]): The serialized tool.\n input_str (str): The input to the tool.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-23", "text": "input_str (str): The input to the tool.\n run_id (UUID, optional): The ID of the run. 
Defaults to None.\n parent_run_id (UUID, optional): The ID of the parent run.\n Defaults to None.\n Returns:\n AsyncCallbackManagerForToolRun: The async callback manager\n for the tool run.\n \"\"\"\n if run_id is None:\n run_id = uuid4()\n await _ahandle_event(\n self.handlers,\n \"on_tool_start\",\n \"ignore_agent\",\n serialized,\n input_str,\n run_id=run_id,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n **kwargs,\n )\n return AsyncCallbackManagerForToolRun(\n run_id=run_id,\n handlers=self.handlers,\n inheritable_handlers=self.inheritable_handlers,\n parent_run_id=self.parent_run_id,\n tags=self.tags,\n inheritable_tags=self.inheritable_tags,\n )\n @classmethod\n def configure(\n cls,\n inheritable_callbacks: Callbacks = None,\n local_callbacks: Callbacks = None,\n verbose: bool = False,\n inheritable_tags: Optional[List[str]] = None,\n local_tags: Optional[List[str]] = None,\n ) -> AsyncCallbackManager:\n \"\"\"Configure the async callback manager.\n Args:\n inheritable_callbacks (Optional[Callbacks], optional): The inheritable\n callbacks. Defaults to None.\n local_callbacks (Optional[Callbacks], optional): The local callbacks.\n Defaults to None.\n verbose (bool, optional): Whether to enable verbose mode. 
Defaults to False.\n inheritable_tags (Optional[List[str]], optional): The inheritable tags.\n Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-24", "text": "Defaults to None.\n local_tags (Optional[List[str]], optional): The local tags.\n Defaults to None.\n Returns:\n AsyncCallbackManager: The configured async callback manager.\n \"\"\"\n return _configure(\n cls,\n inheritable_callbacks,\n local_callbacks,\n verbose,\n inheritable_tags,\n local_tags,\n )\nT = TypeVar(\"T\", CallbackManager, AsyncCallbackManager)\ndef env_var_is_set(env_var: str) -> bool:\n \"\"\"Check if an environment variable is set.\n Args:\n env_var (str): The name of the environment variable.\n Returns:\n bool: True if the environment variable is set, False otherwise.\n \"\"\"\n return env_var in os.environ and os.environ[env_var] not in (\n \"\",\n \"0\",\n \"false\",\n \"False\",\n )\ndef _configure(\n callback_manager_cls: Type[T],\n inheritable_callbacks: Callbacks = None,\n local_callbacks: Callbacks = None,\n verbose: bool = False,\n inheritable_tags: Optional[List[str]] = None,\n local_tags: Optional[List[str]] = None,\n) -> T:\n \"\"\"Configure the callback manager.\n Args:\n callback_manager_cls (Type[T]): The callback manager class.\n inheritable_callbacks (Optional[Callbacks], optional): The inheritable\n callbacks. Defaults to None.\n local_callbacks (Optional[Callbacks], optional): The local callbacks.\n Defaults to None.\n verbose (bool, optional): Whether to enable verbose mode. Defaults to False.\n inheritable_tags (Optional[List[str]], optional): The inheritable tags.\n Defaults to None.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-25", "text": "Defaults to None.\n local_tags (Optional[List[str]], optional): The local tags. 
Defaults to None.\n Returns:\n T: The configured callback manager.\n \"\"\"\n callback_manager = callback_manager_cls(handlers=[])\n if inheritable_callbacks or local_callbacks:\n if isinstance(inheritable_callbacks, list) or inheritable_callbacks is None:\n inheritable_callbacks_ = inheritable_callbacks or []\n callback_manager = callback_manager_cls(\n handlers=inheritable_callbacks_.copy(),\n inheritable_handlers=inheritable_callbacks_.copy(),\n )\n else:\n callback_manager = callback_manager_cls(\n handlers=inheritable_callbacks.handlers,\n inheritable_handlers=inheritable_callbacks.inheritable_handlers,\n parent_run_id=inheritable_callbacks.parent_run_id,\n tags=inheritable_callbacks.tags,\n inheritable_tags=inheritable_callbacks.inheritable_tags,\n )\n local_handlers_ = (\n local_callbacks\n if isinstance(local_callbacks, list)\n else (local_callbacks.handlers if local_callbacks else [])\n )\n for handler in local_handlers_:\n callback_manager.add_handler(handler, False)\n if inheritable_tags or local_tags:\n callback_manager.add_tags(inheritable_tags or [])\n callback_manager.add_tags(local_tags or [], False)\n tracer = tracing_callback_var.get()\n wandb_tracer = wandb_tracing_callback_var.get()\n open_ai = openai_callback_var.get()\n tracing_enabled_ = (\n env_var_is_set(\"LANGCHAIN_TRACING\")\n or tracer is not None\n or env_var_is_set(\"LANGCHAIN_HANDLER\")\n )\n wandb_tracing_enabled_ = (\n env_var_is_set(\"LANGCHAIN_WANDB_TRACING\") or wandb_tracer is not None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-26", "text": ")\n tracer_v2 = tracing_v2_callback_var.get()\n tracing_v2_enabled_ = (\n env_var_is_set(\"LANGCHAIN_TRACING_V2\") or tracer_v2 is not None\n )\n tracer_project = os.environ.get(\n \"LANGCHAIN_PROJECT\", os.environ.get(\"LANGCHAIN_SESSION\", \"default\")\n )\n debug = _get_debug()\n if (\n verbose\n or debug\n or tracing_enabled_\n or tracing_v2_enabled_\n or 
wandb_tracing_enabled_\n or open_ai is not None\n ):\n if verbose and not any(\n isinstance(handler, StdOutCallbackHandler)\n for handler in callback_manager.handlers\n ):\n if debug:\n pass\n else:\n callback_manager.add_handler(StdOutCallbackHandler(), False)\n if debug and not any(\n isinstance(handler, ConsoleCallbackHandler)\n for handler in callback_manager.handlers\n ):\n callback_manager.add_handler(ConsoleCallbackHandler(), True)\n if tracing_enabled_ and not any(\n isinstance(handler, LangChainTracerV1)\n for handler in callback_manager.handlers\n ):\n if tracer:\n callback_manager.add_handler(tracer, True)\n else:\n handler = LangChainTracerV1()\n handler.load_session(tracer_project)\n callback_manager.add_handler(handler, True)\n if wandb_tracing_enabled_ and not any(\n isinstance(handler, WandbTracer) for handler in callback_manager.handlers\n ):\n if wandb_tracer:\n callback_manager.add_handler(wandb_tracer, True)\n else:\n handler = WandbTracer()\n callback_manager.add_handler(handler, True)\n if tracing_v2_enabled_ and not any(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"} +{"id": "beaf52c24564-27", "text": "if tracing_v2_enabled_ and not any(\n isinstance(handler, LangChainTracer)\n for handler in callback_manager.handlers\n ):\n if tracer_v2:\n callback_manager.add_handler(tracer_v2, True)\n else:\n try:\n handler = LangChainTracer(project_name=tracer_project)\n callback_manager.add_handler(handler, True)\n except Exception as e:\n logger.warning(\n \"Unable to load requested LangChainTracer.\"\n \" To disable this warning,\"\n \" unset the LANGCHAIN_TRACING_V2 environment variable.\",\n e,\n )\n if open_ai is not None and not any(\n isinstance(handler, OpenAICallbackHandler)\n for handler in callback_manager.handlers\n ):\n callback_manager.add_handler(open_ai, True)\n return callback_manager", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/manager.html"}
+{"id": "2bb75dbc000d-0", "text": "Source code for langchain.callbacks.openai_info\n\"\"\"Callback Handler that prints to std out.\"\"\"\nfrom typing import Any, Dict, List\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import LLMResult\nMODEL_COST_PER_1K_TOKENS = {\n # GPT-4 input\n \"gpt-4\": 0.03,\n \"gpt-4-0314\": 0.03,\n \"gpt-4-0613\": 0.03,\n \"gpt-4-32k\": 0.06,\n \"gpt-4-32k-0314\": 0.06,\n \"gpt-4-32k-0613\": 0.06,\n # GPT-4 output\n \"gpt-4-completion\": 0.06,\n \"gpt-4-0314-completion\": 0.06,\n \"gpt-4-0613-completion\": 0.06,\n \"gpt-4-32k-completion\": 0.12,\n \"gpt-4-32k-0314-completion\": 0.12,\n \"gpt-4-32k-0613-completion\": 0.12,\n # GPT-3.5 input\n \"gpt-3.5-turbo\": 0.0015,\n \"gpt-3.5-turbo-0301\": 0.0015,\n \"gpt-3.5-turbo-0613\": 0.0015,\n \"gpt-3.5-turbo-16k\": 0.003,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} +{"id": "2bb75dbc000d-1", "text": "\"gpt-3.5-turbo-16k-0613\": 0.003,\n # GPT-3.5 output\n \"gpt-3.5-turbo-completion\": 0.002,\n \"gpt-3.5-turbo-0301-completion\": 0.002,\n \"gpt-3.5-turbo-0613-completion\": 0.002,\n \"gpt-3.5-turbo-16k-completion\": 0.004,\n \"gpt-3.5-turbo-16k-0613-completion\": 0.004,\n # Others\n \"gpt-35-turbo\": 0.002, # Azure OpenAI version of ChatGPT\n \"text-ada-001\": 0.0004,\n \"ada\": 0.0004,\n \"text-babbage-001\": 0.0005,\n \"babbage\": 0.0005,\n \"text-curie-001\": 0.002,\n \"curie\": 0.002,\n \"text-davinci-003\": 0.02,\n \"text-davinci-002\": 0.02,\n \"code-davinci-002\": 0.02,\n \"ada-finetuned\": 0.0016,\n \"babbage-finetuned\": 0.0024,\n \"curie-finetuned\": 0.012,\n \"davinci-finetuned\": 0.12,\n}\ndef standardize_model_name(\n model_name: str,\n is_completion: bool = False,\n) -> str:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} +{"id": "2bb75dbc000d-2", "text": "is_completion: bool = False,\n) -> str:\n \"\"\"\n Standardize the model 
name to a format that can be used in the OpenAI API.\n Args:\n model_name: Model name to standardize.\n is_completion: Whether the model is used for completion or not.\n Defaults to False.\n Returns:\n Standardized model name.\n \"\"\"\n model_name = model_name.lower()\n if \"ft-\" in model_name:\n return model_name.split(\":\")[0] + \"-finetuned\"\n elif is_completion and (\n model_name.startswith(\"gpt-4\") or model_name.startswith(\"gpt-3.5\")\n ):\n return model_name + \"-completion\"\n else:\n return model_name\ndef get_openai_token_cost_for_model(\n model_name: str, num_tokens: int, is_completion: bool = False\n) -> float:\n \"\"\"\n Get the cost in USD for a given model and number of tokens.\n Args:\n model_name: Name of the model\n num_tokens: Number of tokens.\n is_completion: Whether the model is used for completion or not.\n Defaults to False.\n Returns:\n Cost in USD.\n \"\"\"\n model_name = standardize_model_name(model_name, is_completion=is_completion)\n if model_name not in MODEL_COST_PER_1K_TOKENS:\n raise ValueError(\n f\"Unknown model: {model_name}. 
Please provide a valid OpenAI model name.\"\n \" Known models are: \" + \", \".join(MODEL_COST_PER_1K_TOKENS.keys())\n )\n return MODEL_COST_PER_1K_TOKENS[model_name] * num_tokens / 1000", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} +{"id": "2bb75dbc000d-3", "text": "[docs]class OpenAICallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that tracks OpenAI info.\"\"\"\n total_tokens: int = 0\n prompt_tokens: int = 0\n completion_tokens: int = 0\n successful_requests: int = 0\n total_cost: float = 0.0\n def __repr__(self) -> str:\n return (\n f\"Tokens Used: {self.total_tokens}\\n\"\n f\"\\tPrompt Tokens: {self.prompt_tokens}\\n\"\n f\"\\tCompletion Tokens: {self.completion_tokens}\\n\"\n f\"Successful Requests: {self.successful_requests}\\n\"\n f\"Total Cost (USD): ${self.total_cost}\"\n )\n @property\n def always_verbose(self) -> bool:\n \"\"\"Whether to call verbose callbacks even if verbose is False.\"\"\"\n return True\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Print out the prompts.\"\"\"\n pass\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Print out the token.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Collect token usage.\"\"\"\n if response.llm_output is None:\n return None\n self.successful_requests += 1\n if \"token_usage\" not in response.llm_output:\n return None\n token_usage = response.llm_output[\"token_usage\"]\n completion_tokens = token_usage.get(\"completion_tokens\", 0)\n prompt_tokens = token_usage.get(\"prompt_tokens\", 0)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} +{"id": "2bb75dbc000d-4", "text": "prompt_tokens = token_usage.get(\"prompt_tokens\", 0)\n model_name = standardize_model_name(response.llm_output.get(\"model_name\", \"\"))\n if model_name in 
MODEL_COST_PER_1K_TOKENS:\n completion_cost = get_openai_token_cost_for_model(\n model_name, completion_tokens, is_completion=True\n )\n prompt_cost = get_openai_token_cost_for_model(model_name, prompt_tokens)\n self.total_cost += prompt_cost + completion_cost\n self.total_tokens += token_usage.get(\"total_tokens\", 0)\n self.prompt_tokens += prompt_tokens\n self.completion_tokens += completion_tokens\n def __copy__(self) -> \"OpenAICallbackHandler\":\n \"\"\"Return a copy of the callback handler.\"\"\"\n return self\n def __deepcopy__(self, memo: Any) -> \"OpenAICallbackHandler\":\n \"\"\"Return a deep copy of the callback handler.\"\"\"\n return self", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/openai_info.html"} +{"id": "cb1b74b447f3-0", "text": "Source code for langchain.callbacks.infino_callback\nimport time\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\ndef import_infino() -> Any:\n try:\n from infinopy import InfinoClient\n except ImportError:\n raise ImportError(\n \"To use the Infino callbacks manager you need to have the\"\n \" `infinopy` python package installed.\"\n \"Please install it with `pip install infinopy`\"\n )\n return InfinoClient()\n[docs]class InfinoCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Infino.\"\"\"\n def __init__(\n self,\n model_id: Optional[str] = None,\n model_version: Optional[str] = None,\n verbose: bool = False,\n ) -> None:\n # Set Infino client\n self.client = import_infino()\n self.model_id = model_id\n self.model_version = model_version\n self.verbose = verbose\n def _send_to_infino(\n self,\n key: str,\n value: Any,\n is_ts: bool = True,\n ) -> None:\n \"\"\"Send the key-value to Infino.\n Parameters:\n key (str): the key to send to Infino.\n value (Any): the value to send to Infino.\n is_ts (bool): if True, the value is 
part of a time series, else it\n is sent as a log message.\n \"\"\"\n payload = {\n \"date\": int(time.time()),\n key: value,\n \"labels\": {\n \"model_id\": self.model_id,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/infino_callback.html"} +{"id": "cb1b74b447f3-1", "text": "\"labels\": {\n \"model_id\": self.model_id,\n \"model_version\": self.model_version,\n },\n }\n if self.verbose:\n print(f\"Tracking {key} with Infino: {payload}\")\n # Append to Infino time series only if is_ts is True, otherwise\n # append to Infino log.\n if is_ts:\n self.client.append_ts(payload)\n else:\n self.client.append_log(payload)\n[docs] def on_llm_start(\n self,\n serialized: Dict[str, Any],\n prompts: List[str],\n **kwargs: Any,\n ) -> None:\n \"\"\"Log the prompts to Infino, and set start time and error flag.\"\"\"\n for prompt in prompts:\n self._send_to_infino(\"prompt\", prompt, is_ts=False)\n # Set the error flag to indicate no error (this will get overridden\n # in on_llm_error if an error occurs).\n self.error = 0\n # Set the start time (so that we can calculate the request\n # duration in on_llm_end).\n self.start_time = time.time()\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing when a new token is generated.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Log the latency, error, token usage, and response to Infino.\"\"\"\n # Calculate and track the request latency.\n self.end_time = time.time()\n duration = self.end_time - self.start_time\n self._send_to_infino(\"latency\", duration)\n # Track success or error flag.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/infino_callback.html"} +{"id": "cb1b74b447f3-2", "text": "# Track success or error flag.\n self._send_to_infino(\"error\", self.error)\n # Track token usage.\n if (response.llm_output is not None) and isinstance(response.llm_output, Dict):\n 
token_usage = response.llm_output[\"token_usage\"]\n if token_usage is not None:\n prompt_tokens = token_usage[\"prompt_tokens\"]\n total_tokens = token_usage[\"total_tokens\"]\n completion_tokens = token_usage[\"completion_tokens\"]\n self._send_to_infino(\"prompt_tokens\", prompt_tokens)\n self._send_to_infino(\"total_tokens\", total_tokens)\n self._send_to_infino(\"completion_tokens\", completion_tokens)\n # Track prompt response.\n for generations in response.generations:\n for generation in generations:\n self._send_to_infino(\"prompt_response\", generation.text, is_ts=False)\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Set the error flag.\"\"\"\n self.error = 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when LLM chain starts.\"\"\"\n pass\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Do nothing when LLM chain ends.\"\"\"\n pass\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Need to log the error.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/infino_callback.html"} +{"id": "cb1b74b447f3-3", "text": "self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool starts.\"\"\"\n pass\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Do nothing when agent takes a specific action.\"\"\"\n pass\n[docs] def on_tool_end(\n self,\n output: str,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool ends.\"\"\"\n pass\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n 
\"\"\"Do nothing when tool outputs an error.\"\"\"\n pass\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/infino_callback.html"} +{"id": "35b2b2a31324-0", "text": "Source code for langchain.callbacks.human\nfrom typing import Any, Callable, Dict, Optional\nfrom uuid import UUID\nfrom langchain.callbacks.base import BaseCallbackHandler\ndef _default_approve(_input: str) -> bool:\n msg = (\n \"Do you approve of the following input? \"\n \"Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.\"\n )\n msg += \"\\n\\n\" + _input + \"\\n\"\n resp = input(msg)\n return resp.lower() in (\"yes\", \"y\")\ndef _default_true(_: Dict[str, Any]) -> bool:\n return True\nclass HumanRejectedException(Exception):\n \"\"\"Exception to raise when a person manually reviews and rejects a value.\"\"\"\n[docs]class HumanApprovalCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback for manually validating values.\"\"\"\n raise_error: bool = True\n def __init__(\n self,\n approve: Callable[[Any], bool] = _default_approve,\n should_check: Callable[[Dict[str, Any]], bool] = _default_true,\n ):\n self._approve = approve\n self._should_check = should_check\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n *,\n run_id: UUID,\n parent_run_id: Optional[UUID] = None,\n **kwargs: Any,\n ) -> Any:\n if self._should_check(serialized) and not self._approve(input_str):\n raise HumanRejectedException(\n f\"Inputs {input_str} to tool {serialized} were rejected.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/human.html"} +{"id": "62f9a75fcba8-0", "text": "Source code for langchain.callbacks.streaming_stdout_final_only\n\"\"\"Callback Handler streams to stdout on new llm 
token.\"\"\"\nimport sys\nfrom typing import Any, Dict, List, Optional\nfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\nDEFAULT_ANSWER_PREFIX_TOKENS = [\"Final\", \"Answer\", \":\"]\n[docs]class FinalStreamingStdOutCallbackHandler(StreamingStdOutCallbackHandler):\n \"\"\"Callback handler for streaming in agents.\n Only works with agents using LLMs that support streaming.\n Only the final output of the agent will be streamed.\n \"\"\"\n[docs] def append_to_last_tokens(self, token: str) -> None:\n self.last_tokens.append(token)\n self.last_tokens_stripped.append(token.strip())\n if len(self.last_tokens) > len(self.answer_prefix_tokens):\n self.last_tokens.pop(0)\n self.last_tokens_stripped.pop(0)\n[docs] def check_if_answer_reached(self) -> bool:\n if self.strip_tokens:\n return self.last_tokens_stripped == self.answer_prefix_tokens_stripped\n else:\n return self.last_tokens == self.answer_prefix_tokens\n def __init__(\n self,\n *,\n answer_prefix_tokens: Optional[List[str]] = None,\n strip_tokens: bool = True,\n stream_prefix: bool = False\n ) -> None:\n \"\"\"Instantiate FinalStreamingStdOutCallbackHandler.\n Args:\n answer_prefix_tokens: Token sequence that prefixes the answer.\n Default is [\"Final\", \"Answer\", \":\"]\n strip_tokens: Ignore white spaces and new lines when comparing\n answer_prefix_tokens to last tokens? 
(to determine if answer has been\n reached)\n stream_prefix: Should answer prefix itself also be streamed?\n \"\"\"\n super().__init__()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_stdout_final_only.html"} +{"id": "62f9a75fcba8-1", "text": "\"\"\"\n super().__init__()\n if answer_prefix_tokens is None:\n self.answer_prefix_tokens = DEFAULT_ANSWER_PREFIX_TOKENS\n else:\n self.answer_prefix_tokens = answer_prefix_tokens\n if strip_tokens:\n self.answer_prefix_tokens_stripped = [\n token.strip() for token in self.answer_prefix_tokens\n ]\n else:\n self.answer_prefix_tokens_stripped = self.answer_prefix_tokens\n self.last_tokens = [\"\"] * len(self.answer_prefix_tokens)\n self.last_tokens_stripped = [\"\"] * len(self.answer_prefix_tokens)\n self.strip_tokens = strip_tokens\n self.stream_prefix = stream_prefix\n self.answer_reached = False\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts running.\"\"\"\n self.answer_reached = False\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run on new LLM token. Only available when streaming is enabled.\"\"\"\n # Remember the last n tokens, where n = len(answer_prefix_tokens)\n self.append_to_last_tokens(token)\n # Check if the last n tokens match the answer_prefix_tokens list ...\n if self.check_if_answer_reached():\n self.answer_reached = True\n if self.stream_prefix:\n for t in self.last_tokens:\n sys.stdout.write(t)\n sys.stdout.flush()\n return\n # ... 
if yes, then print tokens from now on\n if self.answer_reached:\n sys.stdout.write(token)\n sys.stdout.flush()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_stdout_final_only.html"} +{"id": "ae9d10aaa559-0", "text": "Source code for langchain.callbacks.wandb_callback\nimport json\nimport tempfile\nfrom copy import deepcopy\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Sequence, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n hash_string,\n import_pandas,\n import_spacy,\n import_textstat,\n)\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\ndef import_wandb() -> Any:\n \"\"\"Import the wandb python package and raise an error if it is not installed.\"\"\"\n try:\n import wandb # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the wandb callback manager you need to have the `wandb` python \"\n \"package installed. 
Please install it with `pip install wandb`\"\n )\n return wandb\ndef load_json_to_dict(json_path: Union[str, Path]) -> dict:\n \"\"\"Load json file to a dictionary.\n Parameters:\n json_path (str): The path to the json file.\n Returns:\n (dict): The dictionary representation of the json file.\n \"\"\"\n with open(json_path, \"r\") as f:\n data = json.load(f)\n return data\ndef analyze_text(\n text: str,\n complexity_metrics: bool = True,\n visualize: bool = True,\n nlp: Any = None,\n output_dir: Optional[Union[str, Path]] = None,\n) -> dict:\n \"\"\"Analyze text using textstat and spacy.\n Parameters:\n text (str): The text to analyze.\n complexity_metrics (bool): Whether to compute complexity metrics.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-1", "text": "complexity_metrics (bool): Whether to compute complexity metrics.\n visualize (bool): Whether to visualize the text.\n nlp (spacy.lang): The spacy language model to use for visualization.\n output_dir (str): The directory to save the visualization files to.\n Returns:\n (dict): A dictionary containing the complexity metrics and visualization\n files serialized in a wandb.Html element.\n \"\"\"\n resp = {}\n textstat = import_textstat()\n wandb = import_wandb()\n spacy = import_spacy()\n if complexity_metrics:\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),\n \"automated_readability_index\": textstat.automated_readability_index(text),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(text),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n \"text_standard\": 
textstat.text_standard(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-2", "text": "\"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n resp.update(text_complexity_metrics)\n if visualize and nlp and output_dir is not None:\n doc = nlp(text)\n dep_out = spacy.displacy.render( # type: ignore\n doc, style=\"dep\", jupyter=False, page=True\n )\n dep_output_path = Path(output_dir, hash_string(f\"dep-{text}\") + \".html\")\n dep_output_path.open(\"w\", encoding=\"utf-8\").write(dep_out)\n ent_out = spacy.displacy.render( # type: ignore\n doc, style=\"ent\", jupyter=False, page=True\n )\n ent_output_path = Path(output_dir, hash_string(f\"ent-{text}\") + \".html\")\n ent_output_path.open(\"w\", encoding=\"utf-8\").write(ent_out)\n text_visualizations = {\n \"dependency_tree\": wandb.Html(str(dep_output_path)),\n \"entities\": wandb.Html(str(ent_output_path)),\n }\n resp.update(text_visualizations)\n return resp\ndef construct_html_from_prompt_and_generation(prompt: str, generation: str) -> Any:\n \"\"\"Construct an html element from a prompt and a generation.\n Parameters:\n prompt (str): The prompt.\n generation (str): The generation.\n Returns:\n (wandb.Html): The html element.\"\"\"\n wandb = import_wandb()\n formatted_prompt = prompt.replace(\"\\n\", \"<br>\")\n formatted_generation = generation.replace(\"\\n\", \"<br>
\")\n return wandb.Html(\n f\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-3", "text": "return wandb.Html(\n f\"\"\"\n

<p style=\"color:black;\">{formatted_prompt}:</p>\n <blockquote>\n <p style=\"color:green;\">\n {formatted_generation}\n </p>\n </blockquote>
\n \"\"\",\n inject=False,\n )\n[docs]class WandbCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Weights and Biases.\n Parameters:\n job_type (str): The type of job.\n project (str): The project to log to.\n entity (str): The entity to log to.\n tags (list): The tags to log.\n group (str): The group to log to.\n name (str): The name of the run.\n notes (str): The notes to log.\n visualize (bool): Whether to visualize the run.\n complexity_metrics (bool): Whether to log complexity metrics.\n stream_logs (bool): Whether to stream callback actions to W&B\n This handler will utilize the associated callback method called and formats\n the input of each callback function with metadata regarding the state of LLM run,\n and adds the response to the list of records for both the {method}_records and\n action. It then logs the response using the run.log() method to Weights and Biases.\n \"\"\"\n def __init__(\n self,\n job_type: Optional[str] = None,\n project: Optional[str] = \"langchain_callback_demo\",\n entity: Optional[str] = None,\n tags: Optional[Sequence] = None,\n group: Optional[str] = None,\n name: Optional[str] = None,\n notes: Optional[str] = None,\n visualize: bool = False,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-4", "text": "notes: Optional[str] = None,\n visualize: bool = False,\n complexity_metrics: bool = False,\n stream_logs: bool = False,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n wandb = import_wandb()\n import_pandas()\n import_textstat()\n spacy = import_spacy()\n super().__init__()\n self.job_type = job_type\n self.project = project\n self.entity = entity\n self.tags = tags\n self.group = group\n self.name = name\n self.notes = notes\n self.visualize = visualize\n self.complexity_metrics = complexity_metrics\n self.stream_logs = stream_logs\n self.temp_dir = tempfile.TemporaryDirectory()\n 
self.run: wandb.sdk.wandb_run.Run = wandb.init( # type: ignore\n job_type=self.job_type,\n project=self.project,\n entity=self.entity,\n tags=self.tags,\n group=self.group,\n name=self.name,\n notes=self.notes,\n )\n warning = (\n \"DEPRECATION: The `WandbCallbackHandler` will soon be deprecated in favor \"\n \"of the `WandbTracer`. Please update your code to use the `WandbTracer` \"\n \"instead.\"\n )\n wandb.termwarn(\n warning,\n repeat=False,\n )\n self.callback_columns: list = []\n self.action_records: list = []\n self.complexity_metrics = complexity_metrics\n self.visualize = visualize\n self.nlp = spacy.load(\"en_core_web_sm\")\n def _init_resp(self) -> Dict:\n return {k: None for k in self.callback_columns}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-5", "text": "return {k: None for k in self.callback_columns}\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.step += 1\n self.llm_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n for prompt in prompts:\n prompt_resp = deepcopy(resp)\n prompt_resp[\"prompts\"] = prompt\n self.on_llm_start_records.append(prompt_resp)\n self.action_records.append(prompt_resp)\n if self.stream_logs:\n self.run.log(prompt_resp)\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.step += 1\n self.llm_streams += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_new_token\", \"token\": token})\n resp.update(self.get_custom_callback_meta())\n self.on_llm_token_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) 
-> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.step += 1\n self.llm_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_end\"})", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-6", "text": "resp.update({\"action\": \"on_llm_end\"})\n resp.update(flatten_dict(response.llm_output or {}))\n resp.update(self.get_custom_callback_meta())\n for generations in response.generations:\n for generation in generations:\n generation_resp = deepcopy(resp)\n generation_resp.update(flatten_dict(generation.dict()))\n generation_resp.update(\n analyze_text(\n generation.text,\n complexity_metrics=self.complexity_metrics,\n visualize=self.visualize,\n nlp=self.nlp,\n output_dir=self.temp_dir.name,\n )\n )\n self.on_llm_end_records.append(generation_resp)\n self.action_records.append(generation_resp)\n if self.stream_logs:\n self.run.log(generation_resp)\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n chain_input = inputs[\"input\"]\n if isinstance(chain_input, str):\n input_resp = deepcopy(resp)\n input_resp[\"input\"] = chain_input\n self.on_chain_start_records.append(input_resp)\n self.action_records.append(input_resp)\n if self.stream_logs:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-7", "text": "self.action_records.append(input_resp)\n if self.stream_logs:\n 
self.run.log(input_resp)\n elif isinstance(chain_input, list):\n for inp in chain_input:\n input_resp = deepcopy(resp)\n input_resp.update(inp)\n self.on_chain_start_records.append(input_resp)\n self.action_records.append(input_resp)\n if self.stream_logs:\n self.run.log(input_resp)\n else:\n raise ValueError(\"Unexpected data format provided!\")\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_end\", \"outputs\": outputs[\"output\"]})\n resp.update(self.get_custom_callback_meta())\n self.on_chain_end_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_start\", \"input_str\": input_str})\n resp.update(flatten_dict(serialized))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-8", "text": "resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n self.on_tool_start_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.step += 1\n self.tool_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_end\", \"output\": output})\n resp.update(self.get_custom_callback_meta())\n 
self.on_tool_end_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_text\", \"text\": text})\n resp.update(self.get_custom_callback_meta())\n self.on_text_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n self.step += 1\n self.agent_ends += 1\n self.ends += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-9", "text": "self.agent_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update(\n {\n \"action\": \"on_agent_finish\",\n \"output\": finish.return_values[\"output\"],\n \"log\": finish.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.on_agent_finish_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update(\n {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n \"tool_input\": action.tool_input,\n \"log\": action.log,\n }\n )\n resp.update(self.get_custom_callback_meta())\n self.on_agent_action_records.append(resp)\n self.action_records.append(resp)\n if self.stream_logs:\n self.run.log(resp)\n def _create_session_analysis_df(self) -> Any:\n \"\"\"Create a dataframe with all the information from 
the session.\"\"\"\n pd = import_pandas()\n on_llm_start_records_df = pd.DataFrame(self.on_llm_start_records)\n on_llm_end_records_df = pd.DataFrame(self.on_llm_end_records)\n llm_input_prompts_df = (\n on_llm_start_records_df[[\"step\", \"prompts\", \"name\"]]\n .dropna(axis=1)\n .rename({\"step\": \"prompt_step\"}, axis=1)\n )\n complexity_metrics_columns = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-10", "text": ")\n complexity_metrics_columns = []\n visualizations_columns = []\n if self.complexity_metrics:\n complexity_metrics_columns = [\n \"flesch_reading_ease\",\n \"flesch_kincaid_grade\",\n \"smog_index\",\n \"coleman_liau_index\",\n \"automated_readability_index\",\n \"dale_chall_readability_score\",\n \"difficult_words\",\n \"linsear_write_formula\",\n \"gunning_fog\",\n \"text_standard\",\n \"fernandez_huerta\",\n \"szigriszt_pazos\",\n \"gutierrez_polini\",\n \"crawford\",\n \"gulpease_index\",\n \"osman\",\n ]\n if self.visualize:\n visualizations_columns = [\"dependency_tree\", \"entities\"]\n llm_outputs_df = (\n on_llm_end_records_df[\n [\n \"step\",\n \"text\",\n \"token_usage_total_tokens\",\n \"token_usage_prompt_tokens\",\n \"token_usage_completion_tokens\",\n ]\n + complexity_metrics_columns\n + visualizations_columns\n ]\n .dropna(axis=1)\n .rename({\"step\": \"output_step\", \"text\": \"output\"}, axis=1)\n )\n session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)\n session_analysis_df[\"chat_html\"] = session_analysis_df[\n [\"prompts\", \"output\"]\n ].apply(\n lambda row: construct_html_from_prompt_and_generation(\n row[\"prompts\"], row[\"output\"]\n ),\n axis=1,\n )\n return session_analysis_df", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-11", "text": "),\n axis=1,\n )\n return session_analysis_df\n[docs] def flush_tracker(\n self,\n 
langchain_asset: Any = None,\n reset: bool = True,\n finish: bool = False,\n job_type: Optional[str] = None,\n project: Optional[str] = None,\n entity: Optional[str] = None,\n tags: Optional[Sequence] = None,\n group: Optional[str] = None,\n name: Optional[str] = None,\n notes: Optional[str] = None,\n visualize: Optional[bool] = None,\n complexity_metrics: Optional[bool] = None,\n ) -> None:\n \"\"\"Flush the tracker and reset the session.\n Args:\n langchain_asset: The langchain asset to save.\n reset: Whether to reset the session.\n finish: Whether to finish the run.\n job_type: The job type.\n project: The project.\n entity: The entity.\n tags: The tags.\n group: The group.\n name: The name.\n notes: The notes.\n visualize: Whether to visualize.\n complexity_metrics: Whether to compute complexity metrics.\n Returns:\n None\n \"\"\"\n pd = import_pandas()\n wandb = import_wandb()\n action_records_table = wandb.Table(dataframe=pd.DataFrame(self.action_records))\n session_analysis_table = wandb.Table(\n dataframe=self._create_session_analysis_df()\n )\n self.run.log(\n {\n \"action_records\": action_records_table,\n \"session_analysis\": session_analysis_table,\n }\n )\n if langchain_asset:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "ae9d10aaa559-12", "text": "}\n )\n if langchain_asset:\n langchain_asset_path = Path(self.temp_dir.name, \"model.json\")\n model_artifact = wandb.Artifact(name=\"model\", type=\"model\")\n model_artifact.add(action_records_table, name=\"action_records\")\n model_artifact.add(session_analysis_table, name=\"session_analysis\")\n try:\n langchain_asset.save(langchain_asset_path)\n model_artifact.add_file(str(langchain_asset_path))\n model_artifact.metadata = load_json_to_dict(langchain_asset_path)\n except ValueError:\n langchain_asset.save_agent(langchain_asset_path)\n model_artifact.add_file(str(langchain_asset_path))\n model_artifact.metadata = 
load_json_to_dict(langchain_asset_path)\n except NotImplementedError as e:\n print(\"Could not save model.\")\n print(repr(e))\n pass\n self.run.log_artifact(model_artifact)\n if finish or reset:\n self.run.finish()\n self.temp_dir.cleanup()\n self.reset_callback_meta()\n if reset:\n self.__init__( # type: ignore\n job_type=job_type if job_type else self.job_type,\n project=project if project else self.project,\n entity=entity if entity else self.entity,\n tags=tags if tags else self.tags,\n group=group if group else self.group,\n name=name if name else self.name,\n notes=notes if notes else self.notes,\n visualize=visualize if visualize else self.visualize,\n complexity_metrics=complexity_metrics\n if complexity_metrics\n else self.complexity_metrics,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/wandb_callback.html"} +{"id": "d027cc07e3dd-0", "text": "Source code for langchain.callbacks.arize_callback\nfrom datetime import datetime\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import import_pandas\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]class ArizeCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Arize.\"\"\"\n def __init__(\n self,\n model_id: Optional[str] = None,\n model_version: Optional[str] = None,\n SPACE_KEY: Optional[str] = None,\n API_KEY: Optional[str] = None,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n super().__init__()\n self.model_id = model_id\n self.model_version = model_version\n self.space_key = SPACE_KEY\n self.api_key = API_KEY\n self.prompt_records: List[str] = []\n self.response_records: List[str] = []\n self.prediction_ids: List[str] = []\n self.pred_timestamps: List[int] = []\n self.response_embeddings: List[float] = []\n self.prompt_embeddings: List[float] = []\n self.prompt_tokens = 0\n self.completion_tokens = 0\n 
self.total_tokens = 0\n self.step = 0\n from arize.pandas.embeddings import EmbeddingGenerator, UseCases\n from arize.pandas.logger import Client\n self.generator = EmbeddingGenerator.from_use_case(\n use_case=UseCases.NLP.SEQUENCE_CLASSIFICATION,\n model_name=\"distilbert-base-uncased\",\n tokenizer_max_length=512,\n batch_size=256,\n )\n self.arize_client = Client(space_key=SPACE_KEY, api_key=API_KEY)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} +{"id": "d027cc07e3dd-1", "text": "self.arize_client = Client(space_key=SPACE_KEY, api_key=API_KEY)\n if SPACE_KEY == \"SPACE_KEY\" or API_KEY == \"API_KEY\":\n raise ValueError(\"\u274c CHANGE SPACE AND API KEYS\")\n else:\n print(\"\u2705 Arize client setup done! Now you can start using Arize!\")\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n for prompt in prompts:\n self.prompt_records.append(prompt.replace(\"\\n\", \"\"))\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n pd = import_pandas()\n from arize.utils.types import (\n EmbeddingColumnNames,\n Environments,\n ModelTypes,\n Schema,\n )\n # Safe check if 'llm_output' and 'token_usage' exist\n if response.llm_output and \"token_usage\" in response.llm_output:\n self.prompt_tokens = response.llm_output[\"token_usage\"].get(\n \"prompt_tokens\", 0\n )\n self.total_tokens = response.llm_output[\"token_usage\"].get(\n \"total_tokens\", 0\n )\n self.completion_tokens = response.llm_output[\"token_usage\"].get(\n \"completion_tokens\", 0\n )\n else:\n self.prompt_tokens = (\n self.total_tokens\n ) = self.completion_tokens = 0 # assign default value\n for generations in response.generations:\n for generation in generations:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} +{"id": "d027cc07e3dd-2", "text": "for generations in response.generations:\n for generation in generations:\n prompt = self.prompt_records[self.step]\n self.step = self.step + 1\n prompt_embedding = pd.Series(\n self.generator.generate_embeddings(\n text_col=pd.Series(prompt.replace(\"\\n\", \" \"))\n ).reset_index(drop=True)\n )\n # Assigning text to response_text instead of response\n response_text = generation.text.replace(\"\\n\", \" \")\n response_embedding = pd.Series(\n self.generator.generate_embeddings(\n text_col=pd.Series(generation.text.replace(\"\\n\", \" \"))\n ).reset_index(drop=True)\n )\n pred_timestamp = datetime.now().timestamp()\n # Define the columns and data\n columns = [\n \"prediction_ts\",\n \"response\",\n \"prompt\",\n \"response_vector\",\n \"prompt_vector\",\n \"prompt_token\",\n \"completion_token\",\n \"total_token\",\n ]\n # Data values must follow the column order above\n data = [\n [\n pred_timestamp,\n response_text,\n prompt,\n response_embedding[0],\n prompt_embedding[0],\n self.prompt_tokens,\n self.completion_tokens,\n self.total_tokens,\n ]\n ]\n # Create the DataFrame\n df = pd.DataFrame(data, columns=columns)\n # Declare prompt and response columns\n prompt_columns = EmbeddingColumnNames(\n vector_column_name=\"prompt_vector\", data_column_name=\"prompt\"\n )\n response_columns = EmbeddingColumnNames(\n vector_column_name=\"response_vector\", data_column_name=\"response\"\n )\n schema = Schema(\n timestamp_column_name=\"prediction_ts\",\n tag_column_names=[\n \"prompt_token\",\n \"completion_token\",\n \"total_token\",\n ],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} +{"id": "d027cc07e3dd-3", "text": "\"completion_token\",\n \"total_token\",\n ],\n prompt_column_names=prompt_columns,\n response_column_names=response_columns,\n )\n response_from_arize = self.arize_client.log(\n dataframe=df,\n schema=schema,\n 
model_id=self.model_id,\n model_version=self.model_version,\n model_type=ModelTypes.GENERATIVE_LLM,\n environment=Environments.PRODUCTION,\n )\n if response_from_arize.status_code == 200:\n print(\"\u2705 Successfully logged data to Arize!\")\n else:\n print(f'\u274c Logging failed \"{response_from_arize.text}\"')\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n pass\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n pass\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_tool_end(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} +{"id": "d027cc07e3dd-4", "text": "pass\n[docs] def on_tool_end(\n self,\n output: str,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n pass\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n pass\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n pass\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/arize_callback.html"} +{"id": "649364316cd6-0", "text": "Source code for langchain.callbacks.aim_callback\nfrom copy import deepcopy\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import 
BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\ndef import_aim() -> Any:\n \"\"\"Import the aim python package and raise an error if it is not installed.\"\"\"\n try:\n import aim\n except ImportError:\n raise ImportError(\n \"To use the Aim callback manager you need to have the\"\n \" `aim` python package installed.\"\n \"Please install it with `pip install aim`\"\n )\n return aim\nclass BaseMetadataCallbackHandler:\n \"\"\"This class handles the metadata and associated function states for callbacks.\n Attributes:\n step (int): The current step.\n starts (int): The number of times the start method has been called.\n ends (int): The number of times the end method has been called.\n errors (int): The number of times the error method has been called.\n text_ctr (int): The number of times the text method has been called.\n ignore_llm_ (bool): Whether to ignore llm callbacks.\n ignore_chain_ (bool): Whether to ignore chain callbacks.\n ignore_agent_ (bool): Whether to ignore agent callbacks.\n always_verbose_ (bool): Whether to always be verbose.\n chain_starts (int): The number of times the chain start method has been called.\n chain_ends (int): The number of times the chain end method has been called.\n llm_starts (int): The number of times the llm start method has been called.\n llm_ends (int): The number of times the llm end method has been called.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-1", "text": "llm_streams (int): The number of times the text method has been called.\n tool_starts (int): The number of times the tool start method has been called.\n tool_ends (int): The number of times the tool end method has been called.\n agent_ends (int): The number of times the agent end method has been called.\n \"\"\"\n def __init__(self) -> None:\n self.step = 0\n self.starts = 0\n self.ends = 0\n self.errors = 0\n self.text_ctr = 0\n 
self.ignore_llm_ = False\n self.ignore_chain_ = False\n self.ignore_agent_ = False\n self.always_verbose_ = False\n self.chain_starts = 0\n self.chain_ends = 0\n self.llm_starts = 0\n self.llm_ends = 0\n self.llm_streams = 0\n self.tool_starts = 0\n self.tool_ends = 0\n self.agent_ends = 0\n @property\n def always_verbose(self) -> bool:\n \"\"\"Whether to call verbose callbacks even if verbose is False.\"\"\"\n return self.always_verbose_\n @property\n def ignore_llm(self) -> bool:\n \"\"\"Whether to ignore LLM callbacks.\"\"\"\n return self.ignore_llm_\n @property\n def ignore_chain(self) -> bool:\n \"\"\"Whether to ignore chain callbacks.\"\"\"\n return self.ignore_chain_\n @property\n def ignore_agent(self) -> bool:\n \"\"\"Whether to ignore agent callbacks.\"\"\"\n return self.ignore_agent_\n def get_custom_callback_meta(self) -> Dict[str, Any]:\n return {\n \"step\": self.step,\n \"starts\": self.starts,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-2", "text": "\"step\": self.step,\n \"starts\": self.starts,\n \"ends\": self.ends,\n \"errors\": self.errors,\n \"text_ctr\": self.text_ctr,\n \"chain_starts\": self.chain_starts,\n \"chain_ends\": self.chain_ends,\n \"llm_starts\": self.llm_starts,\n \"llm_ends\": self.llm_ends,\n \"llm_streams\": self.llm_streams,\n \"tool_starts\": self.tool_starts,\n \"tool_ends\": self.tool_ends,\n \"agent_ends\": self.agent_ends,\n }\n def reset_callback_meta(self) -> None:\n \"\"\"Reset the callback metadata.\"\"\"\n self.step = 0\n self.starts = 0\n self.ends = 0\n self.errors = 0\n self.text_ctr = 0\n self.ignore_llm_ = False\n self.ignore_chain_ = False\n self.ignore_agent_ = False\n self.always_verbose_ = False\n self.chain_starts = 0\n self.chain_ends = 0\n self.llm_starts = 0\n self.llm_ends = 0\n self.llm_streams = 0\n self.tool_starts = 0\n self.tool_ends = 0\n self.agent_ends = 0\n return None\n[docs]class 
AimCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Aim.\n Parameters:\n repo (:obj:`str`, optional): Aim repository path or Repo object to which\n Run object is bound. If skipped, default Repo is used.\n experiment_name (:obj:`str`, optional): Sets Run's `experiment` property.\n 'default' if not specified. Can be used later to query runs/sequences.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-3", "text": "'default' if not specified. Can be used later to query runs/sequences.\n system_tracking_interval (:obj:`int`, optional): Sets the tracking interval\n in seconds for system usage metrics (CPU, Memory, etc.). Set to `None`\n to disable system metrics tracking.\n log_system_params (:obj:`bool`, optional): Enable/Disable logging of system\n params such as installed packages, git info, environment variables, etc.\n This handler will utilize the associated callback method called and formats\n the input of each callback function with metadata regarding the state of LLM run\n and then logs the response to Aim.\n \"\"\"\n def __init__(\n self,\n repo: Optional[str] = None,\n experiment_name: Optional[str] = None,\n system_tracking_interval: Optional[int] = 10,\n log_system_params: bool = True,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n super().__init__()\n aim = import_aim()\n self.repo = repo\n self.experiment_name = experiment_name\n self.system_tracking_interval = system_tracking_interval\n self.log_system_params = log_system_params\n self._run = aim.Run(\n repo=self.repo,\n experiment=self.experiment_name,\n system_tracking_interval=self.system_tracking_interval,\n log_system_params=self.log_system_params,\n )\n self._run_hash = self._run.hash\n self.action_records: list = []\n[docs] def setup(self, **kwargs: Any) -> None:\n aim = import_aim()\n if not self._run:\n if self._run_hash:\n self._run = aim.Run(\n 
self._run_hash,\n repo=self.repo,\n system_tracking_interval=self.system_tracking_interval,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-4", "text": "repo=self.repo,\n system_tracking_interval=self.system_tracking_interval,\n )\n else:\n self._run = aim.Run(\n repo=self.repo,\n experiment=self.experiment_name,\n system_tracking_interval=self.system_tracking_interval,\n log_system_params=self.log_system_params,\n )\n self._run_hash = self._run.hash\n if kwargs:\n for key, value in kwargs.items():\n self._run.set(key, value, strict=False)\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n aim = import_aim()\n self.step += 1\n self.llm_starts += 1\n self.starts += 1\n resp = {\"action\": \"on_llm_start\"}\n resp.update(self.get_custom_callback_meta())\n prompts_res = deepcopy(prompts)\n self._run.track(\n [aim.Text(prompt) for prompt in prompts_res],\n name=\"on_llm_start\",\n context=resp,\n )\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.llm_ends += 1\n self.ends += 1\n resp = {\"action\": \"on_llm_end\"}\n resp.update(self.get_custom_callback_meta())\n response_res = deepcopy(response)\n generated = [\n aim.Text(generation.text)\n for generations in response_res.generations\n for generation in generations\n ]\n self._run.track(\n generated,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-5", "text": "for generation in generations\n ]\n self._run.track(\n generated,\n name=\"on_llm_end\",\n context=resp,\n )\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.step += 1\n self.llm_streams += 1\n[docs] def on_llm_error(\n 
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp = {\"action\": \"on_chain_start\"}\n resp.update(self.get_custom_callback_meta())\n inputs_res = deepcopy(inputs)\n self._run.track(\n aim.Text(inputs_res[\"input\"]), name=\"on_chain_start\", context=resp\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp = {\"action\": \"on_chain_end\"}\n resp.update(self.get_custom_callback_meta())\n outputs_res = deepcopy(outputs)\n self._run.track(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-6", "text": "outputs_res = deepcopy(outputs)\n self._run.track(\n aim.Text(outputs_res[\"output\"]), name=\"on_chain_end\", context=resp\n )\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = {\"action\": \"on_tool_start\"}\n resp.update(self.get_custom_callback_meta())\n self._run.track(aim.Text(input_str), name=\"on_tool_start\", context=resp)\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.tool_ends += 1\n self.ends += 1\n resp = {\"action\": 
\"on_tool_end\"}\n resp.update(self.get_custom_callback_meta())\n self._run.track(aim.Text(output), name=\"on_tool_end\", context=resp)\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-7", "text": "\"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n aim = import_aim()\n self.step += 1\n self.agent_ends += 1\n self.ends += 1\n resp = {\"action\": \"on_agent_finish\"}\n resp.update(self.get_custom_callback_meta())\n finish_res = deepcopy(finish)\n text = \"OUTPUT:\\n{}\\n\\nLOG:\\n{}\".format(\n finish_res.return_values[\"output\"], finish_res.log\n )\n self._run.track(aim.Text(text), name=\"on_agent_finish\", context=resp)\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n aim = import_aim()\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n }\n resp.update(self.get_custom_callback_meta())\n action_res = deepcopy(action)\n text = \"TOOL INPUT:\\n{}\\n\\nLOG:\\n{}\".format(\n action_res.tool_input, action_res.log\n )\n self._run.track(aim.Text(text), name=\"on_agent_action\", context=resp)\n[docs] def flush_tracker(\n self,\n repo: Optional[str] = None,\n experiment_name: Optional[str] = None,\n system_tracking_interval: Optional[int] = 10,\n log_system_params: bool = True,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-8", "text": "log_system_params: bool = True,\n 
langchain_asset: Any = None,\n reset: bool = True,\n finish: bool = False,\n ) -> None:\n \"\"\"Flush the tracker and reset the session.\n Args:\n repo (:obj:`str`, optional): Aim repository path or Repo object to which\n Run object is bound. If skipped, default Repo is used.\n experiment_name (:obj:`str`, optional): Sets Run's `experiment` property.\n 'default' if not specified. Can be used later to query runs/sequences.\n system_tracking_interval (:obj:`int`, optional): Sets the tracking interval\n in seconds for system usage metrics (CPU, Memory, etc.). Set to `None`\n to disable system metrics tracking.\n log_system_params (:obj:`bool`, optional): Enable/Disable logging of system\n params such as installed packages, git info, environment variables, etc.\n langchain_asset: The langchain asset to save.\n reset: Whether to reset the session.\n finish: Whether to finish the run.\n Returns:\n None\n \"\"\"\n if langchain_asset:\n try:\n for key, value in langchain_asset.dict().items():\n self._run.set(key, value, strict=False)\n except Exception:\n pass\n if finish or reset:\n self._run.close()\n self.reset_callback_meta()\n if reset:\n self.__init__( # type: ignore\n repo=repo if repo else self.repo,\n experiment_name=experiment_name\n if experiment_name\n else self.experiment_name,\n system_tracking_interval=system_tracking_interval\n if system_tracking_interval\n else self.system_tracking_interval,\n log_system_params=log_system_params\n if log_system_params", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "649364316cd6-9", "text": "log_system_params=log_system_params\n if log_system_params\n else self.log_system_params,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/aim_callback.html"} +{"id": "058163f5341f-0", "text": "Source code for langchain.callbacks.whylabs_callback\nfrom __future__ import annotations\nimport logging\nfrom typing import TYPE_CHECKING, 
Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, Generation, LLMResult\nfrom langchain.utils import get_from_env\nif TYPE_CHECKING:\n from whylogs.api.logger.logger import Logger\ndiagnostic_logger = logging.getLogger(__name__)\ndef import_langkit(\n sentiment: bool = False,\n toxicity: bool = False,\n themes: bool = False,\n) -> Any:\n \"\"\"Import the langkit python package and raise an error if it is not installed.\n Args:\n sentiment: Whether to import the langkit.sentiment module. Defaults to False.\n toxicity: Whether to import the langkit.toxicity module. Defaults to False.\n themes: Whether to import the langkit.themes module. Defaults to False.\n Returns:\n The imported langkit module.\n \"\"\"\n try:\n import langkit # noqa: F401\n import langkit.regexes # noqa: F401\n import langkit.textstat # noqa: F401\n if sentiment:\n import langkit.sentiment # noqa: F401\n if toxicity:\n import langkit.toxicity # noqa: F401\n if themes:\n import langkit.themes # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the whylabs callback manager you need to have the `langkit` python \"\n \"package installed. 
Please install it with `pip install langkit`.\"\n )\n return langkit\n[docs]class WhyLabsCallbackHandler(BaseCallbackHandler):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} +{"id": "058163f5341f-1", "text": "return langkit\n[docs]class WhyLabsCallbackHandler(BaseCallbackHandler):\n \"\"\"WhyLabs CallbackHandler.\"\"\"\n def __init__(self, logger: Logger):\n \"\"\"Initiate the rolling logger\"\"\"\n super().__init__()\n self.logger = logger\n diagnostic_logger.info(\n \"Initialized WhyLabs callback handler with configured whylogs Logger.\"\n )\n def _profile_generations(self, generations: List[Generation]) -> None:\n for gen in generations:\n self.logger.log({\"response\": gen.text})\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Pass the input prompts to the logger\"\"\"\n for prompt in prompts:\n self.logger.log({\"prompt\": prompt})\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Pass the generated response to the logger.\"\"\"\n for generations in response.generations:\n self._profile_generations(generations)\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} +{"id": "058163f5341f-2", "text": "\"\"\"Do nothing.\"\"\"\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] 
def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n[docs] def on_agent_finish(\n self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any\n ) -> None:\n \"\"\"Run on agent end.\"\"\"\n pass\n[docs] def flush(self) -> None:\n self.logger._do_rollover()\n diagnostic_logger.info(\"Flushing WhyLabs logger, writing profile...\")\n[docs] def close(self) -> None:\n self.logger.close()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} +{"id": "058163f5341f-3", "text": "[docs] def close(self) -> None:\n self.logger.close()\n diagnostic_logger.info(\"Closing WhyLabs logger, see you next time!\")\n def __enter__(self) -> WhyLabsCallbackHandler:\n return self\n def __exit__(\n self, exception_type: Any, exception_value: Any, traceback: Any\n ) -> None:\n self.close()\n[docs] @classmethod\n def from_params(\n cls,\n *,\n api_key: Optional[str] = None,\n org_id: Optional[str] = None,\n dataset_id: Optional[str] = None,\n sentiment: bool = False,\n toxicity: bool = False,\n themes: bool = False,\n ) -> Logger:\n \"\"\"Instantiate whylogs Logger from params.\n Args:\n api_key (Optional[str]): WhyLabs API key. 
Optional because the preferred\n way to specify the API key is with environment variable\n WHYLABS_API_KEY.\n org_id (Optional[str]): WhyLabs organization id to write profiles to.\n If not set must be specified in environment variable\n WHYLABS_DEFAULT_ORG_ID.\n dataset_id (Optional[str]): The model or dataset this callback is gathering\n telemetry for. If not set must be specified in environment variable\n WHYLABS_DEFAULT_DATASET_ID.\n sentiment (bool): If True will initialize a model to perform\n sentiment analysis compound score. Defaults to False and will not gather\n this metric.\n toxicity (bool): If True will initialize a model to score\n toxicity. Defaults to False and will not gather this metric.\n themes (bool): If True will initialize a model to calculate\n distance to configured themes. Defaults to None and will not gather this\n metric.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} +{"id": "058163f5341f-4", "text": "metric.\n \"\"\"\n # langkit library will import necessary whylogs libraries\n import_langkit(sentiment=sentiment, toxicity=toxicity, themes=themes)\n import whylogs as why\n from whylogs.api.writer.whylabs import WhyLabsWriter\n from whylogs.core.schema import DeclarativeSchema\n from whylogs.experimental.core.metrics.udf_metric import generate_udf_schema\n api_key = api_key or get_from_env(\"api_key\", \"WHYLABS_API_KEY\")\n org_id = org_id or get_from_env(\"org_id\", \"WHYLABS_DEFAULT_ORG_ID\")\n dataset_id = dataset_id or get_from_env(\n \"dataset_id\", \"WHYLABS_DEFAULT_DATASET_ID\"\n )\n whylabs_writer = WhyLabsWriter(\n api_key=api_key, org_id=org_id, dataset_id=dataset_id\n )\n langkit_schema = DeclarativeSchema(generate_udf_schema())\n whylabs_logger = why.logger(\n mode=\"rolling\", interval=5, when=\"M\", schema=langkit_schema\n )\n whylabs_logger.append_writer(writer=whylabs_writer)\n diagnostic_logger.info(\n \"Started whylogs Logger with WhyLabsWriter and 
initialized LangKit. \ud83d\udcdd\"\n )\n return cls(whylabs_logger)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/whylabs_callback.html"} +{"id": "c64dbede48f0-0", "text": "Source code for langchain.callbacks.streaming_aiter\nfrom __future__ import annotations\nimport asyncio\nfrom typing import Any, AsyncIterator, Dict, List, Literal, Union, cast\nfrom langchain.callbacks.base import AsyncCallbackHandler\nfrom langchain.schema import LLMResult\n# TODO If used by two LLM runs in parallel this won't work as expected\n[docs]class AsyncIteratorCallbackHandler(AsyncCallbackHandler):\n \"\"\"Callback handler that returns an async iterator.\"\"\"\n queue: asyncio.Queue[str]\n done: asyncio.Event\n @property\n def always_verbose(self) -> bool:\n return True\n def __init__(self) -> None:\n self.queue = asyncio.Queue()\n self.done = asyncio.Event()\n[docs] async def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n # If two calls are made in a row, this resets the state\n self.done.clear()\n[docs] async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n self.queue.put_nowait(token)\n[docs] async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n self.done.set()\n[docs] async def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self.done.set()\n # TODO implement the other methods\n[docs] async def aiter(self) -> AsyncIterator[str]:\n while not self.queue.empty() or not self.done.is_set():\n # Wait for the next token in the queue,\n # but stop waiting if the done event is set\n done, other = await asyncio.wait(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_aiter.html"} +{"id": "c64dbede48f0-1", "text": "done, other = await asyncio.wait(\n [\n # NOTE: If you add other tasks here, update the code below,\n # which assumes each set has exactly one task each\n 
asyncio.ensure_future(self.queue.get()),\n asyncio.ensure_future(self.done.wait()),\n ],\n return_when=asyncio.FIRST_COMPLETED,\n )\n # Cancel the other task\n if other:\n other.pop().cancel()\n # Extract the value of the first completed task\n token_or_done = cast(Union[str, Literal[True]], done.pop().result())\n # If the extracted value is the boolean True, the done event was set\n if token_or_done is True:\n break\n # Otherwise, the extracted value is a token, which we yield\n yield token_or_done", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_aiter.html"} +{"id": "97b544f127d5-0", "text": "Source code for langchain.callbacks.streaming_stdout\n\"\"\"Callback Handler streams to stdout on new llm token.\"\"\"\nimport sys\nfrom typing import Any, Dict, List, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]class StreamingStdOutCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback handler for streaming. Only works with LLMs that support streaming.\"\"\"\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts running.\"\"\"\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run on new LLM token. 
Only available when streaming is enabled.\"\"\"\n sys.stdout.write(token)\n sys.stdout.flush()\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_stdout.html"} +{"id": "97b544f127d5-1", "text": ") -> None:\n \"\"\"Run when chain errors.\"\"\"\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n pass\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Run on arbitrary text.\"\"\"\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run on agent end.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streaming_stdout.html"} +{"id": "611bde63148b-0", "text": "Source code for langchain.callbacks.file\n\"\"\"Callback Handler that writes to a file.\"\"\"\nfrom typing import Any, Dict, Optional, TextIO, cast\nfrom 
langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.input import print_text\nfrom langchain.schema import AgentAction, AgentFinish\n[docs]class FileCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that writes to a file.\"\"\"\n def __init__(\n self, filename: str, mode: str = \"a\", color: Optional[str] = None\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n self.file = cast(TextIO, open(filename, mode))\n self.color = color\n def __del__(self) -> None:\n \"\"\"Destructor to cleanup when done.\"\"\"\n self.file.close()\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Print out that we are entering a chain.\"\"\"\n class_name = serialized[\"name\"]\n print_text(\n f\"\\n\\n\\033[1m> Entering new {class_name} chain...\\033[0m\",\n end=\"\\n\",\n file=self.file,\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Print out that we finished a chain.\"\"\"\n print_text(\"\\n\\033[1m> Finished chain.\\033[0m\", end=\"\\n\", file=self.file)\n[docs] def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n \"\"\"Run on agent action.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/file.html"} +{"id": "611bde63148b-1", "text": ") -> Any:\n \"\"\"Run on agent action.\"\"\"\n print_text(action.log, color=color if color else self.color, file=self.file)\n[docs] def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"If not the final action, print out observation.\"\"\"\n if observation_prefix is not None:\n print_text(f\"\\n{observation_prefix}\", file=self.file)\n print_text(output, color=color if color else self.color, file=self.file)\n if llm_prefix is not None:\n print_text(f\"\\n{llm_prefix}\", 
file=self.file)\n[docs] def on_text(\n self,\n text: str,\n color: Optional[str] = None,\n end: str = \"\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when agent ends.\"\"\"\n print_text(text, color=color if color else self.color, end=end, file=self.file)\n[docs] def on_agent_finish(\n self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any\n ) -> None:\n \"\"\"Run on agent end.\"\"\"\n print_text(\n finish.log, color=color if self.color else color, end=\"\\n\", file=self.file\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/file.html"} +{"id": "ecbd189a9b76-0", "text": "Source code for langchain.callbacks.stdout\n\"\"\"Callback Handler that prints to std out.\"\"\"\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.input import print_text\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]class StdOutCallbackHandler(BaseCallbackHandler):\n \"\"\"Callback Handler that prints to std out.\"\"\"\n def __init__(self, color: Optional[str] = None) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n self.color = color\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Print out the prompts.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Print out that we are entering a chain.\"\"\"\n class_name = serialized.get(\"name\", \"\")\n print(f\"\\n\\n\\033[1m> Entering new {class_name} chain...\\033[0m\")\n[docs] def 
on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/stdout.html"} +{"id": "ecbd189a9b76-1", "text": "\"\"\"Print out that we finished a chain.\"\"\"\n print(\"\\n\\033[1m> Finished chain.\\033[0m\")\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n \"\"\"Run on agent action.\"\"\"\n print_text(action.log, color=color if color else self.color)\n[docs] def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"If not the final action, print out observation.\"\"\"\n if observation_prefix is not None:\n print_text(f\"\\n{observation_prefix}\")\n print_text(output, color=color if color else self.color)\n if llm_prefix is not None:\n print_text(f\"\\n{llm_prefix}\")\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing.\"\"\"\n pass\n[docs] def on_text(\n self,\n text: str,\n color: Optional[str] = None,\n end: str = \"\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/stdout.html"} +{"id": "ecbd189a9b76-2", "text": "color: Optional[str] = None,\n end: str = \"\",\n **kwargs: Any,\n ) -> None:\n \"\"\"Run when agent ends.\"\"\"\n print_text(text, color=color if color else self.color, end=end)\n[docs] def on_agent_finish(\n self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any\n ) -> None:\n \"\"\"Run on agent end.\"\"\"\n print_text(finish.log, color=color if self.color else 
color, end=\"\\n\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/stdout.html"} +{"id": "6cc2d514820a-0", "text": "Source code for langchain.callbacks.mlflow_callback\nimport random\nimport string\nimport tempfile\nimport traceback\nfrom copy import deepcopy\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n hash_string,\n import_pandas,\n import_spacy,\n import_textstat,\n)\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\nfrom langchain.utils import get_from_dict_or_env\ndef import_mlflow() -> Any:\n \"\"\"Import the mlflow python package and raise an error if it is not installed.\"\"\"\n try:\n import mlflow\n except ImportError:\n raise ImportError(\n \"To use the mlflow callback manager you need to have the `mlflow` python \"\n \"package installed. Please install it with `pip install mlflow>=2.3.0`\"\n )\n return mlflow\ndef analyze_text(\n text: str,\n nlp: Any = None,\n) -> dict:\n \"\"\"Analyze text using textstat and spacy.\n Parameters:\n text (str): The text to analyze.\n nlp (spacy.lang): The spacy language model to use for visualization.\n Returns:\n (dict): A dictionary containing the complexity metrics and visualization\n files serialized to HTML string.\n \"\"\"\n resp: Dict[str, Any] = {}\n textstat = import_textstat()\n spacy = import_spacy()\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-1", "text": "\"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),\n 
\"automated_readability_index\": textstat.automated_readability_index(text),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(text),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n # \"text_standard\": textstat.text_standard(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n resp.update({\"text_complexity_metrics\": text_complexity_metrics})\n resp.update(text_complexity_metrics)\n if nlp is not None:\n doc = nlp(text)\n dep_out = spacy.displacy.render( # type: ignore\n doc, style=\"dep\", jupyter=False, page=True\n )\n ent_out = spacy.displacy.render( # type: ignore\n doc, style=\"ent\", jupyter=False, page=True\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-2", "text": "doc, style=\"ent\", jupyter=False, page=True\n )\n text_visualizations = {\n \"dependency_tree\": dep_out,\n \"entities\": ent_out,\n }\n resp.update(text_visualizations)\n return resp\ndef construct_html_from_prompt_and_generation(prompt: str, generation: str) -> Any:\n \"\"\"Construct an html element from a prompt and a generation.\n Parameters:\n prompt (str): The prompt.\n generation (str): The generation.\n Returns:\n (str): The html string.\"\"\"\n formatted_prompt = prompt.replace(\"\\n\", \"
<br>\")\n formatted_generation = generation.replace(\"\\n\", \"<br>\")\n return f\"\"\"\n <p style=\"color:black;\">{formatted_prompt}:</p>\n <blockquote>\n <p style=\"color:green;\">\n {formatted_generation}\n </p>\n </blockquote>
\n \"\"\"\nclass MlflowLogger:\n \"\"\"Callback Handler that logs metrics and artifacts to mlflow server.\n Parameters:\n name (str): Name of the run.\n experiment (str): Name of the experiment.\n tags (dict): Tags to be attached for the run.\n tracking_uri (str): MLflow tracking server uri.\n This handler implements the helper functions to initialize,\n log metrics and artifacts to the mlflow server.\n \"\"\"\n def __init__(self, **kwargs: Any):\n self.mlflow = import_mlflow()\n tracking_uri = get_from_dict_or_env(\n kwargs, \"tracking_uri\", \"MLFLOW_TRACKING_URI\", \"\"\n )\n self.mlflow.set_tracking_uri(tracking_uri)\n # User can set other env variables described here", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-3", "text": "# User can set other env variables described here\n # > https://www.mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server\n experiment_name = get_from_dict_or_env(\n kwargs, \"experiment_name\", \"MLFLOW_EXPERIMENT_NAME\"\n )\n self.mlf_exp = self.mlflow.get_experiment_by_name(experiment_name)\n if self.mlf_exp is not None:\n self.mlf_expid = self.mlf_exp.experiment_id\n else:\n self.mlf_expid = self.mlflow.create_experiment(experiment_name)\n self.start_run(kwargs[\"run_name\"], kwargs[\"run_tags\"])\n def start_run(self, name: str, tags: Dict[str, str]) -> None:\n \"\"\"To start a new run, auto generates the random suffix for name\"\"\"\n if name.endswith(\"-%\"):\n rname = \"\".join(random.choices(string.ascii_uppercase + string.digits, k=7))\n name = name.replace(\"%\", rname)\n self.run = self.mlflow.MlflowClient().create_run(\n self.mlf_expid, run_name=name, tags=tags\n )\n def finish_run(self) -> None:\n \"\"\"To finish the run.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.end_run()\n def metric(self, key: str, value: float) -> None:\n \"\"\"To log metric to mlflow 
server.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_metric(key, value)\n def metrics(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-4", "text": "):\n self.mlflow.log_metric(key, value)\n def metrics(\n self, data: Union[Dict[str, float], Dict[str, int]], step: Optional[int] = 0\n ) -> None:\n \"\"\"To log all metrics in the input dict.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_metrics(data)\n def jsonf(self, data: Dict[str, Any], filename: str) -> None:\n \"\"\"To log the input data as json file artifact.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_dict(data, f\"{filename}.json\")\n def table(self, name: str, dataframe) -> None: # type: ignore\n \"\"\"To log the input pandas dataframe as a html table\"\"\"\n self.html(dataframe.to_html(), f\"table_{name}\")\n def html(self, html: str, filename: str) -> None:\n \"\"\"To log the input html string as html file artifact.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_text(html, f\"{filename}.html\")\n def text(self, text: str, filename: str) -> None:\n \"\"\"To log the input text as text file artifact.\"\"\"\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.log_text(text, f\"{filename}.txt\")\n def artifact(self, path: str) -> None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"}
self.mlflow.log_artifact(path)\n def langchain_artifact(self, chain: Any) -> None:\n with self.mlflow.start_run(\n run_id=self.run.info.run_id, experiment_id=self.mlf_expid\n ):\n self.mlflow.langchain.log_model(chain, \"langchain-model\")\n[docs]class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs metrics and artifacts to mlflow server.\n Parameters:\n name (str): Name of the run.\n experiment (str): Name of the experiment.\n tags (dict): Tags to be attached for the run.\n tracking_uri (str): MLflow tracking server uri.\n This handler will utilize the associated callback method called and formats\n the input of each callback function with metadata regarding the state of LLM run,\n and adds the response to the list of records for both the {method}_records and\n action. It then logs the response to mlflow server.\n \"\"\"\n def __init__(\n self,\n name: Optional[str] = \"langchainrun-%\",\n experiment: Optional[str] = \"langchain\",\n tags: Optional[Dict] = {},\n tracking_uri: Optional[str] = None,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n import_pandas()\n import_textstat()\n import_mlflow()\n spacy = import_spacy()\n super().__init__()\n self.name = name\n self.experiment = experiment", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-6", "text": "super().__init__()\n self.name = name\n self.experiment = experiment\n self.tags = tags\n self.tracking_uri = tracking_uri\n self.temp_dir = tempfile.TemporaryDirectory()\n self.mlflg = MlflowLogger(\n tracking_uri=self.tracking_uri,\n experiment_name=self.experiment,\n run_name=self.name,\n run_tags=self.tags,\n )\n self.action_records: list = []\n self.nlp = spacy.load(\"en_core_web_sm\")\n self.metrics = {\n \"step\": 0,\n \"starts\": 0,\n \"ends\": 0,\n \"errors\": 0,\n \"text_ctr\": 0,\n \"chain_starts\": 0,\n \"chain_ends\": 0,\n \"llm_starts\": 0,\n 
\"llm_ends\": 0,\n \"llm_streams\": 0,\n \"tool_starts\": 0,\n \"tool_ends\": 0,\n \"agent_ends\": 0,\n }\n self.records: Dict[str, Any] = {\n \"on_llm_start_records\": [],\n \"on_llm_token_records\": [],\n \"on_llm_end_records\": [],\n \"on_chain_start_records\": [],\n \"on_chain_end_records\": [],\n \"on_tool_start_records\": [],\n \"on_tool_end_records\": [],\n \"on_text_records\": [],\n \"on_agent_finish_records\": [],\n \"on_agent_action_records\": [],\n \"action_records\": [],\n }\n def _reset(self) -> None:\n for k, v in self.metrics.items():\n self.metrics[k] = 0\n for k, v in self.records.items():", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-7", "text": "self.metrics[k] = 0\n for k, v in self.records.items():\n self.records[k] = []\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"llm_starts\"] += 1\n self.metrics[\"starts\"] += 1\n llm_starts = self.metrics[\"llm_starts\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_llm_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n for idx, prompt in enumerate(prompts):\n prompt_resp = deepcopy(resp)\n prompt_resp[\"prompt\"] = prompt\n self.records[\"on_llm_start_records\"].append(prompt_resp)\n self.records[\"action_records\"].append(prompt_resp)\n self.mlflg.jsonf(prompt_resp, f\"llm_start_{llm_starts}_prompt_{idx}\")\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"llm_streams\"] += 1\n llm_streams = self.metrics[\"llm_streams\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_llm_new_token\", \"token\": token})\n resp.update(self.metrics)\n 
self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_llm_token_records\"].append(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-8", "text": "self.records[\"on_llm_token_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"llm_new_tokens_{llm_streams}\")\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"llm_ends\"] += 1\n self.metrics[\"ends\"] += 1\n llm_ends = self.metrics[\"llm_ends\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_llm_end\"})\n resp.update(flatten_dict(response.llm_output or {}))\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n for generations in response.generations:\n for idx, generation in enumerate(generations):\n generation_resp = deepcopy(resp)\n generation_resp.update(flatten_dict(generation.dict()))\n generation_resp.update(\n analyze_text(\n generation.text,\n nlp=self.nlp,\n )\n )\n complexity_metrics: Dict[str, float] = generation_resp.pop(\"text_complexity_metrics\") # type: ignore # noqa: E501\n self.mlflg.metrics(\n complexity_metrics,\n step=self.metrics[\"step\"],\n )\n self.records[\"on_llm_end_records\"].append(generation_resp)\n self.records[\"action_records\"].append(generation_resp)\n self.mlflg.jsonf(resp, f\"llm_end_{llm_ends}_generation_{idx}\")\n dependency_tree = generation_resp[\"dependency_tree\"]\n entities = generation_resp[\"entities\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-9", "text": "dependency_tree = generation_resp[\"dependency_tree\"]\n entities = generation_resp[\"entities\"]\n self.mlflg.html(dependency_tree, \"dep-\" + hash_string(generation.text))\n self.mlflg.html(entities, \"ent-\" + 
hash_string(generation.text))\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"errors\"] += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"chain_starts\"] += 1\n self.metrics[\"starts\"] += 1\n chain_starts = self.metrics[\"chain_starts\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n chain_input = \",\".join([f\"{k}={v}\" for k, v in inputs.items()])\n input_resp = deepcopy(resp)\n input_resp[\"inputs\"] = chain_input\n self.records[\"on_chain_start_records\"].append(input_resp)\n self.records[\"action_records\"].append(input_resp)\n self.mlflg.jsonf(input_resp, f\"chain_start_{chain_starts}\")\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.metrics[\"step\"] += 1", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-10", "text": "\"\"\"Run when chain ends running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"chain_ends\"] += 1\n self.metrics[\"ends\"] += 1\n chain_ends = self.metrics[\"chain_ends\"]\n resp: Dict[str, Any] = {}\n chain_output = \",\".join([f\"{k}={v}\" for k, v in outputs.items()])\n resp.update({\"action\": \"on_chain_end\", \"outputs\": chain_output})\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_chain_end_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"chain_end_{chain_ends}\")\n[docs] def on_chain_error(\n 
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"errors\"] += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"tool_starts\"] += 1\n self.metrics[\"starts\"] += 1\n tool_starts = self.metrics[\"tool_starts\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_tool_start\", \"input_str\": input_str})\n resp.update(flatten_dict(serialized))\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_tool_start_records\"].append(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-11", "text": "self.records[\"on_tool_start_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"tool_start_{tool_starts}\")\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"tool_ends\"] += 1\n self.metrics[\"ends\"] += 1\n tool_ends = self.metrics[\"tool_ends\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_tool_end\", \"output\": output})\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_tool_end_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"tool_end_{tool_ends}\")\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"errors\"] += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.metrics[\"step\"] += 1\n 
self.metrics[\"text_ctr\"] += 1\n text_ctr = self.metrics[\"text_ctr\"]\n resp: Dict[str, Any] = {}\n resp.update({\"action\": \"on_text\", \"text\": text})\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_text_records\"].append(resp)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-12", "text": "self.records[\"on_text_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"on_text_{text_ctr}\")\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"agent_ends\"] += 1\n self.metrics[\"ends\"] += 1\n agent_ends = self.metrics[\"agent_ends\"]\n resp: Dict[str, Any] = {}\n resp.update(\n {\n \"action\": \"on_agent_finish\",\n \"output\": finish.return_values[\"output\"],\n \"log\": finish.log,\n }\n )\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n self.records[\"on_agent_finish_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"agent_finish_{agent_ends}\")\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"\n self.metrics[\"step\"] += 1\n self.metrics[\"tool_starts\"] += 1\n self.metrics[\"starts\"] += 1\n tool_starts = self.metrics[\"tool_starts\"]\n resp: Dict[str, Any] = {}\n resp.update(\n {\n \"action\": \"on_agent_action\",\n \"tool\": action.tool,\n \"tool_input\": action.tool_input,\n \"log\": action.log,\n }\n )\n resp.update(self.metrics)\n self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-13", "text": "self.mlflg.metrics(self.metrics, step=self.metrics[\"step\"])\n 
self.records[\"on_agent_action_records\"].append(resp)\n self.records[\"action_records\"].append(resp)\n self.mlflg.jsonf(resp, f\"agent_action_{tool_starts}\")\n def _create_session_analysis_df(self) -> Any:\n \"\"\"Create a dataframe with all the information from the session.\"\"\"\n pd = import_pandas()\n on_llm_start_records_df = pd.DataFrame(self.records[\"on_llm_start_records\"])\n on_llm_end_records_df = pd.DataFrame(self.records[\"on_llm_end_records\"])\n llm_input_prompts_df = (\n on_llm_start_records_df[[\"step\", \"prompt\", \"name\"]]\n .dropna(axis=1)\n .rename({\"step\": \"prompt_step\"}, axis=1)\n )\n complexity_metrics_columns = []\n visualizations_columns = []\n complexity_metrics_columns = [\n \"flesch_reading_ease\",\n \"flesch_kincaid_grade\",\n \"smog_index\",\n \"coleman_liau_index\",\n \"automated_readability_index\",\n \"dale_chall_readability_score\",\n \"difficult_words\",\n \"linsear_write_formula\",\n \"gunning_fog\",\n # \"text_standard\",\n \"fernandez_huerta\",\n \"szigriszt_pazos\",\n \"gutierrez_polini\",\n \"crawford\",\n \"gulpease_index\",\n \"osman\",\n ]\n visualizations_columns = [\"dependency_tree\", \"entities\"]\n llm_outputs_df = (\n on_llm_end_records_df[\n [\n \"step\",\n \"text\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-14", "text": "[\n \"step\",\n \"text\",\n \"token_usage_total_tokens\",\n \"token_usage_prompt_tokens\",\n \"token_usage_completion_tokens\",\n ]\n + complexity_metrics_columns\n + visualizations_columns\n ]\n .dropna(axis=1)\n .rename({\"step\": \"output_step\", \"text\": \"output\"}, axis=1)\n )\n session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)\n session_analysis_df[\"chat_html\"] = session_analysis_df[\n [\"prompt\", \"output\"]\n ].apply(\n lambda row: construct_html_from_prompt_and_generation(\n row[\"prompt\"], row[\"output\"]\n ),\n axis=1,\n )\n return 
session_analysis_df\n[docs] def flush_tracker(self, langchain_asset: Any = None, finish: bool = False) -> None:\n pd = import_pandas()\n self.mlflg.table(\"action_records\", pd.DataFrame(self.records[\"action_records\"]))\n session_analysis_df = self._create_session_analysis_df()\n chat_html = session_analysis_df.pop(\"chat_html\")\n chat_html = chat_html.replace(\"\\n\", \"\", regex=True)\n self.mlflg.table(\"session_analysis\", pd.DataFrame(session_analysis_df))\n self.mlflg.html(\"\".join(chat_html.tolist()), \"chat_html\")\n if langchain_asset:\n # To avoid circular import error\n # mlflow only supports LLMChain asset\n if \"langchain.chains.llm.LLMChain\" in str(type(langchain_asset)):\n self.mlflg.langchain_artifact(langchain_asset)\n else:\n langchain_asset_path = str(Path(self.temp_dir.name, \"model.json\"))\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "6cc2d514820a-15", "text": "try:\n langchain_asset.save(langchain_asset_path)\n self.mlflg.artifact(langchain_asset_path)\n except ValueError:\n try:\n langchain_asset.save_agent(langchain_asset_path)\n self.mlflg.artifact(langchain_asset_path)\n except AttributeError:\n print(\"Could not save model.\")\n traceback.print_exc()\n pass\n except NotImplementedError:\n print(\"Could not save model.\")\n traceback.print_exc()\n pass\n except NotImplementedError:\n print(\"Could not save model.\")\n traceback.print_exc()\n pass\n if finish:\n self.mlflg.finish_run()\n self._reset()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/mlflow_callback.html"} +{"id": "44b25e5f9279-0", "text": "Source code for langchain.callbacks.argilla_callback\nimport os\nimport warnings\nfrom typing import Any, Dict, List, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\n[docs]class ArgillaCallbackHandler(BaseCallbackHandler):\n 
\"\"\"Callback Handler that logs into Argilla.\n Args:\n dataset_name: name of the `FeedbackDataset` in Argilla. Note that it must\n exist in advance. If you need help on how to create a `FeedbackDataset` in\n Argilla, please visit\n https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\n workspace_name: name of the workspace in Argilla where the specified\n `FeedbackDataset` lives in. Defaults to `None`, which means that the\n default workspace will be used.\n api_url: URL of the Argilla Server that we want to use, and where the\n `FeedbackDataset` lives in. Defaults to `None`, which means that either\n `ARGILLA_API_URL` environment variable or the default http://localhost:6900\n will be used.\n api_key: API Key to connect to the Argilla Server. Defaults to `None`, which\n means that either `ARGILLA_API_KEY` environment variable or the default\n `argilla.apikey` will be used.\n Raises:\n ImportError: if the `argilla` package is not installed.\n ConnectionError: if the connection to Argilla fails.\n FileNotFoundError: if the `FeedbackDataset` retrieval from Argilla fails.\n Examples:\n >>> from langchain.llms import OpenAI\n >>> from langchain.callbacks import ArgillaCallbackHandler\n >>> argilla_callback = ArgillaCallbackHandler(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "44b25e5f9279-1", "text": ">>> argilla_callback = ArgillaCallbackHandler(\n ... dataset_name=\"my-dataset\",\n ... workspace_name=\"my-workspace\",\n ... api_url=\"http://localhost:6900\",\n ... api_key=\"argilla.apikey\",\n ... )\n >>> llm = OpenAI(\n ... temperature=0,\n ... callbacks=[argilla_callback],\n ... verbose=True,\n ... openai_api_key=\"API_KEY_HERE\",\n ... )\n >>> llm.generate([\n ... \"What is the best NLP-annotation tool out there? (no bias at all)\",\n ... 
])\n \"Argilla, no doubt about it.\"\n \"\"\"\n def __init__(\n self,\n dataset_name: str,\n workspace_name: Optional[str] = None,\n api_url: Optional[str] = None,\n api_key: Optional[str] = None,\n ) -> None:\n \"\"\"Initializes the `ArgillaCallbackHandler`.\n Args:\n dataset_name: name of the `FeedbackDataset` in Argilla. Note that it must\n exist in advance. If you need help on how to create a `FeedbackDataset`\n in Argilla, please visit\n https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\n workspace_name: name of the workspace in Argilla where the specified\n `FeedbackDataset` lives in. Defaults to `None`, which means that the\n default workspace will be used.\n api_url: URL of the Argilla Server that we want to use, and where the\n `FeedbackDataset` lives in. Defaults to `None`, which means that either", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "44b25e5f9279-2", "text": "`FeedbackDataset` lives in. Defaults to `None`, which means that either\n `ARGILLA_API_URL` environment variable or the default\n http://localhost:6900 will be used.\n api_key: API Key to connect to the Argilla Server. Defaults to `None`, which\n means that either `ARGILLA_API_KEY` environment variable or the default\n `argilla.apikey` will be used.\n Raises:\n ImportError: if the `argilla` package is not installed.\n ConnectionError: if the connection to Argilla fails.\n FileNotFoundError: if the `FeedbackDataset` retrieval from Argilla fails.\n \"\"\"\n super().__init__()\n # Import Argilla (not via `import_argilla` to keep hints in IDEs)\n try:\n import argilla as rg # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the Argilla callback manager you need to have the `argilla` \"\n \"Python package installed. 
Please install it with `pip install argilla`\"\n )\n # Show a warning message if Argilla will assume the default values will be used\n if api_url is None and os.getenv(\"ARGILLA_API_URL\") is None:\n warnings.warn(\n (\n \"Since `api_url` is None, and the env var `ARGILLA_API_URL` is not\"\n \" set, it will default to `http://localhost:6900`.\"\n ),\n )\n if api_key is None and os.getenv(\"ARGILLA_API_KEY\") is None:\n warnings.warn(\n (\n \"Since `api_key` is None, and the env var `ARGILLA_API_KEY` is not\"\n \" set, it will default to `argilla.apikey`.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "44b25e5f9279-3", "text": "\" set, it will default to `argilla.apikey`.\"\n ),\n )\n # Connect to Argilla with the provided credentials, if applicable\n try:\n rg.init(\n api_key=api_key,\n api_url=api_url,\n )\n except Exception as e:\n raise ConnectionError(\n f\"Could not connect to Argilla with exception: '{e}'.\\n\"\n \"Please check your `api_key` and `api_url`, and make sure that \"\n \"the Argilla server is up and running. If the problem persists \"\n \"please report it to https://github.com/argilla-io/argilla/issues \"\n \"with the label `langchain`.\"\n ) from e\n # Set the Argilla variables\n self.dataset_name = dataset_name\n self.workspace_name = workspace_name or rg.get_workspace()\n # Retrieve the `FeedbackDataset` from Argilla (without existing records)\n try:\n self.dataset = rg.FeedbackDataset.from_argilla(\n name=self.dataset_name,\n workspace=self.workspace_name,\n with_records=False,\n )\n except Exception as e:\n raise FileNotFoundError(\n \"`FeedbackDataset` retrieval from Argilla failed with exception:\"\n f\" '{e}'.\\nPlease check that the dataset with\"\n f\" name={self.dataset_name} in the\"\n f\" workspace={self.workspace_name} exists in advance. 
If you need help\"\n \" on how to create a `langchain`-compatible `FeedbackDataset` in\"\n \" Argilla, please visit\"\n \" https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\" # noqa: E501", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "44b25e5f9279-4", "text": "\" If the problem persists please report it to\"\n \" https://github.com/argilla-io/argilla/issues with the label\"\n \" `langchain`.\"\n ) from e\n supported_fields = [\"prompt\", \"response\"]\n if supported_fields != [field.name for field in self.dataset.fields]:\n raise ValueError(\n f\"`FeedbackDataset` with name={self.dataset_name} in the\"\n f\" workspace={self.workspace_name} \"\n \"had fields that are not supported yet for the `langchain` integration.\"\n \" Supported fields are: \"\n f\"{supported_fields}, and the current `FeedbackDataset` fields are\"\n f\" {[field.name for field in self.dataset.fields]}. \"\n \"For more information on how to create a `langchain`-compatible\"\n \" `FeedbackDataset` in Argilla, please visit\"\n \" https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.\" # noqa: E501\n )\n self.prompts: Dict[str, List[str]] = {}\n warnings.warn(\n (\n \"The `ArgillaCallbackHandler` is currently in beta and is subject to \"\n \"change based on updates to `langchain`. 
Please report any issues to \"\n \"https://github.com/argilla-io/argilla/issues with the tag `langchain`.\"\n ),\n )\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Save the prompts in memory when an LLM starts.\"\"\"\n self.prompts.update({str(kwargs[\"parent_run_id\"] or kwargs[\"run_id\"]): prompts})", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "44b25e5f9279-5", "text": "[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Do nothing when a new token is generated.\"\"\"\n pass\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Log records to Argilla when an LLM ends.\"\"\"\n # Do nothing if there's a parent_run_id, since we will log the records when\n # the chain ends\n if kwargs[\"parent_run_id\"]:\n return\n # Creates the records and adds them to the `FeedbackDataset`\n prompts = self.prompts[str(kwargs[\"run_id\"])]\n for prompt, generations in zip(prompts, response.generations):\n self.dataset.add_records(\n records=[\n {\n \"fields\": {\n \"prompt\": prompt,\n \"response\": generation.text.strip(),\n },\n }\n for generation in generations\n ]\n )\n # Push the records to Argilla\n self.dataset.push_to_argilla()\n # Pop current run from `self.runs`\n self.prompts.pop(str(kwargs[\"run_id\"]))\n[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when LLM outputs an error.\"\"\"\n pass\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"If the key `input` is in `inputs`, then save it in `self.prompts` using\n either the `parent_run_id` or the `run_id` as the key. 
This is done so that", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "44b25e5f9279-6", "text": "we don't log the same input prompt twice, once when the LLM starts and once\n when the chain starts.\n \"\"\"\n if \"input\" in inputs:\n self.prompts.update(\n {\n str(kwargs[\"parent_run_id\"] or kwargs[\"run_id\"]): (\n inputs[\"input\"]\n if isinstance(inputs[\"input\"], list)\n else [inputs[\"input\"]]\n )\n }\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"If either the `parent_run_id` or the `run_id` is in `self.prompts`, then\n log the outputs to Argilla, and pop the run from `self.prompts`. The behavior\n differs if the output is a list or not.\n \"\"\"\n if not any(\n key in self.prompts\n for key in [str(kwargs[\"parent_run_id\"]), str(kwargs[\"run_id\"])]\n ):\n return\n prompts = self.prompts.get(str(kwargs[\"parent_run_id\"])) or self.prompts.get(\n str(kwargs[\"run_id\"])\n )\n for chain_output_key, chain_output_val in outputs.items():\n if isinstance(chain_output_val, list):\n # Creates the records and adds them to the `FeedbackDataset`\n self.dataset.add_records(\n records=[\n {\n \"fields\": {\n \"prompt\": prompt,\n \"response\": output[\"text\"].strip(),\n },\n }\n for prompt, output in zip(\n prompts, chain_output_val # type: ignore\n )\n ]\n )\n else:\n # Creates the records and adds them to the `FeedbackDataset`\n self.dataset.add_records(\n records=[", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "44b25e5f9279-7", "text": "self.dataset.add_records(\n records=[\n {\n \"fields\": {\n \"prompt\": \" \".join(prompts), # type: ignore\n \"response\": chain_output_val.strip(),\n },\n }\n ]\n )\n # Push the records to Argilla\n self.dataset.push_to_argilla()\n # Pop current run from `self.runs`\n if str(kwargs[\"parent_run_id\"]) in self.prompts:\n 
self.prompts.pop(str(kwargs[\"parent_run_id\"]))\n if str(kwargs[\"run_id\"]) in self.prompts:\n self.prompts.pop(str(kwargs[\"run_id\"]))\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when LLM chain outputs an error.\"\"\"\n pass\n[docs] def on_tool_start(\n self,\n serialized: Dict[str, Any],\n input_str: str,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool starts.\"\"\"\n pass\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Do nothing when agent takes a specific action.\"\"\"\n pass\n[docs] def on_tool_end(\n self,\n output: str,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Do nothing when tool ends.\"\"\"\n pass\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Do nothing when tool outputs an error.\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "44b25e5f9279-8", "text": ") -> None:\n \"\"\"Do nothing when tool outputs an error.\"\"\"\n pass\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"Do nothing\"\"\"\n pass\n[docs] def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:\n \"\"\"Do nothing\"\"\"\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/argilla_callback.html"} +{"id": "0e3d28e4f822-0", "text": "Source code for langchain.callbacks.comet_ml_callback\nimport tempfile\nfrom copy import deepcopy\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Union\nimport langchain\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.utils import (\n BaseMetadataCallbackHandler,\n flatten_dict,\n import_pandas,\n import_spacy,\n import_textstat,\n)\nfrom langchain.schema 
import AgentAction, AgentFinish, Generation, LLMResult\nLANGCHAIN_MODEL_NAME = \"langchain-model\"\ndef import_comet_ml() -> Any:\n try:\n import comet_ml # noqa: F401\n except ImportError:\n raise ImportError(\n \"To use the comet_ml callback manager you need to have the \"\n \"`comet_ml` python package installed. Please install it with\"\n \" `pip install comet_ml`\"\n )\n return comet_ml\ndef _get_experiment(\n workspace: Optional[str] = None, project_name: Optional[str] = None\n) -> Any:\n comet_ml = import_comet_ml()\n experiment = comet_ml.Experiment( # type: ignore\n workspace=workspace,\n project_name=project_name,\n )\n return experiment\ndef _fetch_text_complexity_metrics(text: str) -> dict:\n textstat = import_textstat()\n text_complexity_metrics = {\n \"flesch_reading_ease\": textstat.flesch_reading_ease(text),\n \"flesch_kincaid_grade\": textstat.flesch_kincaid_grade(text),\n \"smog_index\": textstat.smog_index(text),\n \"coleman_liau_index\": textstat.coleman_liau_index(text),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-1", "text": "\"automated_readability_index\": textstat.automated_readability_index(text),\n \"dale_chall_readability_score\": textstat.dale_chall_readability_score(text),\n \"difficult_words\": textstat.difficult_words(text),\n \"linsear_write_formula\": textstat.linsear_write_formula(text),\n \"gunning_fog\": textstat.gunning_fog(text),\n \"text_standard\": textstat.text_standard(text),\n \"fernandez_huerta\": textstat.fernandez_huerta(text),\n \"szigriszt_pazos\": textstat.szigriszt_pazos(text),\n \"gutierrez_polini\": textstat.gutierrez_polini(text),\n \"crawford\": textstat.crawford(text),\n \"gulpease_index\": textstat.gulpease_index(text),\n \"osman\": textstat.osman(text),\n }\n return text_complexity_metrics\ndef _summarize_metrics_for_generated_outputs(metrics: Sequence) -> dict:\n pd = import_pandas()\n metrics_df = pd.DataFrame(metrics)\n 
metrics_summary = metrics_df.describe()\n return metrics_summary.to_dict()\n[docs]class CometCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):\n \"\"\"Callback Handler that logs to Comet.\n Parameters:\n job_type (str): The type of comet_ml task such as \"inference\",\n \"testing\" or \"qc\"\n project_name (str): The comet_ml project name\n tags (list): Tags to add to the task\n task_name (str): Name of the comet_ml task\n visualize (bool): Whether to visualize the run.\n complexity_metrics (bool): Whether to log complexity metrics\n stream_logs (bool): Whether to stream callback actions to Comet", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-2", "text": "stream_logs (bool): Whether to stream callback actions to Comet\n This handler will utilize the associated callback method and formats\n the input of each callback function with metadata regarding the state of LLM run,\n and adds the response to the list of records for both the {method}_records and\n action. 
It then logs the response to Comet.\n \"\"\"\n def __init__(\n self,\n task_type: Optional[str] = \"inference\",\n workspace: Optional[str] = None,\n project_name: Optional[str] = None,\n tags: Optional[Sequence] = None,\n name: Optional[str] = None,\n visualizations: Optional[List[str]] = None,\n complexity_metrics: bool = False,\n custom_metrics: Optional[Callable] = None,\n stream_logs: bool = True,\n ) -> None:\n \"\"\"Initialize callback handler.\"\"\"\n self.comet_ml = import_comet_ml()\n super().__init__()\n self.task_type = task_type\n self.workspace = workspace\n self.project_name = project_name\n self.tags = tags\n self.visualizations = visualizations\n self.complexity_metrics = complexity_metrics\n self.custom_metrics = custom_metrics\n self.stream_logs = stream_logs\n self.temp_dir = tempfile.TemporaryDirectory()\n self.experiment = _get_experiment(workspace, project_name)\n self.experiment.log_other(\"Created from\", \"langchain\")\n if tags:\n self.experiment.add_tags(tags)\n self.name = name\n if self.name:\n self.experiment.set_name(self.name)\n warning = (\n \"The comet_ml callback is currently in beta and is subject to change \"\n \"based on updates to `langchain`. Please report any issues to \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-3", "text": "\"based on updates to `langchain`. 
Please report any issues to \"\n \"https://github.com/comet-ml/issue-tracking/issues with the tag \"\n \"`langchain`.\"\n )\n self.comet_ml.LOGGER.warning(warning)\n self.callback_columns: list = []\n self.action_records: list = []\n self.complexity_metrics = complexity_metrics\n if self.visualizations:\n spacy = import_spacy()\n self.nlp = spacy.load(\"en_core_web_sm\")\n else:\n self.nlp = None\n def _init_resp(self) -> Dict:\n return {k: None for k in self.callback_columns}\n[docs] def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM starts.\"\"\"\n self.step += 1\n self.llm_starts += 1\n self.starts += 1\n metadata = self._init_resp()\n metadata.update({\"action\": \"on_llm_start\"})\n metadata.update(flatten_dict(serialized))\n metadata.update(self.get_custom_callback_meta())\n for prompt in prompts:\n prompt_resp = deepcopy(metadata)\n prompt_resp[\"prompts\"] = prompt\n self.on_llm_start_records.append(prompt_resp)\n self.action_records.append(prompt_resp)\n if self.stream_logs:\n self._log_stream(prompt, metadata, self.step)\n[docs] def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n \"\"\"Run when LLM generates a new token.\"\"\"\n self.step += 1\n self.llm_streams += 1\n resp = self._init_resp()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-4", "text": "self.llm_streams += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_llm_new_token\", \"token\": token})\n resp.update(self.get_custom_callback_meta())\n self.action_records.append(resp)\n[docs] def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n \"\"\"Run when LLM ends running.\"\"\"\n self.step += 1\n self.llm_ends += 1\n self.ends += 1\n metadata = self._init_resp()\n metadata.update({\"action\": \"on_llm_end\"})\n metadata.update(flatten_dict(response.llm_output or {}))\n 
metadata.update(self.get_custom_callback_meta())\n output_complexity_metrics = []\n output_custom_metrics = []\n for prompt_idx, generations in enumerate(response.generations):\n for gen_idx, generation in enumerate(generations):\n text = generation.text\n generation_resp = deepcopy(metadata)\n generation_resp.update(flatten_dict(generation.dict()))\n complexity_metrics = self._get_complexity_metrics(text)\n if complexity_metrics:\n output_complexity_metrics.append(complexity_metrics)\n generation_resp.update(complexity_metrics)\n custom_metrics = self._get_custom_metrics(\n generation, prompt_idx, gen_idx\n )\n if custom_metrics:\n output_custom_metrics.append(custom_metrics)\n generation_resp.update(custom_metrics)\n if self.stream_logs:\n self._log_stream(text, metadata, self.step)\n self.action_records.append(generation_resp)\n self.on_llm_end_records.append(generation_resp)\n self._log_text_metrics(output_complexity_metrics, step=self.step)\n self._log_text_metrics(output_custom_metrics, step=self.step)\n[docs] def on_llm_error(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-5", "text": "[docs] def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when LLM errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain starts running.\"\"\"\n self.step += 1\n self.chain_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n for chain_input_key, chain_input_val in inputs.items():\n if isinstance(chain_input_val, str):\n input_resp = deepcopy(resp)\n if self.stream_logs:\n self._log_stream(chain_input_val, resp, self.step)\n input_resp.update({chain_input_key: 
chain_input_val})\n self.action_records.append(input_resp)\n else:\n self.comet_ml.LOGGER.warning(\n f\"Unexpected data format provided! \"\n f\"Input Value for {chain_input_key} will not be logged\"\n )\n[docs] def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n \"\"\"Run when chain ends running.\"\"\"\n self.step += 1\n self.chain_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_chain_end\"})\n resp.update(self.get_custom_callback_meta())\n for chain_output_key, chain_output_val in outputs.items():\n if isinstance(chain_output_val, str):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-6", "text": "if isinstance(chain_output_val, str):\n output_resp = deepcopy(resp)\n if self.stream_logs:\n self._log_stream(chain_output_val, resp, self.step)\n output_resp.update({chain_output_key: chain_output_val})\n self.action_records.append(output_resp)\n else:\n self.comet_ml.LOGGER.warning(\n f\"Unexpected data format provided! 
\"\n f\"Output Value for {chain_output_key} will not be logged\"\n )\n[docs] def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when chain errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n \"\"\"Run when tool starts running.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_start\"})\n resp.update(flatten_dict(serialized))\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(input_str, resp, self.step)\n resp.update({\"input_str\": input_str})\n self.action_records.append(resp)\n[docs] def on_tool_end(self, output: str, **kwargs: Any) -> None:\n \"\"\"Run when tool ends running.\"\"\"\n self.step += 1\n self.tool_ends += 1\n self.ends += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_tool_end\"})\n resp.update(self.get_custom_callback_meta())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-7", "text": "resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(output, resp, self.step)\n resp.update({\"output\": output})\n self.action_records.append(resp)\n[docs] def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n \"\"\"Run when tool errors.\"\"\"\n self.step += 1\n self.errors += 1\n[docs] def on_text(self, text: str, **kwargs: Any) -> None:\n \"\"\"\n Run when agent is ending.\n \"\"\"\n self.step += 1\n self.text_ctr += 1\n resp = self._init_resp()\n resp.update({\"action\": \"on_text\"})\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(text, resp, self.step)\n resp.update({\"text\": text})\n self.action_records.append(resp)\n[docs] def on_agent_finish(self, finish: 
AgentFinish, **kwargs: Any) -> None:\n \"\"\"Run when agent ends running.\"\"\"\n self.step += 1\n self.agent_ends += 1\n self.ends += 1\n resp = self._init_resp()\n output = finish.return_values[\"output\"]\n log = finish.log\n resp.update({\"action\": \"on_agent_finish\", \"log\": log})\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(output, resp, self.step)\n resp.update({\"output\": output})\n self.action_records.append(resp)\n[docs] def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:\n \"\"\"Run on agent action.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-8", "text": "\"\"\"Run on agent action.\"\"\"\n self.step += 1\n self.tool_starts += 1\n self.starts += 1\n tool = action.tool\n tool_input = str(action.tool_input)\n log = action.log\n resp = self._init_resp()\n resp.update({\"action\": \"on_agent_action\", \"log\": log, \"tool\": tool})\n resp.update(self.get_custom_callback_meta())\n if self.stream_logs:\n self._log_stream(tool_input, resp, self.step)\n resp.update({\"tool_input\": tool_input})\n self.action_records.append(resp)\n def _get_complexity_metrics(self, text: str) -> dict:\n \"\"\"Compute text complexity metrics using textstat.\n Parameters:\n text (str): The text to analyze.\n Returns:\n (dict): A dictionary containing the complexity metrics.\n \"\"\"\n resp = {}\n if self.complexity_metrics:\n text_complexity_metrics = _fetch_text_complexity_metrics(text)\n resp.update(text_complexity_metrics)\n return resp\n def _get_custom_metrics(\n self, generation: Generation, prompt_idx: int, gen_idx: int\n ) -> dict:\n \"\"\"Compute Custom Metrics for an LLM Generated Output\n Args:\n generation (LLMResult): Output generation from an LLM\n prompt_idx (int): List index of the input prompt\n gen_idx (int): List index of the generated output\n Returns:\n dict: A dictionary containing the custom metrics.\n 
\"\"\"\n resp = {}\n if self.custom_metrics:\n custom_metrics = self.custom_metrics(generation, prompt_idx, gen_idx)\n resp.update(custom_metrics)\n return resp\n[docs] def flush_tracker(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-9", "text": "return resp\n[docs] def flush_tracker(\n self,\n langchain_asset: Any = None,\n task_type: Optional[str] = \"inference\",\n workspace: Optional[str] = None,\n project_name: Optional[str] = \"comet-langchain-demo\",\n tags: Optional[Sequence] = None,\n name: Optional[str] = None,\n visualizations: Optional[List[str]] = None,\n complexity_metrics: bool = False,\n custom_metrics: Optional[Callable] = None,\n finish: bool = False,\n reset: bool = False,\n ) -> None:\n \"\"\"Flush the tracker and setup the session.\n Everything after this will be a new table.\n Args:\n name: Name of the preformed session so far so it is identifyable\n langchain_asset: The langchain asset to save.\n finish: Whether to finish the run.\n Returns:\n None\n \"\"\"\n self._log_session(langchain_asset)\n if langchain_asset:\n try:\n self._log_model(langchain_asset)\n except Exception:\n self.comet_ml.LOGGER.error(\n \"Failed to export agent or LLM to Comet\",\n exc_info=True,\n extra={\"show_traceback\": True},\n )\n if finish:\n self.experiment.end()\n if reset:\n self._reset(\n task_type,\n workspace,\n project_name,\n tags,\n name,\n visualizations,\n complexity_metrics,\n custom_metrics,\n )\n def _log_stream(self, prompt: str, metadata: dict, step: int) -> None:\n self.experiment.log_text(prompt, metadata=metadata, step=step)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-10", "text": "self.experiment.log_text(prompt, metadata=metadata, step=step)\n def _log_model(self, langchain_asset: Any) -> None:\n model_parameters = self._get_llm_parameters(langchain_asset)\n 
self.experiment.log_parameters(model_parameters, prefix=\"model\")\n langchain_asset_path = Path(self.temp_dir.name, \"model.json\")\n model_name = self.name if self.name else LANGCHAIN_MODEL_NAME\n try:\n if hasattr(langchain_asset, \"save\"):\n langchain_asset.save(langchain_asset_path)\n self.experiment.log_model(model_name, str(langchain_asset_path))\n except (ValueError, AttributeError, NotImplementedError) as e:\n if hasattr(langchain_asset, \"save_agent\"):\n langchain_asset.save_agent(langchain_asset_path)\n self.experiment.log_model(model_name, str(langchain_asset_path))\n else:\n self.comet_ml.LOGGER.error(\n f\"{e}\"\n \" Could not save Langchain Asset \"\n f\"for {langchain_asset.__class__.__name__}\"\n )\n def _log_session(self, langchain_asset: Optional[Any] = None) -> None:\n try:\n llm_session_df = self._create_session_analysis_dataframe(langchain_asset)\n # Log the cleaned dataframe as a table\n self.experiment.log_table(\"langchain-llm-session.csv\", llm_session_df)\n except Exception:\n self.comet_ml.LOGGER.warning(\n \"Failed to log session data to Comet\",\n exc_info=True,\n extra={\"show_traceback\": True},\n )\n try:\n metadata = {\"langchain_version\": str(langchain.__version__)}\n # Log the langchain low-level records as a JSON file directly", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-11", "text": "# Log the langchain low-level records as a JSON file directly\n self.experiment.log_asset_data(\n self.action_records, \"langchain-action_records.json\", metadata=metadata\n )\n except Exception:\n self.comet_ml.LOGGER.warning(\n \"Failed to log session data to Comet\",\n exc_info=True,\n extra={\"show_traceback\": True},\n )\n try:\n self._log_visualizations(llm_session_df)\n except Exception:\n self.comet_ml.LOGGER.warning(\n \"Failed to log visualizations to Comet\",\n exc_info=True,\n extra={\"show_traceback\": True},\n )\n def _log_text_metrics(self, 
metrics: Sequence[dict], step: int) -> None:\n if not metrics:\n return\n metrics_summary = _summarize_metrics_for_generated_outputs(metrics)\n for key, value in metrics_summary.items():\n self.experiment.log_metrics(value, prefix=key, step=step)\n def _log_visualizations(self, session_df: Any) -> None:\n if not (self.visualizations and self.nlp):\n return\n spacy = import_spacy()\n prompts = session_df[\"prompts\"].tolist()\n outputs = session_df[\"text\"].tolist()\n for idx, (prompt, output) in enumerate(zip(prompts, outputs)):\n doc = self.nlp(output)\n sentence_spans = list(doc.sents)\n for visualization in self.visualizations:\n try:\n html = spacy.displacy.render(\n sentence_spans,\n style=visualization,\n options={\"compact\": True},\n jupyter=False,\n page=True,\n )\n self.experiment.log_asset_data(\n html,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-12", "text": ")\n self.experiment.log_asset_data(\n html,\n name=f\"langchain-viz-{visualization}-{idx}.html\",\n metadata={\"prompt\": prompt},\n step=idx,\n )\n except Exception as e:\n self.comet_ml.LOGGER.warning(\n e, exc_info=True, extra={\"show_traceback\": True}\n )\n return\n def _reset(\n self,\n task_type: Optional[str] = None,\n workspace: Optional[str] = None,\n project_name: Optional[str] = None,\n tags: Optional[Sequence] = None,\n name: Optional[str] = None,\n visualizations: Optional[List[str]] = None,\n complexity_metrics: bool = False,\n custom_metrics: Optional[Callable] = None,\n ) -> None:\n _task_type = task_type if task_type else self.task_type\n _workspace = workspace if workspace else self.workspace\n _project_name = project_name if project_name else self.project_name\n _tags = tags if tags else self.tags\n _name = name if name else self.name\n _visualizations = visualizations if visualizations else self.visualizations\n _complexity_metrics = (\n complexity_metrics if complexity_metrics else 
self.complexity_metrics\n )\n _custom_metrics = custom_metrics if custom_metrics else self.custom_metrics\n self.__init__( # type: ignore\n task_type=_task_type,\n workspace=_workspace,\n project_name=_project_name,\n tags=_tags,\n name=_name,\n visualizations=_visualizations,\n complexity_metrics=_complexity_metrics,\n custom_metrics=_custom_metrics,\n )\n self.reset_callback_meta()\n self.temp_dir = tempfile.TemporaryDirectory()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "0e3d28e4f822-13", "text": "self.reset_callback_meta()\n self.temp_dir = tempfile.TemporaryDirectory()\n def _create_session_analysis_dataframe(self, langchain_asset: Any = None) -> dict:\n pd = import_pandas()\n llm_parameters = self._get_llm_parameters(langchain_asset)\n num_generations_per_prompt = llm_parameters.get(\"n\", 1)\n llm_start_records_df = pd.DataFrame(self.on_llm_start_records)\n # Repeat each input row based on the number of outputs generated per prompt\n llm_start_records_df = llm_start_records_df.loc[\n llm_start_records_df.index.repeat(num_generations_per_prompt)\n ].reset_index(drop=True)\n llm_end_records_df = pd.DataFrame(self.on_llm_end_records)\n llm_session_df = pd.merge(\n llm_start_records_df,\n llm_end_records_df,\n left_index=True,\n right_index=True,\n suffixes=[\"_llm_start\", \"_llm_end\"],\n )\n return llm_session_df\n def _get_llm_parameters(self, langchain_asset: Any = None) -> dict:\n if not langchain_asset:\n return {}\n try:\n if hasattr(langchain_asset, \"agent\"):\n llm_parameters = langchain_asset.agent.llm_chain.llm.dict()\n elif hasattr(langchain_asset, \"llm_chain\"):\n llm_parameters = langchain_asset.llm_chain.llm.dict()\n elif hasattr(langchain_asset, \"llm\"):\n llm_parameters = langchain_asset.llm.dict()\n else:\n llm_parameters = langchain_asset.dict()\n except Exception:\n return {}\n return llm_parameters", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/comet_ml_callback.html"} +{"id": "49dfeee28dca-0", "text": "Source code for langchain.callbacks.streamlit\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Optional\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.streamlit.streamlit_callback_handler import (\n LLMThoughtLabeler as LLMThoughtLabeler,\n)\nfrom langchain.callbacks.streamlit.streamlit_callback_handler import (\n StreamlitCallbackHandler as _InternalStreamlitCallbackHandler,\n)\nif TYPE_CHECKING:\n from streamlit.delta_generator import DeltaGenerator\n[docs]def StreamlitCallbackHandler(\n parent_container: DeltaGenerator,\n *,\n max_thought_containers: int = 4,\n expand_new_thoughts: bool = True,\n collapse_completed_thoughts: bool = True,\n thought_labeler: Optional[LLMThoughtLabeler] = None,\n) -> BaseCallbackHandler:\n \"\"\"Construct a new StreamlitCallbackHandler. This CallbackHandler is geared towards\n use with a LangChain Agent; it displays the Agent's LLM and tool-usage \"thoughts\"\n inside a series of Streamlit expanders.\n Parameters\n ----------\n parent_container\n The `st.container` that will contain all the Streamlit elements that the\n Handler creates.\n max_thought_containers\n The max number of completed LLM thought containers to show at once. When this\n threshold is reached, a new thought will cause the oldest thoughts to be\n collapsed into a \"History\" expander. Defaults to 4.\n expand_new_thoughts\n Each LLM \"thought\" gets its own `st.expander`. This param controls whether that\n expander is expanded by default. 
Defaults to True.\n collapse_completed_thoughts\n If True, LLM thought expanders will be collapsed when completed.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit.html"} +{"id": "49dfeee28dca-1", "text": "If True, LLM thought expanders will be collapsed when completed.\n Defaults to True.\n thought_labeler\n An optional custom LLMThoughtLabeler instance. If unspecified, the handler\n will use the default thought labeling logic. Defaults to None.\n Returns\n -------\n A new StreamlitCallbackHandler instance.\n Note that this is an \"auto-updating\" API: if the installed version of Streamlit\n has a more recent StreamlitCallbackHandler implementation, an instance of that class\n will be used.\n \"\"\"\n # If we're using a version of Streamlit that implements StreamlitCallbackHandler,\n # delegate to it instead of using our built-in handler. The official handler is\n # guaranteed to support the same set of kwargs.\n try:\n from streamlit.external.langchain import (\n StreamlitCallbackHandler as OfficialStreamlitCallbackHandler, # type: ignore # noqa: 501\n )\n return OfficialStreamlitCallbackHandler(\n parent_container,\n max_thought_containers=max_thought_containers,\n expand_new_thoughts=expand_new_thoughts,\n collapse_completed_thoughts=collapse_completed_thoughts,\n thought_labeler=thought_labeler,\n )\n except ImportError:\n return _InternalStreamlitCallbackHandler(\n parent_container,\n max_thought_containers=max_thought_containers,\n expand_new_thoughts=expand_new_thoughts,\n collapse_completed_thoughts=collapse_completed_thoughts,\n thought_labeler=thought_labeler,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit.html"} +{"id": "081c917dc5f2-0", "text": "Source code for langchain.callbacks.streamlit.streamlit_callback_handler\n\"\"\"Callback Handler that prints to streamlit.\"\"\"\nfrom __future__ import annotations\nfrom enum import Enum\nfrom typing import 
TYPE_CHECKING, Any, Dict, List, NamedTuple, Optional, Union\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain.callbacks.streamlit.mutable_expander import MutableExpander\nfrom langchain.schema import AgentAction, AgentFinish, LLMResult\nif TYPE_CHECKING:\n from streamlit.delta_generator import DeltaGenerator\ndef _convert_newlines(text: str) -> str:\n \"\"\"Convert newline characters to markdown newline sequences\n (space, space, newline).\n \"\"\"\n return text.replace(\"\\n\", \" \\n\")\nCHECKMARK_EMOJI = \"\u2705\"\nTHINKING_EMOJI = \":thinking_face:\"\nHISTORY_EMOJI = \":books:\"\nEXCEPTION_EMOJI = \"\u26a0\ufe0f\"\nclass LLMThoughtState(Enum):\n # The LLM is thinking about what to do next. We don't know which tool we'll run.\n THINKING = \"THINKING\"\n # The LLM has decided to run a tool. We don't have results from the tool yet.\n RUNNING_TOOL = \"RUNNING_TOOL\"\n # We have results from the tool.\n COMPLETE = \"COMPLETE\"\nclass ToolRecord(NamedTuple):\n name: str\n input_str: str\n[docs]class LLMThoughtLabeler:\n \"\"\"\n Generates markdown labels for LLMThought containers. 
Pass a custom\n subclass of this to StreamlitCallbackHandler to override its default\n labeling logic.\n \"\"\"\n[docs] def get_initial_label(self) -> str:\n \"\"\"Return the markdown label for a new LLMThought that doesn't have", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-1", "text": "\"\"\"Return the markdown label for a new LLMThought that doesn't have\n an associated tool yet.\n \"\"\"\n return f\"{THINKING_EMOJI} **Thinking...**\"\n[docs] def get_tool_label(self, tool: ToolRecord, is_complete: bool) -> str:\n \"\"\"Return the label for an LLMThought that has an associated\n tool.\n Parameters\n ----------\n tool\n The tool's ToolRecord\n is_complete\n True if the thought is complete; False if the thought\n is still receiving input.\n Returns\n -------\n The markdown label for the thought's container.\n \"\"\"\n input = tool.input_str\n name = tool.name\n emoji = CHECKMARK_EMOJI if is_complete else THINKING_EMOJI\n if name == \"_Exception\":\n emoji = EXCEPTION_EMOJI\n name = \"Parsing error\"\n idx = min([60, len(input)])\n input = input[0:idx]\n if len(tool.input_str) > idx:\n input = input + \"...\"\n input = input.replace(\"\\n\", \" \")\n label = f\"{emoji} **{name}:** {input}\"\n return label\n[docs] def get_history_label(self) -> str:\n \"\"\"Return a markdown label for the special 'history' container\n that contains overflow thoughts.\n \"\"\"\n return f\"{HISTORY_EMOJI} **History**\"\n[docs] def get_final_agent_thought_label(self) -> str:\n \"\"\"Return the markdown label for the agent's final thought -\n the \"Now I have the answer\" thought, that doesn't involve\n a tool.\n \"\"\"\n return f\"{CHECKMARK_EMOJI} **Complete!**\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-2", "text": "\"\"\"\n return f\"{CHECKMARK_EMOJI} **Complete!**\"\nclass 
LLMThought:\n def __init__(\n self,\n parent_container: DeltaGenerator,\n labeler: LLMThoughtLabeler,\n expanded: bool,\n collapse_on_complete: bool,\n ):\n self._container = MutableExpander(\n parent_container=parent_container,\n label=labeler.get_initial_label(),\n expanded=expanded,\n )\n self._state = LLMThoughtState.THINKING\n self._llm_token_stream = \"\"\n self._llm_token_writer_idx: Optional[int] = None\n self._last_tool: Optional[ToolRecord] = None\n self._collapse_on_complete = collapse_on_complete\n self._labeler = labeler\n @property\n def container(self) -> MutableExpander:\n \"\"\"The container we're writing into.\"\"\"\n return self._container\n @property\n def last_tool(self) -> Optional[ToolRecord]:\n \"\"\"The last tool executed by this thought\"\"\"\n return self._last_tool\n def _reset_llm_token_stream(self) -> None:\n self._llm_token_stream = \"\"\n self._llm_token_writer_idx = None\n def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str]) -> None:\n self._reset_llm_token_stream()\n def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n # This is only called when the LLM is initialized with `streaming=True`\n self._llm_token_stream += _convert_newlines(token)\n self._llm_token_writer_idx = self._container.markdown(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-3", "text": "self._llm_token_writer_idx = self._container.markdown(\n self._llm_token_stream, index=self._llm_token_writer_idx\n )\n def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n # `response` is the concatenation of all the tokens received by the LLM.\n # If we're receiving streaming tokens from `on_llm_new_token`, this response\n # data is redundant\n self._reset_llm_token_stream()\n def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self._container.markdown(\"**LLM encountered an 
error...**\")\n self._container.exception(error)\n def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n # Called with the name of the tool we're about to run (in `serialized[name]`),\n # and its input. We change our container's label to be the tool name.\n self._state = LLMThoughtState.RUNNING_TOOL\n tool_name = serialized[\"name\"]\n self._last_tool = ToolRecord(name=tool_name, input_str=input_str)\n self._container.update(\n new_label=self._labeler.get_tool_label(self._last_tool, is_complete=False)\n )\n def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n self._container.markdown(f\"**{output}**\")\n def on_tool_error(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-4", "text": "def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self._container.markdown(\"**Tool encountered an error...**\")\n self._container.exception(error)\n def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n # Called when we're about to kick off a new tool. 
The `action` data\n # tells us the tool we're about to use, and the input we'll give it.\n # We don't output anything here, because we'll receive this same data\n # when `on_tool_start` is called immediately after.\n pass\n def complete(self, final_label: Optional[str] = None) -> None:\n \"\"\"Finish the thought.\"\"\"\n if final_label is None and self._state == LLMThoughtState.RUNNING_TOOL:\n assert (\n self._last_tool is not None\n ), \"_last_tool should never be null when _state == RUNNING_TOOL\"\n final_label = self._labeler.get_tool_label(\n self._last_tool, is_complete=True\n )\n self._state = LLMThoughtState.COMPLETE\n if self._collapse_on_complete:\n self._container.update(new_label=final_label, new_expanded=False)\n else:\n self._container.update(new_label=final_label)\n def clear(self) -> None:\n \"\"\"Remove the thought from the screen. A cleared thought can't be reused.\"\"\"\n self._container.clear()\nclass StreamlitCallbackHandler(BaseCallbackHandler):\n def __init__(\n self,\n parent_container: DeltaGenerator,\n *,\n max_thought_containers: int = 4,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-5", "text": "*,\n max_thought_containers: int = 4,\n expand_new_thoughts: bool = True,\n collapse_completed_thoughts: bool = True,\n thought_labeler: Optional[LLMThoughtLabeler] = None,\n ):\n \"\"\"Create a StreamlitCallbackHandler instance.\n Parameters\n ----------\n parent_container\n The `st.container` that will contain all the Streamlit elements that the\n Handler creates.\n max_thought_containers\n The max number of completed LLM thought containers to show at once. When\n this threshold is reached, a new thought will cause the oldest thoughts to\n be collapsed into a \"History\" expander. Defaults to 4.\n expand_new_thoughts\n Each LLM \"thought\" gets its own `st.expander`. This param controls whether\n that expander is expanded by default. 
Defaults to True.\n collapse_completed_thoughts\n If True, LLM thought expanders will be collapsed when completed.\n Defaults to True.\n thought_labeler\n An optional custom LLMThoughtLabeler instance. If unspecified, the handler\n will use the default thought labeling logic. Defaults to None.\n \"\"\"\n self._parent_container = parent_container\n self._history_parent = parent_container.container()\n self._history_container: Optional[MutableExpander] = None\n self._current_thought: Optional[LLMThought] = None\n self._completed_thoughts: List[LLMThought] = []\n self._max_thought_containers = max(max_thought_containers, 1)\n self._expand_new_thoughts = expand_new_thoughts\n self._collapse_completed_thoughts = collapse_completed_thoughts", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-6", "text": "self._collapse_completed_thoughts = collapse_completed_thoughts\n self._thought_labeler = thought_labeler or LLMThoughtLabeler()\n def _require_current_thought(self) -> LLMThought:\n \"\"\"Return our current LLMThought. 
Raise an error if we have no current\n thought.\n \"\"\"\n if self._current_thought is None:\n raise RuntimeError(\"Current LLMThought is unexpectedly None!\")\n return self._current_thought\n def _get_last_completed_thought(self) -> Optional[LLMThought]:\n \"\"\"Return our most recent completed LLMThought, or None if we don't have one.\"\"\"\n if len(self._completed_thoughts) > 0:\n return self._completed_thoughts[len(self._completed_thoughts) - 1]\n return None\n @property\n def _num_thought_containers(self) -> int:\n \"\"\"The number of 'thought containers' we're currently showing: the\n number of completed thought containers, the history container (if it exists),\n and the current thought container (if it exists).\n \"\"\"\n count = len(self._completed_thoughts)\n if self._history_container is not None:\n count += 1\n if self._current_thought is not None:\n count += 1\n return count\n def _complete_current_thought(self, final_label: Optional[str] = None) -> None:\n \"\"\"Complete the current thought, optionally assigning it a new label.\n Add it to our _completed_thoughts list.\n \"\"\"\n thought = self._require_current_thought()\n thought.complete(final_label)\n self._completed_thoughts.append(thought)\n self._current_thought = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-7", "text": "self._current_thought = None\n def _prune_old_thought_containers(self) -> None:\n \"\"\"If we have too many thoughts onscreen, move older thoughts to the\n 'history container.'\n \"\"\"\n while (\n self._num_thought_containers > self._max_thought_containers\n and len(self._completed_thoughts) > 0\n ):\n # Create our history container if it doesn't exist, and if\n # max_thought_containers is > 1. 
(if max_thought_containers is 1, we don't\n # have room to show history.)\n if self._history_container is None and self._max_thought_containers > 1:\n self._history_container = MutableExpander(\n self._history_parent,\n label=self._thought_labeler.get_history_label(),\n expanded=False,\n )\n oldest_thought = self._completed_thoughts.pop(0)\n if self._history_container is not None:\n self._history_container.markdown(oldest_thought.container.label)\n self._history_container.append_copy(oldest_thought.container)\n oldest_thought.clear()\n def on_llm_start(\n self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n ) -> None:\n if self._current_thought is None:\n self._current_thought = LLMThought(\n parent_container=self._parent_container,\n expanded=self._expand_new_thoughts,\n collapse_on_complete=self._collapse_completed_thoughts,\n labeler=self._thought_labeler,\n )\n self._current_thought.on_llm_start(serialized, prompts)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-8", "text": ")\n self._current_thought.on_llm_start(serialized, prompts)\n # We don't prune_old_thought_containers here, because our container won't\n # be visible until it has a child.\n def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n self._require_current_thought().on_llm_new_token(token, **kwargs)\n self._prune_old_thought_containers()\n def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n self._require_current_thought().on_llm_end(response, **kwargs)\n self._prune_old_thought_containers()\n def on_llm_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self._require_current_thought().on_llm_error(error, **kwargs)\n self._prune_old_thought_containers()\n def on_tool_start(\n self, serialized: Dict[str, Any], input_str: str, **kwargs: Any\n ) -> None:\n self._require_current_thought().on_tool_start(serialized, 
input_str, **kwargs)\n self._prune_old_thought_containers()\n def on_tool_end(\n self,\n output: str,\n color: Optional[str] = None,\n observation_prefix: Optional[str] = None,\n llm_prefix: Optional[str] = None,\n **kwargs: Any,\n ) -> None:\n self._require_current_thought().on_tool_end(\n output, color, observation_prefix, llm_prefix, **kwargs\n )\n self._complete_current_thought()\n def on_tool_error(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "081c917dc5f2-9", "text": ")\n self._complete_current_thought()\n def on_tool_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n self._require_current_thought().on_tool_error(error, **kwargs)\n self._prune_old_thought_containers()\n def on_text(\n self,\n text: str,\n color: Optional[str] = None,\n end: str = \"\",\n **kwargs: Any,\n ) -> None:\n pass\n def on_chain_start(\n self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any\n ) -> None:\n pass\n def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:\n pass\n def on_chain_error(\n self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any\n ) -> None:\n pass\n def on_agent_action(\n self, action: AgentAction, color: Optional[str] = None, **kwargs: Any\n ) -> Any:\n self._require_current_thought().on_agent_action(action, color, **kwargs)\n self._prune_old_thought_containers()\n def on_agent_finish(\n self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any\n ) -> None:\n if self._current_thought is not None:\n self._current_thought.complete(\n self._thought_labeler.get_final_agent_thought_label()\n )\n self._current_thought = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/callbacks/streamlit/streamlit_callback_handler.html"} +{"id": "50a8cc43f1f5-0", "text": "Source code for langchain.retrievers.zep\nfrom __future__ import annotations\nfrom typing import 
TYPE_CHECKING, Dict, List, Optional\nfrom langchain.schema import BaseRetriever, Document\nif TYPE_CHECKING:\n from zep_python import MemorySearchResult\n[docs]class ZepRetriever(BaseRetriever):\n \"\"\"A Retriever implementation for the Zep long-term memory store. Search your\n user's long-term chat history with Zep.\n Note: You will need to provide the user's `session_id` to use this retriever.\n More on Zep:\n Zep provides long-term conversation storage for LLM apps. The server stores,\n summarizes, embeds, indexes, and enriches conversational AI chat\n histories, and exposes them via simple, low-latency APIs.\n For server installation instructions, see:\n https://getzep.github.io/deployment/quickstart/\n \"\"\"\n def __init__(\n self,\n session_id: str,\n url: str,\n top_k: Optional[int] = None,\n ):\n try:\n from zep_python import ZepClient\n except ImportError:\n raise ValueError(\n \"Could not import zep-python package. \"\n \"Please install it with `pip install zep-python`.\"\n )\n self.zep_client = ZepClient(base_url=url)\n self.session_id = session_id\n self.top_k = top_k\n def _search_result_to_doc(\n self, results: List[MemorySearchResult]\n ) -> List[Document]:\n return [\n Document(\n page_content=r.message.pop(\"content\"),\n metadata={\"score\": r.dist, **r.message},\n )\n for r in results", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html"} +{"id": "50a8cc43f1f5-1", "text": ")\n for r in results\n if r.message\n ]\n[docs] def get_relevant_documents(\n self, query: str, metadata: Optional[Dict] = None\n ) -> List[Document]:\n from zep_python import MemorySearchPayload\n payload: MemorySearchPayload = MemorySearchPayload(\n text=query, metadata=metadata\n )\n results: List[MemorySearchResult] = self.zep_client.search_memory(\n self.session_id, payload, limit=self.top_k\n )\n return self._search_result_to_doc(results)\n[docs] async def aget_relevant_documents(\n self, query: str, metadata: Optional[Dict] = 
None\n ) -> List[Document]:\n from zep_python import MemorySearchPayload\n payload: MemorySearchPayload = MemorySearchPayload(\n text=query, metadata=metadata\n )\n results: List[MemorySearchResult] = await self.zep_client.asearch_memory(\n self.session_id, payload, limit=self.top_k\n )\n return self._search_result_to_doc(results)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/zep.html"} +{"id": "d37c2f8845b4-0", "text": "Source code for langchain.retrievers.chatgpt_plugin_retriever\nfrom __future__ import annotations\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel\nfrom langchain.schema import BaseRetriever, Document\n[docs]class ChatGPTPluginRetriever(BaseRetriever, BaseModel):\n url: str\n bearer_token: str\n top_k: int = 3\n filter: Optional[dict] = None\n aiosession: Optional[aiohttp.ClientSession] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n url, json, headers = self._create_request(query)\n response = requests.post(url, json=json, headers=headers)\n results = response.json()[\"results\"][0][\"results\"]\n docs = []\n for d in results:\n content = d.pop(\"text\")\n metadata = d.pop(\"metadata\", d)\n if metadata.get(\"source_id\"):\n metadata[\"source\"] = metadata.pop(\"source_id\")\n docs.append(Document(page_content=content, metadata=metadata))\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n url, json, headers = self._create_request(query)\n if not self.aiosession:\n async with aiohttp.ClientSession() as session:\n async with session.post(url, headers=headers, json=json) as response:\n res = await response.json()\n else:\n async with self.aiosession.post(\n url, headers=headers, json=json\n ) as response:\n res = await response.json()", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html"} +{"id": "d37c2f8845b4-1", "text": ") as response:\n res = await response.json()\n results = res[\"results\"][0][\"results\"]\n docs = []\n for d in results:\n content = d.pop(\"text\")\n metadata = d.pop(\"metadata\", d)\n if metadata.get(\"source_id\"):\n metadata[\"source\"] = metadata.pop(\"source_id\")\n docs.append(Document(page_content=content, metadata=metadata))\n return docs\n def _create_request(self, query: str) -> tuple[str, dict, dict]:\n url = f\"{self.url}/query\"\n json = {\n \"queries\": [\n {\n \"query\": query,\n \"filter\": self.filter,\n \"top_k\": self.top_k,\n }\n ]\n }\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": f\"Bearer {self.bearer_token}\",\n }\n return url, json, headers", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/chatgpt_plugin_retriever.html"} +{"id": "798a81eca712-0", "text": "Source code for langchain.retrievers.databerry\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom langchain.schema import BaseRetriever, Document\n[docs]class DataberryRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Databerry API.\"\"\"\n datastore_url: str\n top_k: Optional[int]\n api_key: Optional[str]\n def __init__(\n self,\n datastore_url: str,\n top_k: Optional[int] = None,\n api_key: Optional[str] = None,\n ):\n self.datastore_url = datastore_url\n self.api_key = api_key\n self.top_k = top_k\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n response = requests.post(\n self.datastore_url,\n json={\n \"query\": query,\n **({\"topK\": self.top_k} if self.top_k is not None else {}),\n },\n headers={\n \"Content-Type\": \"application/json\",\n **(\n {\"Authorization\": f\"Bearer {self.api_key}\"}\n if self.api_key is not None\n else {}\n ),\n },\n )\n data = response.json()\n return [\n Document(\n page_content=r[\"text\"],\n 
metadata={\"source\": r[\"source\"], \"score\": r[\"score\"]},\n )\n for r in data[\"results\"]\n ]\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n \"POST\",\n self.datastore_url,\n json={\n \"query\": query,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html"} +{"id": "798a81eca712-1", "text": "self.datastore_url,\n json={\n \"query\": query,\n **({\"topK\": self.top_k} if self.top_k is not None else {}),\n },\n headers={\n \"Content-Type\": \"application/json\",\n **(\n {\"Authorization\": f\"Bearer {self.api_key}\"}\n if self.api_key is not None\n else {}\n ),\n },\n ) as response:\n data = await response.json()\n return [\n Document(\n page_content=r[\"text\"],\n metadata={\"source\": r[\"source\"], \"score\": r[\"score\"]},\n )\n for r in data[\"results\"]\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/databerry.html"} +{"id": "69576b9724ba-0", "text": "Source code for langchain.retrievers.time_weighted_retriever\n\"\"\"Retriever that combines embedding similarity with recency in retrieving values.\"\"\"\nimport datetime\nfrom copy import deepcopy\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom pydantic import BaseModel, Field\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.base import VectorStore\ndef _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float:\n \"\"\"Get the hours passed between two datetime objects.\"\"\"\n return (time - ref_time).total_seconds() / 3600\n[docs]class TimeWeightedVectorStoreRetriever(BaseRetriever, BaseModel):\n \"\"\"Retriever combining embedding similarity with recency.\"\"\"\n vectorstore: VectorStore\n \"\"\"The vectorstore to store documents and determine salience.\"\"\"\n search_kwargs: dict = Field(default_factory=lambda: dict(k=100))\n \"\"\"Keyword 
arguments to pass to the vectorstore similarity search.\"\"\"\n # TODO: abstract as a queue\n memory_stream: List[Document] = Field(default_factory=list)\n \"\"\"The memory_stream of documents to search through.\"\"\"\n decay_rate: float = Field(default=0.01)\n \"\"\"The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).\"\"\"\n k: int = 4\n \"\"\"The maximum number of documents to retrieve in a given call.\"\"\"\n other_score_keys: List[str] = []\n \"\"\"Other keys in the metadata to factor into the score, e.g. 'importance'.\"\"\"\n default_salience: Optional[float] = None\n \"\"\"The salience to assign memories not retrieved from the vector store.\n None assigns no salience to documents not fetched from the vector store.\n \"\"\"\n class Config:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"} +{"id": "69576b9724ba-1", "text": "\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _get_combined_score(\n self,\n document: Document,\n vector_relevance: Optional[float],\n current_time: datetime.datetime,\n ) -> float:\n \"\"\"Return the combined score for a document.\"\"\"\n hours_passed = _get_hours_passed(\n current_time,\n document.metadata[\"last_accessed_at\"],\n )\n score = (1.0 - self.decay_rate) ** hours_passed\n for key in self.other_score_keys:\n if key in document.metadata:\n score += document.metadata[key]\n if vector_relevance is not None:\n score += vector_relevance\n return score\n[docs] def get_salient_docs(self, query: str) -> Dict[int, Tuple[Document, float]]:\n \"\"\"Return documents that are salient to the query.\"\"\"\n docs_and_scores: List[Tuple[Document, float]]\n docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores(\n query, **self.search_kwargs\n )\n results = {}\n for fetched_doc, relevance in docs_and_scores:\n if \"buffer_idx\" in fetched_doc.metadata:\n buffer_idx = 
fetched_doc.metadata[\"buffer_idx\"]\n doc = self.memory_stream[buffer_idx]\n results[buffer_idx] = (doc, relevance)\n return results\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Return documents that are relevant to the query.\"\"\"\n current_time = datetime.datetime.now()\n docs_and_scores = {\n doc.metadata[\"buffer_idx\"]: (doc, self.default_salience)\n for doc in self.memory_stream[-self.k :]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"} +{"id": "69576b9724ba-2", "text": "for doc in self.memory_stream[-self.k :]\n }\n # If a doc is considered salient, update the salience score\n docs_and_scores.update(self.get_salient_docs(query))\n rescored_docs = [\n (doc, self._get_combined_score(doc, relevance, current_time))\n for doc, relevance in docs_and_scores.values()\n ]\n rescored_docs.sort(key=lambda x: x[1], reverse=True)\n result = []\n # Ensure frequently accessed memories aren't forgotten\n for doc, _ in rescored_docs[: self.k]:\n # TODO: Update vector store doc once `update` method is exposed.\n buffered_doc = self.memory_stream[doc.metadata[\"buffer_idx\"]]\n buffered_doc.metadata[\"last_accessed_at\"] = current_time\n result.append(buffered_doc)\n return result\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Return documents that are relevant to the query.\"\"\"\n raise NotImplementedError\n[docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n current_time = kwargs.get(\"current_time\")\n if current_time is None:\n current_time = datetime.datetime.now()\n # Avoid mutating input documents\n dup_docs = [deepcopy(d) for d in documents]\n for i, doc in enumerate(dup_docs):\n if \"last_accessed_at\" not in doc.metadata:\n doc.metadata[\"last_accessed_at\"] = current_time\n if \"created_at\" not in doc.metadata:\n doc.metadata[\"created_at\"] 
= current_time\n doc.metadata[\"buffer_idx\"] = len(self.memory_stream) + i", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"} +{"id": "69576b9724ba-3", "text": "doc.metadata[\"buffer_idx\"] = len(self.memory_stream) + i\n self.memory_stream.extend(dup_docs)\n return self.vectorstore.add_documents(dup_docs, **kwargs)\n[docs] async def aadd_documents(\n self, documents: List[Document], **kwargs: Any\n ) -> List[str]:\n \"\"\"Add documents to vectorstore.\"\"\"\n current_time = kwargs.get(\"current_time\")\n if current_time is None:\n current_time = datetime.datetime.now()\n # Avoid mutating input documents\n dup_docs = [deepcopy(d) for d in documents]\n for i, doc in enumerate(dup_docs):\n if \"last_accessed_at\" not in doc.metadata:\n doc.metadata[\"last_accessed_at\"] = current_time\n if \"created_at\" not in doc.metadata:\n doc.metadata[\"created_at\"] = current_time\n doc.metadata[\"buffer_idx\"] = len(self.memory_stream) + i\n self.memory_stream.extend(dup_docs)\n return await self.vectorstore.aadd_documents(dup_docs, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/time_weighted_retriever.html"} +{"id": "488c4e327251-0", "text": "Source code for langchain.retrievers.tfidf\n\"\"\"TF-IDF Retriever.\nLargely based on\nhttps://github.com/asvskartheek/Text-Retrieval/blob/master/TF-IDF%20Search%20Engine%20(SKLEARN).ipynb\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, Iterable, List, Optional\nfrom pydantic import BaseModel\nfrom langchain.schema import BaseRetriever, Document\n[docs]class TFIDFRetriever(BaseRetriever, BaseModel):\n vectorizer: Any\n docs: List[Document]\n tfidf_array: Any\n k: int = 4\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls,\n texts: Iterable[str],\n metadatas: Optional[Iterable[dict]] = None,\n 
tfidf_params: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> TFIDFRetriever:\n try:\n from sklearn.feature_extraction.text import TfidfVectorizer\n except ImportError:\n raise ImportError(\n \"Could not import scikit-learn, please install with `pip install \"\n \"scikit-learn`.\"\n )\n tfidf_params = tfidf_params or {}\n vectorizer = TfidfVectorizer(**tfidf_params)\n tfidf_array = vectorizer.fit_transform(texts)\n metadatas = metadatas or ({} for _ in texts)\n docs = [Document(page_content=t, metadata=m) for t, m in zip(texts, metadatas)]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html"} +{"id": "488c4e327251-1", "text": "return cls(vectorizer=vectorizer, docs=docs, tfidf_array=tfidf_array, **kwargs)\n[docs] @classmethod\n def from_documents(\n cls,\n documents: Iterable[Document],\n *,\n tfidf_params: Optional[Dict[str, Any]] = None,\n **kwargs: Any,\n ) -> TFIDFRetriever:\n texts, metadatas = zip(*((d.page_content, d.metadata) for d in documents))\n return cls.from_texts(\n texts=texts, tfidf_params=tfidf_params, metadatas=metadatas, **kwargs\n )\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n from sklearn.metrics.pairwise import cosine_similarity\n query_vec = self.vectorizer.transform(\n [query]\n ) # Ip -- (n_docs,x), Op -- (n_docs,n_Feats)\n results = cosine_similarity(self.tfidf_array, query_vec).reshape(\n (-1,)\n ) # Op -- (n_docs,1) -- Cosine Sim with each doc\n return_docs = [self.docs[i] for i in results.argsort()[-self.k :][::-1]]\n return return_docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/tfidf.html"} +{"id": "129763b8fd6d-0", "text": "Source code for langchain.retrievers.milvus\n\"\"\"Milvus Retriever\"\"\"\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom langchain.embeddings.base import 
Embeddings\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.milvus import Milvus\n# TODO: Update to MilvusClient + Hybrid Search when available\n[docs]class MilvusRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Milvus API.\"\"\"\n def __init__(\n self,\n embedding_function: Embeddings,\n collection_name: str = \"LangChainCollection\",\n connection_args: Optional[Dict[str, Any]] = None,\n consistency_level: str = \"Session\",\n search_params: Optional[dict] = None,\n ):\n self.store = Milvus(\n embedding_function,\n collection_name,\n connection_args,\n consistency_level,\n )\n self.retriever = self.store.as_retriever(search_kwargs={\"param\": search_params})\n[docs] def add_texts(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> None:\n \"\"\"Add text to the Milvus store\n Args:\n texts (List[str]): The text\n metadatas (List[dict]): Metadata dicts, must line up with existing store\n \"\"\"\n self.store.add_texts(texts, metadatas)\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n return self.retriever.get_relevant_documents(query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/milvus.html"} +{"id": "129763b8fd6d-1", "text": "raise NotImplementedError\ndef MilvusRetreiver(*args: Any, **kwargs: Any) -> MilvusRetriever:\n \"\"\"Deprecated MilvusRetreiver. Please use MilvusRetriever ('i' before 'e') instead.\n Args:\n *args:\n **kwargs:\n Returns:\n MilvusRetriever\n \"\"\"\n warnings.warn(\n \"MilvusRetreiver will be deprecated in the future. 
\"\n \"Please use MilvusRetriever ('i' before 'e') instead.\",\n DeprecationWarning,\n )\n return MilvusRetriever(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/milvus.html"} +{"id": "bd13a981659d-0", "text": "Source code for langchain.retrievers.arxiv\nfrom typing import List\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.arxiv import ArxivAPIWrapper\n[docs]class ArxivRetriever(BaseRetriever, ArxivAPIWrapper):\n \"\"\"\n It is effectively a wrapper for ArxivAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all ArxivAPIWrapper arguments without any change.\n \"\"\"\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n return self.load(query=query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/arxiv.html"} +{"id": "cb932f461fbf-0", "text": "Source code for langchain.retrievers.docarray\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional, Union\nimport numpy as np\nfrom pydantic import BaseModel\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.utils import maximal_marginal_relevance\nclass SearchType(str, Enum):\n \"\"\"Enumerator of the types of search to perform.\"\"\"\n similarity = \"similarity\"\n mmr = \"mmr\"\n[docs]class DocArrayRetriever(BaseRetriever, BaseModel):\n \"\"\"\n Retriever class for DocArray Document Indices.\n Currently, supports 5 backends:\n InMemoryExactNNIndex, HnswDocumentIndex, QdrantDocumentIndex,\n ElasticDocIndex, and WeaviateDocumentIndex.\n Attributes:\n index: One of the above-mentioned index instances\n embeddings: Embedding model to represent text as vectors\n search_field: Field to consider for searching in the documents.\n Should be an 
embedding/vector/tensor.\n content_field: Field that represents the main content in your document schema.\n Will be used as a `page_content`. Everything else will go into `metadata`.\n search_type: Type of search to perform (similarity / mmr)\n filters: Filters applied for document retrieval.\n top_k: Number of documents to return\n \"\"\"\n index: Any\n embeddings: Embeddings\n search_field: str\n content_field: str\n search_type: SearchType = SearchType.similarity\n top_k: int = 1\n filters: Optional[Any] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} +{"id": "cb932f461fbf-1", "text": "\"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n query_emb = np.array(self.embeddings.embed_query(query))\n if self.search_type == SearchType.similarity:\n results = self._similarity_search(query_emb)\n elif self.search_type == SearchType.mmr:\n results = self._mmr_search(query_emb)\n else:\n raise ValueError(\n f\"Search type {self.search_type} does not exist. 
\"\n f\"Choose either 'similarity' or 'mmr'.\"\n )\n return results\n def _search(\n self, query_emb: np.ndarray, top_k: int\n ) -> List[Union[Dict[str, Any], Any]]:\n \"\"\"\n Perform a search using the query embedding and return top_k documents.\n Args:\n query_emb: Query represented as an embedding\n top_k: Number of documents to return\n Returns:\n A list of top_k documents matching the query\n \"\"\"\n from docarray.index import ElasticDocIndex, WeaviateDocumentIndex\n filter_args = {}\n search_field = self.search_field\n if isinstance(self.index, WeaviateDocumentIndex):\n filter_args[\"where_filter\"] = self.filters\n search_field = \"\"\n elif isinstance(self.index, ElasticDocIndex):\n filter_args[\"query\"] = self.filters\n else:\n filter_args[\"filter_query\"] = self.filters\n if self.filters:\n query = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} +{"id": "cb932f461fbf-2", "text": "if self.filters:\n query = (\n self.index.build_query() # get empty query object\n .find(\n query=query_emb, search_field=search_field\n ) # add vector similarity search\n .filter(**filter_args) # add filter search\n .build(limit=top_k) # build the query\n )\n # execute the combined query and return the results\n docs = self.index.execute_query(query)\n if hasattr(docs, \"documents\"):\n docs = docs.documents\n docs = docs[:top_k]\n else:\n docs = self.index.find(\n query=query_emb, search_field=search_field, limit=top_k\n ).documents\n return docs\n def _similarity_search(self, query_emb: np.ndarray) -> List[Document]:\n \"\"\"\n Perform a similarity search.\n Args:\n query_emb: Query represented as an embedding\n Returns:\n A list of documents most similar to the query\n \"\"\"\n docs = self._search(query_emb=query_emb, top_k=self.top_k)\n results = [self._docarray_to_langchain_doc(doc) for doc in docs]\n return results\n def _mmr_search(self, query_emb: np.ndarray) -> List[Document]:\n \"\"\"\n Perform a maximal 
marginal relevance (mmr) search.\n Args:\n query_emb: Query represented as an embedding\n Returns:\n A list of diverse documents related to the query\n \"\"\"\n docs = self._search(query_emb=query_emb, top_k=20)\n mmr_selected = maximal_marginal_relevance(\n query_emb,\n [\n doc[self.search_field]\n if isinstance(doc, dict)\n else getattr(doc, self.search_field)\n for doc in docs\n ],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} +{"id": "cb932f461fbf-3", "text": "else getattr(doc, self.search_field)\n for doc in docs\n ],\n k=self.top_k,\n )\n results = [self._docarray_to_langchain_doc(docs[idx]) for idx in mmr_selected]\n return results\n def _docarray_to_langchain_doc(self, doc: Union[Dict[str, Any], Any]) -> Document:\n \"\"\"\n Convert a DocArray document (which also might be a dict)\n to a langchain document format.\n DocArray document can contain arbitrary fields, so the mapping is done\n in the following way:\n page_content <-> content_field\n metadata <-> all other fields excluding\n tensors and embeddings (so float, int, string)\n Args:\n doc: DocArray document\n Returns:\n Document in langchain format\n Raises:\n ValueError: If the document doesn't contain the content field\n \"\"\"\n fields = doc.keys() if isinstance(doc, dict) else doc.__fields__\n if self.content_field not in fields:\n raise ValueError(\n f\"Document does not contain the content field - {self.content_field}.\"\n )\n lc_doc = Document(\n page_content=doc[self.content_field]\n if isinstance(doc, dict)\n else getattr(doc, self.content_field)\n )\n for name in fields:\n value = doc[name] if isinstance(doc, dict) else getattr(doc, name)\n if (\n isinstance(value, (str, int, float, bool))\n and name != self.content_field\n ):\n lc_doc.metadata[name] = value\n return lc_doc\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/docarray.html"} +{"id": "8b809e712593-0", "text": "Source code for langchain.retrievers.weaviate_hybrid_search\n\"\"\"Wrapper around weaviate vector database.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional\nfrom uuid import uuid4\nfrom pydantic import Extra\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class WeaviateHybridSearchRetriever(BaseRetriever):\n def __init__(\n self,\n client: Any,\n index_name: str,\n text_key: str,\n alpha: float = 0.5,\n k: int = 4,\n attributes: Optional[List[str]] = None,\n create_schema_if_missing: bool = True,\n ):\n try:\n import weaviate\n except ImportError:\n raise ImportError(\n \"Could not import weaviate python package. \"\n \"Please install it with `pip install weaviate-client`.\"\n )\n if not isinstance(client, weaviate.Client):\n raise ValueError(\n f\"client should be an instance of weaviate.Client, got {type(client)}\"\n )\n self._client = client\n self.k = k\n self.alpha = alpha\n self._index_name = index_name\n self._text_key = text_key\n self._query_attrs = [self._text_key]\n if attributes is not None:\n self._query_attrs.extend(attributes)\n if create_schema_if_missing:\n self._create_schema_if_missing()\n def _create_schema_if_missing(self) -> None:\n class_obj = {\n \"class\": self._index_name,\n \"properties\": [{\"name\": self._text_key, \"dataType\": [\"text\"]}],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"} +{"id": "8b809e712593-1", "text": "\"properties\": [{\"name\": self._text_key, \"dataType\": [\"text\"]}],\n \"vectorizer\": \"text2vec-openai\",\n }\n if not self._client.schema.exists(self._index_name):\n self._client.schema.create_class(class_obj)\n[docs] class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n 
arbitrary_types_allowed = True\n # added text_key\n[docs] def add_documents(self, docs: List[Document], **kwargs: Any) -> List[str]:\n \"\"\"Upload documents to Weaviate.\"\"\"\n from weaviate.util import get_valid_uuid\n with self._client.batch as batch:\n ids = []\n for i, doc in enumerate(docs):\n metadata = doc.metadata or {}\n data_properties = {self._text_key: doc.page_content, **metadata}\n # If the UUID of one of the objects already exists\n # then the existing object will be replaced by the new object.\n if \"uuids\" in kwargs:\n _id = kwargs[\"uuids\"][i]\n else:\n _id = get_valid_uuid(uuid4())\n batch.add_data_object(data_properties, self._index_name, _id)\n ids.append(_id)\n return ids\n[docs] def get_relevant_documents(\n self, query: str, where_filter: Optional[Dict[str, object]] = None\n ) -> List[Document]:\n \"\"\"Look up similar documents in Weaviate.\"\"\"\n query_obj = self._client.query.get(self._index_name, self._query_attrs)\n if where_filter:\n query_obj = query_obj.with_where(where_filter)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"} +{"id": "8b809e712593-2", "text": "if where_filter:\n query_obj = query_obj.with_where(where_filter)\n result = query_obj.with_hybrid(query, alpha=self.alpha).with_limit(self.k).do()\n if \"errors\" in result:\n raise ValueError(f\"Error during query: {result['errors']}\")\n docs = []\n for res in result[\"data\"][\"Get\"][self._index_name]:\n text = res.pop(self._text_key)\n docs.append(Document(page_content=text, metadata=res))\n return docs\n[docs] async def aget_relevant_documents(\n self, query: str, where_filter: Optional[Dict[str, object]] = None\n ) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/weaviate_hybrid_search.html"} +{"id": "f97ea17bf25a-0", "text": "Source code for langchain.retrievers.kendra\nimport re\nfrom typing import Any, Dict, List, 
Literal, Optional\nfrom pydantic import BaseModel, Extra\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\ndef clean_excerpt(excerpt: str) -> str:\n if not excerpt:\n return excerpt\n res = re.sub(\"\\s+\", \" \", excerpt).replace(\"...\", \"\")\n return res\ndef combined_text(title: str, excerpt: str) -> str:\n if not title or not excerpt:\n return \"\"\n return f\"Document Title: {title} \\nDocument Excerpt: \\n{excerpt}\\n\"\nclass Highlight(BaseModel, extra=Extra.allow):\n BeginOffset: int\n EndOffset: int\n TopAnswer: Optional[bool]\n Type: Optional[str]\nclass TextWithHighLights(BaseModel, extra=Extra.allow):\n Text: str\n Highlights: Optional[Any]\nclass AdditionalResultAttribute(BaseModel, extra=Extra.allow):\n Key: str\n ValueType: Literal[\"TEXT_WITH_HIGHLIGHTS_VALUE\"]\n Value: Optional[TextWithHighLights]\n def get_value_text(self) -> str:\n if not self.Value:\n return \"\"\n else:\n return self.Value.Text\nclass QueryResultItem(BaseModel, extra=Extra.allow):\n DocumentId: str\n DocumentTitle: TextWithHighLights\n DocumentURI: Optional[str]\n FeedbackToken: Optional[str]\n Format: Optional[str]\n Id: Optional[str]\n Type: Optional[str]\n AdditionalAttributes: Optional[List[AdditionalResultAttribute]] = []\n DocumentExcerpt: Optional[TextWithHighLights]\n def get_attribute_value(self) -> str:\n if not self.AdditionalAttributes:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} +{"id": "f97ea17bf25a-1", "text": "def get_attribute_value(self) -> str:\n if not self.AdditionalAttributes:\n return \"\"\n if not self.AdditionalAttributes[0]:\n return \"\"\n else:\n return self.AdditionalAttributes[0].get_value_text()\n def get_excerpt(self) -> str:\n if (\n self.AdditionalAttributes\n and self.AdditionalAttributes[0].Key == \"AnswerText\"\n ):\n excerpt = self.get_attribute_value()\n elif self.DocumentExcerpt:\n excerpt = self.DocumentExcerpt.Text\n else:\n excerpt = 
\"\"\n return clean_excerpt(excerpt)\n def to_doc(self) -> Document:\n title = self.DocumentTitle.Text\n source = self.DocumentURI\n excerpt = self.get_excerpt()\n type = self.Type\n page_content = combined_text(title, excerpt)\n metadata = {\"source\": source, \"title\": title, \"excerpt\": excerpt, \"type\": type}\n return Document(page_content=page_content, metadata=metadata)\nclass QueryResult(BaseModel, extra=Extra.allow):\n ResultItems: List[QueryResultItem]\n def get_top_k_docs(self, top_n: int) -> List[Document]:\n items_len = len(self.ResultItems)\n count = items_len if items_len < top_n else top_n\n docs = [self.ResultItems[i].to_doc() for i in range(0, count)]\n return docs\nclass DocumentAttributeValue(BaseModel, extra=Extra.allow):\n DateValue: Optional[str]\n LongValue: Optional[int]\n StringListValue: Optional[List[str]]\n StringValue: Optional[str]\nclass DocumentAttribute(BaseModel, extra=Extra.allow):\n Key: str\n Value: DocumentAttributeValue", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} +{"id": "f97ea17bf25a-2", "text": "Key: str\n Value: DocumentAttributeValue\nclass RetrieveResultItem(BaseModel, extra=Extra.allow):\n Content: Optional[str]\n DocumentAttributes: Optional[List[DocumentAttribute]] = []\n DocumentId: Optional[str]\n DocumentTitle: Optional[str]\n DocumentURI: Optional[str]\n Id: Optional[str]\n def get_excerpt(self) -> str:\n if not self.Content:\n return \"\"\n return clean_excerpt(self.Content)\n def to_doc(self) -> Document:\n title = self.DocumentTitle if self.DocumentTitle else \"\"\n source = self.DocumentURI\n excerpt = self.get_excerpt()\n page_content = combined_text(title, excerpt)\n metadata = {\"source\": source, \"title\": title, \"excerpt\": excerpt}\n return Document(page_content=page_content, metadata=metadata)\nclass RetrieveResult(BaseModel, extra=Extra.allow):\n QueryId: str\n ResultItems: List[RetrieveResultItem]\n def get_top_k_docs(self, top_n: int) -> 
List[Document]:\n items_len = len(self.ResultItems)\n count = items_len if items_len < top_n else top_n\n docs = [self.ResultItems[i].to_doc() for i in range(0, count)]\n return docs\n[docs]class AmazonKendraRetriever(BaseRetriever):\n \"\"\"Retriever class to query documents from Amazon Kendra Index.\n Args:\n index_id: Kendra index id\n region_name: The aws region e.g., `us-west-2`.\n Falls back to AWS_DEFAULT_REGION env variable\n or region specified in ~/.aws/config.\n credentials_profile_name: The name of the profile in the ~/.aws/credentials\n or ~/.aws/config files, which has either access keys or role information", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} +{"id": "f97ea17bf25a-3", "text": "or ~/.aws/config files, which has either access keys or role information\n specified. If not specified, the default credential profile or, if on an\n EC2 instance, credentials from IMDS will be used.\n top_k: Number of results to return\n attribute_filter: Additional filtering of results based on metadata\n See: https://docs.aws.amazon.com/kendra/latest/APIReference\n client: boto3 client for Kendra\n Example:\n .. 
code-block:: python\n retriever = AmazonKendraRetriever(\n index_id=\"c0806df7-e76b-4bce-9b5c-d5582f6b1a03\"\n )\n \"\"\"\n def __init__(\n self,\n index_id: str,\n region_name: Optional[str] = None,\n credentials_profile_name: Optional[str] = None,\n top_k: int = 3,\n attribute_filter: Optional[Dict] = None,\n client: Optional[Any] = None,\n ):\n self.index_id = index_id\n self.top_k = top_k\n self.attribute_filter = attribute_filter\n if client is not None:\n self.client = client\n return\n try:\n import boto3\n if credentials_profile_name is not None:\n session = boto3.Session(profile_name=credentials_profile_name)\n else:\n # use default credentials\n session = boto3.Session()\n client_params = {}\n if region_name is not None:\n client_params[\"region_name\"] = region_name\n self.client = session.client(\"kendra\", **client_params)\n except ImportError:\n raise ModuleNotFoundError(\n \"Could not import boto3 python package. \"\n \"Please install it with `pip install boto3`.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} +{"id": "f97ea17bf25a-4", "text": "\"Please install it with `pip install boto3`.\"\n )\n except Exception as e:\n raise ValueError(\n \"Could not load credentials to authenticate with AWS client. 
\"\n \"Please check that credentials in the specified \"\n \"profile name are valid.\"\n ) from e\n def _kendra_query(\n self,\n query: str,\n top_k: int,\n attribute_filter: Optional[Dict] = None,\n ) -> List[Document]:\n if attribute_filter is not None:\n response = self.client.retrieve(\n IndexId=self.index_id,\n QueryText=query.strip(),\n PageSize=top_k,\n AttributeFilter=attribute_filter,\n )\n else:\n response = self.client.retrieve(\n IndexId=self.index_id, QueryText=query.strip(), PageSize=top_k\n )\n r_result = RetrieveResult.parse_obj(response)\n result_len = len(r_result.ResultItems)\n if result_len == 0:\n # retrieve API returned 0 results, call query API\n if attribute_filter is not None:\n response = self.client.query(\n IndexId=self.index_id,\n QueryText=query.strip(),\n PageSize=top_k,\n AttributeFilter=attribute_filter,\n )\n else:\n response = self.client.query(\n IndexId=self.index_id, QueryText=query.strip(), PageSize=top_k\n )\n q_result = QueryResult.parse_obj(response)\n docs = q_result.get_top_k_docs(top_k)\n else:\n docs = r_result.get_top_k_docs(top_k)\n return docs\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Run search on Kendra index and get top k documents\n Example:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} +{"id": "f97ea17bf25a-5", "text": "\"\"\"Run search on Kendra index and get top k documents\n Example:\n .. 
code-block:: python\n docs = retriever.get_relevant_documents('This is my query')\n \"\"\"\n docs = self._kendra_query(query, self.top_k, self.attribute_filter)\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\"Async version is not implemented for Kendra yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/kendra.html"} +{"id": "e06e5440ee3c-0", "text": "Source code for langchain.retrievers.vespa_retriever\n\"\"\"Wrapper for retrieving documents from Vespa.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional, Sequence, Union\nfrom langchain.schema import BaseRetriever, Document\nif TYPE_CHECKING:\n from vespa.application import Vespa\n[docs]class VespaRetriever(BaseRetriever):\n \"\"\"Retriever that uses Vespa.\"\"\"\n def __init__(\n self,\n app: Vespa,\n body: Dict,\n content_field: str,\n metadata_fields: Optional[Sequence[str]] = None,\n ):\n self._application = app\n self._query_body = body\n self._content_field = content_field\n self._metadata_fields = metadata_fields or ()\n def _query(self, body: Dict) -> List[Document]:\n response = self._application.query(body)\n if not str(response.status_code).startswith(\"2\"):\n raise RuntimeError(\n \"Could not retrieve data from Vespa. 
Error code: {}\".format(\n response.status_code\n )\n )\n root = response.json[\"root\"]\n if \"errors\" in root:\n raise RuntimeError(json.dumps(root[\"errors\"]))\n docs = []\n for child in response.hits:\n page_content = child[\"fields\"].pop(self._content_field, \"\")\n if self._metadata_fields == \"*\":\n metadata = child[\"fields\"]\n else:\n metadata = {mf: child[\"fields\"].get(mf) for mf in self._metadata_fields}\n metadata[\"id\"] = child[\"id\"]\n docs.append(Document(page_content=page_content, metadata=metadata))\n return docs", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"} +{"id": "e06e5440ee3c-1", "text": "docs.append(Document(page_content=page_content, metadata=metadata))\n return docs\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n body = self._query_body.copy()\n body[\"query\"] = query\n return self._query(body)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\n[docs] def get_relevant_documents_with_filter(\n self, query: str, *, _filter: Optional[str] = None\n ) -> List[Document]:\n body = self._query_body.copy()\n _filter = f\" and {_filter}\" if _filter else \"\"\n body[\"yql\"] = body[\"yql\"] + _filter\n body[\"query\"] = query\n return self._query(body)\n[docs] @classmethod\n def from_params(\n cls,\n url: str,\n content_field: str,\n *,\n k: Optional[int] = None,\n metadata_fields: Union[Sequence[str], Literal[\"*\"]] = (),\n sources: Union[Sequence[str], Literal[\"*\"], None] = None,\n _filter: Optional[str] = None,\n yql: Optional[str] = None,\n **kwargs: Any,\n ) -> VespaRetriever:\n \"\"\"Instantiate retriever from params.\n Args:\n url (str): Vespa app URL.\n content_field (str): Field in results to return as Document page_content.\n k (Optional[int]): Number of Documents to return. 
Defaults to None.\n metadata_fields(Sequence[str] or \"*\"): Fields in results to include in\n document metadata. Defaults to empty tuple ().", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"} +{"id": "e06e5440ee3c-2", "text": "document metadata. Defaults to empty tuple ().\n sources (Sequence[str] or \"*\" or None): Sources to retrieve\n from. Defaults to None.\n _filter (Optional[str]): Document filter condition expressed in YQL.\n Defaults to None.\n yql (Optional[str]): Full YQL query to be used. Should not be specified\n if _filter or sources are specified. Defaults to None.\n kwargs (Any): Keyword arguments added to query body.\n \"\"\"\n try:\n from vespa.application import Vespa\n except ImportError:\n raise ImportError(\n \"pyvespa is not installed, please install with `pip install pyvespa`\"\n )\n app = Vespa(url)\n body = kwargs.copy()\n if yql and (sources or _filter):\n raise ValueError(\n \"yql should only be specified if both sources and _filter are not \"\n \"specified.\"\n )\n else:\n if metadata_fields == \"*\":\n _fields = \"*\"\n body[\"summary\"] = \"short\"\n else:\n _fields = \", \".join([content_field] + list(metadata_fields or []))\n _sources = \", \".join(sources) if isinstance(sources, Sequence) else \"*\"\n _filter = f\" and {_filter}\" if _filter else \"\"\n yql = f\"select {_fields} from sources {_sources} where userQuery(){_filter}\"\n body[\"yql\"] = yql\n if k:\n body[\"hits\"] = k\n return cls(app, body, content_field, metadata_fields=metadata_fields)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/vespa_retriever.html"} +{"id": "6b265ab5c2bf-0", "text": "Source code for langchain.retrievers.knn\n\"\"\"KNN Retriever.\nLargely based on\nhttps://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb\"\"\"\nfrom __future__ import annotations\nimport concurrent.futures\nfrom typing import Any, List, Optional\nimport numpy as np\nfrom 
pydantic import BaseModel\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\ndef create_index(contexts: List[str], embeddings: Embeddings) -> np.ndarray:\n \"\"\"\n Create an index of embeddings for a list of contexts.\n Args:\n contexts: List of contexts to embed.\n embeddings: Embeddings model to use.\n Returns:\n Index of embeddings.\n \"\"\"\n with concurrent.futures.ThreadPoolExecutor() as executor:\n return np.array(list(executor.map(embeddings.embed_query, contexts)))\n[docs]class KNNRetriever(BaseRetriever, BaseModel):\n \"\"\"KNN Retriever.\"\"\"\n embeddings: Embeddings\n index: Any\n texts: List[str]\n k: int = 4\n relevancy_threshold: Optional[float] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls, texts: List[str], embeddings: Embeddings, **kwargs: Any\n ) -> KNNRetriever:\n index = create_index(texts, embeddings)\n return cls(embeddings=embeddings, index=index, texts=texts, **kwargs)\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n query_embeds = np.array(self.embeddings.embed_query(query))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html"} +{"id": "6b265ab5c2bf-1", "text": "query_embeds = np.array(self.embeddings.embed_query(query))\n # calc L2 norm\n index_embeds = self.index / np.sqrt((self.index**2).sum(1, keepdims=True))\n query_embeds = query_embeds / np.sqrt((query_embeds**2).sum())\n similarities = index_embeds.dot(query_embeds)\n sorted_ix = np.argsort(-similarities)\n denominator = np.max(similarities) - np.min(similarities) + 1e-6\n normalized_similarities = (similarities - np.min(similarities)) / denominator\n top_k_results = [\n Document(page_content=self.texts[row])\n for row in sorted_ix[0 : self.k]\n if (\n self.relevancy_threshold is None\n or normalized_similarities[row] >= self.relevancy_threshold\n )\n 
]\n return top_k_results\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/knn.html"} +{"id": "a1924ce4a9bf-0", "text": "Source code for langchain.retrievers.llama_index\nfrom typing import Any, Dict, List, cast\nfrom pydantic import BaseModel, Field\nfrom langchain.schema import BaseRetriever, Document\n[docs]class LlamaIndexRetriever(BaseRetriever, BaseModel):\n \"\"\"Question-answering with sources over a LlamaIndex data structure.\"\"\"\n index: Any\n query_kwargs: Dict = Field(default_factory=dict)\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\"\"\"\n try:\n from llama_index.indices.base import BaseGPTIndex\n from llama_index.response.schema import Response\n except ImportError:\n raise ImportError(\n \"You need to install `pip install llama-index` to use this retriever.\"\n )\n index = cast(BaseGPTIndex, self.index)\n response = index.query(query, response_mode=\"no_text\", **self.query_kwargs)\n response = cast(Response, response)\n # parse source nodes\n docs = []\n for source_node in response.source_nodes:\n metadata = source_node.extra_info or {}\n docs.append(\n Document(page_content=source_node.source_text, metadata=metadata)\n )\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\"LlamaIndexRetriever does not support async\")\n[docs]class LlamaIndexGraphRetriever(BaseRetriever, BaseModel):\n \"\"\"Question-answering with sources over a LlamaIndex graph data structure.\"\"\"\n graph: Any\n query_configs: List[Dict] = Field(default_factory=list)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/llama_index.html"} +{"id": "a1924ce4a9bf-1", "text": "graph: Any\n query_configs: List[Dict] = Field(default_factory=list)\n[docs] def 
get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\"\"\"\n try:\n from llama_index.composability.graph import (\n QUERY_CONFIG_TYPE,\n ComposableGraph,\n )\n from llama_index.response.schema import Response\n except ImportError:\n raise ImportError(\n \"You need to install `pip install llama-index` to use this retriever.\"\n )\n graph = cast(ComposableGraph, self.graph)\n # for now, inject response_mode=\"no_text\" into query configs\n for query_config in self.query_configs:\n query_config[\"response_mode\"] = \"no_text\"\n query_configs = cast(List[QUERY_CONFIG_TYPE], self.query_configs)\n response = graph.query(query, query_configs=query_configs)\n response = cast(Response, response)\n # parse source nodes\n docs = []\n for source_node in response.source_nodes:\n metadata = source_node.extra_info or {}\n docs.append(\n Document(page_content=source_node.source_text, metadata=metadata)\n )\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError(\"LlamaIndexGraphRetriever does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/llama_index.html"} +{"id": "7e7ae4388417-0", "text": "Source code for langchain.retrievers.azure_cognitive_search\n\"\"\"Retriever wrapper for Azure Cognitive Search.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom typing import Dict, List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utils import get_from_dict_or_env\n[docs]class AzureCognitiveSearchRetriever(BaseRetriever, BaseModel):\n \"\"\"Wrapper around Azure Cognitive Search.\"\"\"\n service_name: str = \"\"\n \"\"\"Name of Azure Cognitive Search service\"\"\"\n index_name: str = \"\"\n \"\"\"Name of Index inside Azure Cognitive Search service\"\"\"\n api_key: str = \"\"\n \"\"\"API 
Key. Both Admin and Query keys work, but for reading data it's\n recommended to use a Query key.\"\"\"\n api_version: str = \"2020-06-30\"\n \"\"\"API version\"\"\"\n aiosession: Optional[aiohttp.ClientSession] = None\n \"\"\"ClientSession, in case we want to reuse connection for better performance.\"\"\"\n content_key: str = \"content\"\n \"\"\"Key in a retrieved result to set as the Document page_content.\"\"\"\n class Config:\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that service name, index name and api key exists in environment.\"\"\"\n values[\"service_name\"] = get_from_dict_or_env(\n values, \"service_name\", \"AZURE_COGNITIVE_SEARCH_SERVICE_NAME\"\n )\n values[\"index_name\"] = get_from_dict_or_env(\n values, \"index_name\", \"AZURE_COGNITIVE_SEARCH_INDEX_NAME\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"} +{"id": "7e7ae4388417-1", "text": ")\n values[\"api_key\"] = get_from_dict_or_env(\n values, \"api_key\", \"AZURE_COGNITIVE_SEARCH_API_KEY\"\n )\n return values\n def _build_search_url(self, query: str) -> str:\n base_url = f\"https://{self.service_name}.search.windows.net/\"\n endpoint_path = f\"indexes/{self.index_name}/docs?api-version={self.api_version}\"\n return base_url + endpoint_path + f\"&search={query}\"\n @property\n def _headers(self) -> Dict[str, str]:\n return {\n \"Content-Type\": \"application/json\",\n \"api-key\": self.api_key,\n }\n def _search(self, query: str) -> List[dict]:\n search_url = self._build_search_url(query)\n response = requests.get(search_url, headers=self._headers)\n if response.status_code != 200:\n raise Exception(f\"Error in search request: {response}\")\n return json.loads(response.text)[\"value\"]\n async def _asearch(self, query: str) -> List[dict]:\n search_url = self._build_search_url(query)\n if not self.aiosession:\n async 
with aiohttp.ClientSession() as session:\n async with session.get(search_url, headers=self._headers) as response:\n response_json = await response.json()\n else:\n async with self.aiosession.get(\n search_url, headers=self._headers\n ) as response:\n response_json = await response.json()\n return response_json[\"value\"]\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n search_results = self._search(query)\n return [", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"} +{"id": "7e7ae4388417-2", "text": "search_results = self._search(query)\n return [\n Document(page_content=result.pop(self.content_key), metadata=result)\n for result in search_results\n ]\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n search_results = await self._asearch(query)\n return [\n Document(page_content=result.pop(self.content_key), metadata=result)\n for result in search_results\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/azure_cognitive_search.html"} +{"id": "53acd95d0c09-0", "text": "Source code for langchain.retrievers.contextual_compression\n\"\"\"Retriever that wraps a base retriever and filters the results.\"\"\"\nfrom typing import List\nfrom pydantic import BaseModel, Extra\nfrom langchain.retrievers.document_compressors.base import (\n BaseDocumentCompressor,\n)\nfrom langchain.schema import BaseRetriever, Document\n[docs]class ContextualCompressionRetriever(BaseRetriever, BaseModel):\n \"\"\"Retriever that wraps a base retriever and compresses the results.\"\"\"\n base_compressor: BaseDocumentCompressor\n \"\"\"Compressor for compressing retrieved documents.\"\"\"\n base_retriever: BaseRetriever\n \"\"\"Base Retriever to use for getting relevant documents.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def 
get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n Sequence of relevant documents\n \"\"\"\n docs = self.base_retriever.get_relevant_documents(query)\n if docs:\n compressed_docs = self.base_compressor.compress_documents(docs, query)\n return list(compressed_docs)\n else:\n return []\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n docs = await self.base_retriever.aget_relevant_documents(query)\n if docs:\n compressed_docs = await self.base_compressor.acompress_documents(\n docs, query", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html"} +{"id": "53acd95d0c09-1", "text": "compressed_docs = await self.base_compressor.acompress_documents(\n docs, query\n )\n return list(compressed_docs)\n else:\n return []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/contextual_compression.html"} +{"id": "557bcb2e522a-0", "text": "Source code for langchain.retrievers.elastic_search_bm25\n\"\"\"Wrapper around Elasticsearch vector database.\"\"\"\nfrom __future__ import annotations\nimport uuid\nfrom typing import Any, Iterable, List\nfrom langchain.docstore.document import Document\nfrom langchain.schema import BaseRetriever\n[docs]class ElasticSearchBM25Retriever(BaseRetriever):\n \"\"\"Wrapper around Elasticsearch using BM25 as a retrieval method.\n To connect to an Elasticsearch instance that requires login credentials,\n including Elastic Cloud, use the Elasticsearch URL format\n https://username:password@es_host:9243. 
For example, to connect to Elastic\n Cloud, create the Elasticsearch URL with the required authentication details and\n pass it to the ElasticVectorSearch constructor as the named parameter\n elasticsearch_url.\n You can obtain your Elastic Cloud URL and login credentials by logging in to the\n Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and\n navigating to the \"Deployments\" page.\n To obtain your Elastic Cloud password for the default \"elastic\" user:\n 1. Log in to the Elastic Cloud console at https://cloud.elastic.co\n 2. Go to \"Security\" > \"Users\"\n 3. Locate the \"elastic\" user and click \"Edit\"\n 4. Click \"Reset password\"\n 5. Follow the prompts to reset the password\n The format for Elastic Cloud URLs is\n https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.\n \"\"\"\n def __init__(self, client: Any, index_name: str):\n self.client = client\n self.index_name = index_name\n[docs] @classmethod\n def create(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"} +{"id": "557bcb2e522a-1", "text": "self.index_name = index_name\n[docs] @classmethod\n def create(\n cls, elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75\n ) -> ElasticSearchBM25Retriever:\n from elasticsearch import Elasticsearch\n # Create an Elasticsearch client instance\n es = Elasticsearch(elasticsearch_url)\n # Define the index settings and mappings\n settings = {\n \"analysis\": {\"analyzer\": {\"default\": {\"type\": \"standard\"}}},\n \"similarity\": {\n \"custom_bm25\": {\n \"type\": \"BM25\",\n \"k1\": k1,\n \"b\": b,\n }\n },\n }\n mappings = {\n \"properties\": {\n \"content\": {\n \"type\": \"text\",\n \"similarity\": \"custom_bm25\", # Use the custom BM25 similarity\n }\n }\n }\n # Create the index with the specified settings and mappings\n es.indices.create(index=index_name, mappings=mappings, settings=settings)\n return cls(es, 
index_name)\n[docs] def add_texts(\n self,\n texts: Iterable[str],\n refresh_indices: bool = True,\n ) -> List[str]:\n \"\"\"Run more texts through the embeddings and add to the retriever.\n Args:\n texts: Iterable of strings to add to the retriever.\n refresh_indices: bool to refresh ElasticSearch indices\n Returns:\n List of ids from adding the texts into the retriever.\n \"\"\"\n try:\n from elasticsearch.helpers import bulk\n except ImportError:\n raise ValueError(\n \"Could not import elasticsearch python package. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"} +{"id": "557bcb2e522a-2", "text": "raise ValueError(\n \"Could not import elasticsearch python package. \"\n \"Please install it with `pip install elasticsearch`.\"\n )\n requests = []\n ids = []\n for i, text in enumerate(texts):\n _id = str(uuid.uuid4())\n request = {\n \"_op_type\": \"index\",\n \"_index\": self.index_name,\n \"content\": text,\n \"_id\": _id,\n }\n ids.append(_id)\n requests.append(request)\n bulk(self.client, requests)\n if refresh_indices:\n self.client.indices.refresh(index=self.index_name)\n return ids\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n query_dict = {\"query\": {\"match\": {\"content\": query}}}\n res = self.client.search(index=self.index_name, body=query_dict)\n docs = []\n for r in res[\"hits\"][\"hits\"]:\n docs.append(Document(page_content=r[\"_source\"][\"content\"]))\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/elastic_search_bm25.html"} +{"id": "0cb27424848f-0", "text": "Source code for langchain.retrievers.svm\n\"\"\"SVM Retriever.\nLargely based on\nhttps://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb\"\"\"\nfrom __future__ import annotations\nimport concurrent.futures\nfrom typing import 
Any, List, Optional\nimport numpy as np\nfrom pydantic import BaseModel\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\ndef create_index(contexts: List[str], embeddings: Embeddings) -> np.ndarray:\n \"\"\"\n Create an index of embeddings for a list of contexts.\n Args:\n contexts: List of contexts to embed.\n embeddings: Embeddings model to use.\n Returns:\n Index of embeddings.\n \"\"\"\n with concurrent.futures.ThreadPoolExecutor() as executor:\n return np.array(list(executor.map(embeddings.embed_query, contexts)))\n[docs]class SVMRetriever(BaseRetriever, BaseModel):\n \"\"\"SVM Retriever.\"\"\"\n embeddings: Embeddings\n index: Any\n texts: List[str]\n k: int = 4\n relevancy_threshold: Optional[float] = None\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @classmethod\n def from_texts(\n cls, texts: List[str], embeddings: Embeddings, **kwargs: Any\n ) -> SVMRetriever:\n index = create_index(texts, embeddings)\n return cls(embeddings=embeddings, index=index, texts=texts, **kwargs)\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n from sklearn import svm\n query_embeds = np.array(self.embeddings.embed_query(query))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html"} +{"id": "0cb27424848f-1", "text": "query_embeds = np.array(self.embeddings.embed_query(query))\n x = np.concatenate([query_embeds[None, ...], self.index])\n y = np.zeros(x.shape[0])\n y[0] = 1\n clf = svm.LinearSVC(\n class_weight=\"balanced\", verbose=False, max_iter=10000, tol=1e-6, C=0.1\n )\n clf.fit(x, y)\n similarities = clf.decision_function(x)\n sorted_ix = np.argsort(-similarities)\n # svm.LinearSVC in scikit-learn is non-deterministic.\n # if a text is the same as a query, there is no guarantee\n # the query will be in the first index.\n # this performs a simple swap, this works because anything\n # left of 
the 0 should be equivalent.\n zero_index = np.where(sorted_ix == 0)[0][0]\n if zero_index != 0:\n sorted_ix[0], sorted_ix[zero_index] = sorted_ix[zero_index], sorted_ix[0]\n denominator = np.max(similarities) - np.min(similarities) + 1e-6\n normalized_similarities = (similarities - np.min(similarities)) / denominator\n top_k_results = []\n for row in sorted_ix[1 : self.k + 1]:\n if (\n self.relevancy_threshold is None\n or normalized_similarities[row] >= self.relevancy_threshold\n ):\n top_k_results.append(Document(page_content=self.texts[row - 1]))\n return top_k_results\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/svm.html"} +{"id": "7c5c7d771277-0", "text": "Source code for langchain.retrievers.pinecone_hybrid_search\n\"\"\"Taken from: https://docs.pinecone.io/docs/hybrid-search\"\"\"\nimport hashlib\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import BaseModel, Extra, root_validator\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\ndef hash_text(text: str) -> str:\n \"\"\"Hash a text using SHA256.\n Args:\n text: Text to hash.\n Returns:\n Hashed text.\n \"\"\"\n return str(hashlib.sha256(text.encode(\"utf-8\")).hexdigest())\ndef create_index(\n contexts: List[str],\n index: Any,\n embeddings: Embeddings,\n sparse_encoder: Any,\n ids: Optional[List[str]] = None,\n metadatas: Optional[List[dict]] = None,\n) -> None:\n \"\"\"\n Create a Pinecone index from a list of contexts.\n Modifies the index argument in-place.\n Args:\n contexts: List of contexts to embed.\n index: Pinecone index to use.\n embeddings: Embeddings model to use.\n sparse_encoder: Sparse encoder to use.\n ids: List of ids to use for the documents.\n metadatas: List of metadata to use for the documents.\n \"\"\"\n batch_size = 32\n _iterator = range(0, len(contexts), batch_size)\n 
try:\n from tqdm.auto import tqdm\n _iterator = tqdm(_iterator)\n except ImportError:\n pass\n if ids is None:\n # create unique ids using hash of the text\n ids = [hash_text(context) for context in contexts]\n for i in _iterator:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"} +{"id": "7c5c7d771277-1", "text": "for i in _iterator:\n # find end of batch\n i_end = min(i + batch_size, len(contexts))\n # extract batch\n context_batch = contexts[i:i_end]\n batch_ids = ids[i:i_end]\n metadata_batch = (\n metadatas[i:i_end] if metadatas else [{} for _ in context_batch]\n )\n # add context passages as metadata\n meta = [\n {\"context\": context, **metadata}\n for context, metadata in zip(context_batch, metadata_batch)\n ]\n # create dense vectors\n dense_embeds = embeddings.embed_documents(context_batch)\n # create sparse vectors\n sparse_embeds = sparse_encoder.encode_documents(context_batch)\n for s in sparse_embeds:\n s[\"values\"] = [float(s1) for s1 in s[\"values\"]]\n vectors = []\n # loop through the data and create dictionaries for upserts\n for doc_id, sparse, dense, metadata in zip(\n batch_ids, sparse_embeds, dense_embeds, meta\n ):\n vectors.append(\n {\n \"id\": doc_id,\n \"sparse_values\": sparse,\n \"values\": dense,\n \"metadata\": metadata,\n }\n )\n # upload the documents to the new hybrid index\n index.upsert(vectors)\n[docs]class PineconeHybridSearchRetriever(BaseRetriever, BaseModel):\n embeddings: Embeddings\n sparse_encoder: Any\n index: Any\n top_k: int = 4\n alpha: float = 0.5\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def add_texts(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"} +{"id": "7c5c7d771277-2", "text": "arbitrary_types_allowed = True\n[docs] def add_texts(\n self,\n texts: List[str],\n ids: Optional[List[str]] = 
None,\n metadatas: Optional[List[dict]] = None,\n ) -> None:\n create_index(\n texts,\n self.index,\n self.embeddings,\n self.sparse_encoder,\n ids=ids,\n metadatas=metadatas,\n )\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n try:\n from pinecone_text.hybrid import hybrid_convex_scale # noqa:F401\n from pinecone_text.sparse.base_sparse_encoder import (\n BaseSparseEncoder, # noqa:F401\n )\n except ImportError:\n raise ValueError(\n \"Could not import pinecone_text python package. \"\n \"Please install it with `pip install pinecone_text`.\"\n )\n return values\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n from pinecone_text.hybrid import hybrid_convex_scale\n sparse_vec = self.sparse_encoder.encode_queries(query)\n # convert the question into a dense vector\n dense_vec = self.embeddings.embed_query(query)\n # scale alpha with hybrid_scale\n dense_vec, sparse_vec = hybrid_convex_scale(dense_vec, sparse_vec, self.alpha)\n sparse_vec[\"values\"] = [float(s1) for s1 in sparse_vec[\"values\"]]\n # query pinecone with the query parameters\n result = self.index.query(\n vector=dense_vec,\n sparse_vector=sparse_vec,\n top_k=self.top_k,\n include_metadata=True,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"} +{"id": "7c5c7d771277-3", "text": "top_k=self.top_k,\n include_metadata=True,\n )\n final_result = []\n for res in result[\"matches\"]:\n context = res[\"metadata\"].pop(\"context\")\n final_result.append(\n Document(page_content=context, metadata=res[\"metadata\"])\n )\n # return search results as json\n return final_result\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pinecone_hybrid_search.html"} +{"id": 
"0db63f20f3f0-0", "text": "Source code for langchain.retrievers.merger_retriever\nfrom typing import List\nfrom langchain.schema import BaseRetriever, Document\n[docs]class MergerRetriever(BaseRetriever):\n \"\"\"\n This class merges the results of multiple retrievers.\n Args:\n retrievers: A list of retrievers to merge.\n \"\"\"\n def __init__(\n self,\n retrievers: List[BaseRetriever],\n ):\n \"\"\"\n Initialize the MergerRetriever class.\n Args:\n retrievers: A list of retrievers to merge.\n \"\"\"\n self.retrievers = retrievers\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"\n Get the relevant documents for a given query.\n Args:\n query: The query to search for.\n Returns:\n A list of relevant documents.\n \"\"\"\n # Merge the results of the retrievers.\n merged_documents = self.merge_documents(query)\n return merged_documents\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n \"\"\"\n Asynchronously get the relevant documents for a given query.\n Args:\n query: The query to search for.\n Returns:\n A list of relevant documents.\n \"\"\"\n # Merge the results of the retrievers.\n merged_documents = await self.amerge_documents(query)\n return merged_documents\n[docs] def merge_documents(self, query: str) -> List[Document]:\n \"\"\"\n Merge the results of the retrievers.\n Args:\n query: The query to search for.\n Returns:\n A list of merged documents.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/merger_retriever.html"} +{"id": "0db63f20f3f0-1", "text": "Returns:\n A list of merged documents.\n \"\"\"\n # Get the results of all retrievers.\n retriever_docs = [\n retriever.get_relevant_documents(query) for retriever in self.retrievers\n ]\n # Merge the results of the retrievers.\n merged_documents = []\n max_docs = max(len(docs) for docs in retriever_docs)\n for i in range(max_docs):\n for retriever, doc in zip(self.retrievers, retriever_docs):\n if i 
< len(doc):\n merged_documents.append(doc[i])\n return merged_documents\n[docs] async def amerge_documents(self, query: str) -> List[Document]:\n \"\"\"\n Asynchronously merge the results of the retrievers.\n Args:\n query: The query to search for.\n Returns:\n A list of merged documents.\n \"\"\"\n # Get the results of all retrievers.\n retriever_docs = [\n await retriever.aget_relevant_documents(query)\n for retriever in self.retrievers\n ]\n # Merge the results of the retrievers.\n merged_documents = []\n max_docs = max(len(docs) for docs in retriever_docs)\n for i in range(max_docs):\n for retriever, doc in zip(self.retrievers, retriever_docs):\n if i < len(doc):\n merged_documents.append(doc[i])\n return merged_documents", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/merger_retriever.html"} +{"id": "592182c4a717-0", "text": "Source code for langchain.retrievers.wikipedia\nfrom typing import List\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaRetriever(BaseRetriever, WikipediaAPIWrapper):\n \"\"\"\n It is effectively a wrapper for WikipediaAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all WikipediaAPIWrapper arguments without any change.\n \"\"\"\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n return self.load(query=query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/wikipedia.html"} +{"id": "5b5249fc3718-0", "text": "Source code for langchain.retrievers.metal\nfrom typing import Any, List, Optional\nfrom langchain.schema import BaseRetriever, Document\n[docs]class MetalRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Metal API.\"\"\"\n def __init__(self, client: Any, params: Optional[dict] = None):\n from metal_sdk.metal import 
Metal\n if not isinstance(client, Metal):\n raise ValueError(\n \"Got unexpected client, should be of type metal_sdk.metal.Metal. \"\n f\"Instead, got {type(client)}\"\n )\n self.client: Metal = client\n self.params = params or {}\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n results = self.client.search({\"text\": query}, **self.params)\n final_results = []\n for r in results[\"data\"]:\n metadata = {k: v for k, v in r.items() if k != \"text\"}\n final_results.append(Document(page_content=r[\"text\"], metadata=metadata))\n return final_results\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/metal.html"} +{"id": "e9064c86de22-0", "text": "Source code for langchain.retrievers.pupmed\nfrom typing import List\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.utilities.pupmed import PubMedAPIWrapper\n[docs]class PubMedRetriever(BaseRetriever, PubMedAPIWrapper):\n \"\"\"\n It is effectively a wrapper for PubMedAPIWrapper.\n It wraps load() to get_relevant_documents().\n It uses all PubMedAPIWrapper arguments without any change.\n \"\"\"\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n return self.load_docs(query=query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/pupmed.html"} +{"id": "3b6cef46ca82-0", "text": "Source code for langchain.retrievers.remote_retriever\nfrom typing import List, Optional\nimport aiohttp\nimport requests\nfrom pydantic import BaseModel\nfrom langchain.schema import BaseRetriever, Document\n[docs]class RemoteLangChainRetriever(BaseRetriever, BaseModel):\n url: str\n headers: Optional[dict] = None\n input_key: str = \"message\"\n response_key: str = \"response\"\n page_content_key: str 
= \"page_content\"\n metadata_key: str = \"metadata\"\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n response = requests.post(\n self.url, json={self.input_key: query}, headers=self.headers\n )\n result = response.json()\n return [\n Document(\n page_content=r[self.page_content_key], metadata=r[self.metadata_key]\n )\n for r in result[self.response_key]\n ]\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n async with aiohttp.ClientSession() as session:\n async with session.request(\n \"POST\", self.url, headers=self.headers, json={self.input_key: query}\n ) as response:\n result = await response.json()\n return [\n Document(\n page_content=r[self.page_content_key], metadata=r[self.metadata_key]\n )\n for r in result[self.response_key]\n ]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/remote_retriever.html"} +{"id": "afce1c8babc6-0", "text": "Source code for langchain.retrievers.zilliz\n\"\"\"Zilliz Retriever\"\"\"\nimport warnings\nfrom typing import Any, Dict, List, Optional\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores.zilliz import Zilliz\n# TODO: Update to ZillizClient + Hybrid Search when available\n[docs]class ZillizRetriever(BaseRetriever):\n \"\"\"Retriever that uses the Zilliz API.\"\"\"\n def __init__(\n self,\n embedding_function: Embeddings,\n collection_name: str = \"LangChainCollection\",\n connection_args: Optional[Dict[str, Any]] = None,\n consistency_level: str = \"Session\",\n search_params: Optional[dict] = None,\n ):\n self.store = Zilliz(\n embedding_function,\n collection_name,\n connection_args,\n consistency_level,\n )\n self.retriever = self.store.as_retriever(search_kwargs={\"param\": search_params})\n[docs] def add_texts(\n self, texts: List[str], metadatas: Optional[List[dict]] = None\n ) -> None:\n \"\"\"Add text to the Zilliz store\n Args:\n texts 
(List[str]): The text\n metadatas (List[dict]): Metadata dicts, must line up with existing store\n \"\"\"\n self.store.add_texts(texts, metadatas)\n[docs] def get_relevant_documents(self, query: str) -> List[Document]:\n return self.retriever.get_relevant_documents(query)\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/zilliz.html"} +{"id": "afce1c8babc6-1", "text": "raise NotImplementedError\ndef ZillizRetreiver(*args: Any, **kwargs: Any) -> ZillizRetriever:\n \"\"\"\n Deprecated ZillizRetreiver. Please use ZillizRetriever ('i' before 'e') instead.\n Args:\n *args:\n **kwargs:\n Returns:\n ZillizRetriever\n \"\"\"\n warnings.warn(\n \"ZillizRetreiver will be deprecated in the future. \"\n \"Please use ZillizRetriever ('i' before 'e') instead.\",\n DeprecationWarning,\n )\n return ZillizRetriever(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/zilliz.html"} +{"id": "b0635f6496b6-0", "text": "Source code for langchain.retrievers.self_query.base\n\"\"\"Retriever that generates and executes structured queries over its own data source.\"\"\"\nfrom typing import Any, Dict, List, Optional, Type, cast\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain import LLMChain\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import Callbacks\nfrom langchain.chains.query_constructor.base import load_query_constructor_chain\nfrom langchain.chains.query_constructor.ir import StructuredQuery, Visitor\nfrom langchain.chains.query_constructor.schema import AttributeInfo\nfrom langchain.retrievers.self_query.chroma import ChromaTranslator\nfrom langchain.retrievers.self_query.myscale import MyScaleTranslator\nfrom langchain.retrievers.self_query.pinecone import PineconeTranslator\nfrom langchain.retrievers.self_query.qdrant import 
QdrantTranslator\nfrom langchain.retrievers.self_query.weaviate import WeaviateTranslator\nfrom langchain.schema import BaseRetriever, Document\nfrom langchain.vectorstores import (\n Chroma,\n MyScale,\n Pinecone,\n Qdrant,\n VectorStore,\n Weaviate,\n)\ndef _get_builtin_translator(vectorstore: VectorStore) -> Visitor:\n \"\"\"Get the translator class corresponding to the vector store class.\"\"\"\n vectorstore_cls = vectorstore.__class__\n BUILTIN_TRANSLATORS: Dict[Type[VectorStore], Type[Visitor]] = {\n Pinecone: PineconeTranslator,\n Chroma: ChromaTranslator,\n Weaviate: WeaviateTranslator,\n Qdrant: QdrantTranslator,\n MyScale: MyScaleTranslator,\n }\n if vectorstore_cls not in BUILTIN_TRANSLATORS:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"} +{"id": "b0635f6496b6-1", "text": "if vectorstore_cls not in BUILTIN_TRANSLATORS:\n raise ValueError(\n f\"Self query retriever with Vector Store type {vectorstore_cls}\"\n f\" not supported.\"\n )\n if isinstance(vectorstore, Qdrant):\n return QdrantTranslator(metadata_key=vectorstore.metadata_payload_key)\n elif isinstance(vectorstore, MyScale):\n return MyScaleTranslator(metadata_key=vectorstore.metadata_column)\n return BUILTIN_TRANSLATORS[vectorstore_cls]()\n[docs]class SelfQueryRetriever(BaseRetriever, BaseModel):\n \"\"\"Retriever that wraps around a vector store and uses an LLM to generate\n the vector store queries.\"\"\"\n vectorstore: VectorStore\n \"\"\"The underlying vector store from which documents will be retrieved.\"\"\"\n llm_chain: LLMChain\n \"\"\"The LLMChain for generating the vector store queries.\"\"\"\n search_type: str = \"similarity\"\n \"\"\"The search type to perform on the vector store.\"\"\"\n search_kwargs: dict = Field(default_factory=dict)\n \"\"\"Keyword arguments to pass in to the vector store search.\"\"\"\n structured_query_translator: Visitor\n \"\"\"Translator for turning internal query language into 
vectorstore search params.\"\"\"\n verbose: bool = False\n \"\"\"Use original query instead of the revised new query from LLM\"\"\"\n use_original_query: bool = False\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def validate_translator(cls, values: Dict) -> Dict:\n \"\"\"Validate translator.\"\"\"\n if \"structured_query_translator\" not in values:\n values[\"structured_query_translator\"] = _get_builtin_translator(\n values[\"vectorstore\"]\n )\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"} +{"id": "b0635f6496b6-2", "text": "values[\"vectorstore\"]\n )\n return values\n[docs] def get_relevant_documents(\n self, query: str, callbacks: Callbacks = None\n ) -> List[Document]:\n \"\"\"Get documents relevant for a query.\n Args:\n query: string to find relevant documents for\n Returns:\n List of relevant documents\n \"\"\"\n inputs = self.llm_chain.prep_inputs({\"query\": query})\n structured_query = cast(\n StructuredQuery,\n self.llm_chain.predict_and_parse(callbacks=callbacks, **inputs),\n )\n if self.verbose:\n print(structured_query)\n new_query, new_kwargs = self.structured_query_translator.visit_structured_query(\n structured_query\n )\n if structured_query.limit is not None:\n new_kwargs[\"k\"] = structured_query.limit\n if self.use_original_query:\n new_query = query\n search_kwargs = {**self.search_kwargs, **new_kwargs}\n docs = self.vectorstore.search(new_query, self.search_type, **search_kwargs)\n return docs\n[docs] async def aget_relevant_documents(self, query: str) -> List[Document]:\n raise NotImplementedError\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n vectorstore: VectorStore,\n document_contents: str,\n metadata_field_info: List[AttributeInfo],\n structured_query_translator: Optional[Visitor] = None,\n chain_kwargs: Optional[Dict] = None,\n enable_limit: bool 
= False,\n use_original_query: bool = False,\n **kwargs: Any,\n ) -> \"SelfQueryRetriever\":\n if structured_query_translator is None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"} +{"id": "b0635f6496b6-3", "text": "if structured_query_translator is None:\n structured_query_translator = _get_builtin_translator(vectorstore)\n chain_kwargs = chain_kwargs or {}\n if \"allowed_comparators\" not in chain_kwargs:\n chain_kwargs[\n \"allowed_comparators\"\n ] = structured_query_translator.allowed_comparators\n if \"allowed_operators\" not in chain_kwargs:\n chain_kwargs[\n \"allowed_operators\"\n ] = structured_query_translator.allowed_operators\n llm_chain = load_query_constructor_chain(\n llm,\n document_contents,\n metadata_field_info,\n enable_limit=enable_limit,\n **chain_kwargs,\n )\n return cls(\n llm_chain=llm_chain,\n vectorstore=vectorstore,\n use_original_query=use_original_query,\n structured_query_translator=structured_query_translator,\n **kwargs,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/self_query/base.html"} +{"id": "6632a8be1a98-0", "text": "Source code for langchain.retrievers.document_compressors.base\n\"\"\"Interface for retrieved document compressors.\"\"\"\nfrom abc import ABC, abstractmethod\nfrom typing import List, Sequence, Union\nfrom pydantic import BaseModel\nfrom langchain.schema import BaseDocumentTransformer, Document\nclass BaseDocumentCompressor(BaseModel, ABC):\n \"\"\"Base abstraction interface for document compression.\"\"\"\n @abstractmethod\n def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress retrieved documents given the query context.\"\"\"\n @abstractmethod\n async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress retrieved documents given the query context.\"\"\"\n[docs]class 
DocumentCompressorPipeline(BaseDocumentCompressor):\n \"\"\"Document compressor that uses a pipeline of transformers.\"\"\"\n transformers: List[Union[BaseDocumentTransformer, BaseDocumentCompressor]]\n \"\"\"List of document filters that are chained together and run in sequence.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Transform a list of documents.\"\"\"\n for _transformer in self.transformers:\n if isinstance(_transformer, BaseDocumentCompressor):\n documents = _transformer.compress_documents(documents, query)\n elif isinstance(_transformer, BaseDocumentTransformer):\n documents = _transformer.transform_documents(documents)\n else:\n raise ValueError(f\"Got unexpected transformer type: {_transformer}\")\n return documents\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/base.html"} +{"id": "6632a8be1a98-1", "text": "self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress retrieved documents given the query context.\"\"\"\n for _transformer in self.transformers:\n if isinstance(_transformer, BaseDocumentCompressor):\n documents = await _transformer.acompress_documents(documents, query)\n elif isinstance(_transformer, BaseDocumentTransformer):\n documents = await _transformer.atransform_documents(documents)\n else:\n raise ValueError(f\"Got unexpected transformer type: {_transformer}\")\n return documents", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/base.html"} +{"id": "5e20c793ea8c-0", "text": "Source code for langchain.retrievers.document_compressors.embeddings_filter\n\"\"\"Document compressor that uses embeddings to drop documents 
unrelated to the query.\"\"\"\nfrom typing import Callable, Dict, Optional, Sequence\nimport numpy as np\nfrom pydantic import root_validator\nfrom langchain.document_transformers import (\n _get_embeddings_from_stateful_docs,\n get_stateful_documents,\n)\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.math_utils import cosine_similarity\nfrom langchain.retrievers.document_compressors.base import (\n BaseDocumentCompressor,\n)\nfrom langchain.schema import Document\n[docs]class EmbeddingsFilter(BaseDocumentCompressor):\n embeddings: Embeddings\n \"\"\"Embeddings to use for embedding document contents and queries.\"\"\"\n similarity_fn: Callable = cosine_similarity\n \"\"\"Similarity function for comparing documents. Function expected to take as input\n two matrices (List[List[float]]) and return a matrix of scores where higher values\n indicate greater similarity.\"\"\"\n k: Optional[int] = 20\n \"\"\"The number of relevant documents to return. Can be set to None, in which case\n `similarity_threshold` must be specified. Defaults to 20.\"\"\"\n similarity_threshold: Optional[float]\n \"\"\"Threshold for determining when two documents are similar enough\n to be considered redundant. 
Defaults to None, must be specified if `k` is set\n to None.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @root_validator()\n def validate_params(cls, values: Dict) -> Dict:\n \"\"\"Validate similarity parameters.\"\"\"\n if values[\"k\"] is None and values[\"similarity_threshold\"] is None:\n raise ValueError(\"Must specify one of `k` or `similarity_threshold`.\")\n return values", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/embeddings_filter.html"} +{"id": "5e20c793ea8c-1", "text": "return values\n[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Filter documents based on similarity of their embeddings to the query.\"\"\"\n stateful_documents = get_stateful_documents(documents)\n embedded_documents = _get_embeddings_from_stateful_docs(\n self.embeddings, stateful_documents\n )\n embedded_query = self.embeddings.embed_query(query)\n similarity = self.similarity_fn([embedded_query], embedded_documents)[0]\n included_idxs = np.arange(len(embedded_documents))\n if self.k is not None:\n included_idxs = np.argsort(similarity)[::-1][: self.k]\n if self.similarity_threshold is not None:\n similar_enough = np.where(\n similarity[included_idxs] > self.similarity_threshold\n )\n included_idxs = included_idxs[similar_enough]\n return [stateful_documents[i] for i in included_idxs]\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/embeddings_filter.html"} +{"id": "7cec6d49b566-0", "text": "Source code for langchain.retrievers.document_compressors.chain_filter\n\"\"\"Filter that uses an LLM to drop documents that aren't relevant to the query.\"\"\"\nfrom 
typing import Any, Callable, Dict, Optional, Sequence\nfrom langchain import BasePromptTemplate, LLMChain, PromptTemplate\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.output_parsers.boolean import BooleanOutputParser\nfrom langchain.retrievers.document_compressors.base import BaseDocumentCompressor\nfrom langchain.retrievers.document_compressors.chain_filter_prompt import (\n prompt_template,\n)\nfrom langchain.schema import Document\ndef _get_default_chain_prompt() -> PromptTemplate:\n return PromptTemplate(\n template=prompt_template,\n input_variables=[\"question\", \"context\"],\n output_parser=BooleanOutputParser(),\n )\ndef default_get_input(query: str, doc: Document) -> Dict[str, Any]:\n \"\"\"Return the compression chain input.\"\"\"\n return {\"question\": query, \"context\": doc.page_content}\n[docs]class LLMChainFilter(BaseDocumentCompressor):\n \"\"\"Filter that drops documents that aren't relevant to the query.\"\"\"\n llm_chain: LLMChain\n \"\"\"LLM wrapper to use for filtering documents. 
\n The chain prompt is expected to have a BooleanOutputParser.\"\"\"\n get_input: Callable[[str, Document], dict] = default_get_input\n \"\"\"Callable for constructing the chain input from the query and a Document.\"\"\"\n[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Filter down documents based on their relevance to the query.\"\"\"\n filtered_docs = []\n for doc in documents:\n _input = self.get_input(query, doc)\n include_doc = self.llm_chain.predict_and_parse(**_input)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_filter.html"} +{"id": "7cec6d49b566-1", "text": "include_doc = self.llm_chain.predict_and_parse(**_input)\n if include_doc:\n filtered_docs.append(doc)\n return filtered_docs\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Filter down documents.\"\"\"\n raise NotImplementedError\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[BasePromptTemplate] = None,\n **kwargs: Any\n ) -> \"LLMChainFilter\":\n _prompt = prompt if prompt is not None else _get_default_chain_prompt()\n llm_chain = LLMChain(llm=llm, prompt=_prompt)\n return cls(llm_chain=llm_chain, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_filter.html"} +{"id": "2d403d3f5892-0", "text": "Source code for langchain.retrievers.document_compressors.chain_extract\n\"\"\"DocumentFilter that uses an LLM chain to extract the relevant parts of documents.\"\"\"\nfrom __future__ import annotations\nimport asyncio\nfrom typing import Any, Callable, Dict, Optional, Sequence\nfrom langchain import LLMChain, PromptTemplate\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.retrievers.document_compressors.base import BaseDocumentCompressor\nfrom 
langchain.retrievers.document_compressors.chain_extract_prompt import (\n prompt_template,\n)\nfrom langchain.schema import BaseOutputParser, Document\ndef default_get_input(query: str, doc: Document) -> Dict[str, Any]:\n \"\"\"Return the compression chain input.\"\"\"\n return {\"question\": query, \"context\": doc.page_content}\nclass NoOutputParser(BaseOutputParser[str]):\n \"\"\"Parse outputs that could return a null string of some sort.\"\"\"\n no_output_str: str = \"NO_OUTPUT\"\n def parse(self, text: str) -> str:\n cleaned_text = text.strip()\n if cleaned_text == self.no_output_str:\n return \"\"\n return cleaned_text\ndef _get_default_chain_prompt() -> PromptTemplate:\n output_parser = NoOutputParser()\n template = prompt_template.format(no_output_str=output_parser.no_output_str)\n return PromptTemplate(\n template=template,\n input_variables=[\"question\", \"context\"],\n output_parser=output_parser,\n )\n[docs]class LLMChainExtractor(BaseDocumentCompressor):\n llm_chain: LLMChain\n \"\"\"LLM wrapper to use for compressing documents.\"\"\"\n get_input: Callable[[str, Document], dict] = default_get_input\n \"\"\"Callable for constructing the chain input from the query and a Document.\"\"\"\n[docs] def compress_documents(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"} +{"id": "2d403d3f5892-1", "text": "[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress page content of raw documents.\"\"\"\n compressed_docs = []\n for doc in documents:\n _input = self.get_input(query, doc)\n output = self.llm_chain.predict_and_parse(**_input)\n if len(output) == 0:\n continue\n compressed_docs.append(Document(page_content=output, metadata=doc.metadata))\n return compressed_docs\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n \"\"\"Compress page content of 
raw documents asynchronously.\"\"\"\n outputs = await asyncio.gather(\n *[\n self.llm_chain.apredict_and_parse(**self.get_input(query, doc))\n for doc in documents\n ]\n )\n compressed_docs = []\n for i, doc in enumerate(documents):\n if len(outputs[i]) == 0:\n continue\n compressed_docs.append(\n Document(page_content=outputs[i], metadata=doc.metadata)\n )\n return compressed_docs\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n prompt: Optional[PromptTemplate] = None,\n get_input: Optional[Callable[[str, Document], str]] = None,\n llm_chain_kwargs: Optional[dict] = None,\n ) -> LLMChainExtractor:\n \"\"\"Initialize from LLM.\"\"\"\n _prompt = prompt if prompt is not None else _get_default_chain_prompt()\n _get_input = get_input if get_input is not None else default_get_input", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"} +{"id": "2d403d3f5892-2", "text": "_get_input = get_input if get_input is not None else default_get_input\n llm_chain = LLMChain(llm=llm, prompt=_prompt, **(llm_chain_kwargs or {}))\n return cls(llm_chain=llm_chain, get_input=_get_input)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/chain_extract.html"} +{"id": "94154df0fdeb-0", "text": "Source code for langchain.retrievers.document_compressors.cohere_rerank\nfrom __future__ import annotations\nfrom typing import TYPE_CHECKING, Dict, Sequence\nfrom pydantic import Extra, root_validator\nfrom langchain.retrievers.document_compressors.base import BaseDocumentCompressor\nfrom langchain.schema import Document\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n from cohere import Client\nelse:\n # We do this to avoid pydantic annotation issues when actually instantiating\n # while keeping this import optional\n try:\n from cohere import Client\n except ImportError:\n pass\n[docs]class CohereRerank(BaseDocumentCompressor):\n 
client: Client\n top_n: int = 3\n model: str = \"rerank-english-v2.0\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n cohere_api_key = get_from_dict_or_env(\n values, \"cohere_api_key\", \"COHERE_API_KEY\"\n )\n try:\n import cohere\n values[\"client\"] = cohere.Client(cohere_api_key)\n except ImportError:\n raise ImportError(\n \"Could not import cohere python package. \"\n \"Please install it with `pip install cohere`.\"\n )\n return values\n[docs] def compress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n if len(documents) == 0: # to avoid empty api call\n return []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/cohere_rerank.html"} +{"id": "94154df0fdeb-1", "text": "return []\n doc_list = list(documents)\n _docs = [d.page_content for d in doc_list]\n results = self.client.rerank(\n model=self.model, query=query, documents=_docs, top_n=self.top_n\n )\n final_results = []\n for r in results:\n doc = doc_list[r.index]\n doc.metadata[\"relevance_score\"] = r.relevance_score\n final_results.append(doc)\n return final_results\n[docs] async def acompress_documents(\n self, documents: Sequence[Document], query: str\n ) -> Sequence[Document]:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/document_compressors/cohere_rerank.html"} +{"id": "85e271f02d53-0", "text": "Source code for langchain.output_parsers.rail_parser\nfrom __future__ import annotations\nfrom typing import Any, Callable, Dict, Optional\nfrom langchain.schema import BaseOutputParser\n[docs]class GuardrailsOutputParser(BaseOutputParser):\n guard: Any\n api: Optional[Callable]\n args: Any\n kwargs: 
Any\n @property\n def _type(self) -> str:\n return \"guardrails\"\n[docs] @classmethod\n def from_rail(\n cls,\n rail_file: str,\n num_reasks: int = 1,\n api: Optional[Callable] = None,\n *args: Any,\n **kwargs: Any,\n ) -> GuardrailsOutputParser:\n try:\n from guardrails import Guard\n except ImportError:\n raise ValueError(\n \"guardrails-ai package not installed. \"\n \"Install it by running `pip install guardrails-ai`.\"\n )\n return cls(\n guard=Guard.from_rail(rail_file, num_reasks=num_reasks),\n api=api,\n args=args,\n kwargs=kwargs,\n )\n[docs] @classmethod\n def from_rail_string(\n cls,\n rail_str: str,\n num_reasks: int = 1,\n api: Optional[Callable] = None,\n *args: Any,\n **kwargs: Any,\n ) -> GuardrailsOutputParser:\n try:\n from guardrails import Guard\n except ImportError:\n raise ValueError(\n \"guardrails-ai package not installed. \"\n \"Install it by running `pip install guardrails-ai`.\"\n )\n return cls(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/rail_parser.html"} +{"id": "85e271f02d53-1", "text": ")\n return cls(\n guard=Guard.from_rail_string(rail_str, num_reasks=num_reasks),\n api=api,\n args=args,\n kwargs=kwargs,\n )\n[docs] def get_format_instructions(self) -> str:\n return self.guard.raw_prompt.format_instructions\n[docs] def parse(self, text: str) -> Dict:\n return self.guard.parse(text, llm_api=self.api, *self.args, **self.kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/rail_parser.html"} +{"id": "9aa9e029c385-0", "text": "Source code for langchain.output_parsers.datetime\nimport random\nfrom datetime import datetime, timedelta\nfrom typing import List\nfrom langchain.schema import BaseOutputParser, OutputParserException\nfrom langchain.utils import comma_list\ndef _generate_random_datetime_strings(\n pattern: str,\n n: int = 3,\n start_date: datetime = datetime(1, 1, 1),\n end_date: datetime = datetime.now() + timedelta(days=3650),\n) -> 
List[str]:\n \"\"\"\n Generates n random datetime strings conforming to the\n given pattern within the specified date range.\n Pattern should be a string containing the desired format codes.\n start_date and end_date should be datetime objects representing\n the start and end of the date range.\n \"\"\"\n examples = []\n delta = end_date - start_date\n for i in range(n):\n random_delta = random.uniform(0, delta.total_seconds())\n dt = start_date + timedelta(seconds=random_delta)\n date_string = dt.strftime(pattern)\n examples.append(date_string)\n return examples\n[docs]class DatetimeOutputParser(BaseOutputParser[datetime]):\n format: str = \"%Y-%m-%dT%H:%M:%S.%fZ\"\n[docs] def get_format_instructions(self) -> str:\n examples = comma_list(_generate_random_datetime_strings(self.format))\n return f\"\"\"Write a datetime string that matches the \n following pattern: \"{self.format}\". Examples: {examples}\"\"\"\n[docs] def parse(self, response: str) -> datetime:\n try:\n return datetime.strptime(response.strip(), self.format)\n except ValueError as e:\n raise OutputParserException(\n f\"Could not parse datetime string: {response}\"\n ) from e\n @property", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/datetime.html"} +{"id": "9aa9e029c385-1", "text": ") from e\n @property\n def _type(self) -> str:\n return \"datetime\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/datetime.html"} +{"id": "6fd7632133a0-0", "text": "Source code for langchain.output_parsers.structured\nfrom __future__ import annotations\nfrom typing import Any, List\nfrom pydantic import BaseModel\nfrom langchain.output_parsers.format_instructions import STRUCTURED_FORMAT_INSTRUCTIONS\nfrom langchain.output_parsers.json import parse_and_check_json_markdown\nfrom langchain.schema import BaseOutputParser\nline_template = '\\t\"{name}\": {type} // {description}'\n[docs]class ResponseSchema(BaseModel):\n name: str\n 
description: str\n type: str = \"string\"\ndef _get_sub_string(schema: ResponseSchema) -> str:\n return line_template.format(\n name=schema.name, description=schema.description, type=schema.type\n )\n[docs]class StructuredOutputParser(BaseOutputParser):\n response_schemas: List[ResponseSchema]\n[docs] @classmethod\n def from_response_schemas(\n cls, response_schemas: List[ResponseSchema]\n ) -> StructuredOutputParser:\n return cls(response_schemas=response_schemas)\n[docs] def get_format_instructions(self) -> str:\n schema_str = \"\\n\".join(\n [_get_sub_string(schema) for schema in self.response_schemas]\n )\n return STRUCTURED_FORMAT_INSTRUCTIONS.format(format=schema_str)\n[docs] def parse(self, text: str) -> Any:\n expected_keys = [rs.name for rs in self.response_schemas]\n return parse_and_check_json_markdown(text, expected_keys)\n @property\n def _type(self) -> str:\n return \"structured\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/structured.html"} +{"id": "a2f0ed6d50bd-0", "text": "Source code for langchain.output_parsers.regex_dict\nfrom __future__ import annotations\nimport re\nfrom typing import Dict, Optional\nfrom langchain.schema import BaseOutputParser\n[docs]class RegexDictParser(BaseOutputParser):\n \"\"\"Class to parse the output into a dictionary.\"\"\"\n regex_pattern: str = r\"{}:\\s?([^.'\\n']*)\\.?\" # : :meta private:\n output_key_to_format: Dict[str, str]\n no_update_value: Optional[str] = None\n @property\n def _type(self) -> str:\n \"\"\"Return the type key.\"\"\"\n return \"regex_dict_parser\"\n[docs] def parse(self, text: str) -> Dict[str, str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n result = {}\n for output_key, expected_format in self.output_key_to_format.items():\n specific_regex = self.regex_pattern.format(re.escape(expected_format))\n matches = re.findall(specific_regex, text)\n if not matches:\n raise ValueError(\n f\"No match found for output key: {output_key} with expected 
format \\\n {expected_format} on text {text}\"\n )\n elif len(matches) > 1:\n raise ValueError(\n f\"Multiple matches found for output key: {output_key} with \\\n expected format {expected_format} on text {text}\"\n )\n elif (\n self.no_update_value is not None and matches[0] == self.no_update_value\n ):\n continue\n else:\n result[output_key] = matches[0]\n return result", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/regex_dict.html"} +{"id": "e63d6ac646f7-0", "text": "Source code for langchain.output_parsers.list\nfrom __future__ import annotations\nfrom abc import abstractmethod\nfrom typing import List\nfrom langchain.schema import BaseOutputParser\n[docs]class ListOutputParser(BaseOutputParser):\n \"\"\"Class to parse the output of an LLM call to a list.\"\"\"\n @property\n def _type(self) -> str:\n return \"list\"\n[docs] @abstractmethod\n def parse(self, text: str) -> List[str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n[docs]class CommaSeparatedListOutputParser(ListOutputParser):\n \"\"\"Parse out comma separated lists.\"\"\"\n[docs] def get_format_instructions(self) -> str:\n return (\n \"Your response should be a list of comma separated values, \"\n \"eg: `foo, bar, baz`\"\n )\n[docs] def parse(self, text: str) -> List[str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n return text.strip().split(\", \")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/list.html"} +{"id": "0a3dec6de5ed-0", "text": "Source code for langchain.output_parsers.boolean\nfrom langchain.schema import BaseOutputParser\n[docs]class BooleanOutputParser(BaseOutputParser[bool]):\n true_val: str = \"YES\"\n false_val: str = \"NO\"\n[docs] def parse(self, text: str) -> bool:\n \"\"\"Parse the output of an LLM call to a boolean.\n Args:\n text: output of language model\n Returns:\n boolean\n \"\"\"\n cleaned_text = text.strip()\n if cleaned_text.upper() not in (self.true_val.upper(), 
self.false_val.upper()):\n raise ValueError(\n f\"BooleanOutputParser expected output value to either be \"\n f\"{self.true_val} or {self.false_val}. Received {cleaned_text}.\"\n )\n return cleaned_text.upper() == self.true_val.upper()\n @property\n def _type(self) -> str:\n \"\"\"Snake-case string identifier for output parser type.\"\"\"\n return \"boolean_output_parser\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/boolean.html"} +{"id": "735750354b8d-0", "text": "Source code for langchain.output_parsers.combining\nfrom __future__ import annotations\nfrom typing import Any, Dict, List\nfrom pydantic import root_validator\nfrom langchain.schema import BaseOutputParser\n[docs]class CombiningOutputParser(BaseOutputParser):\n \"\"\"Class to combine multiple output parsers into one.\"\"\"\n parsers: List[BaseOutputParser]\n @root_validator()\n def validate_parsers(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Validate the parsers.\"\"\"\n parsers = values[\"parsers\"]\n if len(parsers) < 2:\n raise ValueError(\"Must have at least two parsers\")\n for parser in parsers:\n if parser._type == \"combining\":\n raise ValueError(\"Cannot nest combining parsers\")\n if parser._type == \"list\":\n raise ValueError(\"Cannot combine list parsers\")\n return values\n @property\n def _type(self) -> str:\n \"\"\"Return the type key.\"\"\"\n return \"combining\"\n[docs] def get_format_instructions(self) -> str:\n \"\"\"Instructions on how the LLM output should be formatted.\"\"\"\n initial = f\"For your first output: {self.parsers[0].get_format_instructions()}\"\n subsequent = \"\\n\".join(\n f\"Complete that output fully. 
Then produce another output, separated by two newline characters: {p.get_format_instructions()}\" # noqa: E501\n for p in self.parsers[1:]\n )\n return f\"{initial}\\n{subsequent}\"\n[docs] def parse(self, text: str) -> Dict[str, Any]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n texts = text.split(\"\\n\\n\")\n output = dict()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/combining.html"} +{"id": "735750354b8d-1", "text": "texts = text.split(\"\\n\\n\")\n output = dict()\n for txt, parser in zip(texts, self.parsers):\n output.update(parser.parse(txt.strip()))\n return output", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/combining.html"} +{"id": "15436f06af05-0", "text": "Source code for langchain.output_parsers.regex\nfrom __future__ import annotations\nimport re\nfrom typing import Dict, List, Optional\nfrom langchain.schema import BaseOutputParser\n[docs]class RegexParser(BaseOutputParser):\n \"\"\"Class to parse the output into a dictionary.\"\"\"\n regex: str\n output_keys: List[str]\n default_output_key: Optional[str] = None\n @property\n def _type(self) -> str:\n \"\"\"Return the type key.\"\"\"\n return \"regex_parser\"\n[docs] def parse(self, text: str) -> Dict[str, str]:\n \"\"\"Parse the output of an LLM call.\"\"\"\n match = re.search(self.regex, text)\n if match:\n return {key: match.group(i + 1) for i, key in enumerate(self.output_keys)}\n else:\n if self.default_output_key is None:\n raise ValueError(f\"Could not parse output: {text}\")\n else:\n return {\n key: text if key == self.default_output_key else \"\"\n for key in self.output_keys\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/regex.html"} +{"id": "0cd2294524a6-0", "text": "Source code for langchain.output_parsers.pydantic\nimport json\nimport re\nfrom typing import Type, TypeVar\nfrom pydantic import BaseModel, ValidationError\nfrom 
langchain.output_parsers.format_instructions import PYDANTIC_FORMAT_INSTRUCTIONS\nfrom langchain.schema import BaseOutputParser, OutputParserException\nT = TypeVar(\"T\", bound=BaseModel)\n[docs]class PydanticOutputParser(BaseOutputParser[T]):\n pydantic_object: Type[T]\n[docs] def parse(self, text: str) -> T:\n try:\n # Greedy search for 1st json candidate.\n match = re.search(\n r\"\\{.*\\}\", text.strip(), re.MULTILINE | re.IGNORECASE | re.DOTALL\n )\n json_str = \"\"\n if match:\n json_str = match.group()\n json_object = json.loads(json_str, strict=False)\n return self.pydantic_object.parse_obj(json_object)\n except (json.JSONDecodeError, ValidationError) as e:\n name = self.pydantic_object.__name__\n msg = f\"Failed to parse {name} from completion {text}. Got: {e}\"\n raise OutputParserException(msg)\n[docs] def get_format_instructions(self) -> str:\n schema = self.pydantic_object.schema()\n # Remove extraneous fields.\n reduced_schema = schema\n if \"title\" in reduced_schema:\n del reduced_schema[\"title\"]\n if \"type\" in reduced_schema:\n del reduced_schema[\"type\"]\n # Ensure json in context is well-formed with double quotes.\n schema_str = json.dumps(reduced_schema)\n return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)\n @property\n def _type(self) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/pydantic.html"} +{"id": "0cd2294524a6-1", "text": "@property\n def _type(self) -> str:\n return \"pydantic\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/pydantic.html"} +{"id": "1041e86f03f8-0", "text": "Source code for langchain.output_parsers.retry\nfrom __future__ import annotations\nfrom typing import TypeVar\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n 
BaseOutputParser,\n OutputParserException,\n PromptValue,\n)\nNAIVE_COMPLETION_RETRY = \"\"\"Prompt:\n{prompt}\nCompletion:\n{completion}\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:\"\"\"\nNAIVE_COMPLETION_RETRY_WITH_ERROR = \"\"\"Prompt:\n{prompt}\nCompletion:\n{completion}\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:\"\"\"\nNAIVE_RETRY_PROMPT = PromptTemplate.from_template(NAIVE_COMPLETION_RETRY)\nNAIVE_RETRY_WITH_ERROR_PROMPT = PromptTemplate.from_template(\n NAIVE_COMPLETION_RETRY_WITH_ERROR\n)\nT = TypeVar(\"T\")\n[docs]class RetryOutputParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\n Does this by passing the original prompt and the completion to another\n LLM, and telling it the completion did not satisfy criteria in the prompt.\n \"\"\"\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_RETRY_PROMPT,\n ) -> RetryOutputParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"} +{"id": "1041e86f03f8-1", "text": "chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:\n try:\n parsed_completion = self.parser.parse(completion)\n except OutputParserException:\n new_completion = self.retry_chain.run(\n prompt=prompt_value.to_string(), completion=completion\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def parse(self, completion: str) -> T:\n raise NotImplementedError(\n \"This OutputParser can only be called by the `parse_with_prompt` method.\"\n )\n[docs] def get_format_instructions(self) -> str:\n return 
self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"retry\"\n[docs]class RetryWithErrorOutputParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\n Does this by passing the original prompt, the completion, AND the error\n that was raised to another language model and telling it that the completion\n did not work, and raised the given error. Differs from RetryOutputParser\n in that this implementation provides the error that was raised back to the\n LLM, which in theory should give it more information on how to fix it.\n \"\"\"\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_RETRY_WITH_ERROR_PROMPT,\n ) -> RetryWithErrorOutputParser[T]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"} +{"id": "1041e86f03f8-2", "text": ") -> RetryWithErrorOutputParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:\n try:\n parsed_completion = self.parser.parse(completion)\n except OutputParserException as e:\n new_completion = self.retry_chain.run(\n prompt=prompt_value.to_string(), completion=completion, error=repr(e)\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def parse(self, completion: str) -> T:\n raise NotImplementedError(\n \"This OutputParser can only be called by the `parse_with_prompt` method.\"\n )\n[docs] def get_format_instructions(self) -> str:\n return self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"retry_with_error\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/retry.html"} +{"id": "6e4f3642087d-0", "text": "Source code for 
langchain.output_parsers.enum\nfrom enum import Enum\nfrom typing import Any, Dict, List, Type\nfrom pydantic import root_validator\nfrom langchain.schema import BaseOutputParser, OutputParserException\n[docs]class EnumOutputParser(BaseOutputParser):\n enum: Type[Enum]\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n enum = values[\"enum\"]\n if not all(isinstance(e.value, str) for e in enum):\n raise ValueError(\"Enum values must be strings\")\n return values\n @property\n def _valid_values(self) -> List[str]:\n return [e.value for e in self.enum]\n[docs] def parse(self, response: str) -> Any:\n try:\n return self.enum(response.strip())\n except ValueError:\n raise OutputParserException(\n f\"Response '{response}' is not one of the \"\n f\"expected values: {self._valid_values}\"\n )\n[docs] def get_format_instructions(self) -> str:\n return f\"Select one of the following options: {', '.join(self._valid_values)}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/enum.html"} +{"id": "a3cfd29bd272-0", "text": "Source code for langchain.output_parsers.fix\nfrom __future__ import annotations\nfrom typing import TypeVar\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.chains.llm import LLMChain\nfrom langchain.output_parsers.prompts import NAIVE_FIX_PROMPT\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.schema import BaseOutputParser, OutputParserException\nT = TypeVar(\"T\")\n[docs]class OutputFixingParser(BaseOutputParser[T]):\n \"\"\"Wraps a parser and tries to fix parsing errors.\"\"\"\n parser: BaseOutputParser[T]\n retry_chain: LLMChain\n[docs] @classmethod\n def from_llm(\n cls,\n llm: BaseLanguageModel,\n parser: BaseOutputParser[T],\n prompt: BasePromptTemplate = NAIVE_FIX_PROMPT,\n ) -> OutputFixingParser[T]:\n chain = LLMChain(llm=llm, prompt=prompt)\n return cls(parser=parser, retry_chain=chain)\n[docs] def parse(self, completion: str) -> T:\n 
try:\n parsed_completion = self.parser.parse(completion)\n except OutputParserException as e:\n new_completion = self.retry_chain.run(\n instructions=self.parser.get_format_instructions(),\n completion=completion,\n error=repr(e),\n )\n parsed_completion = self.parser.parse(new_completion)\n return parsed_completion\n[docs] def get_format_instructions(self) -> str:\n return self.parser.get_format_instructions()\n @property\n def _type(self) -> str:\n return \"output_fixing\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/output_parsers/fix.html"} +{"id": "cd09695e78d9-0", "text": "Source code for langchain.prompts.few_shot\n\"\"\"Prompt template that contains few shot examples.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.prompts.base import (\n DEFAULT_FORMATTER_MAPPING,\n StringPromptTemplate,\n check_valid_template,\n)\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\n[docs]class FewShotPromptTemplate(StringPromptTemplate):\n \"\"\"Prompt template that contains few shot examples.\"\"\"\n @property\n def lc_serializable(self) -> bool:\n return False\n examples: Optional[List[dict]] = None\n \"\"\"Examples to format into the prompt.\n Either this or example_selector should be provided.\"\"\"\n example_selector: Optional[BaseExampleSelector] = None\n \"\"\"ExampleSelector to choose the examples to format into the prompt.\n Either this or examples should be provided.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"PromptTemplate used to format an individual example.\"\"\"\n suffix: str\n \"\"\"A prompt template string to put after the examples.\"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n example_separator: str = \"\\n\\n\"\n \"\"\"String separator used to join the prefix, the examples, and suffix.\"\"\"\n prefix: str = \"\"\n 
\"\"\"A prompt template string to put before the examples.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n @root_validator(pre=True)\n def check_examples_and_selector(cls, values: Dict) -> Dict:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"} +{"id": "cd09695e78d9-1", "text": "def check_examples_and_selector(cls, values: Dict) -> Dict:\n \"\"\"Check that one and only one of examples/example_selector are provided.\"\"\"\n examples = values.get(\"examples\", None)\n example_selector = values.get(\"example_selector\", None)\n if examples and example_selector:\n raise ValueError(\n \"Only one of 'examples' and 'example_selector' should be provided\"\n )\n if examples is None and example_selector is None:\n raise ValueError(\n \"One of 'examples' and 'example_selector' should be provided\"\n )\n return values\n @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that prefix, suffix and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n check_valid_template(\n values[\"prefix\"] + values[\"suffix\"],\n values[\"template_format\"],\n values[\"input_variables\"] + list(values[\"partial_variables\"]),\n )\n return values\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n def _get_examples(self, **kwargs: Any) -> List[dict]:\n if self.examples is not None:\n return self.examples\n elif self.example_selector is not None:\n return self.example_selector.select_examples(kwargs)\n else:\n raise ValueError\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. 
code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"} +{"id": "cd09695e78d9-2", "text": ".. code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n # Get the examples to use.\n examples = self._get_examples(**kwargs)\n examples = [\n {k: e[k] for k in self.example_prompt.input_variables} for e in examples\n ]\n # Format the examples.\n example_strings = [\n self.example_prompt.format(**example) for example in examples\n ]\n # Create the overall template.\n pieces = [self.prefix, *example_strings, self.suffix]\n template = self.example_separator.join([piece for piece in pieces if piece])\n # Format the template with the input variables.\n return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"few_shot\"\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:\n raise ValueError(\"Saving an example selector is not currently supported\")\n return super().dict(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot.html"} +{"id": "518a5b6960e3-0", "text": "Source code for langchain.prompts.base\n\"\"\"BasePrompt schema definition.\"\"\"\nfrom __future__ import annotations\nimport json\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\nfrom typing import Any, Callable, Dict, List, Mapping, Optional, Set, Union\nimport yaml\nfrom pydantic import Field, root_validator\nfrom langchain.formatting import formatter\nfrom langchain.load.serializable import Serializable\nfrom langchain.schema import BaseMessage, BaseOutputParser, HumanMessage, PromptValue\ndef jinja2_formatter(template: str, **kwargs: Any) -> str:\n \"\"\"Format a template using jinja2.\"\"\"\n 
try:\n from jinja2 import Template\n except ImportError:\n raise ImportError(\n \"jinja2 not installed, which is needed to use the jinja2_formatter. \"\n \"Please install it with `pip install jinja2`.\"\n )\n return Template(template).render(**kwargs)\ndef validate_jinja2(template: str, input_variables: List[str]) -> None:\n \"\"\"\n Validate that the input variables are valid for the template.\n Raise an exception if missing or extra variables are found.\n Args:\n template: The template string.\n input_variables: The input variables.\n \"\"\"\n input_variables_set = set(input_variables)\n valid_variables = _get_jinja2_variables_from_template(template)\n missing_variables = valid_variables - input_variables_set\n extra_variables = input_variables_set - valid_variables\n error_message = \"\"\n if missing_variables:\n error_message += f\"Missing variables: {missing_variables} \"\n if extra_variables:\n error_message += f\"Extra variables: {extra_variables}\"\n if error_message:\n raise KeyError(error_message.strip())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/base.html"} +{"id": "518a5b6960e3-1", "text": "if error_message:\n raise KeyError(error_message.strip())\ndef _get_jinja2_variables_from_template(template: str) -> Set[str]:\n try:\n from jinja2 import Environment, meta\n except ImportError:\n raise ImportError(\n \"jinja2 not installed, which is needed to use the jinja2_formatter. 
\"\n \"Please install it with `pip install jinja2`.\"\n )\n env = Environment()\n ast = env.parse(template)\n variables = meta.find_undeclared_variables(ast)\n return variables\nDEFAULT_FORMATTER_MAPPING: Dict[str, Callable] = {\n \"f-string\": formatter.format,\n \"jinja2\": jinja2_formatter,\n}\nDEFAULT_VALIDATOR_MAPPING: Dict[str, Callable] = {\n \"f-string\": formatter.validate_input_variables,\n \"jinja2\": validate_jinja2,\n}\ndef check_valid_template(\n template: str, template_format: str, input_variables: List[str]\n) -> None:\n \"\"\"Check that template string is valid.\"\"\"\n if template_format not in DEFAULT_FORMATTER_MAPPING:\n valid_formats = list(DEFAULT_FORMATTER_MAPPING)\n raise ValueError(\n f\"Invalid template format. Got `{template_format}`;\"\n f\" should be one of {valid_formats}\"\n )\n try:\n validator_func = DEFAULT_VALIDATOR_MAPPING[template_format]\n validator_func(template, input_variables)\n except KeyError as e:\n raise ValueError(\n \"Invalid prompt schema; check for mismatched or missing input parameters. 
\"\n + str(e)\n )\nclass StringPromptValue(PromptValue):\n text: str\n def to_string(self) -> str:\n \"\"\"Return prompt as string.\"\"\"\n return self.text", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/base.html"} +{"id": "518a5b6960e3-2", "text": "\"\"\"Return prompt as string.\"\"\"\n return self.text\n def to_messages(self) -> List[BaseMessage]:\n \"\"\"Return prompt as messages.\"\"\"\n return [HumanMessage(content=self.text)]\n[docs]class BasePromptTemplate(Serializable, ABC):\n \"\"\"Base class for all prompt templates, returning a prompt.\"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n output_parser: Optional[BaseOutputParser] = None\n \"\"\"How to parse the output of calling an LLM on this formatted prompt.\"\"\"\n partial_variables: Mapping[str, Union[str, Callable[[], str]]] = Field(\n default_factory=dict\n )\n @property\n def lc_serializable(self) -> bool:\n return True\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n[docs] @abstractmethod\n def format_prompt(self, **kwargs: Any) -> PromptValue:\n \"\"\"Create Chat Messages.\"\"\"\n @root_validator()\n def validate_variable_names(cls, values: Dict) -> Dict:\n \"\"\"Validate variable names do not include restricted names.\"\"\"\n if \"stop\" in values[\"input_variables\"]:\n raise ValueError(\n \"Cannot have an input variable named 'stop', as it is used internally,\"\n \" please rename.\"\n )\n if \"stop\" in values[\"partial_variables\"]:\n raise ValueError(\n \"Cannot have an partial variable named 'stop', as it is used \"\n \"internally, please rename.\"\n )\n overall = set(values[\"input_variables\"]).intersection(\n values[\"partial_variables\"]\n )\n if overall:\n raise ValueError(\n f\"Found overlapping input and partial variables: {overall}\"\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/prompts/base.html"} +{"id": "518a5b6960e3-3", "text": "f\"Found overlapping input and partial variables: {overall}\"\n )\n return values\n[docs] def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:\n \"\"\"Return a partial of the prompt template.\"\"\"\n prompt_dict = self.__dict__.copy()\n prompt_dict[\"input_variables\"] = list(\n set(self.input_variables).difference(kwargs)\n )\n prompt_dict[\"partial_variables\"] = {**self.partial_variables, **kwargs}\n return type(self)(**prompt_dict)\n def _merge_partial_and_user_variables(self, **kwargs: Any) -> Dict[str, Any]:\n # Get partial params:\n partial_kwargs = {\n k: v if isinstance(v, str) else v()\n for k, v in self.partial_variables.items()\n }\n return {**partial_kwargs, **kwargs}\n[docs] @abstractmethod\n def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n raise NotImplementedError\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return dictionary representation of prompt.\"\"\"\n prompt_dict = super().dict(**kwargs)\n prompt_dict[\"_type\"] = self._prompt_type\n return prompt_dict\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n \"\"\"Save the prompt.\n Args:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/base.html"} +{"id": "518a5b6960e3-4", "text": "\"\"\"Save the prompt.\n Args:\n file_path: Path to directory to save prompt to.\n Example:\n .. 
code-block:: python\n prompt.save(file_path=\"path/prompt.yaml\")\n \"\"\"\n if self.partial_variables:\n raise ValueError(\"Cannot save prompt with partial variables.\")\n # Convert file to Path object.\n if isinstance(file_path, str):\n save_path = Path(file_path)\n else:\n save_path = file_path\n directory_path = save_path.parent\n directory_path.mkdir(parents=True, exist_ok=True)\n # Fetch dictionary to save\n prompt_dict = self.dict()\n if save_path.suffix == \".json\":\n with open(file_path, \"w\") as f:\n json.dump(prompt_dict, f, indent=4)\n elif save_path.suffix == \".yaml\":\n with open(file_path, \"w\") as f:\n yaml.dump(prompt_dict, f, default_flow_style=False)\n else:\n raise ValueError(f\"{save_path} must be json or yaml\")\n[docs]class StringPromptTemplate(BasePromptTemplate, ABC):\n \"\"\"String prompt should expose the format method, returning a prompt.\"\"\"\n[docs] def format_prompt(self, **kwargs: Any) -> PromptValue:\n \"\"\"Create Chat Messages.\"\"\"\n return StringPromptValue(text=self.format(**kwargs))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/base.html"} +{"id": "81c458169787-0", "text": "Source code for langchain.prompts.loading\n\"\"\"Load prompts from disk.\"\"\"\nimport importlib\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import Union\nimport yaml\nfrom langchain.output_parsers.regex import RegexParser\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.few_shot import FewShotPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import BaseLLMOutputParser, NoOpOutputParser\nfrom langchain.utilities.loading import try_load_from_hub\nURL_BASE = \"https://raw.githubusercontent.com/hwchase17/langchain-hub/master/prompts/\"\nlogger = logging.getLogger(__name__)\ndef load_prompt_from_config(config: dict) -> BasePromptTemplate:\n \"\"\"Load prompt from Config Dict.\"\"\"\n if \"_type\" not in config:\n 
logger.warning(\"No `_type` key found, defaulting to `prompt`.\")\n config_type = config.pop(\"_type\", \"prompt\")\n if config_type not in type_to_loader_dict:\n raise ValueError(f\"Loading {config_type} prompt not supported\")\n prompt_loader = type_to_loader_dict[config_type]\n return prompt_loader(config)\ndef _load_template(var_name: str, config: dict) -> dict:\n \"\"\"Load template from disk if applicable.\"\"\"\n # Check if template_path exists in config.\n if f\"{var_name}_path\" in config:\n # If it does, make sure template variable doesn't also exist.\n if var_name in config:\n raise ValueError(\n f\"Both `{var_name}_path` and `{var_name}` cannot be provided.\"\n )\n # Pop the template path from the config.\n template_path = Path(config.pop(f\"{var_name}_path\"))\n # Load the template.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"} +{"id": "81c458169787-1", "text": "# Load the template.\n if template_path.suffix == \".txt\":\n with open(template_path) as f:\n template = f.read()\n else:\n raise ValueError\n # Set the template variable to the extracted variable.\n config[var_name] = template\n return config\ndef _load_examples(config: dict) -> dict:\n \"\"\"Load examples if necessary.\"\"\"\n if isinstance(config[\"examples\"], list):\n pass\n elif isinstance(config[\"examples\"], str):\n with open(config[\"examples\"]) as f:\n if config[\"examples\"].endswith(\".json\"):\n examples = json.load(f)\n elif config[\"examples\"].endswith((\".yaml\", \".yml\")):\n examples = yaml.safe_load(f)\n else:\n raise ValueError(\n \"Invalid file format. Only json or yaml formats are supported.\"\n )\n config[\"examples\"] = examples\n else:\n raise ValueError(\"Invalid examples format. 
Only list or string are supported.\")\n return config\ndef _load_output_parser(config: dict) -> dict:\n \"\"\"Load output parser.\"\"\"\n if \"output_parser\" in config and config[\"output_parser\"]:\n _config = config.pop(\"output_parser\")\n output_parser_type = _config.pop(\"_type\")\n if output_parser_type == \"regex_parser\":\n output_parser: BaseLLMOutputParser = RegexParser(**_config)\n elif output_parser_type == \"default\":\n output_parser = NoOpOutputParser(**_config)\n else:\n raise ValueError(f\"Unsupported output parser {output_parser_type}\")\n config[\"output_parser\"] = output_parser\n return config\ndef _load_few_shot_prompt(config: dict) -> FewShotPromptTemplate:\n \"\"\"Load the few shot prompt from the config.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"} +{"id": "81c458169787-2", "text": "\"\"\"Load the few shot prompt from the config.\"\"\"\n # Load the suffix and prefix templates.\n config = _load_template(\"suffix\", config)\n config = _load_template(\"prefix\", config)\n # Load the example prompt.\n if \"example_prompt_path\" in config:\n if \"example_prompt\" in config:\n raise ValueError(\n \"Only one of example_prompt and example_prompt_path should \"\n \"be specified.\"\n )\n config[\"example_prompt\"] = load_prompt(config.pop(\"example_prompt_path\"))\n else:\n config[\"example_prompt\"] = load_prompt_from_config(config[\"example_prompt\"])\n # Load the examples.\n config = _load_examples(config)\n config = _load_output_parser(config)\n return FewShotPromptTemplate(**config)\ndef _load_prompt(config: dict) -> PromptTemplate:\n \"\"\"Load the prompt template from config.\"\"\"\n # Load the template from disk if necessary.\n config = _load_template(\"template\", config)\n config = _load_output_parser(config)\n return PromptTemplate(**config)\n[docs]def load_prompt(path: Union[str, Path]) -> BasePromptTemplate:\n \"\"\"Unified method for loading a prompt from LangChainHub or local 
fs.\"\"\"\n if hub_result := try_load_from_hub(\n path, _load_prompt_from_file, \"prompts\", {\"py\", \"json\", \"yaml\"}\n ):\n return hub_result\n else:\n return _load_prompt_from_file(path)\ndef _load_prompt_from_file(file: Union[str, Path]) -> BasePromptTemplate:\n \"\"\"Load prompt from file.\"\"\"\n # Convert file to Path object.\n if isinstance(file, str):\n file_path = Path(file)\n else:\n file_path = file", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"} +{"id": "81c458169787-3", "text": "file_path = Path(file)\n else:\n file_path = file\n # Load from either json or yaml.\n if file_path.suffix == \".json\":\n with open(file_path) as f:\n config = json.load(f)\n elif file_path.suffix == \".yaml\":\n with open(file_path, \"r\") as f:\n config = yaml.safe_load(f)\n elif file_path.suffix == \".py\":\n spec = importlib.util.spec_from_loader(\n \"prompt\", loader=None, origin=str(file_path)\n )\n if spec is None:\n raise ValueError(\"could not load spec\")\n helper = importlib.util.module_from_spec(spec)\n with open(file_path, \"rb\") as f:\n exec(f.read(), helper.__dict__)\n if not isinstance(helper.PROMPT, BasePromptTemplate):\n raise ValueError(\"Did not get object of type BasePromptTemplate.\")\n return helper.PROMPT\n else:\n raise ValueError(f\"Got unsupported file type {file_path.suffix}\")\n # Load the prompt from the config now.\n return load_prompt_from_config(config)\ntype_to_loader_dict = {\n \"prompt\": _load_prompt,\n \"few_shot\": _load_few_shot_prompt,\n # \"few_shot_with_templates\": _load_few_shot_with_templates_prompt,\n}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/loading.html"} +{"id": "917a9ff2fc9a-0", "text": "Source code for langchain.prompts.prompt\n\"\"\"Prompt schema definition.\"\"\"\nfrom __future__ import annotations\nfrom pathlib import Path\nfrom string import Formatter\nfrom typing import Any, Dict, List, Union\nfrom pydantic import 
root_validator\nfrom langchain.prompts.base import (\n DEFAULT_FORMATTER_MAPPING,\n StringPromptTemplate,\n _get_jinja2_variables_from_template,\n check_valid_template,\n)\n[docs]class PromptTemplate(StringPromptTemplate):\n \"\"\"Schema to represent a prompt for an LLM.\n Example:\n .. code-block:: python\n from langchain import PromptTemplate\n prompt = PromptTemplate(input_variables=[\"foo\"], template=\"Say {foo}\")\n \"\"\"\n @property\n def lc_attributes(self) -> Dict[str, Any]:\n return {\n \"template_format\": self.template_format,\n }\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n template: str\n \"\"\"The prompt template.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"prompt\"\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. 
code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"} +{"id": "917a9ff2fc9a-1", "text": "\"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)\n @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that template and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n all_inputs = values[\"input_variables\"] + list(values[\"partial_variables\"])\n check_valid_template(\n values[\"template\"], values[\"template_format\"], all_inputs\n )\n return values\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[str],\n suffix: str,\n input_variables: List[str],\n example_separator: str = \"\\n\\n\",\n prefix: str = \"\",\n **kwargs: Any,\n ) -> PromptTemplate:\n \"\"\"Take examples in list format with prefix and suffix to create a prompt.\n Intended to be used as a way to dynamically create a prompt from examples.\n Args:\n examples: List of examples to use in the prompt.\n suffix: String to go after the list of examples. Should generally\n set up the user's input.\n input_variables: A list of variable names the final prompt template\n will expect.\n example_separator: The separator to use in between examples. Defaults\n to two new line characters.\n prefix: String that should go before any examples. Generally includes\n examples. 
Defaults to an empty string.\n Returns:\n The final prompt generated.\n \"\"\"\n template = example_separator.join([prefix, *examples, suffix])\n return cls(input_variables=input_variables, template=template, **kwargs)\n[docs] @classmethod\n def from_file(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"} +{"id": "917a9ff2fc9a-2", "text": "[docs] @classmethod\n def from_file(\n cls, template_file: Union[str, Path], input_variables: List[str], **kwargs: Any\n ) -> PromptTemplate:\n \"\"\"Load a prompt from a file.\n Args:\n template_file: The path to the file containing the prompt template.\n input_variables: A list of variable names the final prompt template\n will expect.\n Returns:\n The prompt loaded from the file.\n \"\"\"\n with open(str(template_file), \"r\") as f:\n template = f.read()\n return cls(input_variables=input_variables, template=template, **kwargs)\n[docs] @classmethod\n def from_template(cls, template: str, **kwargs: Any) -> PromptTemplate:\n \"\"\"Load a prompt template from a template.\"\"\"\n if \"template_format\" in kwargs and kwargs[\"template_format\"] == \"jinja2\":\n # Get the variables for the template\n input_variables = _get_jinja2_variables_from_template(template)\n else:\n input_variables = {\n v for _, v, _, _ in Formatter().parse(template) if v is not None\n }\n if \"partial_variables\" in kwargs:\n partial_variables = kwargs[\"partial_variables\"]\n input_variables = {\n var for var in input_variables if var not in partial_variables\n }\n return cls(\n input_variables=list(sorted(input_variables)), template=template, **kwargs\n )\n# For backwards compatibility.\nPrompt = PromptTemplate", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/prompt.html"} +{"id": "f0a6c87ce3bd-0", "text": "Source code for langchain.prompts.chat\n\"\"\"Chat prompt template.\"\"\"\nfrom __future__ import annotations\nfrom abc import ABC, abstractmethod\nfrom pathlib import 
Path\nfrom typing import Any, Callable, List, Sequence, Tuple, Type, TypeVar, Union\nfrom pydantic import Field, root_validator\nfrom langchain.load.serializable import Serializable\nfrom langchain.memory.buffer import get_buffer_string\nfrom langchain.prompts.base import BasePromptTemplate, StringPromptTemplate\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatMessage,\n HumanMessage,\n PromptValue,\n SystemMessage,\n)\nclass BaseMessagePromptTemplate(Serializable, ABC):\n @property\n def lc_serializable(self) -> bool:\n return True\n @abstractmethod\n def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n \"\"\"To messages.\"\"\"\n @property\n @abstractmethod\n def input_variables(self) -> List[str]:\n \"\"\"Input variables for this prompt template.\"\"\"\n[docs]class MessagesPlaceholder(BaseMessagePromptTemplate):\n \"\"\"Prompt template that assumes variable is already list of messages.\"\"\"\n variable_name: str\n[docs] def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n \"\"\"To a BaseMessage.\"\"\"\n value = kwargs[self.variable_name]\n if not isinstance(value, list):\n raise ValueError(\n f\"variable {self.variable_name} should be a list of base messages, \"\n f\"got {value}\"\n )\n for v in value:\n if not isinstance(v, BaseMessage):\n raise ValueError(\n f\"variable {self.variable_name} should be a list of base messages,\"\n f\" got {value}\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} +{"id": "f0a6c87ce3bd-1", "text": "f\" got {value}\"\n )\n return value\n @property\n def input_variables(self) -> List[str]:\n \"\"\"Input variables for this prompt template.\"\"\"\n return [self.variable_name]\nMessagePromptTemplateT = TypeVar(\n \"MessagePromptTemplateT\", bound=\"BaseStringMessagePromptTemplate\"\n)\nclass BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):\n prompt: StringPromptTemplate\n 
additional_kwargs: dict = Field(default_factory=dict)\n @classmethod\n def from_template(\n cls: Type[MessagePromptTemplateT],\n template: str,\n template_format: str = \"f-string\",\n **kwargs: Any,\n ) -> MessagePromptTemplateT:\n prompt = PromptTemplate.from_template(template, template_format=template_format)\n return cls(prompt=prompt, **kwargs)\n @classmethod\n def from_template_file(\n cls: Type[MessagePromptTemplateT],\n template_file: Union[str, Path],\n input_variables: List[str],\n **kwargs: Any,\n ) -> MessagePromptTemplateT:\n prompt = PromptTemplate.from_file(template_file, input_variables)\n return cls(prompt=prompt, **kwargs)\n @abstractmethod\n def format(self, **kwargs: Any) -> BaseMessage:\n \"\"\"To a BaseMessage.\"\"\"\n def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n return [self.format(**kwargs)]\n @property\n def input_variables(self) -> List[str]:\n return self.prompt.input_variables\n[docs]class ChatMessagePromptTemplate(BaseStringMessagePromptTemplate):\n role: str\n[docs] def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return ChatMessage(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} +{"id": "f0a6c87ce3bd-2", "text": "text = self.prompt.format(**kwargs)\n return ChatMessage(\n content=text, role=self.role, additional_kwargs=self.additional_kwargs\n )\n[docs]class HumanMessagePromptTemplate(BaseStringMessagePromptTemplate):\n[docs] def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return HumanMessage(content=text, additional_kwargs=self.additional_kwargs)\n[docs]class AIMessagePromptTemplate(BaseStringMessagePromptTemplate):\n[docs] def format(self, **kwargs: Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return AIMessage(content=text, additional_kwargs=self.additional_kwargs)\n[docs]class SystemMessagePromptTemplate(BaseStringMessagePromptTemplate):\n[docs] def format(self, **kwargs: 
Any) -> BaseMessage:\n text = self.prompt.format(**kwargs)\n return SystemMessage(content=text, additional_kwargs=self.additional_kwargs)\nclass ChatPromptValue(PromptValue):\n messages: List[BaseMessage]\n def to_string(self) -> str:\n \"\"\"Return prompt as string.\"\"\"\n return get_buffer_string(self.messages)\n def to_messages(self) -> List[BaseMessage]:\n \"\"\"Return prompt as messages.\"\"\"\n return self.messages\n[docs]class BaseChatPromptTemplate(BasePromptTemplate, ABC):\n[docs] def format(self, **kwargs: Any) -> str:\n return self.format_prompt(**kwargs).to_string()\n[docs] def format_prompt(self, **kwargs: Any) -> PromptValue:\n messages = self.format_messages(**kwargs)\n return ChatPromptValue(messages=messages)\n[docs] @abstractmethod\n def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n \"\"\"Format kwargs into a list of messages.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} +{"id": "f0a6c87ce3bd-3", "text": "\"\"\"Format kwargs into a list of messages.\"\"\"\n[docs]class ChatPromptTemplate(BaseChatPromptTemplate, ABC):\n input_variables: List[str]\n messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]\n @root_validator(pre=True)\n def validate_input_variables(cls, values: dict) -> dict:\n messages = values[\"messages\"]\n input_vars = set()\n for message in messages:\n if isinstance(message, BaseMessagePromptTemplate):\n input_vars.update(message.input_variables)\n if \"partial_variables\" in values:\n input_vars = input_vars - set(values[\"partial_variables\"])\n if \"input_variables\" in values:\n if input_vars != set(values[\"input_variables\"]):\n raise ValueError(\n \"Got mismatched input_variables. \"\n f\"Expected: {input_vars}. 
\"\n f\"Got: {values['input_variables']}\"\n )\n else:\n values[\"input_variables\"] = list(input_vars)\n return values\n[docs] @classmethod\n def from_template(cls, template: str, **kwargs: Any) -> ChatPromptTemplate:\n prompt_template = PromptTemplate.from_template(template, **kwargs)\n message = HumanMessagePromptTemplate(prompt=prompt_template)\n return cls.from_messages([message])\n[docs] @classmethod\n def from_role_strings(\n cls, string_messages: List[Tuple[str, str]]\n ) -> ChatPromptTemplate:\n messages = [\n ChatMessagePromptTemplate(\n prompt=PromptTemplate.from_template(template), role=role\n )\n for role, template in string_messages\n ]\n return cls.from_messages(messages)\n[docs] @classmethod\n def from_strings(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} +{"id": "f0a6c87ce3bd-4", "text": "[docs] @classmethod\n def from_strings(\n cls, string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]\n ) -> ChatPromptTemplate:\n messages = [\n role(prompt=PromptTemplate.from_template(template))\n for role, template in string_messages\n ]\n return cls.from_messages(messages)\n[docs] @classmethod\n def from_messages(\n cls, messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]\n ) -> ChatPromptTemplate:\n input_vars = set()\n for message in messages:\n if isinstance(message, BaseMessagePromptTemplate):\n input_vars.update(message.input_variables)\n return cls(input_variables=list(input_vars), messages=messages)\n[docs] def format(self, **kwargs: Any) -> str:\n return self.format_prompt(**kwargs).to_string()\n[docs] def format_messages(self, **kwargs: Any) -> List[BaseMessage]:\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n result = []\n for message_template in self.messages:\n if isinstance(message_template, BaseMessage):\n result.extend([message_template])\n elif isinstance(message_template, BaseMessagePromptTemplate):\n rel_params = {\n k: v\n for k, v in kwargs.items()\n 
if k in message_template.input_variables\n }\n message = message_template.format_messages(**rel_params)\n result.extend(message)\n else:\n raise ValueError(f\"Unexpected input: {message_template}\")\n return result\n[docs] def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:\n raise NotImplementedError\n @property\n def _prompt_type(self) -> str:\n return \"chat\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} +{"id": "f0a6c87ce3bd-5", "text": "def _prompt_type(self) -> str:\n return \"chat\"\n[docs] def save(self, file_path: Union[Path, str]) -> None:\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/chat.html"} +{"id": "1ea66dd00bf0-0", "text": "Source code for langchain.prompts.pipeline\nfrom typing import Any, Dict, List, Tuple\nfrom pydantic import root_validator\nfrom langchain.prompts.base import BasePromptTemplate\nfrom langchain.prompts.chat import BaseChatPromptTemplate\nfrom langchain.schema import PromptValue\ndef _get_inputs(inputs: dict, input_variables: List[str]) -> dict:\n return {k: inputs[k] for k in input_variables}\n[docs]class PipelinePromptTemplate(BasePromptTemplate):\n \"\"\"A prompt template for composing multiple prompts together.\n This can be useful when you want to reuse parts of prompts.\n A PipelinePrompt consists of two main parts:\n - final_prompt: This is the final prompt that is returned\n - pipeline_prompts: This is a list of tuples, consisting\n of a string (`name`) and a Prompt Template.\n Each PromptTemplate will be formatted and then passed\n to future prompt templates as a variable with\n the same name as `name`\n \"\"\"\n final_prompt: BasePromptTemplate\n pipeline_prompts: List[Tuple[str, BasePromptTemplate]]\n @root_validator(pre=True)\n def get_input_variables(cls, values: Dict) -> Dict:\n \"\"\"Get input variables.\"\"\"\n created_variables = set()\n all_variables = set()\n for k, 
prompt in values[\"pipeline_prompts\"]:\n created_variables.add(k)\n all_variables.update(prompt.input_variables)\n values[\"input_variables\"] = list(all_variables.difference(created_variables))\n return values\n[docs] def format_prompt(self, **kwargs: Any) -> PromptValue:\n for k, prompt in self.pipeline_prompts:\n _inputs = _get_inputs(kwargs, prompt.input_variables)\n if isinstance(prompt, BaseChatPromptTemplate):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/pipeline.html"} +{"id": "1ea66dd00bf0-1", "text": "if isinstance(prompt, BaseChatPromptTemplate):\n kwargs[k] = prompt.format_messages(**_inputs)\n else:\n kwargs[k] = prompt.format(**_inputs)\n _inputs = _get_inputs(kwargs, self.final_prompt.input_variables)\n return self.final_prompt.format_prompt(**_inputs)\n[docs] def format(self, **kwargs: Any) -> str:\n return self.format_prompt(**kwargs).to_string()\n @property\n def _prompt_type(self) -> str:\n raise ValueError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/pipeline.html"} +{"id": "71945812d122-0", "text": "Source code for langchain.prompts.few_shot_with_templates\n\"\"\"Prompt template that contains few shot examples.\"\"\"\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import Extra, root_validator\nfrom langchain.prompts.base import DEFAULT_FORMATTER_MAPPING, StringPromptTemplate\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\n[docs]class FewShotPromptWithTemplates(StringPromptTemplate):\n \"\"\"Prompt template that contains few shot examples.\"\"\"\n examples: Optional[List[dict]] = None\n \"\"\"Examples to format into the prompt.\n Either this or example_selector should be provided.\"\"\"\n example_selector: Optional[BaseExampleSelector] = None\n \"\"\"ExampleSelector to choose the examples to format into the prompt.\n Either this or examples should be provided.\"\"\"\n 
example_prompt: PromptTemplate\n \"\"\"PromptTemplate used to format an individual example.\"\"\"\n suffix: StringPromptTemplate\n \"\"\"A PromptTemplate to put after the examples.\"\"\"\n input_variables: List[str]\n \"\"\"A list of the names of the variables the prompt template expects.\"\"\"\n example_separator: str = \"\\n\\n\"\n \"\"\"String separator used to join the prefix, the examples, and suffix.\"\"\"\n prefix: Optional[StringPromptTemplate] = None\n \"\"\"A PromptTemplate to put before the examples.\"\"\"\n template_format: str = \"f-string\"\n \"\"\"The format of the prompt template. Options are: 'f-string', 'jinja2'.\"\"\"\n validate_template: bool = True\n \"\"\"Whether or not to try validating the template.\"\"\"\n @root_validator(pre=True)\n def check_examples_and_selector(cls, values: Dict) -> Dict:\n \"\"\"Check that one and only one of examples/example_selector are provided.\"\"\"\n examples = values.get(\"examples\", None)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"} +{"id": "71945812d122-1", "text": "examples = values.get(\"examples\", None)\n example_selector = values.get(\"example_selector\", None)\n if examples and example_selector:\n raise ValueError(\n \"Only one of 'examples' and 'example_selector' should be provided\"\n )\n if examples is None and example_selector is None:\n raise ValueError(\n \"One of 'examples' and 'example_selector' should be provided\"\n )\n return values\n @root_validator()\n def template_is_valid(cls, values: Dict) -> Dict:\n \"\"\"Check that prefix, suffix and input variables are consistent.\"\"\"\n if values[\"validate_template\"]:\n input_variables = values[\"input_variables\"]\n expected_input_variables = set(values[\"suffix\"].input_variables)\n expected_input_variables |= set(values[\"partial_variables\"])\n if values[\"prefix\"] is not None:\n expected_input_variables |= set(values[\"prefix\"].input_variables)\n missing_vars = 
expected_input_variables.difference(input_variables)\n if missing_vars:\n raise ValueError(\n f\"Got input_variables={input_variables}, but based on \"\n f\"prefix/suffix expected {expected_input_variables}\"\n )\n return values\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n def _get_examples(self, **kwargs: Any) -> List[dict]:\n if self.examples is not None:\n return self.examples\n elif self.example_selector is not None:\n return self.example_selector.select_examples(kwargs)\n else:\n raise ValueError\n[docs] def format(self, **kwargs: Any) -> str:\n \"\"\"Format the prompt with the inputs.\n Args:\n kwargs: Any arguments to be passed to the prompt template.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"} +{"id": "71945812d122-2", "text": "Args:\n kwargs: Any arguments to be passed to the prompt template.\n Returns:\n A formatted string.\n Example:\n .. 
code-block:: python\n prompt.format(variable1=\"foo\")\n \"\"\"\n kwargs = self._merge_partial_and_user_variables(**kwargs)\n # Get the examples to use.\n examples = self._get_examples(**kwargs)\n # Format the examples.\n example_strings = [\n self.example_prompt.format(**example) for example in examples\n ]\n # Create the overall prefix.\n if self.prefix is None:\n prefix = \"\"\n else:\n prefix_kwargs = {\n k: v for k, v in kwargs.items() if k in self.prefix.input_variables\n }\n for k in prefix_kwargs.keys():\n kwargs.pop(k)\n prefix = self.prefix.format(**prefix_kwargs)\n # Create the overall suffix\n suffix_kwargs = {\n k: v for k, v in kwargs.items() if k in self.suffix.input_variables\n }\n for k in suffix_kwargs.keys():\n kwargs.pop(k)\n suffix = self.suffix.format(\n **suffix_kwargs,\n )\n pieces = [prefix, *example_strings, suffix]\n template = self.example_separator.join([piece for piece in pieces if piece])\n # Format the template with the input variables.\n return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)\n @property\n def _prompt_type(self) -> str:\n \"\"\"Return the prompt type key.\"\"\"\n return \"few_shot_with_templates\"\n[docs] def dict(self, **kwargs: Any) -> Dict:\n \"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"} +{"id": "71945812d122-3", "text": "\"\"\"Return a dictionary of the prompt.\"\"\"\n if self.example_selector:\n raise ValueError(\"Saving an example selector is not currently supported\")\n return super().dict(**kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/few_shot_with_templates.html"} +{"id": "db935087fcd6-0", "text": "Source code for langchain.prompts.example_selector.ngram_overlap\n\"\"\"Select and order examples based on ngram overlap score (sentence_bleu 
score).\nhttps://www.nltk.org/_modules/nltk/translate/bleu_score.html\nhttps://aclanthology.org/P02-1040.pdf\n\"\"\"\nfrom typing import Dict, List\nimport numpy as np\nfrom pydantic import BaseModel, root_validator\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\ndef ngram_overlap_score(source: List[str], example: List[str]) -> float:\n \"\"\"Compute ngram overlap score of source and example as sentence_bleu score.\n Use sentence_bleu with method1 smoothing function and auto reweighting.\n Return float value between 0.0 and 1.0 inclusive.\n https://www.nltk.org/_modules/nltk/translate/bleu_score.html\n https://aclanthology.org/P02-1040.pdf\n \"\"\"\n from nltk.translate.bleu_score import (\n SmoothingFunction, # type: ignore\n sentence_bleu,\n )\n hypotheses = source[0].split()\n references = [s.split() for s in example]\n return float(\n sentence_bleu(\n references,\n hypotheses,\n smoothing_function=SmoothingFunction().method1,\n auto_reweigh=True,\n )\n )\n[docs]class NGramOverlapExampleSelector(BaseExampleSelector, BaseModel):\n \"\"\"Select and order examples based on ngram overlap score (sentence_bleu score).\n https://www.nltk.org/_modules/nltk/translate/bleu_score.html\n https://aclanthology.org/P02-1040.pdf\n \"\"\"\n examples: List[dict]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/ngram_overlap.html"} +{"id": "db935087fcd6-1", "text": "\"\"\"\n examples: List[dict]\n \"\"\"A list of the examples that the prompt template expects.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"Prompt template used to format the examples.\"\"\"\n threshold: float = -1.0\n \"\"\"Threshold at which algorithm stops. 
Set to -1.0 by default.\n For negative threshold:\n select_examples sorts examples by ngram_overlap_score, but excludes none.\n For threshold greater than 1.0:\n select_examples excludes all examples, and returns an empty list.\n For threshold equal to 0.0:\n select_examples sorts examples by ngram_overlap_score,\n and excludes examples with no ngram overlap with input.\n \"\"\"\n @root_validator(pre=True)\n def check_dependencies(cls, values: Dict) -> Dict:\n \"\"\"Check that valid dependencies exist.\"\"\"\n try:\n from nltk.translate.bleu_score import ( # noqa: disable=F401\n SmoothingFunction,\n sentence_bleu,\n )\n except ImportError as e:\n raise ValueError(\n \"Not all the correct dependencies for this ExampleSelector exist\"\n ) from e\n return values\n[docs] def add_example(self, example: Dict[str, str]) -> None:\n \"\"\"Add new example to list.\"\"\"\n self.examples.append(example)\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Return list of examples sorted by ngram_overlap_score with input.\n Descending order.\n Excludes any examples with ngram_overlap_score less than or equal to threshold.\n \"\"\"\n inputs = list(input_variables.values())\n examples = []\n k = len(self.examples)\n score = [0.0] * k", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/ngram_overlap.html"} +{"id": "db935087fcd6-2", "text": "k = len(self.examples)\n score = [0.0] * k\n first_prompt_template_key = self.example_prompt.input_variables[0]\n for i in range(k):\n score[i] = ngram_overlap_score(\n inputs, [self.examples[i][first_prompt_template_key]]\n )\n while True:\n arg_max = np.argmax(score)\n if (score[arg_max] < self.threshold) or abs(\n score[arg_max] - self.threshold\n ) < 1e-9:\n break\n examples.append(self.examples[arg_max])\n score[arg_max] = self.threshold - 1.0\n return examples", "source":
"https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/ngram_overlap.html"} +{"id": "87d7a6f9ef90-0", "text": "Source code for langchain.prompts.example_selector.semantic_similarity\n\"\"\"Example selector that selects examples based on SemanticSimilarity.\"\"\"\nfrom __future__ import annotations\nfrom typing import Any, Dict, List, Optional, Type\nfrom pydantic import BaseModel, Extra\nfrom langchain.embeddings.base import Embeddings\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.vectorstores.base import VectorStore\ndef sorted_values(values: Dict[str, str]) -> List[Any]:\n \"\"\"Return a list of values in dict sorted by key.\"\"\"\n return [values[val] for val in sorted(values)]\n[docs]class SemanticSimilarityExampleSelector(BaseExampleSelector, BaseModel):\n \"\"\"Example selector that selects examples based on SemanticSimilarity.\"\"\"\n vectorstore: VectorStore\n \"\"\"VectorStore that contains information about examples.\"\"\"\n k: int = 4\n \"\"\"Number of examples to select.\"\"\"\n example_keys: Optional[List[str]] = None\n \"\"\"Optional keys to filter examples to.\"\"\"\n input_keys: Optional[List[str]] = None\n \"\"\"Optional keys to filter input to.
If provided, the search is based on\n the input variables instead of all variables.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n[docs] def add_example(self, example: Dict[str, str]) -> str:\n \"\"\"Add new example to vectorstore.\"\"\"\n if self.input_keys:\n string_example = \" \".join(\n sorted_values({key: example[key] for key in self.input_keys})\n )\n else:\n string_example = \" \".join(sorted_values(example))\n ids = self.vectorstore.add_texts([string_example], metadatas=[example])\n return ids[0]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} +{"id": "87d7a6f9ef90-1", "text": "return ids[0]\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on semantic similarity.\"\"\"\n # Get the docs with the highest similarity.\n if self.input_keys:\n input_variables = {key: input_variables[key] for key in self.input_keys}\n query = \" \".join(sorted_values(input_variables))\n example_docs = self.vectorstore.similarity_search(query, k=self.k)\n # Get the examples from the metadata.\n # This assumes that examples are stored in metadata.\n examples = [dict(e.metadata) for e in example_docs]\n # If example keys are provided, filter examples to those keys.\n if self.example_keys:\n examples = [{k: eg[k] for k in self.example_keys} for eg in examples]\n return examples\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[dict],\n embeddings: Embeddings,\n vectorstore_cls: Type[VectorStore],\n k: int = 4,\n input_keys: Optional[List[str]] = None,\n **vectorstore_cls_kwargs: Any,\n ) -> SemanticSimilarityExampleSelector:\n \"\"\"Create k-shot example selector using example list and embeddings.\n Reshuffles examples dynamically based on query similarity.\n Args:\n examples: List of examples to use in the prompt.\n embeddings: 
An initialized embedding API interface, e.g. OpenAIEmbeddings().\n vectorstore_cls: A vector store DB interface class, e.g. FAISS.\n k: Number of examples to select\n input_keys: If provided, the search is based on the input variables\n instead of all variables.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} +{"id": "87d7a6f9ef90-2", "text": "instead of all variables.\n vectorstore_cls_kwargs: optional kwargs containing url for vector store\n Returns:\n The ExampleSelector instantiated, backed by a vector store.\n \"\"\"\n if input_keys:\n string_examples = [\n \" \".join(sorted_values({k: eg[k] for k in input_keys}))\n for eg in examples\n ]\n else:\n string_examples = [\" \".join(sorted_values(eg)) for eg in examples]\n vectorstore = vectorstore_cls.from_texts(\n string_examples, embeddings, metadatas=examples, **vectorstore_cls_kwargs\n )\n return cls(vectorstore=vectorstore, k=k, input_keys=input_keys)\n[docs]class MaxMarginalRelevanceExampleSelector(SemanticSimilarityExampleSelector):\n \"\"\"ExampleSelector that selects examples based on Max Marginal Relevance.\n This was shown to improve performance in this paper:\n https://arxiv.org/pdf/2211.13892.pdf\n \"\"\"\n fetch_k: int = 20\n \"\"\"Number of examples to fetch to rerank.\"\"\"\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on semantic similarity.\"\"\"\n # Get the docs with the highest similarity.\n if self.input_keys:\n input_variables = {key: input_variables[key] for key in self.input_keys}\n query = \" \".join(sorted_values(input_variables))\n example_docs = self.vectorstore.max_marginal_relevance_search(\n query, k=self.k, fetch_k=self.fetch_k\n )\n # Get the examples from the metadata.\n # This assumes that examples are stored in metadata.\n examples = [dict(e.metadata) for e in example_docs]", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} +{"id": "87d7a6f9ef90-3", "text": "examples = [dict(e.metadata) for e in example_docs]\n # If example keys are provided, filter examples to those keys.\n if self.example_keys:\n examples = [{k: eg[k] for k in self.example_keys} for eg in examples]\n return examples\n[docs] @classmethod\n def from_examples(\n cls,\n examples: List[dict],\n embeddings: Embeddings,\n vectorstore_cls: Type[VectorStore],\n k: int = 4,\n input_keys: Optional[List[str]] = None,\n fetch_k: int = 20,\n **vectorstore_cls_kwargs: Any,\n ) -> MaxMarginalRelevanceExampleSelector:\n \"\"\"Create k-shot example selector using example list and embeddings.\n Reshuffles examples dynamically based on query similarity.\n Args:\n examples: List of examples to use in the prompt.\n embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().\n vectorstore_cls: A vector store DB interface class, e.g.
FAISS.\n k: Number of examples to select\n input_keys: If provided, the search is based on the input variables\n instead of all variables.\n vectorstore_cls_kwargs: optional kwargs containing url for vector store\n Returns:\n The ExampleSelector instantiated, backed by a vector store.\n \"\"\"\n if input_keys:\n string_examples = [\n \" \".join(sorted_values({k: eg[k] for k in input_keys}))\n for eg in examples\n ]\n else:\n string_examples = [\" \".join(sorted_values(eg)) for eg in examples]\n vectorstore = vectorstore_cls.from_texts(\n string_examples, embeddings, metadatas=examples, **vectorstore_cls_kwargs\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} +{"id": "87d7a6f9ef90-4", "text": ")\n return cls(vectorstore=vectorstore, k=k, fetch_k=fetch_k, input_keys=input_keys)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/semantic_similarity.html"} +{"id": "5e65121cb205-0", "text": "Source code for langchain.prompts.example_selector.length_based\n\"\"\"Select examples based on length.\"\"\"\nimport re\nfrom typing import Callable, Dict, List\nfrom pydantic import BaseModel, validator\nfrom langchain.prompts.example_selector.base import BaseExampleSelector\nfrom langchain.prompts.prompt import PromptTemplate\ndef _get_length_based(text: str) -> int:\n return len(re.split(\"\\n| \", text))\n[docs]class LengthBasedExampleSelector(BaseExampleSelector, BaseModel):\n \"\"\"Select examples based on length.\"\"\"\n examples: List[dict]\n \"\"\"A list of the examples that the prompt template expects.\"\"\"\n example_prompt: PromptTemplate\n \"\"\"Prompt template used to format the examples.\"\"\"\n get_text_length: Callable[[str], int] = _get_length_based\n \"\"\"Function to measure prompt length. 
Defaults to word count.\"\"\"\n max_length: int = 2048\n \"\"\"Max length for the prompt, beyond which examples are cut.\"\"\"\n example_text_lengths: List[int] = [] #: :meta private:\n[docs] def add_example(self, example: Dict[str, str]) -> None:\n \"\"\"Add new example to list.\"\"\"\n self.examples.append(example)\n string_example = self.example_prompt.format(**example)\n self.example_text_lengths.append(self.get_text_length(string_example))\n @validator(\"example_text_lengths\", always=True)\n def calculate_example_text_lengths(cls, v: List[int], values: Dict) -> List[int]:\n \"\"\"Calculate text lengths if they don't exist.\"\"\"\n # Check if text lengths were passed in\n if v:\n return v\n # If they were not, calculate them\n example_prompt = values[\"example_prompt\"]\n get_text_length = values[\"get_text_length\"]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/length_based.html"} +{"id": "5e65121cb205-1", "text": "get_text_length = values[\"get_text_length\"]\n string_examples = [example_prompt.format(**eg) for eg in values[\"examples\"]]\n return [get_text_length(eg) for eg in string_examples]\n[docs] def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n \"\"\"Select which examples to use based on the input lengths.\"\"\"\n inputs = \" \".join(input_variables.values())\n remaining_length = self.max_length - self.get_text_length(inputs)\n i = 0\n examples = []\n while remaining_length > 0 and i < len(self.examples):\n new_length = remaining_length - self.example_text_lengths[i]\n if new_length < 0:\n break\n else:\n examples.append(self.examples[i])\n remaining_length = new_length\n i += 1\n return examples", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/prompts/example_selector/length_based.html"} +{"id": "9880df7d8d1f-0", "text": "Source code for langchain.chat_models.azure_openai\n\"\"\"Azure OpenAI chat wrapper.\"\"\"\nfrom __future__ import 
annotations\nimport logging\nfrom typing import Any, Dict, Mapping\nfrom pydantic import root_validator\nfrom langchain.chat_models.openai import ChatOpenAI\nfrom langchain.schema import ChatResult\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureChatOpenAI(ChatOpenAI):\n \"\"\"Wrapper around Azure OpenAI Chat Completion API. To use this class you\n must have a deployed model on Azure OpenAI. Use `deployment_name` in the\n constructor to refer to the \"Model deployment name\" in the Azure portal.\n In addition, you should have the ``openai`` python package installed, and the\n following environment variables set or passed in constructor in lower case:\n - ``OPENAI_API_TYPE`` (default: ``azure``)\n - ``OPENAI_API_KEY``\n - ``OPENAI_API_BASE``\n - ``OPENAI_API_VERSION``\n - ``OPENAI_PROXY``\n For example, if you have `gpt-35-turbo` deployed, with the deployment name\n `35-turbo-dev`, the constructor should look like:\n .. code-block:: python\n AzureChatOpenAI(\n deployment_name=\"35-turbo-dev\",\n openai_api_version=\"2023-03-15-preview\",\n )\n Be aware the API version may change.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n \"\"\"\n deployment_name: str = \"\"\n openai_api_type: str = \"azure\"\n openai_api_base: str = \"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"} +{"id": "9880df7d8d1f-1", "text": "openai_api_base: str = \"\"\n openai_api_version: str = \"\"\n openai_api_key: str = \"\"\n openai_organization: str = \"\"\n openai_proxy: str = \"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values,\n \"openai_api_key\",\n \"OPENAI_API_KEY\",\n )\n values[\"openai_api_base\"] = 
get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n )\n values[\"openai_api_version\"] = get_from_dict_or_env(\n values,\n \"openai_api_version\",\n \"OPENAI_API_VERSION\",\n )\n values[\"openai_api_type\"] = get_from_dict_or_env(\n values,\n \"openai_api_type\",\n \"OPENAI_API_TYPE\",\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n try:\n import openai\n except ImportError:\n raise ImportError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"} +{"id": "9880df7d8d1f-2", "text": "except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. 
Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n if values[\"n\"] < 1:\n raise ValueError(\"n must be at least 1.\")\n if values[\"n\"] > 1 and values[\"streaming\"]:\n raise ValueError(\"n must be 1 when streaming.\")\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n return {\n **super()._default_params,\n \"engine\": self.deployment_name,\n }\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**self._default_params}\n @property\n def _invocation_params(self) -> Mapping[str, Any]:\n openai_creds = {\n \"api_type\": self.openai_api_type,\n \"api_version\": self.openai_api_version,\n }\n return {**openai_creds, **super()._invocation_params}\n @property\n def _llm_type(self) -> str:\n return \"azure-openai-chat\"\n def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:\n for res in response[\"choices\"]:\n if res.get(\"finish_reason\", None) == \"content_filter\":\n raise ValueError(\n \"Azure has not provided the response due to a content\"\n \" filter being triggered\"\n )\n return super()._create_chat_result(response)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/azure_openai.html"} +{"id": "3dba71088f66-0", "text": "Source code for langchain.chat_models.fake\n\"\"\"Fake ChatModel for testing purposes.\"\"\"\nfrom typing import Any, List, Mapping, Optional\nfrom langchain.callbacks.manager import CallbackManagerForLLMRun\nfrom langchain.chat_models.base import SimpleChatModel\nfrom langchain.schema import BaseMessage\n[docs]class FakeListChatModel(SimpleChatModel):\n \"\"\"Fake ChatModel for testing purposes.\"\"\"\n responses: List\n i: int = 0\n @property\n def _llm_type(self) -> str:\n return \"fake-list-chat-model\"\n def _call(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: 
Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Return the next response in the list.\"\"\"\n response = self.responses[self.i]\n self.i += 1\n return response\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\"responses\": self.responses}", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/fake.html"} +{"id": "aaab389ab84b-0", "text": "Source code for langchain.chat_models.openai\n\"\"\"OpenAI chat wrapper.\"\"\"\nfrom __future__ import annotations\nimport logging\nimport sys\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n List,\n Mapping,\n Optional,\n Tuple,\n Union,\n)\nfrom pydantic import Field, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatMessage,\n ChatResult,\n FunctionMessage,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n import tiktoken\nlogger = logging.getLogger(__name__)\ndef _import_tiktoken() -> Any:\n try:\n import tiktoken\n except ImportError:\n raise ValueError(\n \"Could not import tiktoken python package. \"\n \"This is needed in order to calculate get_token_ids. 
\"\n \"Please install it with `pip install tiktoken`.\"\n )\n return tiktoken\ndef _create_retry_decorator(llm: ChatOpenAI) -> Callable[[Any], Any]:\n import openai\n min_seconds = 1\n max_seconds = 60\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-1", "text": "return retry(\n reraise=True,\n stop=stop_after_attempt(llm.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\nasync def acompletion_with_retry(llm: ChatOpenAI, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the async completion call.\"\"\"\n retry_decorator = _create_retry_decorator(llm)\n @retry_decorator\n async def _completion_with_retry(**kwargs: Any) -> Any:\n # Use OpenAI's async api https://github.com/openai/openai-python#async-api\n return await llm.client.acreate(**kwargs)\n return await _completion_with_retry(**kwargs)\ndef _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:\n role = _dict[\"role\"]\n if role == \"user\":\n return HumanMessage(content=_dict[\"content\"])\n elif role == \"assistant\":\n content = _dict[\"content\"] or \"\" # OpenAI returns None for tool invocations\n if _dict.get(\"function_call\"):\n additional_kwargs = {\"function_call\": dict(_dict[\"function_call\"])}\n else:\n additional_kwargs = {}\n return AIMessage(content=content, additional_kwargs=additional_kwargs)\n elif role == \"system\":\n return SystemMessage(content=_dict[\"content\"])", 
"source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-2", "text": "elif role == \"system\":\n return SystemMessage(content=_dict[\"content\"])\n elif role == \"function\":\n return FunctionMessage(content=_dict[\"content\"], name=_dict[\"name\"])\n else:\n return ChatMessage(content=_dict[\"content\"], role=role)\ndef _convert_message_to_dict(message: BaseMessage) -> dict:\n if isinstance(message, ChatMessage):\n message_dict = {\"role\": message.role, \"content\": message.content}\n elif isinstance(message, HumanMessage):\n message_dict = {\"role\": \"user\", \"content\": message.content}\n elif isinstance(message, AIMessage):\n message_dict = {\"role\": \"assistant\", \"content\": message.content}\n if \"function_call\" in message.additional_kwargs:\n message_dict[\"function_call\"] = message.additional_kwargs[\"function_call\"]\n elif isinstance(message, SystemMessage):\n message_dict = {\"role\": \"system\", \"content\": message.content}\n elif isinstance(message, FunctionMessage):\n message_dict = {\n \"role\": \"function\",\n \"content\": message.content,\n \"name\": message.name,\n }\n else:\n raise ValueError(f\"Got unknown type {message}\")\n if \"name\" in message.additional_kwargs:\n message_dict[\"name\"] = message.additional_kwargs[\"name\"]\n return message_dict\n[docs]class ChatOpenAI(BaseChatModel):\n \"\"\"Wrapper around OpenAI Chat large language models.\n To use, you should have the ``openai`` python package installed, and the\n environment variable ``OPENAI_API_KEY`` set with your API key.\n Any parameters that are valid to be passed to the openai.create call can be passed\n in, even if not explicitly saved on this class.\n Example:\n .. code-block:: python", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-3", "text": "Example:\n .. 
code-block:: python\n from langchain.chat_models import ChatOpenAI\n openai = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n @property\n def lc_secrets(self) -> Dict[str, str]:\n return {\"openai_api_key\": \"OPENAI_API_KEY\"}\n @property\n def lc_serializable(self) -> bool:\n return True\n client: Any #: :meta private:\n model_name: str = Field(default=\"gpt-3.5-turbo\", alias=\"model\")\n \"\"\"Model name to use.\"\"\"\n temperature: float = 0.7\n \"\"\"What sampling temperature to use.\"\"\"\n model_kwargs: Dict[str, Any] = Field(default_factory=dict)\n \"\"\"Holds any model parameters valid for `create` call not explicitly specified.\"\"\"\n openai_api_key: Optional[str] = None\n \"\"\"Base URL path for API requests, \n leave blank if not using a proxy or service emulator.\"\"\"\n openai_api_base: Optional[str] = None\n openai_organization: Optional[str] = None\n # to support explicit proxy for OpenAI\n openai_proxy: Optional[str] = None\n request_timeout: Optional[Union[float, Tuple[float, float]]] = None\n \"\"\"Timeout for requests to OpenAI completion API. Default is 600 seconds.\"\"\"\n max_retries: int = 6\n \"\"\"Maximum number of retries to make when generating.\"\"\"\n streaming: bool = False\n \"\"\"Whether to stream the results or not.\"\"\"\n n: int = 1\n \"\"\"Number of chat completions to generate for each prompt.\"\"\"\n max_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to generate.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-4", "text": "max_tokens: Optional[int] = None\n \"\"\"Maximum number of tokens to generate.\"\"\"\n tiktoken_model_name: Optional[str] = None\n \"\"\"The model name to pass to tiktoken when using this class. \n Tiktoken is used to count the number of tokens in documents to constrain \n them to be under a certain limit. By default, when set to None, this will \n be the same as the embedding model name. 
However, there are some cases \n where you may want to use this Embedding class with a model name not \n supported by tiktoken. This can include when using Azure embeddings or \n when using one of the many model providers that expose an OpenAI-like \n API but with different models. In those cases, in order to avoid erroring \n when tiktoken is called, you can specify a model name to use here.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n allow_population_by_field_name = True\n @root_validator(pre=True)\n def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"Build extra kwargs from additional params that were passed in.\"\"\"\n all_required_field_names = cls.all_required_field_names()\n extra = values.get(\"model_kwargs\", {})\n for field_name in list(values):\n if field_name in extra:\n raise ValueError(f\"Found {field_name} supplied twice.\")\n if field_name not in all_required_field_names:\n logger.warning(\n f\"\"\"WARNING! {field_name} is not default parameter.\n {field_name} was transferred to model_kwargs.\n Please confirm that {field_name} is what you intended.\"\"\"\n )\n extra[field_name] = values.pop(field_name)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-5", "text": ")\n extra[field_name] = values.pop(field_name)\n invalid_model_kwargs = all_required_field_names.intersection(extra.keys())\n if invalid_model_kwargs:\n raise ValueError(\n f\"Parameters {invalid_model_kwargs} should be specified explicitly. 
\"\n f\"Instead they were passed in as part of `model_kwargs` parameter.\"\n )\n values[\"model_kwargs\"] = extra\n return values\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n values[\"openai_api_key\"] = get_from_dict_or_env(\n values, \"openai_api_key\", \"OPENAI_API_KEY\"\n )\n values[\"openai_organization\"] = get_from_dict_or_env(\n values,\n \"openai_organization\",\n \"OPENAI_ORGANIZATION\",\n default=\"\",\n )\n values[\"openai_api_base\"] = get_from_dict_or_env(\n values,\n \"openai_api_base\",\n \"OPENAI_API_BASE\",\n default=\"\",\n )\n values[\"openai_proxy\"] = get_from_dict_or_env(\n values,\n \"openai_proxy\",\n \"OPENAI_PROXY\",\n default=\"\",\n )\n try:\n import openai\n except ImportError:\n raise ValueError(\n \"Could not import openai python package. \"\n \"Please install it with `pip install openai`.\"\n )\n try:\n values[\"client\"] = openai.ChatCompletion\n except AttributeError:\n raise ValueError(\n \"`openai` has no `ChatCompletion` attribute, this is likely \"\n \"due to an old version of the openai package. Try upgrading it \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-6", "text": "\"due to an old version of the openai package. 
Try upgrading it \"\n \"with `pip install --upgrade openai`.\"\n )\n if values[\"n\"] < 1:\n raise ValueError(\"n must be at least 1.\")\n if values[\"n\"] > 1 and values[\"streaming\"]:\n raise ValueError(\"n must be 1 when streaming.\")\n return values\n @property\n def _default_params(self) -> Dict[str, Any]:\n \"\"\"Get the default parameters for calling OpenAI API.\"\"\"\n return {\n \"model\": self.model_name,\n \"request_timeout\": self.request_timeout,\n \"max_tokens\": self.max_tokens,\n \"stream\": self.streaming,\n \"n\": self.n,\n \"temperature\": self.temperature,\n **self.model_kwargs,\n }\n def _create_retry_decorator(self) -> Callable[[Any], Any]:\n import openai\n min_seconds = 1\n max_seconds = 60\n # Wait 2^x * 1 second between each retry starting with\n # 4 seconds, then up to 10 seconds, then 10 seconds afterwards\n return retry(\n reraise=True,\n stop=stop_after_attempt(self.max_retries),\n wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(openai.error.Timeout)\n | retry_if_exception_type(openai.error.APIError)\n | retry_if_exception_type(openai.error.APIConnectionError)\n | retry_if_exception_type(openai.error.RateLimitError)\n | retry_if_exception_type(openai.error.ServiceUnavailableError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-7", "text": "),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\n[docs] def completion_with_retry(self, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = self._create_retry_decorator()\n @retry_decorator\n def _completion_with_retry(**kwargs: Any) -> Any:\n return self.client.create(**kwargs)\n return _completion_with_retry(**kwargs)\n def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:\n overall_token_usage: dict = {}\n for 
output in llm_outputs:\n if output is None:\n # Happens in streaming\n continue\n token_usage = output[\"token_usage\"]\n for k, v in token_usage.items():\n if k in overall_token_usage:\n overall_token_usage[k] += v\n else:\n overall_token_usage[k] = v\n return {\"token_usage\": overall_token_usage, \"model_name\": self.model_name}\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n message_dicts, params = self._create_message_dicts(messages, stop)\n params = {**params, **kwargs}\n if self.streaming:\n inner_completion = \"\"\n role = \"assistant\"\n params[\"stream\"] = True\n function_call: Optional[dict] = None\n for stream_resp in self.completion_with_retry(\n messages=message_dicts, **params\n ):\n role = stream_resp[\"choices\"][0][\"delta\"].get(\"role\", role)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-8", "text": "role = stream_resp[\"choices\"][0][\"delta\"].get(\"role\", role)\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\") or \"\"\n inner_completion += token\n _function_call = stream_resp[\"choices\"][0][\"delta\"].get(\"function_call\")\n if _function_call:\n if function_call is None:\n function_call = _function_call\n else:\n function_call[\"arguments\"] += _function_call[\"arguments\"]\n if run_manager:\n run_manager.on_llm_new_token(token)\n message = _convert_dict_to_message(\n {\n \"content\": inner_completion,\n \"role\": role,\n \"function_call\": function_call,\n }\n )\n return ChatResult(generations=[ChatGeneration(message=message)])\n response = self.completion_with_retry(messages=message_dicts, **params)\n return self._create_chat_result(response)\n def _create_message_dicts(\n self, messages: List[BaseMessage], stop: Optional[List[str]]\n ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:\n params = 
dict(self._invocation_params)\n if stop is not None:\n if \"stop\" in params:\n raise ValueError(\"`stop` found in both the input and default params.\")\n params[\"stop\"] = stop\n message_dicts = [_convert_message_to_dict(m) for m in messages]\n return message_dicts, params\n def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:\n generations = []\n for res in response[\"choices\"]:\n message = _convert_dict_to_message(res[\"message\"])\n gen = ChatGeneration(message=message)\n generations.append(gen)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-9", "text": "gen = ChatGeneration(message=message)\n generations.append(gen)\n llm_output = {\"token_usage\": response[\"usage\"], \"model_name\": self.model_name}\n return ChatResult(generations=generations, llm_output=llm_output)\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n message_dicts, params = self._create_message_dicts(messages, stop)\n params = {**params, **kwargs}\n if self.streaming:\n inner_completion = \"\"\n role = \"assistant\"\n params[\"stream\"] = True\n function_call: Optional[dict] = None\n async for stream_resp in await acompletion_with_retry(\n self, messages=message_dicts, **params\n ):\n role = stream_resp[\"choices\"][0][\"delta\"].get(\"role\", role)\n token = stream_resp[\"choices\"][0][\"delta\"].get(\"content\", \"\")\n inner_completion += token or \"\"\n _function_call = stream_resp[\"choices\"][0][\"delta\"].get(\"function_call\")\n if _function_call:\n if function_call is None:\n function_call = _function_call\n else:\n function_call[\"arguments\"] += _function_call[\"arguments\"]\n if run_manager:\n await run_manager.on_llm_new_token(token)\n message = _convert_dict_to_message(\n {\n \"content\": inner_completion,\n \"role\": role,\n 
\"function_call\": function_call,\n }\n )\n return ChatResult(generations=[ChatGeneration(message=message)])\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-10", "text": "return ChatResult(generations=[ChatGeneration(message=message)])\n else:\n response = await acompletion_with_retry(\n self, messages=message_dicts, **params\n )\n return self._create_chat_result(response)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {**{\"model_name\": self.model_name}, **self._default_params}\n @property\n def _invocation_params(self) -> Mapping[str, Any]:\n \"\"\"Get the parameters used to invoke the model.\"\"\"\n openai_creds: Dict[str, Any] = {\n \"api_key\": self.openai_api_key,\n \"api_base\": self.openai_api_base,\n \"organization\": self.openai_organization,\n \"model\": self.model_name,\n }\n if self.openai_proxy:\n import openai\n openai.proxy = {\"http\": self.openai_proxy, \"https\": self.openai_proxy} # type: ignore[assignment] # noqa: E501\n return {**openai_creds, **self._default_params}\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of chat model.\"\"\"\n return \"openai-chat\"\n def _get_encoding_model(self) -> Tuple[str, tiktoken.Encoding]:\n tiktoken_ = _import_tiktoken()\n if self.tiktoken_model_name is not None:\n model = self.tiktoken_model_name\n else:\n model = self.model_name\n if model == \"gpt-3.5-turbo\":\n # gpt-3.5-turbo may change over time.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-11", "text": "# gpt-3.5-turbo may change over time.\n # Returning num tokens assuming gpt-3.5-turbo-0301.\n model = \"gpt-3.5-turbo-0301\"\n elif model == \"gpt-4\":\n # gpt-4 may change over time.\n # Returning num tokens assuming gpt-4-0314.\n model = \"gpt-4-0314\"\n # Returns the number of tokens used by a list of 
messages.\n try:\n encoding = tiktoken_.encoding_for_model(model)\n except KeyError:\n logger.warning(\"Warning: model not found. Using cl100k_base encoding.\")\n model = \"cl100k_base\"\n encoding = tiktoken_.get_encoding(model)\n return model, encoding\n[docs] def get_token_ids(self, text: str) -> List[int]:\n \"\"\"Get the tokens present in the text with tiktoken package.\"\"\"\n # tiktoken NOT supported for Python 3.7 or below\n if sys.version_info[1] <= 7:\n return super().get_token_ids(text)\n _, encoding_model = self._get_encoding_model()\n return encoding_model.encode(text)\n[docs] def get_num_tokens_from_messages(self, messages: List[BaseMessage]) -> int:\n \"\"\"Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.\n Official documentation: https://github.com/openai/openai-cookbook/blob/\n main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb\"\"\"\n if sys.version_info[1] <= 7:\n return super().get_num_tokens_from_messages(messages)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "aaab389ab84b-12", "text": "return super().get_num_tokens_from_messages(messages)\n model, encoding = self._get_encoding_model()\n if model.startswith(\"gpt-3.5-turbo\"):\n # every message follows {role/name}\\n{content}\\n\n tokens_per_message = 4\n # if there's a name, the role is omitted\n tokens_per_name = -1\n elif model.startswith(\"gpt-4\"):\n tokens_per_message = 3\n tokens_per_name = 1\n else:\n raise NotImplementedError(\n f\"get_num_tokens_from_messages() is not presently implemented \"\n f\"for model {model}.\"\n \"See https://github.com/openai/openai-python/blob/main/chatml.md for \"\n \"information on how messages are converted to tokens.\"\n )\n num_tokens = 0\n messages_dict = [_convert_message_to_dict(m) for m in messages]\n for message in messages_dict:\n num_tokens += tokens_per_message\n for key, value in message.items():\n num_tokens += len(encoding.encode(value))\n if key == 
\"name\":\n num_tokens += tokens_per_name\n # every reply is primed with assistant\n num_tokens += 3\n return num_tokens", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html"} +{"id": "46fd8f075b37-0", "text": "Source code for langchain.chat_models.anthropic\nfrom typing import Any, Dict, List, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.llms.anthropic import _AnthropicCommon\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatMessage,\n ChatResult,\n HumanMessage,\n SystemMessage,\n)\n[docs]class ChatAnthropic(BaseChatModel, _AnthropicCommon):\n r\"\"\"Wrapper around Anthropic's large language model.\n To use, you should have the ``anthropic`` python package installed, and the\n environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass\n it as a named parameter to the constructor.\n Example:\n .. 
code-block:: python\n from langchain.chat_models import ChatAnthropic\n model = ChatAnthropic(model=\"\", anthropic_api_key=\"my-api-key\")\n \"\"\"\n @property\n def _llm_type(self) -> str:\n \"\"\"Return type of chat model.\"\"\"\n return \"anthropic-chat\"\n @property\n def lc_serializable(self) -> bool:\n return True\n def _convert_one_message_to_text(self, message: BaseMessage) -> str:\n if isinstance(message, ChatMessage):\n message_text = f\"\\n\\n{message.role.capitalize()}: {message.content}\"\n elif isinstance(message, HumanMessage):\n message_text = f\"{self.HUMAN_PROMPT} {message.content}\"\n elif isinstance(message, AIMessage):\n message_text = f\"{self.AI_PROMPT} {message.content}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"} +{"id": "46fd8f075b37-1", "text": "message_text = f\"{self.AI_PROMPT} {message.content}\"\n elif isinstance(message, SystemMessage):\n message_text = f\"{self.HUMAN_PROMPT} {message.content}\"\n else:\n raise ValueError(f\"Got unknown type {message}\")\n return message_text\n def _convert_messages_to_text(self, messages: List[BaseMessage]) -> str:\n \"\"\"Format a list of messages into a single string with necessary newlines.\n Args:\n messages (List[BaseMessage]): List of BaseMessage to combine.\n Returns:\n str: Combined string with necessary newlines.\n \"\"\"\n return \"\".join(\n self._convert_one_message_to_text(message) for message in messages\n )\n def _convert_messages_to_prompt(self, messages: List[BaseMessage]) -> str:\n \"\"\"Format a list of messages into a full prompt for the Anthropic model.\n Args:\n messages (List[BaseMessage]): List of BaseMessage to combine.\n Returns:\n str: Combined string with necessary HUMAN_PROMPT and AI_PROMPT tags.\n \"\"\"\n messages = messages.copy() # don't mutate the original list\n if not self.AI_PROMPT:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n if not isinstance(messages[-1], 
AIMessage):\n messages.append(AIMessage(content=\"\"))\n text = self._convert_messages_to_text(messages)\n return (\n text.rstrip()\n ) # trim off the trailing ' ' that might come from the \"Assistant: \"\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"} +{"id": "46fd8f075b37-2", "text": "run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n prompt = self._convert_messages_to_prompt(messages)\n params: Dict[str, Any] = {\"prompt\": prompt, **self._default_params, **kwargs}\n if stop:\n params[\"stop_sequences\"] = stop\n if self.streaming:\n completion = \"\"\n stream_resp = self.client.completion_stream(**params)\n for data in stream_resp:\n delta = data[\"completion\"][len(completion) :]\n completion = data[\"completion\"]\n if run_manager:\n run_manager.on_llm_new_token(\n delta,\n )\n else:\n response = self.client.completion(**params)\n completion = response[\"completion\"]\n message = AIMessage(content=completion)\n return ChatResult(generations=[ChatGeneration(message=message)])\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n prompt = self._convert_messages_to_prompt(messages)\n params: Dict[str, Any] = {\"prompt\": prompt, **self._default_params, **kwargs}\n if stop:\n params[\"stop_sequences\"] = stop\n if self.streaming:\n completion = \"\"\n stream_resp = await self.client.acompletion_stream(**params)\n async for data in stream_resp:\n delta = data[\"completion\"][len(completion) :]\n completion = data[\"completion\"]\n if run_manager:\n await run_manager.on_llm_new_token(\n delta,\n )\n else:", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"} +{"id": "46fd8f075b37-3", "text": "delta,\n )\n else:\n response = await self.client.acompletion(**params)\n completion = response[\"completion\"]\n message = AIMessage(content=completion)\n return ChatResult(generations=[ChatGeneration(message=message)])\n[docs] def get_num_tokens(self, text: str) -> int:\n \"\"\"Calculate number of tokens.\"\"\"\n if not self.count_tokens:\n raise NameError(\"Please ensure the anthropic package is loaded\")\n return self.count_tokens(text)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/anthropic.html"} +{"id": "021e83cffc34-0", "text": "Source code for langchain.chat_models.google_palm\n\"\"\"Wrapper around Google's PaLM Chat API.\"\"\"\nfrom __future__ import annotations\nimport logging\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Mapping, Optional\nfrom pydantic import BaseModel, root_validator\nfrom tenacity import (\n before_sleep_log,\n retry,\n retry_if_exception_type,\n stop_after_attempt,\n wait_exponential,\n)\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models.base import BaseChatModel\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatMessage,\n ChatResult,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n import google.generativeai as genai\nlogger = logging.getLogger(__name__)\nclass ChatGooglePalmError(Exception):\n \"\"\"Error raised when there is an issue with the Google PaLM API.\"\"\"\n pass\ndef _truncate_at_stop_tokens(\n text: str,\n stop: Optional[List[str]],\n) -> str:\n \"\"\"Truncates text at the earliest stop token found.\"\"\"\n if stop is None:\n return text\n for stop_token in stop:\n stop_token_idx = text.find(stop_token)\n if stop_token_idx != -1:\n text = text[:stop_token_idx]\n 
return text\ndef _response_to_result(\n response: genai.types.ChatResponse,\n stop: Optional[List[str]],\n) -> ChatResult:\n \"\"\"Converts a PaLM API response into a LangChain ChatResult.\"\"\"\n if not response.candidates:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} +{"id": "021e83cffc34-1", "text": "if not response.candidates:\n raise ChatGooglePalmError(\"ChatResponse must have at least one candidate.\")\n generations: List[ChatGeneration] = []\n for candidate in response.candidates:\n author = candidate.get(\"author\")\n if author is None:\n raise ChatGooglePalmError(f\"ChatResponse must have an author: {candidate}\")\n content = _truncate_at_stop_tokens(candidate.get(\"content\", \"\"), stop)\n if content is None:\n raise ChatGooglePalmError(f\"ChatResponse must have a content: {candidate}\")\n if author == \"ai\":\n generations.append(\n ChatGeneration(text=content, message=AIMessage(content=content))\n )\n elif author == \"human\":\n generations.append(\n ChatGeneration(\n text=content,\n message=HumanMessage(content=content),\n )\n )\n else:\n generations.append(\n ChatGeneration(\n text=content,\n message=ChatMessage(role=author, content=content),\n )\n )\n return ChatResult(generations=generations)\ndef _messages_to_prompt_dict(\n input_messages: List[BaseMessage],\n) -> genai.types.MessagePromptDict:\n \"\"\"Converts a list of LangChain messages into a PaLM API MessagePrompt structure.\"\"\"\n import google.generativeai as genai\n context: str = \"\"\n examples: List[genai.types.MessageDict] = []\n messages: List[genai.types.MessageDict] = []\n remaining = list(enumerate(input_messages))\n while remaining:\n index, input_message = remaining.pop(0)\n if isinstance(input_message, SystemMessage):\n if index != 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} +{"id": "021e83cffc34-2", "text": "if isinstance(input_message, 
SystemMessage):\n if index != 0:\n raise ChatGooglePalmError(\"System message must be first input message.\")\n context = input_message.content\n elif isinstance(input_message, HumanMessage) and input_message.example:\n if messages:\n raise ChatGooglePalmError(\n \"Message examples must come before other messages.\"\n )\n _, next_input_message = remaining.pop(0)\n if isinstance(next_input_message, AIMessage) and next_input_message.example:\n examples.extend(\n [\n genai.types.MessageDict(\n author=\"human\", content=input_message.content\n ),\n genai.types.MessageDict(\n author=\"ai\", content=next_input_message.content\n ),\n ]\n )\n else:\n raise ChatGooglePalmError(\n \"Human example message must be immediately followed by an \"\n \" AI example response.\"\n )\n elif isinstance(input_message, AIMessage) and input_message.example:\n raise ChatGooglePalmError(\n \"AI example message must be immediately preceded by a Human \"\n \"example message.\"\n )\n elif isinstance(input_message, AIMessage):\n messages.append(\n genai.types.MessageDict(author=\"ai\", content=input_message.content)\n )\n elif isinstance(input_message, HumanMessage):\n messages.append(\n genai.types.MessageDict(author=\"human\", content=input_message.content)\n )\n elif isinstance(input_message, ChatMessage):\n messages.append(\n genai.types.MessageDict(\n author=input_message.role, content=input_message.content\n )\n )\n else:\n raise ChatGooglePalmError(\n \"Messages without an explicit role not supported by PaLM API.\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} +{"id": "021e83cffc34-3", "text": "\"Messages without an explicit role not supported by PaLM API.\"\n )\n return genai.types.MessagePromptDict(\n context=context,\n examples=examples,\n messages=messages,\n )\ndef _create_retry_decorator() -> Callable[[Any], Any]:\n \"\"\"Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions\"\"\"\n import 
google.api_core.exceptions\n multiplier = 2\n min_seconds = 1\n max_seconds = 60\n max_retries = 10\n return retry(\n reraise=True,\n stop=stop_after_attempt(max_retries),\n wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),\n retry=(\n retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)\n | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)\n | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)\n ),\n before_sleep=before_sleep_log(logger, logging.WARNING),\n )\ndef chat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n def _chat_with_retry(**kwargs: Any) -> Any:\n return llm.client.chat(**kwargs)\n return _chat_with_retry(**kwargs)\nasync def achat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:\n \"\"\"Use tenacity to retry the async completion call.\"\"\"\n retry_decorator = _create_retry_decorator()\n @retry_decorator\n async def _achat_with_retry(**kwargs: Any) -> Any:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} +{"id": "021e83cffc34-4", "text": "async def _achat_with_retry(**kwargs: Any) -> Any:\n # Use the PaLM client's async chat API\n return await llm.client.chat_async(**kwargs)\n return await _achat_with_retry(**kwargs)\n[docs]class ChatGooglePalm(BaseChatModel, BaseModel):\n \"\"\"Wrapper around Google's PaLM Chat API.\n To use, you must have the google.generativeai Python package installed and\n either:\n 1. The ``GOOGLE_API_KEY`` environment variable set with your API key, or\n 2. Pass your API key using the google_api_key kwarg to the ChatGooglePalm\n constructor.\n Example:\n .. 
code-block:: python\n from langchain.chat_models import ChatGooglePalm\n chat = ChatGooglePalm()\n \"\"\"\n client: Any #: :meta private:\n model_name: str = \"models/chat-bison-001\"\n \"\"\"Model name to use.\"\"\"\n google_api_key: Optional[str] = None\n temperature: Optional[float] = None\n \"\"\"Run inference with this temperature. Must be in the closed\n interval [0.0, 1.0].\"\"\"\n top_p: Optional[float] = None\n \"\"\"Decode using nucleus sampling: consider the smallest set of tokens whose\n probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].\"\"\"\n top_k: Optional[int] = None\n \"\"\"Decode using top-k sampling: consider the set of top_k most probable tokens.\n Must be positive.\"\"\"\n n: int = 1\n \"\"\"Number of chat completions to generate for each prompt. Note that the API may\n not return the full n completions if duplicates are generated.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} +{"id": "021e83cffc34-5", "text": "not return the full n completions if duplicates are generated.\"\"\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate api key, python package exists, temperature, top_p, and top_k.\"\"\"\n google_api_key = get_from_dict_or_env(\n values, \"google_api_key\", \"GOOGLE_API_KEY\"\n )\n try:\n import google.generativeai as genai\n genai.configure(api_key=google_api_key)\n except ImportError:\n raise ChatGooglePalmError(\n \"Could not import google.generativeai python package. 
\"\n \"Please install it with `pip install google-generativeai`\"\n )\n values[\"client\"] = genai\n if values[\"temperature\"] is not None and not 0 <= values[\"temperature\"] <= 1:\n raise ValueError(\"temperature must be in the range [0.0, 1.0]\")\n if values[\"top_p\"] is not None and not 0 <= values[\"top_p\"] <= 1:\n raise ValueError(\"top_p must be in the range [0.0, 1.0]\")\n if values[\"top_k\"] is not None and values[\"top_k\"] <= 0:\n raise ValueError(\"top_k must be positive\")\n return values\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n prompt = _messages_to_prompt_dict(messages)\n response: genai.types.ChatResponse = chat_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} +{"id": "021e83cffc34-6", "text": "self,\n model=self.model_name,\n prompt=prompt,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n candidate_count=self.n,\n **kwargs,\n )\n return _response_to_result(response, stop)\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n prompt = _messages_to_prompt_dict(messages)\n response: genai.types.ChatResponse = await achat_with_retry(\n self,\n model=self.model_name,\n prompt=prompt,\n temperature=self.temperature,\n top_p=self.top_p,\n top_k=self.top_k,\n candidate_count=self.n,\n )\n return _response_to_result(response, stop)\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n \"\"\"Get the identifying parameters.\"\"\"\n return {\n \"model_name\": self.model_name,\n \"temperature\": self.temperature,\n \"top_p\": self.top_p,\n \"top_k\": self.top_k,\n \"n\": self.n,\n }\n @property\n def 
_llm_type(self) -> str:\n return \"google-palm-chat\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/google_palm.html"} +{"id": "fe905092523d-0", "text": "Source code for langchain.chat_models.promptlayer_openai\n\"\"\"PromptLayer wrapper.\"\"\"\nimport datetime\nfrom typing import Any, List, Mapping, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.schema import BaseMessage, ChatResult\n[docs]class PromptLayerChatOpenAI(ChatOpenAI):\n \"\"\"Wrapper around OpenAI Chat large language models and PromptLayer.\n To use, you should have the ``openai`` and ``promptlayer`` python\n packages installed, and the environment variables ``OPENAI_API_KEY``\n and ``PROMPTLAYER_API_KEY`` set with your OpenAI API key and\n PromptLayer key respectively.\n All parameters that can be passed to the OpenAI LLM can also\n be passed here. The PromptLayerChatOpenAI adds two optional\n parameters:\n ``pl_tags``: List of strings to tag the request with.\n ``return_pl_id``: If True, the PromptLayer request ID will be\n returned in the ``generation_info`` field of the\n ``Generation`` object.\n Example:\n .. 
code-block:: python\n from langchain.chat_models import PromptLayerChatOpenAI\n openai = PromptLayerChatOpenAI(model_name=\"gpt-3.5-turbo\")\n \"\"\"\n pl_tags: Optional[List[str]]\n return_pl_id: Optional[bool] = False\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any\n ) -> ChatResult:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"} +{"id": "fe905092523d-1", "text": "**kwargs: Any\n ) -> ChatResult:\n \"\"\"Call ChatOpenAI generate and then call PromptLayer API to log the request.\"\"\"\n from promptlayer.utils import get_api_key, promptlayer_api_request\n request_start_time = datetime.datetime.now().timestamp()\n generated_responses = super()._generate(messages, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n message_dicts, params = super()._create_message_dicts(messages, stop)\n for i, generation in enumerate(generated_responses.generations):\n response_dict, params = super()._create_message_dicts(\n [generation.message], stop\n )\n params = {**params, **kwargs}\n pl_request_id = promptlayer_api_request(\n \"langchain.PromptLayerChatOpenAI\",\n \"langchain\",\n message_dicts,\n params,\n self.pl_tags,\n response_dict,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any\n ) -> ChatResult:\n \"\"\"Call ChatOpenAI agenerate and then call PromptLayer to log.\"\"\"\n from promptlayer.utils 
import get_api_key, promptlayer_api_request_async\n request_start_time = datetime.datetime.now().timestamp()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"} +{"id": "fe905092523d-2", "text": "request_start_time = datetime.datetime.now().timestamp()\n generated_responses = await super()._agenerate(messages, stop, run_manager)\n request_end_time = datetime.datetime.now().timestamp()\n message_dicts, params = super()._create_message_dicts(messages, stop)\n for i, generation in enumerate(generated_responses.generations):\n response_dict, params = super()._create_message_dicts(\n [generation.message], stop\n )\n params = {**params, **kwargs}\n pl_request_id = await promptlayer_api_request_async(\n \"langchain.PromptLayerChatOpenAI.async\",\n \"langchain\",\n message_dicts,\n params,\n self.pl_tags,\n response_dict,\n request_start_time,\n request_end_time,\n get_api_key(),\n return_pl_id=self.return_pl_id,\n )\n if self.return_pl_id:\n if generation.generation_info is None or not isinstance(\n generation.generation_info, dict\n ):\n generation.generation_info = {}\n generation.generation_info[\"pl_request_id\"] = pl_request_id\n return generated_responses\n @property\n def _llm_type(self) -> str:\n return \"promptlayer-openai-chat\"\n @property\n def _identifying_params(self) -> Mapping[str, Any]:\n return {\n **super()._identifying_params,\n \"pl_tags\": self.pl_tags,\n \"return_pl_id\": self.return_pl_id,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/promptlayer_openai.html"} +{"id": "fd3dd65946fd-0", "text": "Source code for langchain.chat_models.vertexai\n\"\"\"Wrapper around Google VertexAI chat-based models.\"\"\"\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForLLMRun,\n CallbackManagerForLLMRun,\n)\nfrom 
langchain.chat_models.base import BaseChatModel\nfrom langchain.llms.vertexai import _VertexAICommon, is_codey_model\nfrom langchain.schema import (\n AIMessage,\n BaseMessage,\n ChatGeneration,\n ChatResult,\n HumanMessage,\n SystemMessage,\n)\nfrom langchain.utilities.vertexai import raise_vertex_import_error\n@dataclass\nclass _MessagePair:\n \"\"\"InputOutputTextPair represents a pair of input and output texts.\"\"\"\n question: HumanMessage\n answer: AIMessage\n@dataclass\nclass _ChatHistory:\n \"\"\"Chat history: a list of question/answer pairs plus an optional system message.\"\"\"\n history: List[_MessagePair] = field(default_factory=list)\n system_message: Optional[SystemMessage] = None\ndef _parse_chat_history(history: List[BaseMessage]) -> _ChatHistory:\n \"\"\"Parse a sequence of messages into history.\n A sequence should be either (SystemMessage, HumanMessage, AIMessage,\n HumanMessage, AIMessage, ...) or (HumanMessage, AIMessage, HumanMessage,\n AIMessage, ...). CodeChat does not support SystemMessage.\n Args:\n history: The list of messages to re-create the history of the chat.\n Returns:\n A parsed chat history.\n Raises:\n ValueError: If a sequence of messages is odd, or a human message is not followed", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"} +{"id": "fd3dd65946fd-1", "text": "ValueError: If a sequence of messages is odd, or a human message is not followed\n by a message from AI (e.g., Human, Human, AI or AI, AI, Human).\n \"\"\"\n if not history:\n return _ChatHistory()\n first_message = history[0]\n system_message = first_message if isinstance(first_message, SystemMessage) else None\n chat_history = _ChatHistory(system_message=system_message)\n messages_left = history[1:] if system_message else history\n if len(messages_left) % 2 != 0:\n raise ValueError(\n f\"Number of messages in history should be even, got {len(messages_left)}!\"\n )\n for question, answer in zip(messages_left[::2], 
messages_left[1::2]):\n if not isinstance(question, HumanMessage) or not isinstance(answer, AIMessage):\n raise ValueError(\n \"Expected a human message followed by an AI message, \"\n f\"got {question.type}, {answer.type}.\"\n )\n chat_history.history.append(_MessagePair(question=question, answer=answer))\n return chat_history\n[docs]class ChatVertexAI(_VertexAICommon, BaseChatModel):\n \"\"\"Wrapper around Vertex AI large language models.\"\"\"\n model_name: str = \"chat-bison\"\n @root_validator()\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that the python package exists in environment.\"\"\"\n cls._try_init_vertexai(values)\n try:\n if is_codey_model(values[\"model_name\"]):\n from vertexai.preview.language_models import CodeChatModel\n values[\"client\"] = CodeChatModel.from_pretrained(values[\"model_name\"])\n else:\n from vertexai.preview.language_models import ChatModel", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"} +{"id": "fd3dd65946fd-2", "text": "else:\n from vertexai.preview.language_models import ChatModel\n values[\"client\"] = ChatModel.from_pretrained(values[\"model_name\"])\n except ImportError:\n raise_vertex_import_error()\n return values\n def _generate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n \"\"\"Generate next turn in the conversation.\n Args:\n messages: The history of the conversation as a list of messages. 
Code chat\n does not support context.\n stop: The list of stop words (optional).\n run_manager: The CallbackManager for LLM run, it's not used at the moment.\n Returns:\n The ChatResult that contains outputs generated by the model.\n Raises:\n ValueError: if the last message in the list is not from human.\n \"\"\"\n if not messages:\n raise ValueError(\n \"You should provide at least one message to start the chat!\"\n )\n question = messages[-1]\n if not isinstance(question, HumanMessage):\n raise ValueError(\n f\"Last message in the list should be from human, got {question.type}.\"\n )\n history = _parse_chat_history(messages[:-1])\n context = history.system_message.content if history.system_message else None\n params = {**self._default_params, **kwargs}\n if not self.is_codey_model:\n chat = self.client.start_chat(context=context, **params)\n else:\n chat = self.client.start_chat(**params)\n for pair in history.history:\n chat._history.append((pair.question.content, pair.answer.content))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"} +{"id": "fd3dd65946fd-3", "text": "chat._history.append((pair.question.content, pair.answer.content))\n response = chat.send_message(question.content, **params)\n text = self._enforce_stop_words(response.text, stop)\n return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])\n async def _agenerate(\n self,\n messages: List[BaseMessage],\n stop: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,\n **kwargs: Any,\n ) -> ChatResult:\n raise NotImplementedError(\n \"\"\"Vertex AI doesn't support async requests at the moment.\"\"\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/vertexai.html"} +{"id": "398a680a2c30-0", "text": "Source code for langchain.tools.base\n\"\"\"Base implementation for tools or skills.\"\"\"\nfrom __future__ import annotations\nimport warnings\nfrom abc 
import ABC, abstractmethod\nfrom inspect import signature\nfrom typing import Any, Awaitable, Callable, Dict, Optional, Tuple, Type, Union\nfrom pydantic import (\n BaseModel,\n Extra,\n Field,\n create_model,\n root_validator,\n validate_arguments,\n)\nfrom pydantic.main import ModelMetaclass\nfrom langchain.callbacks.base import BaseCallbackManager\nfrom langchain.callbacks.manager import (\n AsyncCallbackManager,\n AsyncCallbackManagerForToolRun,\n CallbackManager,\n CallbackManagerForToolRun,\n Callbacks,\n)\nclass SchemaAnnotationError(TypeError):\n \"\"\"Raised when 'args_schema' is missing or has an incorrect type annotation.\"\"\"\nclass ToolMetaclass(ModelMetaclass):\n \"\"\"Metaclass for BaseTool to ensure the provided args_schema\n isn't silently ignored.\"\"\"\n def __new__(\n cls: Type[ToolMetaclass], name: str, bases: Tuple[Type, ...], dct: dict\n ) -> ToolMetaclass:\n \"\"\"Create the definition of the new tool class.\"\"\"\n schema_type: Optional[Type[BaseModel]] = dct.get(\"args_schema\")\n if schema_type is not None:\n schema_annotations = dct.get(\"__annotations__\", {})\n args_schema_type = schema_annotations.get(\"args_schema\", None)\n if args_schema_type is None or args_schema_type == BaseModel:\n # Throw errors for common mis-annotations.\n # TODO: Use get_args / get_origin and fully\n # specify valid annotations.\n typehint_mandate = \"\"\"\nclass ChildTool(BaseTool):\n ...\n args_schema: Type[BaseModel] = SchemaClass\n ...\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-1", "text": "...\n args_schema: Type[BaseModel] = SchemaClass\n ...\"\"\"\n raise SchemaAnnotationError(\n f\"Tool definition for {name} must include valid type annotations\"\n f\" for argument 'args_schema' to behave as expected.\\n\"\n f\"Expected annotation of 'Type[BaseModel]'\"\n f\" but got '{args_schema_type}'.\\n\"\n f\"Expected class looks like:\\n\"\n f\"{typehint_mandate}\"\n )\n # Pass 
through to Pydantic's metaclass\n return super().__new__(cls, name, bases, dct)\ndef _create_subset_model(\n name: str, model: BaseModel, field_names: list\n) -> Type[BaseModel]:\n \"\"\"Create a pydantic model with only a subset of model's fields.\"\"\"\n fields = {}\n for field_name in field_names:\n field = model.__fields__[field_name]\n fields[field_name] = (field.type_, field.field_info)\n return create_model(name, **fields) # type: ignore\ndef _get_filtered_args(\n inferred_model: Type[BaseModel],\n func: Callable,\n) -> dict:\n \"\"\"Get the arguments from a function's signature.\"\"\"\n schema = inferred_model.schema()[\"properties\"]\n valid_keys = signature(func).parameters\n return {k: schema[k] for k in valid_keys if k not in (\"run_manager\", \"callbacks\")}\nclass _SchemaConfig:\n \"\"\"Configuration for the pydantic model.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\ndef create_schema_from_function(\n model_name: str,\n func: Callable,\n) -> Type[BaseModel]:\n \"\"\"Create a pydantic schema from a function's signature.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-2", "text": "\"\"\"Create a pydantic schema from a function's signature.\n Args:\n model_name: Name to assign to the generated pydantic schema\n func: Function to generate the schema from\n Returns:\n A pydantic model with the same arguments as the function\n \"\"\"\n # https://docs.pydantic.dev/latest/usage/validation_decorator/\n validated = validate_arguments(func, config=_SchemaConfig) # type: ignore\n inferred_model = validated.model # type: ignore\n if \"run_manager\" in inferred_model.__fields__:\n del inferred_model.__fields__[\"run_manager\"]\n if \"callbacks\" in inferred_model.__fields__:\n del inferred_model.__fields__[\"callbacks\"]\n # Pydantic adds placeholder virtual fields we need to strip\n valid_properties = _get_filtered_args(inferred_model, func)\n return _create_subset_model(\n 
f\"{model_name}Schema\", inferred_model, list(valid_properties)\n )\nclass ToolException(Exception):\n \"\"\"An optional exception that a tool can throw when an execution error occurs.\n When this exception is thrown, the agent will not stop working,\n but will handle the exception according to the handle_tool_error\n variable of the tool, and the processing result will be returned\n to the agent as observation, and printed in red on the console.\n \"\"\"\n pass\n[docs]class BaseTool(ABC, BaseModel, metaclass=ToolMetaclass):\n \"\"\"Interface LangChain tools must implement.\"\"\"\n name: str\n \"\"\"The unique name of the tool that clearly communicates its purpose.\"\"\"\n description: str\n \"\"\"Used to tell the model how/when/why to use the tool.\n \n You can provide few-shot examples as a part of the description.\n \"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-3", "text": "You can provide few-shot examples as a part of the description.\n \"\"\"\n args_schema: Optional[Type[BaseModel]] = None\n \"\"\"Pydantic model class to validate and parse the tool's input arguments.\"\"\"\n return_direct: bool = False\n \"\"\"Whether to return the tool's output directly. Setting this to True means\n \n that after the tool is called, the AgentExecutor will stop looping.\n \"\"\"\n verbose: bool = False\n \"\"\"Whether to log the tool's progress.\"\"\"\n callbacks: Callbacks = Field(default=None, exclude=True)\n \"\"\"Callbacks to be called during tool execution.\"\"\"\n callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)\n \"\"\"Deprecated. 
Please use callbacks instead.\"\"\"\n handle_tool_error: Optional[\n Union[bool, str, Callable[[ToolException], str]]\n ] = False\n \"\"\"Handle the content of the ToolException thrown.\"\"\"\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n extra = Extra.forbid\n arbitrary_types_allowed = True\n @property\n def is_single_input(self) -> bool:\n \"\"\"Whether the tool only accepts a single input.\"\"\"\n keys = {k for k in self.args if k != \"kwargs\"}\n return len(keys) == 1\n @property\n def args(self) -> dict:\n if self.args_schema is not None:\n return self.args_schema.schema()[\"properties\"]\n else:\n schema = create_schema_from_function(self.name, self._run)\n return schema.schema()[\"properties\"]\n def _parse_input(\n self,\n tool_input: Union[str, Dict],\n ) -> Union[str, Dict[str, Any]]:\n \"\"\"Convert tool input to pydantic model.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-4", "text": "\"\"\"Convert tool input to pydantic model.\"\"\"\n input_args = self.args_schema\n if isinstance(tool_input, str):\n if input_args is not None:\n key_ = next(iter(input_args.__fields__.keys()))\n input_args.validate({key_: tool_input})\n return tool_input\n else:\n if input_args is not None:\n result = input_args.parse_obj(tool_input)\n return {k: v for k, v in result.dict().items() if k in tool_input}\n return tool_input\n @root_validator()\n def raise_deprecation(cls, values: Dict) -> Dict:\n \"\"\"Raise deprecation warning if callback_manager is used.\"\"\"\n if values.get(\"callback_manager\") is not None:\n warnings.warn(\n \"callback_manager is deprecated. 
Please use callbacks instead.\",\n DeprecationWarning,\n )\n values[\"callbacks\"] = values.pop(\"callback_manager\", None)\n return values\n @abstractmethod\n def _run(\n self,\n *args: Any,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\n Add run_manager: Optional[CallbackManagerForToolRun] = None\n to child implementations to enable tracing.\n \"\"\"\n @abstractmethod\n async def _arun(\n self,\n *args: Any,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool asynchronously.\n Add run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n to child implementations to enable tracing.\n \"\"\"\n def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:\n # For backwards compatibility, if run_input is a string,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-5", "text": "# For backwards compatibility, if run_input is a string,\n # pass as a positional argument.\n if isinstance(tool_input, str):\n return (tool_input,), {}\n else:\n return (), tool_input\n[docs] def run(\n self,\n tool_input: Union[str, Dict],\n verbose: Optional[bool] = None,\n start_color: Optional[str] = \"green\",\n color: Optional[str] = \"green\",\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run the tool.\"\"\"\n parsed_input = self._parse_input(tool_input)\n if not self.verbose and verbose is not None:\n verbose_ = verbose\n else:\n verbose_ = self.verbose\n callback_manager = CallbackManager.configure(\n callbacks, self.callbacks, verbose=verbose_\n )\n # TODO: maybe also pass through run_manager if _run supports kwargs\n new_arg_supported = signature(self._run).parameters.get(\"run_manager\")\n run_manager = callback_manager.on_tool_start(\n {\"name\": self.name, \"description\": self.description},\n tool_input if isinstance(tool_input, str) else str(tool_input),\n color=start_color,\n **kwargs,\n )\n try:\n tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)\n 
observation = (\n self._run(*tool_args, run_manager=run_manager, **tool_kwargs)\n if new_arg_supported\n else self._run(*tool_args, **tool_kwargs)\n )\n except ToolException as e:\n if not self.handle_tool_error:\n run_manager.on_tool_error(e)\n raise e\n elif isinstance(self.handle_tool_error, bool):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-6", "text": "raise e\n elif isinstance(self.handle_tool_error, bool):\n if e.args:\n observation = e.args[0]\n else:\n observation = \"Tool execution error\"\n elif isinstance(self.handle_tool_error, str):\n observation = self.handle_tool_error\n elif callable(self.handle_tool_error):\n observation = self.handle_tool_error(e)\n else:\n raise ValueError(\n f\"Got unexpected type of `handle_tool_error`. Expected bool, str \"\n f\"or callable. Received: {self.handle_tool_error}\"\n )\n run_manager.on_tool_end(\n str(observation), color=\"red\", name=self.name, **kwargs\n )\n return observation\n except (Exception, KeyboardInterrupt) as e:\n run_manager.on_tool_error(e)\n raise e\n else:\n run_manager.on_tool_end(\n str(observation), color=color, name=self.name, **kwargs\n )\n return observation\n[docs] async def arun(\n self,\n tool_input: Union[str, Dict],\n verbose: Optional[bool] = None,\n start_color: Optional[str] = \"green\",\n color: Optional[str] = \"green\",\n callbacks: Callbacks = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Run the tool asynchronously.\"\"\"\n parsed_input = self._parse_input(tool_input)\n if not self.verbose and verbose is not None:\n verbose_ = verbose\n else:\n verbose_ = self.verbose\n callback_manager = AsyncCallbackManager.configure(\n callbacks, self.callbacks, verbose=verbose_\n )\n new_arg_supported = signature(self._arun).parameters.get(\"run_manager\")\n run_manager = await callback_manager.on_tool_start(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": 
"398a680a2c30-7", "text": "run_manager = await callback_manager.on_tool_start(\n {\"name\": self.name, \"description\": self.description},\n tool_input if isinstance(tool_input, str) else str(tool_input),\n color=start_color,\n **kwargs,\n )\n try:\n # We then call the tool on the tool input to get an observation\n tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)\n observation = (\n await self._arun(*tool_args, run_manager=run_manager, **tool_kwargs)\n if new_arg_supported\n else await self._arun(*tool_args, **tool_kwargs)\n )\n except ToolException as e:\n if not self.handle_tool_error:\n await run_manager.on_tool_error(e)\n raise e\n elif isinstance(self.handle_tool_error, bool):\n if e.args:\n observation = e.args[0]\n else:\n observation = \"Tool execution error\"\n elif isinstance(self.handle_tool_error, str):\n observation = self.handle_tool_error\n elif callable(self.handle_tool_error):\n observation = self.handle_tool_error(e)\n else:\n raise ValueError(\n f\"Got unexpected type of `handle_tool_error`. Expected bool, str \"\n f\"or callable. 
Received: {self.handle_tool_error}\"\n )\n await run_manager.on_tool_end(\n str(observation), color=\"red\", name=self.name, **kwargs\n )\n return observation\n except (Exception, KeyboardInterrupt) as e:\n await run_manager.on_tool_error(e)\n raise e\n else:\n await run_manager.on_tool_end(\n str(observation), color=color, name=self.name, **kwargs\n )\n return observation", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-8", "text": ")\n return observation\n def __call__(self, tool_input: str, callbacks: Callbacks = None) -> str:\n \"\"\"Make tool callable.\"\"\"\n return self.run(tool_input, callbacks=callbacks)\n[docs]class Tool(BaseTool):\n \"\"\"Tool that takes in function or coroutine directly.\"\"\"\n description: str = \"\"\n func: Callable[..., str]\n \"\"\"The function to run when the tool is called.\"\"\"\n coroutine: Optional[Callable[..., Awaitable[str]]] = None\n \"\"\"The asynchronous version of the function.\"\"\"\n @property\n def args(self) -> dict:\n \"\"\"The tool's input arguments.\"\"\"\n if self.args_schema is not None:\n return self.args_schema.schema()[\"properties\"]\n # For backwards compatibility, if the function signature is ambiguous,\n # assume it takes a single string input.\n return {\"tool_input\": {\"type\": \"string\"}}\n def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:\n \"\"\"Convert tool input to pydantic model.\"\"\"\n args, kwargs = super()._to_args_and_kwargs(tool_input)\n # For backwards compatibility. 
The tool must be run with a single input\n all_args = list(args) + list(kwargs.values())\n if len(all_args) != 1:\n raise ToolException(\n f\"Too many arguments to single-input tool {self.name}.\"\n f\" Args: {all_args}\"\n )\n return tuple(all_args), {}\n def _run(\n self,\n *args: Any,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-9", "text": "**kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"\n new_argument_supported = signature(self.func).parameters.get(\"callbacks\")\n return (\n self.func(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else self.func(*args, **kwargs)\n )\n async def _arun(\n self,\n *args: Any,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.coroutine:\n new_argument_supported = signature(self.coroutine).parameters.get(\n \"callbacks\"\n )\n return (\n await self.coroutine(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else await self.coroutine(*args, **kwargs)\n )\n raise NotImplementedError(\"Tool does not support async\")\n # TODO: this is for backwards compatibility, remove in future\n def __init__(\n self, name: str, func: Callable, description: str, **kwargs: Any\n ) -> None:\n \"\"\"Initialize tool.\"\"\"\n super(Tool, self).__init__(\n name=name, func=func, description=description, **kwargs\n )\n[docs] @classmethod\n def from_function(\n cls,\n func: Callable,\n name: str, # We keep these required to support backwards compatibility\n description: str,\n return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-10", "text": "args_schema: Optional[Type[BaseModel]] = None,\n **kwargs: Any,\n ) -> Tool:\n \"\"\"Initialize tool from a function.\"\"\"\n return cls(\n name=name,\n func=func,\n description=description,\n return_direct=return_direct,\n args_schema=args_schema,\n **kwargs,\n )\n[docs]class StructuredTool(BaseTool):\n \"\"\"Tool that can operate on any number of inputs.\"\"\"\n description: str = \"\"\n args_schema: Type[BaseModel] = Field(..., description=\"The tool schema.\")\n \"\"\"The input arguments' schema.\"\"\"\n func: Callable[..., Any]\n \"\"\"The function to run when the tool is called.\"\"\"\n coroutine: Optional[Callable[..., Awaitable[Any]]] = None\n \"\"\"The asynchronous version of the function.\"\"\"\n @property\n def args(self) -> dict:\n \"\"\"The tool's input arguments.\"\"\"\n return self.args_schema.schema()[\"properties\"]\n def _run(\n self,\n *args: Any,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"\n new_argument_supported = signature(self.func).parameters.get(\"callbacks\")\n return (\n self.func(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else self.func(*args, **kwargs)\n )\n async def _arun(\n self,\n *args: Any,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-11", "text": "**kwargs: Any,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.coroutine:\n new_argument_supported = signature(self.coroutine).parameters.get(\n \"callbacks\"\n )\n return (\n await self.coroutine(\n *args,\n callbacks=run_manager.get_child() if run_manager else None,\n **kwargs,\n )\n if new_argument_supported\n else await self.coroutine(*args, 
**kwargs)\n )\n raise NotImplementedError(\"Tool does not support async\")\n[docs] @classmethod\n def from_function(\n cls,\n func: Callable,\n name: Optional[str] = None,\n description: Optional[str] = None,\n return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,\n infer_schema: bool = True,\n **kwargs: Any,\n ) -> StructuredTool:\n \"\"\"Create tool from a given function.\n A classmethod that helps to create a tool from a function.\n Args:\n func: The function from which to create a tool\n name: The name of the tool. Defaults to the function name\n description: The description of the tool. Defaults to the function docstring\n return_direct: Whether to return the result directly or as a callback\n args_schema: The schema of the tool's input arguments\n infer_schema: Whether to infer the schema from the function's signature\n **kwargs: Additional arguments to pass to the tool\n Returns:\n The tool\n Examples:\n ... code-block:: python\n def add(a: int, b: int) -> int:\n \\\"\\\"\\\"Add two numbers\\\"\\\"\\\"\n return a + b", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-12", "text": "\\\"\\\"\\\"Add two numbers\\\"\\\"\\\"\n return a + b\n tool = StructuredTool.from_function(add)\n tool.run(1, 2) # 3\n \"\"\"\n name = name or func.__name__\n description = description or func.__doc__\n assert (\n description is not None\n ), \"Function must have a docstring if description not provided.\"\n # Description example:\n # search_api(query: str) - Searches the API for the query.\n description = f\"{name}{signature(func)} - {description.strip()}\"\n _args_schema = args_schema\n if _args_schema is None and infer_schema:\n _args_schema = create_schema_from_function(f\"{name}Schema\", func)\n return cls(\n name=name,\n func=func,\n args_schema=_args_schema,\n description=description,\n return_direct=return_direct,\n **kwargs,\n )\n[docs]def tool(\n *args: Union[str, Callable],\n 
return_direct: bool = False,\n args_schema: Optional[Type[BaseModel]] = None,\n infer_schema: bool = True,\n) -> Callable:\n \"\"\"Make tools out of functions, can be used with or without arguments.\n Args:\n *args: The arguments to the tool.\n return_direct: Whether to return directly from the tool rather\n than continuing the agent loop.\n args_schema: optional argument schema for user to specify\n infer_schema: Whether to infer the schema of the arguments from\n the function's signature. This also makes the resultant tool\n accept a dictionary input to its `run()` function.\n Requires:\n - Function must be of type (str) -> str\n - Function must have a docstring", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-13", "text": "- Function must have a docstring\n Examples:\n .. code-block:: python\n @tool\n def search_api(query: str) -> str:\n # Searches the API for the query.\n return\n @tool(\"search\", return_direct=True)\n def search_api(query: str) -> str:\n # Searches the API for the query.\n return\n \"\"\"\n def _make_with_name(tool_name: str) -> Callable:\n def _make_tool(func: Callable) -> BaseTool:\n if infer_schema or args_schema is not None:\n return StructuredTool.from_function(\n func,\n name=tool_name,\n return_direct=return_direct,\n args_schema=args_schema,\n infer_schema=infer_schema,\n )\n # If someone doesn't want a schema applied, we must treat it as\n # a simple string->string function\n assert func.__doc__ is not None, \"Function must have a docstring\"\n return Tool(\n name=tool_name,\n func=func,\n description=f\"{tool_name} tool\",\n return_direct=return_direct,\n )\n return _make_tool\n if len(args) == 1 and isinstance(args[0], str):\n # if the argument is a string, then we use the string as the tool name\n # Example usage: @tool(\"search\", return_direct=True)\n return _make_with_name(args[0])\n elif len(args) == 1 and callable(args[0]):\n # if the argument is a function, 
then we use the function name as the tool name\n # Example usage: @tool\n return _make_with_name(args[0].__name__)(args[0])\n elif len(args) == 0:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "398a680a2c30-14", "text": "elif len(args) == 0:\n # if there are no arguments, then we use the function name as the tool name\n # Example usage: @tool(return_direct=True)\n def _partial(func: Callable[[str], str]) -> BaseTool:\n return _make_with_name(func.__name__)(func)\n return _partial\n else:\n raise ValueError(\"Too many arguments for tool decorator\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/base.html"} +{"id": "85782826d853-0", "text": "Source code for langchain.tools.convert_to_openai\nfrom typing import TypedDict\nfrom langchain.tools import BaseTool, StructuredTool\nclass FunctionDescription(TypedDict):\n \"\"\"Representation of a callable function to the OpenAI API.\"\"\"\n name: str\n \"\"\"The name of the function.\"\"\"\n description: str\n \"\"\"A description of the function.\"\"\"\n parameters: dict\n \"\"\"The parameters of the function.\"\"\"\n[docs]def format_tool_to_openai_function(tool: BaseTool) -> FunctionDescription:\n \"\"\"Format tool into the OpenAI function API.\"\"\"\n if isinstance(tool, StructuredTool):\n schema_ = tool.args_schema.schema()\n # Bug with required missing for structured tools.\n required = sorted(schema_[\"properties\"]) # BUG WORKAROUND\n return {\n \"name\": tool.name,\n \"description\": tool.description,\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": schema_[\"properties\"],\n \"required\": required,\n },\n }\n else:\n if tool.args_schema:\n parameters = tool.args_schema.schema()\n else:\n parameters = {\n # This is a hack to get around the fact that some tools\n # do not expose an args_schema, and expect an argument\n # which is a string.\n # And Open AI does not support an array type for the\n # parameters.\n 
\"properties\": {\n \"__arg1\": {\"title\": \"__arg1\", \"type\": \"string\"},\n },\n \"required\": [\"__arg1\"],\n \"type\": \"object\",\n }\n return {\n \"name\": tool.name,\n \"description\": tool.description,\n \"parameters\": parameters,\n }", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/convert_to_openai.html"} +{"id": "29f911a0f2b1-0", "text": "Source code for langchain.tools.plugin\nfrom __future__ import annotations\nimport json\nfrom typing import Optional, Type\nimport requests\nimport yaml\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nclass ApiConfig(BaseModel):\n type: str\n url: str\n has_user_authentication: Optional[bool] = False\nclass AIPlugin(BaseModel):\n \"\"\"AI Plugin Definition.\"\"\"\n schema_version: str\n name_for_model: str\n name_for_human: str\n description_for_model: str\n description_for_human: str\n auth: Optional[dict] = None\n api: ApiConfig\n logo_url: Optional[str]\n contact_email: Optional[str]\n legal_info_url: Optional[str]\n @classmethod\n def from_url(cls, url: str) -> AIPlugin:\n \"\"\"Instantiate AIPlugin from a URL.\"\"\"\n response = requests.get(url).json()\n return cls(**response)\ndef marshal_spec(txt: str) -> dict:\n \"\"\"Convert the yaml or json serialized spec to a dict.\n Args:\n txt: The yaml or json serialized spec.\n Returns:\n dict: The spec as a dict.\n \"\"\"\n try:\n return json.loads(txt)\n except json.JSONDecodeError:\n return yaml.safe_load(txt)\nclass AIPluginToolSchema(BaseModel):\n \"\"\"AIPLuginToolSchema.\"\"\"\n tool_input: Optional[str] = \"\"\n[docs]class AIPluginTool(BaseTool):\n plugin: AIPlugin\n api_spec: str\n args_schema: Type[AIPluginToolSchema] = AIPluginToolSchema\n[docs] @classmethod", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/plugin.html"} +{"id": "29f911a0f2b1-1", "text": 
"[docs] @classmethod\n def from_plugin_url(cls, url: str) -> AIPluginTool:\n plugin = AIPlugin.from_url(url)\n description = (\n f\"Call this tool to get the OpenAPI spec (and usage guide) \"\n f\"for interacting with the {plugin.name_for_human} API. \"\n f\"You should only call this ONCE! What is the \"\n f\"{plugin.name_for_human} API useful for? \"\n ) + plugin.description_for_human\n open_api_spec_str = requests.get(plugin.api.url).text\n open_api_spec = marshal_spec(open_api_spec_str)\n api_spec = (\n f\"Usage Guide: {plugin.description_for_model}\\n\\n\"\n f\"OpenAPI Spec: {open_api_spec}\"\n )\n return cls(\n name=plugin.name_for_model,\n description=description,\n plugin=plugin,\n api_spec=api_spec,\n )\n def _run(\n self,\n tool_input: Optional[str] = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_spec\n async def _arun(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return self.api_spec", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/plugin.html"} +{"id": "4bc16e9d1548-0", "text": "Source code for langchain.tools.ifttt\n\"\"\"From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.\n# Creating a webhook\n- Go to https://ifttt.com/create\n# Configuring the \"If This\"\n- Click on the \"If This\" button in the IFTTT interface.\n- Search for \"Webhooks\" in the search bar.\n- Choose the first option for \"Receive a web request with a JSON payload.\"\n- Choose an Event Name that is specific to the service you plan to connect to.\nThis will make it easier for you to manage the webhook URL.\nFor example, if you're connecting to Spotify, you could use \"Spotify\" as your\nEvent Name.\n- Click the \"Create Trigger\" button to save your settings and create your webhook.\n# Configuring the \"Then That\"\n- Tap on the 
\"Then That\" button in the IFTTT interface.\n- Search for the service you want to connect, such as Spotify.\n- Choose an action from the service, such as \"Add track to a playlist\".\n- Configure the action by specifying the necessary details, such as the playlist name,\ne.g., \"Songs from AI\".\n- Reference the JSON Payload received by the Webhook in your action. For the Spotify\nscenario, choose \"{{JsonPayload}}\" as your search query.\n- Tap the \"Create Action\" button to save your action settings.\n- Once you have finished configuring your action, click the \"Finish\" button to\ncomplete the setup.\n- Congratulations! You have successfully connected the Webhook to the desired\nservice, and you're ready to start receiving data and triggering actions \ud83c\udf89\n# Finishing up\n- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/ifttt.html"} +{"id": "4bc16e9d1548-1", "text": "- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings\n- Copy the IFTTT key value from there. The URL is of the form\nhttps://maker.ifttt.com/use/YOUR_IFTTT_KEY. 
Grab the YOUR_IFTTT_KEY value.\n\"\"\"\nfrom typing import Optional\nimport requests\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\n[docs]class IFTTTWebhook(BaseTool):\n \"\"\"IFTTT Webhook.\n Args:\n name: name of the tool\n description: description of the tool\n url: url to hit with the json event.\n \"\"\"\n url: str\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n body = {\"this\": tool_input}\n response = requests.post(self.url, data=body)\n return response.text\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"Not implemented.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/ifttt.html"} +{"id": "11aea9cf2504-0", "text": "Source code for langchain.tools.openweathermap.tool\n\"\"\"Tool for the OpenWeatherMap API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities import OpenWeatherMapAPIWrapper\n[docs]class OpenWeatherMapQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to query using the OpenWeatherMap API.\"\"\"\n api_wrapper: OpenWeatherMapAPIWrapper = Field(\n default_factory=OpenWeatherMapAPIWrapper\n )\n name = \"OpenWeatherMap\"\n description = (\n \"A wrapper around OpenWeatherMap API. \"\n \"Useful for fetching current weather information for a specified location. \"\n \"Input should be a location string (e.g. 
London,GB).\"\n )\n def _run(\n self, location: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the OpenWeatherMap tool.\"\"\"\n return self.api_wrapper.run(location)\n async def _arun(\n self,\n location: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the OpenWeatherMap tool asynchronously.\"\"\"\n raise NotImplementedError(\"OpenWeatherMapQueryRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openweathermap/tool.html"} +{"id": "07004d2cc0d9-0", "text": "Source code for langchain.tools.sleep.tool\n\"\"\"Tool for agent to sleep.\"\"\"\nfrom asyncio import sleep as asleep\nfrom time import sleep\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nclass SleepInput(BaseModel):\n \"\"\"Input for CopyFileTool.\"\"\"\n sleep_time: int = Field(..., description=\"Time to sleep in seconds\")\n[docs]class SleepTool(BaseTool):\n \"\"\"Tool that adds the capability to sleep.\"\"\"\n name = \"sleep\"\n args_schema: Type[BaseModel] = SleepInput\n description = \"Make agent sleep for a specified number of seconds.\"\n def _run(\n self,\n sleep_time: int,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Sleep tool.\"\"\"\n sleep(sleep_time)\n return f\"Agent slept for {sleep_time} seconds.\"\n async def _arun(\n self,\n sleep_time: int,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the sleep tool asynchronously.\"\"\"\n await asleep(sleep_time)\n return f\"Agent slept for {sleep_time} seconds.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sleep/tool.html"} +{"id": "e453a27a579f-0", "text": "Source code for langchain.tools.youtube.search\n\"\"\"\nAdapted 
from https://github.com/venuv/langchain_yt_tools\nCustomYTSearchTool searches YouTube videos related to a person\nand returns a specified number of video URLs.\nInput to this tool should be a comma separated list,\n - the first part contains a person name\n - and the second(optional) a number that is the\n maximum number of video results to return\n \"\"\"\nimport json\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools import BaseTool\n[docs]class YouTubeSearchTool(BaseTool):\n name = \"youtube_search\"\n description = (\n \"search for youtube videos associated with a person. \"\n \"the input to this tool should be a comma separated list, \"\n \"the first part contains a person name and the second a \"\n \"number that is the maximum number of video results \"\n \"to return aka num_results. the second part is optional\"\n )\n def _search(self, person: str, num_results: int) -> str:\n from youtube_search import YoutubeSearch\n results = YoutubeSearch(person, num_results).to_json()\n data = json.loads(results)\n url_suffix_list = [video[\"url_suffix\"] for video in data[\"videos\"]]\n return str(url_suffix_list)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n values = query.split(\",\")\n person = values[0]\n if len(values) > 1:\n num_results = int(values[1])\n else:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/youtube/search.html"} +{"id": "e453a27a579f-1", "text": "num_results = int(values[1])\n else:\n num_results = 2\n return self._search(person, num_results)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"YouTubeSearchTool does not yet support async\")", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/youtube/search.html"} +{"id": "da5048df6df9-0", "text": "Source code for langchain.tools.arxiv.tool\n\"\"\"Tool for the Arxiv API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.arxiv import ArxivAPIWrapper\n[docs]class ArxivQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to search using the Arxiv API.\"\"\"\n name = \"arxiv\"\n description = (\n \"A wrapper around Arxiv.org \"\n \"Useful for when you need to answer questions about Physics, Mathematics, \"\n \"Computer Science, Quantitative Biology, Quantitative Finance, Statistics, \"\n \"Electrical Engineering, and Economics \"\n \"from scientific articles on arxiv.org. \"\n \"Input should be a search query.\"\n )\n api_wrapper: ArxivAPIWrapper = Field(default_factory=ArxivAPIWrapper)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Arxiv tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Arxiv tool asynchronously.\"\"\"\n raise NotImplementedError(\"ArxivAPIWrapper does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/arxiv/tool.html"} +{"id": "4456583c45ab-0", "text": "Source code for langchain.tools.python.tool\n\"\"\"A tool for running python code in a REPL.\"\"\"\nimport ast\nimport re\nimport sys\nfrom contextlib import redirect_stdout\nfrom io import StringIO\nfrom typing import Any, Dict, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import 
BaseTool\nfrom langchain.utilities import PythonREPL\ndef _get_default_python_repl() -> PythonREPL:\n return PythonREPL(_globals=globals(), _locals=None)\ndef sanitize_input(query: str) -> str:\n \"\"\"Sanitize input to the python REPL.\n Remove whitespace, backtick & python (if llm mistakes python console as terminal)\n Args:\n query: The query to sanitize\n Returns:\n str: The sanitized query\n \"\"\"\n # Removes `, whitespace & python from start\n query = re.sub(r\"^(\\s|`)*(?i:python)?\\s*\", \"\", query)\n # Removes whitespace & ` from end\n query = re.sub(r\"(\\s|`)*$\", \"\", query)\n return query\n[docs]class PythonREPLTool(BaseTool):\n \"\"\"A tool for running python code in a REPL.\"\"\"\n name = \"Python_REPL\"\n description = (\n \"A Python shell. Use this to execute python commands. \"\n \"Input should be a valid python command. \"\n \"If you want to see the output of a value, you should print it out \"\n \"with `print(...)`.\"\n )\n python_repl: PythonREPL = Field(default_factory=_get_default_python_repl)\n sanitize_input: bool = True\n def _run(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/python/tool.html"} +{"id": "4456583c45ab-1", "text": "sanitize_input: bool = True\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Any:\n \"\"\"Use the tool.\"\"\"\n if self.sanitize_input:\n query = sanitize_input(query)\n return self.python_repl.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Any:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"PythonReplTool does not support async\")\n[docs]class PythonAstREPLTool(BaseTool):\n \"\"\"A tool for running python code in a REPL.\"\"\"\n name = \"python_repl_ast\"\n description = (\n \"A Python shell. Use this to execute python commands. \"\n \"Input should be a valid python command. 
\"\n \"When using this tool, sometimes output is abbreviated - \"\n \"make sure it does not look abbreviated before using it in your answer.\"\n )\n globals: Optional[Dict] = Field(default_factory=dict)\n locals: Optional[Dict] = Field(default_factory=dict)\n sanitize_input: bool = True\n @root_validator(pre=True)\n def validate_python_version(cls, values: Dict) -> Dict:\n \"\"\"Validate valid python version.\"\"\"\n if sys.version_info < (3, 9):\n raise ValueError(\n \"This tool relies on Python 3.9 or higher \"\n \"(as it uses new functionality in the `ast` module, \"\n f\"you have Python version: {sys.version}\"\n )\n return values\n def _run(\n self,\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/python/tool.html"} +{"id": "4456583c45ab-2", "text": "return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n if self.sanitize_input:\n query = sanitize_input(query)\n tree = ast.parse(query)\n module = ast.Module(tree.body[:-1], type_ignores=[])\n exec(ast.unparse(module), self.globals, self.locals) # type: ignore\n module_end = ast.Module(tree.body[-1:], type_ignores=[])\n module_end_str = ast.unparse(module_end) # type: ignore\n io_buffer = StringIO()\n try:\n with redirect_stdout(io_buffer):\n ret = eval(module_end_str, self.globals, self.locals)\n if ret is None:\n return io_buffer.getvalue()\n else:\n return ret\n except Exception:\n with redirect_stdout(io_buffer):\n exec(module_end_str, self.globals, self.locals)\n return io_buffer.getvalue()\n except Exception as e:\n return \"{}: {}\".format(type(e).__name__, str(e))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"PythonReplTool does not support async\")", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/python/tool.html"} +{"id": "23fe9be1ae73-0", "text": "Source code for langchain.tools.google_places.tool\n\"\"\"Tool for the Google search API.\"\"\"\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_places_api import GooglePlacesAPIWrapper\nclass GooglePlacesSchema(BaseModel):\n query: str = Field(..., description=\"Query for google maps\")\n[docs]class GooglePlacesTool(BaseTool):\n \"\"\"Tool that adds the capability to query the Google places API.\"\"\"\n name = \"google_places\"\n description = (\n \"A wrapper around Google Places. \"\n \"Useful for when you need to validate or \"\n \"discover addresses from ambiguous text. \"\n \"Input should be a search query.\"\n )\n api_wrapper: GooglePlacesAPIWrapper = Field(default_factory=GooglePlacesAPIWrapper)\n args_schema: Type[BaseModel] = GooglePlacesSchema\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GooglePlacesRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_places/tool.html"} +{"id": "b8904f94ae6f-0", "text": "Source code for langchain.tools.wolfram_alpha.tool\n\"\"\"Tool for the Wolfram Alpha API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.wolfram_alpha import
WolframAlphaAPIWrapper\n[docs]class WolframAlphaQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to query using the Wolfram Alpha SDK.\"\"\"\n name = \"wolfram_alpha\"\n description = (\n \"A wrapper around Wolfram Alpha. \"\n \"Useful for when you need to answer questions about Math, \"\n \"Science, Technology, Culture, Society and Everyday Life. \"\n \"Input should be a search query.\"\n )\n api_wrapper: WolframAlphaAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the WolframAlpha tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the WolframAlpha tool asynchronously.\"\"\"\n raise NotImplementedError(\"WolframAlphaQueryRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/wolfram_alpha/tool.html"} +{"id": "d2c2fef307ca-0", "text": "Source code for langchain.tools.powerbi.tool\n\"\"\"Tools for interacting with a Power BI dataset.\"\"\"\nimport logging\nfrom typing import Any, Dict, Optional, Tuple\nfrom pydantic import Field, validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.powerbi.prompt import (\n BAD_REQUEST_RESPONSE,\n DEFAULT_FEWSHOT_EXAMPLES,\n QUESTION_TO_QUERY,\n RETRY_RESPONSE,\n)\nfrom langchain.utilities.powerbi import PowerBIDataset, json_to_md\nlogger = logging.getLogger(__name__)\n[docs]class QueryPowerBITool(BaseTool):\n \"\"\"Tool for querying a Power BI Dataset.\"\"\"\n name = \"query_powerbi\"\n description = \"\"\"\n Input to this tool is a detailed question about the dataset, output is a result from the dataset. 
It will try to answer the question using the dataset, and if it cannot, it will ask for clarification.\n Example Input: \"How many rows are in table1?\"\n \"\"\" # noqa: E501\n llm_chain: LLMChain\n powerbi: PowerBIDataset = Field(exclude=True)\n template: Optional[str] = QUESTION_TO_QUERY\n examples: Optional[str] = DEFAULT_FEWSHOT_EXAMPLES\n session_cache: Dict[str, Any] = Field(default_factory=dict, exclude=True)\n max_iterations: int = 5\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n @validator(\"llm_chain\")\n def validate_llm_chain_input_variables( # pylint: disable=E0213\n cls, llm_chain: LLMChain", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} +{"id": "d2c2fef307ca-1", "text": "cls, llm_chain: LLMChain\n ) -> LLMChain:\n \"\"\"Make sure the LLM chain has the correct input variables.\"\"\"\n if llm_chain.prompt.input_variables != [\n \"tool_input\",\n \"tables\",\n \"schemas\",\n \"examples\",\n ]:\n raise ValueError(\n \"LLM chain for QueryPowerBITool must have input variables ['tool_input', 'tables', 'schemas', 'examples'], found %s\", # noqa: C0301 E501 # pylint: disable=C0301\n llm_chain.prompt.input_variables,\n )\n return llm_chain\n def _check_cache(self, tool_input: str) -> Optional[str]:\n \"\"\"Check if the input is present in the cache.\n If the value is a bad request, overwrite with the escalated version,\n if not present return None.\"\"\"\n if tool_input not in self.session_cache:\n return None\n return self.session_cache[tool_input]\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"\n if cache := self._check_cache(tool_input):\n logger.debug(\"Found cached result for %s: %s\", tool_input, cache)\n return cache\n try:\n logger.info(\"Running PBI Query Tool with input: %s\", tool_input)\n 
query = self.llm_chain.predict(\n tool_input=tool_input,\n tables=self.powerbi.get_table_names(),\n schemas=self.powerbi.get_schemas(),\n examples=self.examples,\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} +{"id": "d2c2fef307ca-2", "text": "schemas=self.powerbi.get_schemas(),\n examples=self.examples,\n )\n except Exception as exc: # pylint: disable=broad-except\n self.session_cache[tool_input] = f\"Error on call to LLM: {exc}\"\n return self.session_cache[tool_input]\n if query == \"I cannot answer this\":\n self.session_cache[tool_input] = query\n return self.session_cache[tool_input]\n logger.info(\"Query: %s\", query)\n pbi_result = self.powerbi.run(command=query)\n result, error = self._parse_output(pbi_result)\n if error is not None and \"TokenExpired\" in error:\n self.session_cache[\n tool_input\n ] = \"Authentication token expired or invalid, please try to reauthenticate.\"\n return self.session_cache[tool_input]\n iterations = kwargs.get(\"iterations\", 0)\n if error and iterations < self.max_iterations:\n return self._run(\n tool_input=RETRY_RESPONSE.format(\n tool_input=tool_input, query=query, error=error\n ),\n run_manager=run_manager,\n iterations=iterations + 1,\n )\n self.session_cache[tool_input] = (\n result if result else BAD_REQUEST_RESPONSE.format(error=error)\n )\n return self.session_cache[tool_input]\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n **kwargs: Any,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"\n if cache := self._check_cache(tool_input):\n logger.debug(\"Found cached result for %s: %s\", tool_input, cache)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} +{"id": "d2c2fef307ca-3", "text": "logger.debug(\"Found cached result for %s: %s\", tool_input, cache)\n return cache\n try:\n logger.info(\"Running PBI Query Tool
with input: %s\", tool_input)\n query = await self.llm_chain.apredict(\n tool_input=tool_input,\n tables=self.powerbi.get_table_names(),\n schemas=self.powerbi.get_schemas(),\n examples=self.examples,\n )\n except Exception as exc: # pylint: disable=broad-except\n self.session_cache[tool_input] = f\"Error on call to LLM: {exc}\"\n return self.session_cache[tool_input]\n if query == \"I cannot answer this\":\n self.session_cache[tool_input] = query\n return self.session_cache[tool_input]\n logger.info(\"Query: %s\", query)\n pbi_result = await self.powerbi.arun(command=query)\n result, error = self._parse_output(pbi_result)\n if error is not None and \"TokenExpired\" in error:\n self.session_cache[\n tool_input\n ] = \"Authentication token expired or invalid, please try to reauthenticate.\"\n return self.session_cache[tool_input]\n iterations = kwargs.get(\"iterations\", 0)\n if error and iterations < self.max_iterations:\n return await self._arun(\n tool_input=RETRY_RESPONSE.format(\n tool_input=tool_input, query=query, error=error\n ),\n run_manager=run_manager,\n iterations=iterations + 1,\n )\n self.session_cache[tool_input] = (\n result if result else BAD_REQUEST_RESPONSE.format(error=error)\n )\n return self.session_cache[tool_input]\n def _parse_output(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} +{"id": "d2c2fef307ca-4", "text": ")\n return self.session_cache[tool_input]\n def _parse_output(\n self, pbi_result: Dict[str, Any]\n ) -> Tuple[Optional[str], Optional[str]]:\n \"\"\"Parse the output of the query to a markdown table.\"\"\"\n if \"results\" in pbi_result:\n return json_to_md(pbi_result[\"results\"][0][\"tables\"][0][\"rows\"]), None\n if \"error\" in pbi_result:\n if (\n \"pbi.error\" in pbi_result[\"error\"]\n and \"details\" in pbi_result[\"error\"][\"pbi.error\"]\n ):\n return None, pbi_result[\"error\"][\"pbi.error\"][\"details\"][0][\"detail\"]\n return None, pbi_result[\"error\"]\n return
None, \"Unknown error\"\n[docs]class InfoPowerBITool(BaseTool):\n \"\"\"Tool for getting metadata about a PowerBI Dataset.\"\"\"\n name = \"schema_powerbi\"\n description = \"\"\"\n Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n Be sure that the tables actually exist by calling list_tables_powerbi first!\n Example Input: \"table1, table2, table3\"\n \"\"\" # noqa: E501\n powerbi: PowerBIDataset = Field(exclude=True)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the schema for tables in a comma-separated list.\"\"\"\n return self.powerbi.get_table_info(tool_input.split(\", \"))\n async def _arun(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} +{"id": "d2c2fef307ca-5", "text": "async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return await self.powerbi.aget_table_info(tool_input.split(\", \"))\n[docs]class ListPowerBITool(BaseTool):\n \"\"\"Tool for getting table names.\"\"\"\n name = \"list_tables_powerbi\"\n description = \"Input is an empty string, output is a comma separated list of tables in the database.\" # noqa: E501 # pylint: disable=C0301\n powerbi: PowerBIDataset = Field(exclude=True)\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _run(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the names of the tables.\"\"\"\n return \", \".join(self.powerbi.get_table_names())\n async def _arun(\n self,\n tool_input: Optional[str] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the names of the tables.\"\"\"\n return
\", \".join(self.powerbi.get_table_names())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/powerbi/tool.html"} +{"id": "ed7e502979b8-0", "text": "Source code for langchain.tools.metaphor_search.tool\n\"\"\"Tool for the Metaphor search API.\"\"\"\nfrom typing import Dict, List, Optional, Union\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper\n[docs]class MetaphorSearchResults(BaseTool):\n \"\"\"Tool that has capability to query the Metaphor Search API and get back json.\"\"\"\n name = \"metaphor_search_results_json\"\n description = (\n \"A wrapper around Metaphor Search. \"\n \"Input should be a Metaphor-optimized query. \"\n \"Output is a JSON array of the query results\"\n )\n api_wrapper: MetaphorSearchAPIWrapper\n def _run(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Union[List[Dict], str]:\n \"\"\"Use the tool.\"\"\"\n try:\n return self.api_wrapper.results(\n query,\n num_results,\n include_domains,\n exclude_domains,\n start_crawl_date,\n end_crawl_date,\n start_published_date,\n end_published_date,\n )\n except Exception as e:\n return repr(e)\n async def _arun(", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/metaphor_search/tool.html"} +{"id": "ed7e502979b8-1", "text": "return repr(e)\n async def _arun(\n self,\n query: str,\n num_results: int,\n include_domains: Optional[List[str]] = None,\n exclude_domains: Optional[List[str]] = None,\n start_crawl_date: Optional[str] = None,\n 
end_crawl_date: Optional[str] = None,\n start_published_date: Optional[str] = None,\n end_published_date: Optional[str] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Union[List[Dict], str]:\n \"\"\"Use the tool asynchronously.\"\"\"\n try:\n return await self.api_wrapper.results_async(\n query,\n num_results,\n include_domains,\n exclude_domains,\n start_crawl_date,\n end_crawl_date,\n start_published_date,\n end_published_date,\n )\n except Exception as e:\n return repr(e)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/metaphor_search/tool.html"} +{"id": "bb20222a6e22-0", "text": "Source code for langchain.tools.json.tool\n# flake8: noqa\n\"\"\"Tools for working with JSON specs.\"\"\"\nfrom __future__ import annotations\nimport json\nimport re\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Union\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\ndef _parse_input(text: str) -> List[Union[str, int]]:\n \"\"\"Parse input of the form data[\"key1\"][0][\"key2\"] into a list of keys.\"\"\"\n _res = re.findall(r\"\\[.*?]\", text)\n # strip the brackets and quotes, convert to int if possible\n res = [i[1:-1].replace('\"', \"\") for i in _res]\n res = [int(i) if i.isdigit() else i for i in res]\n return res\nclass JsonSpec(BaseModel):\n \"\"\"Base class for JSON spec.\"\"\"\n dict_: Dict\n max_value_length: int = 200\n @classmethod\n def from_file(cls, path: Path) -> JsonSpec:\n \"\"\"Create a JsonSpec from a file.\"\"\"\n if not path.exists():\n raise FileNotFoundError(f\"File not found: {path}\")\n dict_ = json.loads(path.read_text())\n return cls(dict_=dict_)\n def keys(self, text: str) -> str:\n \"\"\"Return the keys of the dict at the given path.\n Args:\n text: Python representation of the path to the dict (e.g. 
data[\"key1\"][0][\"key2\"]).\n \"\"\"\n try:\n items = _parse_input(text)\n val = self.dict_\n for i in items:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/json/tool.html"} +{"id": "bb20222a6e22-1", "text": "val = self.dict_\n for i in items:\n if i:\n val = val[i]\n if not isinstance(val, dict):\n raise ValueError(\n f\"Value at path `{text}` is not a dict, get the value directly.\"\n )\n return str(list(val.keys()))\n except Exception as e:\n return repr(e)\n def value(self, text: str) -> str:\n \"\"\"Return the value of the dict at the given path.\n Args:\n text: Python representation of the path to the dict (e.g. data[\"key1\"][0][\"key2\"]).\n \"\"\"\n try:\n items = _parse_input(text)\n val = self.dict_\n for i in items:\n val = val[i]\n if isinstance(val, dict) and len(str(val)) > self.max_value_length:\n return \"Value is a large dictionary, should explore its keys directly\"\n str_val = str(val)\n if len(str_val) > self.max_value_length:\n str_val = str_val[: self.max_value_length] + \"...\"\n return str_val\n except Exception as e:\n return repr(e)\n[docs]class JsonListKeysTool(BaseTool):\n \"\"\"Tool for listing keys in a JSON spec.\"\"\"\n name = \"json_spec_list_keys\"\n description = \"\"\"\n Can be used to list all keys at a given path. \n Before calling this you should be SURE that the path to this exists.\n The input is a text representation of the path to the dict in Python syntax (e.g. 
data[\"key1\"][0][\"key2\"]).\n \"\"\"\n spec: JsonSpec\n def _run(\n self,\n tool_input: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/json/tool.html"} +{"id": "bb20222a6e22-2", "text": "def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n return self.spec.keys(tool_input)\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return self._run(tool_input)\n[docs]class JsonGetValueTool(BaseTool):\n \"\"\"Tool for getting a value in a JSON spec.\"\"\"\n name = \"json_spec_get_value\"\n description = \"\"\"\n Can be used to see value in string format at a given path.\n Before calling this you should be SURE that the path to this exists.\n The input is a text representation of the path to the dict in Python syntax (e.g. data[\"key1\"][0][\"key2\"]).\n \"\"\"\n spec: JsonSpec\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n return self.spec.value(tool_input)\n async def _arun(\n self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return self._run(tool_input)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/json/tool.html"} +{"id": "2548be0081de-0", "text": "Source code for langchain.tools.shell.tool\nimport asyncio\nimport platform\nimport warnings\nfrom typing import List, Optional, Type, Union\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.bash import BashProcess\nclass ShellInput(BaseModel):\n \"\"\"Commands for the Bash Shell tool.\"\"\"\n commands: Union[str, List[str]] = Field(\n ...,\n description=\"List of shell commands to run. 
Deserialized using json.loads\",\n )\n \"\"\"List of shell commands to run.\"\"\"\n @root_validator\n def _validate_commands(cls, values: dict) -> dict:\n \"\"\"Validate commands.\"\"\"\n # TODO: Add real validators\n commands = values.get(\"commands\")\n if not isinstance(commands, list):\n values[\"commands\"] = [commands]\n # Warn that the bash tool is not safe\n warnings.warn(\n \"The shell tool has no safeguards by default. Use at your own risk.\"\n )\n return values\ndef _get_default_bash_processs() -> BashProcess:\n \"\"\"Get file path from string.\"\"\"\n return BashProcess(return_err_output=True)\ndef _get_platform() -> str:\n \"\"\"Get platform.\"\"\"\n system = platform.system()\n if system == \"Darwin\":\n return \"MacOS\"\n return system\n[docs]class ShellTool(BaseTool):\n \"\"\"Tool to run shell commands.\"\"\"\n process: BashProcess = Field(default_factory=_get_default_bash_processs)\n \"\"\"Bash process to run commands.\"\"\"\n name: str = \"terminal\"\n \"\"\"Name of tool.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/shell/tool.html"} +{"id": "2548be0081de-1", "text": "name: str = \"terminal\"\n \"\"\"Name of tool.\"\"\"\n description: str = f\"Run shell commands on this {_get_platform()} machine.\"\n \"\"\"Description of tool.\"\"\"\n args_schema: Type[BaseModel] = ShellInput\n \"\"\"Schema for input arguments.\"\"\"\n def _run(\n self,\n commands: Union[str, List[str]],\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run commands and return final output.\"\"\"\n return self.process.run(commands)\n async def _arun(\n self,\n commands: Union[str, List[str]],\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run commands asynchronously and return final output.\"\"\"\n return await asyncio.get_event_loop().run_in_executor(\n None, self.process.run, commands\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/shell/tool.html"} +{"id": "0b929a55c2d5-0", "text": "Source code for langchain.tools.bing_search.tool\n\"\"\"Tool for the Bing search API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.bing_search import BingSearchAPIWrapper\n[docs]class BingSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Bing search API.\"\"\"\n name = \"bing_search\"\n description = (\n \"A wrapper around Bing Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query.\"\n )\n api_wrapper: BingSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BingSearchRun does not support async\")\n[docs]class BingSearchResults(BaseTool):\n \"\"\"Tool that has capability to query the Bing Search API and get back json.\"\"\"\n name = \"Bing Search Results JSON\"\n description = (\n \"A wrapper around Bing Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. 
Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: BingSearchAPIWrapper\n def _run(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/bing_search/tool.html"} +{"id": "0b929a55c2d5-1", "text": "api_wrapper: BingSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BingSearchResults does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/bing_search/tool.html"} +{"id": "2a896c38101d-0", "text": "Source code for langchain.tools.gmail.create_draft\nimport base64\nfrom email.message import EmailMessage\nfrom typing import List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nclass CreateDraftSchema(BaseModel):\n message: str = Field(\n ...,\n description=\"The message to include in the draft.\",\n )\n to: List[str] = Field(\n ...,\n description=\"The list of recipients.\",\n )\n subject: str = Field(\n ...,\n description=\"The subject of the message.\",\n )\n cc: Optional[List[str]] = Field(\n None,\n description=\"The list of CC recipients.\",\n )\n bcc: Optional[List[str]] = Field(\n None,\n description=\"The list of BCC recipients.\",\n )\n[docs]class GmailCreateDraft(GmailBaseTool):\n name: str = \"create_gmail_draft\"\n description: str = (\n \"Use this tool to create a draft email with the provided message fields.\"\n )\n args_schema: Type[CreateDraftSchema] = CreateDraftSchema\n def _prepare_draft_message(\n self,\n message: 
str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n ) -> dict:\n draft_message = EmailMessage()\n draft_message.set_content(message)\n draft_message[\"To\"] = \", \".join(to)\n draft_message[\"Subject\"] = subject\n if cc is not None:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/create_draft.html"} +{"id": "2a896c38101d-1", "text": "draft_message[\"Subject\"] = subject\n if cc is not None:\n draft_message[\"Cc\"] = \", \".join(cc)\n if bcc is not None:\n draft_message[\"Bcc\"] = \", \".join(bcc)\n encoded_message = base64.urlsafe_b64encode(draft_message.as_bytes()).decode()\n return {\"message\": {\"raw\": encoded_message}}\n def _run(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n create_message = self._prepare_draft_message(message, to, subject, cc, bcc)\n draft = (\n self.api_resource.users()\n .drafts()\n .create(userId=\"me\", body=create_message)\n .execute()\n )\n output = f'Draft created. 
Draft Id: {draft[\"id\"]}'\n return output\n except Exception as e:\n raise Exception(f\"An error occurred: {e}\")\n async def _arun(\n self,\n message: str,\n to: List[str],\n subject: str,\n cc: Optional[List[str]] = None,\n bcc: Optional[List[str]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/create_draft.html"} +{"id": "f775e2697459-0", "text": "Source code for langchain.tools.gmail.search\nimport base64\nimport email\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nfrom langchain.tools.gmail.utils import clean_email_body\nclass Resource(str, Enum):\n \"\"\"Enumerator of Resources to search.\"\"\"\n THREADS = \"threads\"\n MESSAGES = \"messages\"\nclass SearchArgsSchema(BaseModel):\n # From https://support.google.com/mail/answer/7190?hl=en\n query: str = Field(\n ...,\n description=\"The Gmail query. Example filters include from:sender,\"\n \" to:recipient, subject:subject, -filtered_term,\"\n \" in:folder, is:important|read|starred, after:year/mo/date, \"\n \"before:year/mo/date, label:label_name\"\n ' \"exact phrase\".'\n \" Search newer/older than using d (day), m (month), and y (year): \"\n \"newer_than:2d, older_than:1y.\"\n \" Attachments with extension example: filename:pdf. 
Multiple term\"\n \" matching example: from:amy OR from:david.\",\n )\n resource: Resource = Field(\n default=Resource.MESSAGES,\n description=\"Whether to search for threads or messages.\",\n )\n max_results: int = Field(\n default=10,\n description=\"The maximum number of results to return.\",\n )\n[docs]class GmailSearch(GmailBaseTool):\n name: str = \"search_gmail\"\n description: str = (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"} +{"id": "f775e2697459-1", "text": "name: str = \"search_gmail\"\n description: str = (\n \"Use this tool to search for email messages or threads.\"\n \" The input must be a valid Gmail query.\"\n \" The output is a JSON list of the requested resource.\"\n )\n args_schema: Type[SearchArgsSchema] = SearchArgsSchema\n def _parse_threads(self, threads: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n # Add the thread message snippets to the thread results\n results = []\n for thread in threads:\n thread_id = thread[\"id\"]\n thread_data = (\n self.api_resource.users()\n .threads()\n .get(userId=\"me\", id=thread_id)\n .execute()\n )\n messages = thread_data[\"messages\"]\n thread[\"messages\"] = []\n for message in messages:\n snippet = message[\"snippet\"]\n thread[\"messages\"].append({\"snippet\": snippet, \"id\": message[\"id\"]})\n results.append(thread)\n return results\n def _parse_messages(self, messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n results = []\n for message in messages:\n message_id = message[\"id\"]\n message_data = (\n self.api_resource.users()\n .messages()\n .get(userId=\"me\", format=\"raw\", id=message_id)\n .execute()\n )\n raw_message = base64.urlsafe_b64decode(message_data[\"raw\"])\n email_msg = email.message_from_bytes(raw_message)\n subject = email_msg[\"Subject\"]\n sender = email_msg[\"From\"]\n message_body = email_msg.get_payload()\n body = clean_email_body(message_body)\n results.append(\n {", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"} +{"id": "f775e2697459-2", "text": "body = clean_email_body(message_body)\n results.append(\n {\n \"id\": message[\"id\"],\n \"threadId\": message_data[\"threadId\"],\n \"snippet\": message_data[\"snippet\"],\n \"body\": body,\n \"subject\": subject,\n \"sender\": sender,\n }\n )\n return results\n def _run(\n self,\n query: str,\n resource: Resource = Resource.MESSAGES,\n max_results: int = 10,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n \"\"\"Run the tool.\"\"\"\n results = (\n self.api_resource.users()\n .messages()\n .list(userId=\"me\", q=query, maxResults=max_results)\n .execute()\n .get(resource.value, [])\n )\n if resource == Resource.THREADS:\n return self._parse_threads(results)\n elif resource == Resource.MESSAGES:\n return self._parse_messages(results)\n else:\n raise NotImplementedError(f\"Resource of type {resource} not implemented.\")\n async def _arun(\n self,\n query: str,\n resource: Resource = Resource.MESSAGES,\n max_results: int = 10,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> List[Dict[str, Any]]:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/search.html"} +{"id": "92e032d54acc-0", "text": "Source code for langchain.tools.gmail.get_thread\nfrom typing import Dict, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nclass GetThreadSchema(BaseModel):\n # From https://support.google.com/mail/answer/7190?hl=en\n thread_id: str = Field(\n ...,\n description=\"The thread ID.\",\n )\n[docs]class GmailGetThread(GmailBaseTool):\n name: str = \"get_gmail_thread\"\n description: str = (\n \"Use this tool to search for email messages.\"\n 
\" The input must be a valid Gmail query.\"\n \" The output is a JSON list of messages.\"\n )\n args_schema: Type[GetThreadSchema] = GetThreadSchema\n def _run(\n self,\n thread_id: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n query = self.api_resource.users().threads().get(userId=\"me\", id=thread_id)\n thread_data = query.execute()\n if not isinstance(thread_data, dict):\n raise ValueError(\"The output of the query must be a list.\")\n messages = thread_data[\"messages\"]\n thread_data[\"messages\"] = []\n keys_to_keep = [\"id\", \"snippet\", \"snippet\"]\n # TODO: Parse body.\n for message in messages:\n thread_data[\"messages\"].append(\n {k: message[k] for k in keys_to_keep if k in message}\n )\n return thread_data\n async def _arun(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_thread.html"} +{"id": "92e032d54acc-1", "text": ")\n return thread_data\n async def _arun(\n self,\n thread_id: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_thread.html"} +{"id": "819ddd72ed85-0", "text": "Source code for langchain.tools.gmail.send_message\n\"\"\"Send Gmail messages.\"\"\"\nimport base64\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.text import MIMEText\nfrom typing import Any, Dict, List, Optional, Union\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nclass SendMessageSchema(BaseModel):\n message: str = Field(\n ...,\n description=\"The message to send.\",\n )\n to: Union[str, List[str]] = Field(\n ...,\n description=\"The list of recipients.\",\n )\n subject: str = Field(\n ...,\n description=\"The 
subject of the message.\",\n )\n cc: Optional[Union[str, List[str]]] = Field(\n None,\n description=\"The list of CC recipients.\",\n )\n bcc: Optional[Union[str, List[str]]] = Field(\n None,\n description=\"The list of BCC recipients.\",\n )\n[docs]class GmailSendMessage(GmailBaseTool):\n name: str = \"send_gmail_message\"\n description: str = (\n \"Use this tool to send email messages.\" \" The input is the message, recipents\"\n )\n def _prepare_message(\n self,\n message: str,\n to: Union[str, List[str]],\n subject: str,\n cc: Optional[Union[str, List[str]]] = None,\n bcc: Optional[Union[str, List[str]]] = None,\n ) -> Dict[str, Any]:\n \"\"\"Create a message for an email.\"\"\"\n mime_message = MIMEMultipart()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"} +{"id": "819ddd72ed85-1", "text": "\"\"\"Create a message for an email.\"\"\"\n mime_message = MIMEMultipart()\n mime_message.attach(MIMEText(message, \"html\"))\n mime_message[\"To\"] = \", \".join(to if isinstance(to, list) else [to])\n mime_message[\"Subject\"] = subject\n if cc is not None:\n mime_message[\"Cc\"] = \", \".join(cc if isinstance(cc, list) else [cc])\n if bcc is not None:\n mime_message[\"Bcc\"] = \", \".join(bcc if isinstance(bcc, list) else [bcc])\n encoded_message = base64.urlsafe_b64encode(mime_message.as_bytes()).decode()\n return {\"raw\": encoded_message}\n def _run(\n self,\n message: str,\n to: Union[str, List[str]],\n subject: str,\n cc: Optional[Union[str, List[str]]] = None,\n bcc: Optional[Union[str, List[str]]] = None,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n create_message = self._prepare_message(message, to, subject, cc=cc, bcc=bcc)\n send_message = (\n self.api_resource.users()\n .messages()\n .send(userId=\"me\", body=create_message)\n )\n sent_message = send_message.execute()\n return f'Message sent. 
Message Id: {sent_message[\"id\"]}'\n except Exception as error:\n raise Exception(f\"An error occurred: {error}\")\n async def _arun(\n self,\n message: str,\n to: Union[str, List[str]],\n subject: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"} +{"id": "819ddd72ed85-2", "text": "to: Union[str, List[str]],\n subject: str,\n cc: Optional[Union[str, List[str]]] = None,\n bcc: Optional[Union[str, List[str]]] = None,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n raise NotImplementedError(f\"The tool {self.name} does not support async yet.\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/send_message.html"} +{"id": "2aa39996e56e-0", "text": "Source code for langchain.tools.gmail.get_message\nimport base64\nimport email\nfrom typing import Dict, Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.gmail.base import GmailBaseTool\nfrom langchain.tools.gmail.utils import clean_email_body\nclass SearchArgsSchema(BaseModel):\n message_id: str = Field(\n ...,\n description=\"The unique ID of the email message, retrieved from a search.\",\n )\n[docs]class GmailGetMessage(GmailBaseTool):\n name: str = \"get_gmail_message\"\n description: str = (\n \"Use this tool to fetch an email by message ID.\"\n \" Returns the thread ID, snipet, body, subject, and sender.\"\n )\n args_schema: Type[SearchArgsSchema] = SearchArgsSchema\n def _run(\n self,\n message_id: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n query = (\n self.api_resource.users()\n .messages()\n .get(userId=\"me\", format=\"raw\", id=message_id)\n )\n message_data = query.execute()\n raw_message = base64.urlsafe_b64decode(message_data[\"raw\"])\n 
email_msg = email.message_from_bytes(raw_message)\n subject = email_msg[\"Subject\"]\n sender = email_msg[\"From\"]\n message_body = email_msg.get_payload()\n body = clean_email_body(message_body)\n return {\n \"id\": message_id,\n \"threadId\": message_data[\"threadId\"],\n \"snippet\": message_data[\"snippet\"],", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_message.html"} +{"id": "2aa39996e56e-1", "text": "\"snippet\": message_data[\"snippet\"],\n \"body\": body,\n \"subject\": subject,\n \"sender\": sender,\n }\n async def _arun(\n self,\n message_id: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> Dict:\n \"\"\"Run the tool.\"\"\"\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/gmail/get_message.html"} +{"id": "91abaa4924ae-0", "text": "Source code for langchain.tools.vectorstore.tool\n\"\"\"Tools for interacting with vectorstores.\"\"\"\nimport json\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain\nfrom langchain.llms.openai import OpenAI\nfrom langchain.tools.base import BaseTool\nfrom langchain.vectorstores.base import VectorStore\nclass BaseVectorStoreTool(BaseModel):\n \"\"\"Base class for tools that use a VectorStore.\"\"\"\n vectorstore: VectorStore = Field(exclude=True)\n llm: BaseLanguageModel = Field(default_factory=lambda: OpenAI(temperature=0))\n class Config(BaseTool.Config):\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\ndef _create_description_from_template(values: Dict[str, Any]) -> Dict[str, Any]:\n values[\"description\"] = values[\"template\"].format(name=values[\"name\"])\n return values\n[docs]class 
VectorStoreQATool(BaseVectorStoreTool, BaseTool):\n \"\"\"Tool for the VectorDBQA chain. To be initialized with name and chain.\"\"\"\n[docs] @staticmethod\n def get_description(name: str, description: str) -> str:\n template: str = (\n \"Useful for when you need to answer questions about {name}. \"\n \"Whenever you need information about {description} \"\n \"you should ALWAYS use this. \"\n \"Input should be a fully formed question.\"\n )\n return template.format(name=name, description=description)\n def _run(\n self,\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"} +{"id": "91abaa4924ae-1", "text": "def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n chain = RetrievalQA.from_chain_type(\n self.llm, retriever=self.vectorstore.as_retriever()\n )\n return chain.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"VectorStoreQATool does not support async\")\n[docs]class VectorStoreQAWithSourcesTool(BaseVectorStoreTool, BaseTool):\n \"\"\"Tool for the VectorDBQAWithSources chain.\"\"\"\n[docs] @staticmethod\n def get_description(name: str, description: str) -> str:\n template: str = (\n \"Useful for when you need to answer questions about {name} and the sources \"\n \"used to construct the answer. \"\n \"Whenever you need information about {description} \"\n \"you should ALWAYS use this. \"\n \" Input should be a fully formed question. \"\n \"Output is a json serialized dictionary with keys `answer` and `sources`. 
\"\n \"Only use this tool if the user explicitly asks for sources.\"\n )\n return template.format(name=name, description=description)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n chain = RetrievalQAWithSourcesChain.from_chain_type(\n self.llm, retriever=self.vectorstore.as_retriever()\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"} +{"id": "91abaa4924ae-2", "text": "self.llm, retriever=self.vectorstore.as_retriever()\n )\n return json.dumps(chain({chain.question_key: query}, return_only_outputs=True))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"VectorStoreQAWithSourcesTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/vectorstore/tool.html"} +{"id": "86bb01339d31-0", "text": "Source code for langchain.tools.pubmed.tool\n\"\"\"Tool for the Pubmed API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.pupmed import PubMedAPIWrapper\n[docs]class PubmedQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to search using the PubMed API.\"\"\"\n name = \"PubMed\"\n description = (\n \"A wrapper around PubMed.org \"\n \"Useful for when you need to answer questions about Physics, Mathematics, \"\n \"Computer Science, Quantitative Biology, Quantitative Finance, Statistics, \"\n \"Electrical Engineering, and Economics \"\n \"from scientific articles on PubMed.org. 
\"\n \"Input should be a search query.\"\n )\n api_wrapper: PubMedAPIWrapper = Field(default_factory=PubMedAPIWrapper)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Arxiv tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the PubMed tool asynchronously.\"\"\"\n raise NotImplementedError(\"PubMedAPIWrapper does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/pubmed/tool.html"} +{"id": "6e21864c2843-0", "text": "Source code for langchain.tools.google_search.tool\n\"\"\"Tool for the Google search API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_search import GoogleSearchAPIWrapper\n[docs]class GoogleSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Google search API.\"\"\"\n name = \"google_search\"\n description = (\n \"A wrapper around Google Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query.\"\n )\n api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GoogleSearchRun does not support async\")\n[docs]class GoogleSearchResults(BaseTool):\n \"\"\"Tool that has capability to query the Google Search API and get back json.\"\"\"\n name = \"Google Search Results JSON\"\n description = (\n \"A wrapper around Google Search. 
\"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_search/tool.html"} +{"id": "6e21864c2843-1", "text": "api_wrapper: GoogleSearchAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GoogleSearchRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_search/tool.html"} +{"id": "89b10f61c372-0", "text": "Source code for langchain.tools.searx_search.tool\n\"\"\"Tool for the SearxNG search API.\"\"\"\nfrom typing import Optional\nfrom pydantic import Extra\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool, Field\nfrom langchain.utilities.searx_search import SearxSearchWrapper\n[docs]class SearxSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query a Searx instance.\"\"\"\n name = \"searx_search\"\n description = (\n \"A meta search engine.\"\n \"Useful for when you need to answer questions about current events.\"\n \"Input should be a search query.\"\n )\n wrapper: SearxSearchWrapper\n kwargs: dict = Field(default_factory=dict)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.wrapper.run(query, **self.kwargs)\n async def _arun(\n self,\n query: str,\n run_manager: 
Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return await self.wrapper.arun(query, **self.kwargs)\n[docs]class SearxSearchResults(BaseTool):\n \"\"\"Tool that has the capability to query a Searx instance and get back json.\"\"\"\n name = \"Searx Search Results\"\n description = (\n \"A meta search engine.\"\n \"Useful for when you need to answer questions about current events.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/searx_search/tool.html"} +{"id": "89b10f61c372-1", "text": "\"Useful for when you need to answer questions about current events.\"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n wrapper: SearxSearchWrapper\n num_results: int = 4\n kwargs: dict = Field(default_factory=dict)\n class Config:\n \"\"\"Pydantic config.\"\"\"\n extra = Extra.allow\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.wrapper.results(query, self.num_results, **self.kwargs))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n return (\n await self.wrapper.aresults(query, self.num_results, **self.kwargs)\n ).__str__()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/searx_search/tool.html"} +{"id": "2d1f6d93ec8a-0", "text": "Source code for langchain.tools.requests.tool\n# flake8: noqa\n\"\"\"Tools for making requests to an API endpoint.\"\"\"\nimport json\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.requests import TextRequestsWrapper\nfrom langchain.tools.base import BaseTool\ndef _parse_input(text: str) -> Dict[str, Any]:\n \"\"\"Parse the json string 
into a dict.\"\"\"\n return json.loads(text)\ndef _clean_url(url: str) -> str:\n \"\"\"Strips quotes from the url.\"\"\"\n return url.strip(\"\\\"'\")\n[docs]class BaseRequestsTool(BaseModel):\n \"\"\"Base class for requests tools.\"\"\"\n requests_wrapper: TextRequestsWrapper\n[docs]class RequestsGetTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a GET request to an API endpoint.\"\"\"\n name = \"requests_get\"\n description = \"A portal to the internet. Use this when you need to get specific content from a website. Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.\"\n def _run(\n self, url: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n return self.requests_wrapper.get(_clean_url(url))\n async def _arun(\n self,\n url: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n return await self.requests_wrapper.aget(_clean_url(url))\n[docs]class RequestsPostTool(BaseRequestsTool, BaseTool):", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} +{"id": "2d1f6d93ec8a-1", "text": "[docs]class RequestsPostTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a POST request to an API endpoint.\"\"\"\n name = \"requests_post\"\n description = \"\"\"Use this when you want to POST to a website.\n Input should be a json string with two keys: \"url\" and \"data\".\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \n key-value pairs you want to POST to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the POST request.\n \"\"\"\n def _run(\n self, text: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n data = _parse_input(text)\n return 
self.requests_wrapper.post(_clean_url(data[\"url\"]), data[\"data\"])\n except Exception as e:\n return repr(e)\n async def _arun(\n self,\n text: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n try:\n data = _parse_input(text)\n return await self.requests_wrapper.apost(\n _clean_url(data[\"url\"]), data[\"data\"]\n )\n except Exception as e:\n return repr(e)\n[docs]class RequestsPatchTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a PATCH request to an API endpoint.\"\"\"\n name = \"requests_patch\"\n description = \"\"\"Use this when you want to PATCH to a website.\n Input should be a json string with two keys: \"url\" and \"data\".", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} +{"id": "2d1f6d93ec8a-2", "text": "Input should be a json string with two keys: \"url\" and \"data\".\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \n key-value pairs you want to PATCH to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the PATCH request.\n \"\"\"\n def _run(\n self, text: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n data = _parse_input(text)\n return self.requests_wrapper.patch(_clean_url(data[\"url\"]), data[\"data\"])\n except Exception as e:\n return repr(e)\n async def _arun(\n self,\n text: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n try:\n data = _parse_input(text)\n return await self.requests_wrapper.apatch(\n _clean_url(data[\"url\"]), data[\"data\"]\n )\n except Exception as e:\n return repr(e)\n[docs]class RequestsPutTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a PUT request to an API endpoint.\"\"\"\n name = \"requests_put\"\n description = 
\"\"\"Use this when you want to PUT to a website.\n Input should be a json string with two keys: \"url\" and \"data\".\n The value of \"url\" should be a string, and the value of \"data\" should be a dictionary of \n key-value pairs you want to PUT to the url.", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} +{"id": "2d1f6d93ec8a-3", "text": "key-value pairs you want to PUT to the url.\n Be careful to always use double quotes for strings in the json string.\n The output will be the text response of the PUT request.\n \"\"\"\n def _run(\n self, text: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n try:\n data = _parse_input(text)\n return self.requests_wrapper.put(_clean_url(data[\"url\"]), data[\"data\"])\n except Exception as e:\n return repr(e)\n async def _arun(\n self,\n text: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n try:\n data = _parse_input(text)\n return await self.requests_wrapper.aput(\n _clean_url(data[\"url\"]), data[\"data\"]\n )\n except Exception as e:\n return repr(e)\n[docs]class RequestsDeleteTool(BaseRequestsTool, BaseTool):\n \"\"\"Tool for making a DELETE request to an API endpoint.\"\"\"\n name = \"requests_delete\"\n description = \"A portal to the internet. Use this when you need to make a DELETE request to a URL. 
Input should be a specific url, and the output will be the text response of the DELETE request.\"\n def _run(\n self,\n url: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool.\"\"\"\n return self.requests_wrapper.delete(_clean_url(url))\n async def _arun(\n self,\n url: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} +{"id": "2d1f6d93ec8a-4", "text": "async def _arun(\n self,\n url: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Run the tool asynchronously.\"\"\"\n return await self.requests_wrapper.adelete(_clean_url(url))", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/requests/tool.html"} +{"id": "3e9272b626bc-0", "text": "Source code for langchain.tools.playwright.extract_text\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\n[docs]class ExtractTextTool(BaseBrowserTool):\n name: str = \"extract_text\"\n description: str = \"Extract all the text on the current webpage\"\n args_schema: Type[BaseModel] = BaseModel\n @root_validator\n def check_bs_import(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError:\n raise ValueError(\n \"The 'beautifulsoup4' package is required to use this tool.\"\n \" Please install it with 'pip install beautifulsoup4'.\"\n )\n return values\n def _run(self, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n # Use Beautiful Soup since it's faster than looping through the elements\n from bs4 
import BeautifulSoup\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n html_content = page.content()\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")\n return \" \".join(text for text in soup.stripped_strings)\n async def _arun(\n self, run_manager: Optional[AsyncCallbackManagerForToolRun] = None", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_text.html"} +{"id": "3e9272b626bc-1", "text": "self, run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n # Use Beautiful Soup since it's faster than looping through the elements\n from bs4 import BeautifulSoup\n page = await aget_current_page(self.async_browser)\n html_content = await page.content()\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")\n return \" \".join(text for text in soup.stripped_strings)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_text.html"} +{"id": "a38d6081c983-0", "text": "Source code for langchain.tools.playwright.navigate_back\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\n[docs]class NavigateBackTool(BaseBrowserTool):\n \"\"\"Navigate back to the previous page in the browser history.\"\"\"\n name: str = \"previous_webpage\"\n description: str = \"Navigate back to the previous page in the browser history\"\n args_schema: Type[BaseModel] = 
BaseModel\n def _run(self, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n response = page.go_back()\n if response:\n return (\n f\"Navigated back to the previous page with URL '{response.url}'.\"\n f\" Status code {response.status}\"\n )\n else:\n return \"Unable to navigate back; no previous page in the history\"\n async def _arun(\n self,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n response = await page.go_back()\n if response:\n return (", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate_back.html"} +{"id": "a38d6081c983-1", "text": "response = await page.go_back()\n if response:\n return (\n f\"Navigated back to the previous page with URL '{response.url}'.\"\n f\" Status code {response.status}\"\n )\n else:\n return \"Unable to navigate back; no previous page in the history\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate_back.html"} +{"id": "6e3d4cad5cd6-0", "text": "Source code for langchain.tools.playwright.extract_hyperlinks\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, Any, Optional, Type\nfrom pydantic import BaseModel, Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\nif TYPE_CHECKING:\n pass\nclass ExtractHyperlinksToolInput(BaseModel):\n \"\"\"Input for 
ExtractHyperlinksTool.\"\"\"\n absolute_urls: bool = Field(\n default=False,\n description=\"Return absolute URLs instead of relative URLs\",\n )\n[docs]class ExtractHyperlinksTool(BaseBrowserTool):\n \"\"\"Extract all hyperlinks on the page.\"\"\"\n name: str = \"extract_hyperlinks\"\n description: str = \"Extract all hyperlinks on the current webpage\"\n args_schema: Type[BaseModel] = ExtractHyperlinksToolInput\n @root_validator\n def check_bs_import(cls, values: dict) -> dict:\n \"\"\"Check that the arguments are valid.\"\"\"\n try:\n from bs4 import BeautifulSoup # noqa: F401\n except ImportError:\n raise ValueError(\n \"The 'beautifulsoup4' package is required to use this tool.\"\n \" Please install it with 'pip install beautifulsoup4'.\"\n )\n return values\n[docs] @staticmethod\n def scrape_page(page: Any, html_content: str, absolute_urls: bool) -> str:\n from urllib.parse import urljoin\n from bs4 import BeautifulSoup\n # Parse the HTML content with BeautifulSoup\n soup = BeautifulSoup(html_content, \"lxml\")\n # Find all the anchor elements and extract their href attributes", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_hyperlinks.html"} +{"id": "6e3d4cad5cd6-1", "text": "# Find all the anchor elements and extract their href attributes\n anchors = soup.find_all(\"a\")\n if absolute_urls:\n base_url = page.url\n links = [urljoin(base_url, anchor.get(\"href\", \"\")) for anchor in anchors]\n else:\n links = [anchor.get(\"href\", \"\") for anchor in anchors]\n # Return the list of links as a JSON string\n return json.dumps(links)\n def _run(\n self,\n absolute_urls: bool = False,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n html_content = page.content()\n return self.scrape_page(page, html_content, 
absolute_urls)\n async def _arun(\n self,\n absolute_urls: bool = False,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n html_content = await page.content()\n return self.scrape_page(page, html_content, absolute_urls)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/extract_hyperlinks.html"} +{"id": "679e5d12d315-0", "text": "Source code for langchain.tools.playwright.click\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\nclass ClickToolInput(BaseModel):\n \"\"\"Input for ClickTool.\"\"\"\n selector: str = Field(..., description=\"CSS selector for the element to click\")\n[docs]class ClickTool(BaseBrowserTool):\n name: str = \"click_element\"\n description: str = \"Click on an element with the given CSS selector\"\n args_schema: Type[BaseModel] = ClickToolInput\n visible_only: bool = True\n \"\"\"Whether to consider only visible elements.\"\"\"\n playwright_strict: bool = False\n \"\"\"Whether to employ Playwright's strict mode when clicking on elements.\"\"\"\n playwright_timeout: float = 1_000\n \"\"\"Timeout (in ms) for Playwright to wait for element to be ready.\"\"\"\n def _selector_effective(self, selector: str) -> str:\n if not self.visible_only:\n return selector\n return f\"{selector} >> visible=1\"\n def _run(\n self,\n selector: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n 
raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n # Navigate to the desired webpage before using this tool", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/click.html"} +{"id": "679e5d12d315-1", "text": "# Navigate to the desired webpage before using this tool\n selector_effective = self._selector_effective(selector=selector)\n from playwright.sync_api import TimeoutError as PlaywrightTimeoutError\n try:\n page.click(\n selector_effective,\n strict=self.playwright_strict,\n timeout=self.playwright_timeout,\n )\n except PlaywrightTimeoutError:\n return f\"Unable to click on element '{selector}'\"\n return f\"Clicked element '{selector}'\"\n async def _arun(\n self,\n selector: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n # Navigate to the desired webpage before using this tool\n selector_effective = self._selector_effective(selector=selector)\n from playwright.async_api import TimeoutError as PlaywrightTimeoutError\n try:\n await page.click(\n selector_effective,\n strict=self.playwright_strict,\n timeout=self.playwright_timeout,\n )\n except PlaywrightTimeoutError:\n return f\"Unable to click on element '{selector}'\"\n return f\"Clicked element '{selector}'\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/click.html"} +{"id": "dc79c052ed47-0", "text": "Source code for langchain.tools.playwright.navigate\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom 
langchain.tools.playwright.utils import (\n aget_current_page,\n get_current_page,\n)\nclass NavigateToolInput(BaseModel):\n \"\"\"Input for NavigateTool.\"\"\"\n url: str = Field(..., description=\"url to navigate to\")\n[docs]class NavigateTool(BaseBrowserTool):\n name: str = \"navigate_browser\"\n description: str = \"Navigate a browser to the specified URL\"\n args_schema: Type[BaseModel] = NavigateToolInput\n def _run(\n self,\n url: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n response = page.goto(url)\n status = response.status if response else \"unknown\"\n return f\"Navigating to {url} returned status code {status}\"\n async def _arun(\n self,\n url: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n response = await page.goto(url)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate.html"} +{"id": "dc79c052ed47-1", "text": "response = await page.goto(url)\n status = response.status if response else \"unknown\"\n return f\"Navigating to {url} returned status code {status}\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/navigate.html"} +{"id": "9eba7ee60151-0", "text": "Source code for langchain.tools.playwright.get_elements\nfrom __future__ import annotations\nimport json\nfrom typing import TYPE_CHECKING, List, Optional, Sequence, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import 
BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\nif TYPE_CHECKING:\n from playwright.async_api import Page as AsyncPage\n from playwright.sync_api import Page as SyncPage\nclass GetElementsToolInput(BaseModel):\n \"\"\"Input for GetElementsTool.\"\"\"\n selector: str = Field(\n ...,\n description=\"CSS selector, such as '*', 'div', 'p', 'a', #id, .classname\",\n )\n attributes: List[str] = Field(\n default_factory=lambda: [\"innerText\"],\n description=\"Set of attributes to retrieve for each element\",\n )\nasync def _aget_elements(\n page: AsyncPage, selector: str, attributes: Sequence[str]\n) -> List[dict]:\n \"\"\"Get elements matching the given CSS selector.\"\"\"\n elements = await page.query_selector_all(selector)\n results = []\n for element in elements:\n result = {}\n for attribute in attributes:\n if attribute == \"innerText\":\n val: Optional[str] = await element.inner_text()\n else:\n val = await element.get_attribute(attribute)\n if val is not None and val.strip() != \"\":\n result[attribute] = val\n if result:\n results.append(result)\n return results\ndef _get_elements(\n page: SyncPage, selector: str, attributes: Sequence[str]\n) -> List[dict]:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"} +{"id": "9eba7ee60151-1", "text": ") -> List[dict]:\n \"\"\"Get elements matching the given CSS selector.\"\"\"\n elements = page.query_selector_all(selector)\n results = []\n for element in elements:\n result = {}\n for attribute in attributes:\n if attribute == \"innerText\":\n val: Optional[str] = element.inner_text()\n else:\n val = element.get_attribute(attribute)\n if val is not None and val.strip() != \"\":\n result[attribute] = val\n if result:\n results.append(result)\n return results\n[docs]class GetElementsTool(BaseBrowserTool):\n name: str = \"get_elements\"\n description: str = (\n \"Retrieve elements in the current web page matching 
the given CSS selector\"\n )\n args_schema: Type[BaseModel] = GetElementsToolInput\n def _run(\n self,\n selector: str,\n attributes: Sequence[str] = [\"innerText\"],\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n # Navigate to the desired webpage before using this tool\n results = _get_elements(page, selector, attributes)\n return json.dumps(results, ensure_ascii=False)\n async def _arun(\n self,\n selector: str,\n attributes: Sequence[str] = [\"innerText\"],\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"} +{"id": "9eba7ee60151-2", "text": "raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n # Navigate to the desired webpage before using this tool\n results = await _aget_elements(page, selector, attributes)\n return json.dumps(results, ensure_ascii=False)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/get_elements.html"} +{"id": "934e6a0e22a9-0", "text": "Source code for langchain.tools.playwright.current_page\nfrom __future__ import annotations\nfrom typing import Optional, Type\nfrom pydantic import BaseModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.playwright.base import BaseBrowserTool\nfrom langchain.tools.playwright.utils import aget_current_page, get_current_page\n[docs]class CurrentWebPageTool(BaseBrowserTool):\n name: str = \"current_webpage\"\n description: str = \"Returns the 
URL of the current page\"\n args_schema: Type[BaseModel] = BaseModel\n def _run(\n self,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.sync_browser is None:\n raise ValueError(f\"Synchronous browser not provided to {self.name}\")\n page = get_current_page(self.sync_browser)\n return str(page.url)\n async def _arun(\n self,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n if self.async_browser is None:\n raise ValueError(f\"Asynchronous browser not provided to {self.name}\")\n page = await aget_current_page(self.async_browser)\n return str(page.url)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/playwright/current_page.html"} +{"id": "51db1bbf3ae1-0", "text": "Source code for langchain.tools.ddg_search.tool\n\"\"\"Tool for the DuckDuckGo search API.\"\"\"\nimport warnings\nfrom typing import Any, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper\n[docs]class DuckDuckGoSearchRun(BaseTool):\n \"\"\"Tool that adds the capability to query the DuckDuckGo search API.\"\"\"\n name = \"duckduckgo_search\"\n description = (\n \"A wrapper around DuckDuckGo Search. \"\n \"Useful for when you need to answer questions about current events. 
\"\n \"Input should be a search query.\"\n )\n api_wrapper: DuckDuckGoSearchAPIWrapper = Field(\n default_factory=DuckDuckGoSearchAPIWrapper\n )\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"DuckDuckGoSearch does not support async\")\n[docs]class DuckDuckGoSearchResults(BaseTool):\n \"\"\"Tool that queries the Duck Duck Go Search API and gets back JSON.\"\"\"\n name = \"DuckDuckGo Results JSON\"\n description = (\n \"A wrapper around Duck Duck Go Search. \"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/ddg_search/tool.html"} +{"id": "51db1bbf3ae1-1", "text": "description = (\n \"A wrapper around Duck Duck Go Search. \"\n \"Useful for when you need to answer questions about current events. \"\n \"Input should be a search query. Output is a JSON array of the query results\"\n )\n num_results: int = 4\n api_wrapper: DuckDuckGoSearchAPIWrapper = Field(\n default_factory=DuckDuckGoSearchAPIWrapper\n )\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return str(self.api_wrapper.results(query, self.num_results))\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"DuckDuckGoSearchResults does not support async\")\ndef DuckDuckGoSearchTool(*args: Any, **kwargs: Any) -> DuckDuckGoSearchRun:\n \"\"\"\n Deprecated. Use DuckDuckGoSearchRun instead.\n Args:\n *args:\n **kwargs:\n Returns:\n DuckDuckGoSearchRun\n \"\"\"\n warnings.warn(\n \"DuckDuckGoSearchTool will be deprecated in the future. 
\"\n \"Please use DuckDuckGoSearchRun instead.\",\n DeprecationWarning,\n )\n return DuckDuckGoSearchRun(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/ddg_search/tool.html"} +{"id": "d688346a3919-0", "text": "Source code for langchain.tools.brave_search.tool\nfrom __future__ import annotations\nfrom typing import Any, Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.brave_search import BraveSearchWrapper\n[docs]class BraveSearch(BaseTool):\n name = \"brave_search\"\n description = (\n \"a search engine. \"\n \"useful for when you need to answer questions about current events.\"\n \" input should be a search query.\"\n )\n search_wrapper: BraveSearchWrapper\n[docs] @classmethod\n def from_api_key(\n cls, api_key: str, search_kwargs: Optional[dict] = None, **kwargs: Any\n ) -> BraveSearch:\n wrapper = BraveSearchWrapper(api_key=api_key, search_kwargs=search_kwargs or {})\n return cls(search_wrapper=wrapper, **kwargs)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.search_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"BraveSearch does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/brave_search/tool.html"} +{"id": "c85270e08b01-0", "text": "Source code for langchain.tools.scenexplain.tool\n\"\"\"Tool for the SceneXplain API.\"\"\"\nfrom typing import Optional\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom 
langchain.utilities.scenexplain import SceneXplainAPIWrapper\nclass SceneXplainInput(BaseModel):\n \"\"\"Input for SceneXplain.\"\"\"\n query: str = Field(..., description=\"The link to the image to explain\")\n[docs]class SceneXplainTool(BaseTool):\n \"\"\"Tool that adds the capability to explain images.\"\"\"\n name = \"image_explainer\"\n description = (\n \"An Image Captioning Tool: Use this tool to generate a detailed caption \"\n \"for an image. The input can be an image file of any format, and \"\n \"the output will be a text description that covers every detail of the image.\"\n )\n api_wrapper: SceneXplainAPIWrapper = Field(default_factory=SceneXplainAPIWrapper)\n def _run(\n self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"SceneXplainTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/scenexplain/tool.html"} +{"id": "e31c2c2d80a6-0", "text": "Source code for langchain.tools.openapi.utils.api_models\n\"\"\"Pydantic models for parsing an OpenAPI spec.\"\"\"\nimport logging\nfrom enum import Enum\nfrom typing import Any, Dict, List, Optional, Sequence, Tuple, Type, Union\nfrom openapi_schema_pydantic import MediaType, Parameter, Reference, RequestBody, Schema\nfrom pydantic import BaseModel, Field\nfrom langchain.tools.openapi.utils.openapi_utils import HTTPVerb, OpenAPISpec\nlogger = logging.getLogger(__name__)\nPRIMITIVE_TYPES = {\n \"integer\": int,\n \"number\": float,\n \"string\": str,\n \"boolean\": bool,\n \"array\": List,\n \"object\": Dict,\n \"null\": None,\n}\n# See https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#parameterIn\n# for more info.\nclass APIPropertyLocation(Enum):\n 
\"\"\"The location of the property.\"\"\"\n QUERY = \"query\"\n PATH = \"path\"\n HEADER = \"header\"\n COOKIE = \"cookie\" # Not yet supported\n @classmethod\n def from_str(cls, location: str) -> \"APIPropertyLocation\":\n \"\"\"Parse an APIPropertyLocation.\"\"\"\n try:\n return cls(location)\n except ValueError:\n raise ValueError(\n f\"Invalid APIPropertyLocation. Valid values are {cls.__members__}\"\n )\n_SUPPORTED_MEDIA_TYPES = (\"application/json\",)\nSUPPORTED_LOCATIONS = {\n APIPropertyLocation.QUERY,\n APIPropertyLocation.PATH,\n}\nINVALID_LOCATION_TEMPL = (\n 'Unsupported APIPropertyLocation \"{location}\"'\n \" for parameter {name}. \"\n + f\"Valid values are {[loc.value for loc in SUPPORTED_LOCATIONS]}\"\n)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-1", "text": ")\nSCHEMA_TYPE = Union[str, Type, tuple, None, Enum]\nclass APIPropertyBase(BaseModel):\n \"\"\"Base model for an API property.\"\"\"\n # The name of the parameter is required and is case-sensitive.\n # If \"in\" is \"path\", the \"name\" field must correspond to a template expression\n # within the path field in the Paths Object.\n # If \"in\" is \"header\" and the \"name\" field is \"Accept\", \"Content-Type\",\n # or \"Authorization\", the parameter definition is ignored.\n # For all other cases, the \"name\" corresponds to the parameter\n # name used by the \"in\" property.\n name: str = Field(alias=\"name\")\n \"\"\"The name of the property.\"\"\"\n required: bool = Field(alias=\"required\")\n \"\"\"Whether the property is required.\"\"\"\n type: SCHEMA_TYPE = Field(alias=\"type\")\n \"\"\"The type of the property.\n \n Either a primitive type, a component/parameter type,\n or an array or 'object' (dict) of the above.\"\"\"\n default: Optional[Any] = Field(alias=\"default\", default=None)\n \"\"\"The default value of the property.\"\"\"\n description: Optional[str] = Field(alias=\"description\", 
default=None)\n \"\"\"The description of the property.\"\"\"\nclass APIProperty(APIPropertyBase):\n \"\"\"A model for a property in the query, path, header, or cookie params.\"\"\"\n location: APIPropertyLocation = Field(alias=\"location\")\n \"\"\"The path/how it's being passed to the endpoint.\"\"\"\n @staticmethod\n def _cast_schema_list_type(schema: Schema) -> Optional[Union[str, Tuple[str, ...]]]:\n type_ = schema.type\n if not isinstance(type_, list):\n return type_", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-2", "text": "if not isinstance(type_, list):\n return type_\n else:\n return tuple(type_)\n @staticmethod\n def _get_schema_type_for_enum(parameter: Parameter, schema: Schema) -> Enum:\n \"\"\"Get the schema type when the parameter is an enum.\"\"\"\n param_name = f\"{parameter.name}Enum\"\n return Enum(param_name, {str(v): v for v in schema.enum})\n @staticmethod\n def _get_schema_type_for_array(\n schema: Schema,\n ) -> Optional[Union[str, Tuple[str, ...]]]:\n items = schema.items\n if isinstance(items, Schema):\n schema_type = APIProperty._cast_schema_list_type(items)\n elif isinstance(items, Reference):\n ref_name = items.ref.split(\"/\")[-1]\n schema_type = ref_name # TODO: Add ref definitions to make this valid\n else:\n raise ValueError(f\"Unsupported array items: {items}\")\n if isinstance(schema_type, str):\n # TODO: recurse\n schema_type = (schema_type,)\n return schema_type\n @staticmethod\n def _get_schema_type(parameter: Parameter, schema: Optional[Schema]) -> SCHEMA_TYPE:\n if schema is None:\n return None\n schema_type: SCHEMA_TYPE = APIProperty._cast_schema_list_type(schema)\n if schema_type == \"array\":\n schema_type = APIProperty._get_schema_type_for_array(schema)\n elif schema_type == \"object\":\n # TODO: Resolve array and object types to components.\n raise NotImplementedError(\"Objects not yet supported\")\n elif schema_type in 
PRIMITIVE_TYPES:\n if schema.enum:\n schema_type = APIProperty._get_schema_type_for_enum(parameter, schema)\n else:\n # Directly use the primitive type\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-3", "text": "else:\n # Directly use the primitive type\n pass\n else:\n raise NotImplementedError(f\"Unsupported type: {schema_type}\")\n return schema_type\n @staticmethod\n def _validate_location(location: APIPropertyLocation, name: str) -> None:\n if location not in SUPPORTED_LOCATIONS:\n raise NotImplementedError(\n INVALID_LOCATION_TEMPL.format(location=location, name=name)\n )\n @staticmethod\n def _validate_content(content: Optional[Dict[str, MediaType]]) -> None:\n if content:\n raise ValueError(\n \"API Properties with media content not supported. \"\n \"Media content only supported within APIRequestBodyProperty's\"\n )\n @staticmethod\n def _get_schema(parameter: Parameter, spec: OpenAPISpec) -> Optional[Schema]:\n schema = parameter.param_schema\n if isinstance(schema, Reference):\n schema = spec.get_referenced_schema(schema)\n elif schema is None:\n return None\n elif not isinstance(schema, Schema):\n raise ValueError(f\"Error dereferencing schema: {schema}\")\n return schema\n @staticmethod\n def is_supported_location(location: str) -> bool:\n \"\"\"Return whether the provided location is supported.\"\"\"\n try:\n return APIPropertyLocation.from_str(location) in SUPPORTED_LOCATIONS\n except ValueError:\n return False\n @classmethod\n def from_parameter(cls, parameter: Parameter, spec: OpenAPISpec) -> \"APIProperty\":\n \"\"\"Instantiate from an OpenAPI Parameter.\"\"\"\n location = APIPropertyLocation.from_str(parameter.param_in)\n cls._validate_location(\n location,\n parameter.name,\n )\n cls._validate_content(parameter.content)\n schema = cls._get_schema(parameter, spec)", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-4", "text": "schema = cls._get_schema(parameter, spec)\n schema_type = cls._get_schema_type(parameter, schema)\n default_val = schema.default if schema is not None else None\n return cls(\n name=parameter.name,\n location=location,\n default=default_val,\n description=parameter.description,\n required=parameter.required,\n type=schema_type,\n )\nclass APIRequestBodyProperty(APIPropertyBase):\n \"\"\"A model for a request body property.\"\"\"\n properties: List[\"APIRequestBodyProperty\"] = Field(alias=\"properties\")\n \"\"\"The sub-properties of the property.\"\"\"\n # This is useful for handling nested property cycles.\n # We can define separate types in that case.\n references_used: List[str] = Field(alias=\"references_used\")\n \"\"\"The references used by the property.\"\"\"\n @classmethod\n def _process_object_schema(\n cls, schema: Schema, spec: OpenAPISpec, references_used: List[str]\n ) -> Tuple[Union[str, List[str], None], List[\"APIRequestBodyProperty\"]]:\n properties = []\n required_props = schema.required or []\n if schema.properties is None:\n raise ValueError(\n f\"No properties found when processing object schema: {schema}\"\n )\n for prop_name, prop_schema in schema.properties.items():\n if isinstance(prop_schema, Reference):\n ref_name = prop_schema.ref.split(\"/\")[-1]\n if ref_name not in references_used:\n references_used.append(ref_name)\n prop_schema = spec.get_referenced_schema(prop_schema)\n else:\n continue\n properties.append(\n cls.from_schema(\n schema=prop_schema,\n name=prop_name,\n required=prop_name in required_props,\n spec=spec,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-5", "text": "required=prop_name in required_props,\n spec=spec,\n references_used=references_used,\n )\n )\n return schema.type, properties\n 
@classmethod\n def _process_array_schema(\n cls, schema: Schema, name: str, spec: OpenAPISpec, references_used: List[str]\n ) -> str:\n items = schema.items\n if items is not None:\n if isinstance(items, Reference):\n ref_name = items.ref.split(\"/\")[-1]\n if ref_name not in references_used:\n references_used.append(ref_name)\n items = spec.get_referenced_schema(items)\n else:\n pass\n return f\"Array<{ref_name}>\"\n else:\n pass\n if isinstance(items, Schema):\n array_type = cls.from_schema(\n schema=items,\n name=f\"{name}Item\",\n required=True, # TODO: Add required\n spec=spec,\n references_used=references_used,\n )\n return f\"Array<{array_type.type}>\"\n return \"array\"\n @classmethod\n def from_schema(\n cls,\n schema: Schema,\n name: str,\n required: bool,\n spec: OpenAPISpec,\n references_used: Optional[List[str]] = None,\n ) -> \"APIRequestBodyProperty\":\n \"\"\"Recursively populate from an OpenAPI Schema.\"\"\"\n if references_used is None:\n references_used = []\n schema_type = schema.type\n properties: List[APIRequestBodyProperty] = []\n if schema_type == \"object\" and schema.properties:\n schema_type, properties = cls._process_object_schema(\n schema, spec, references_used\n )\n elif schema_type == \"array\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-6", "text": "schema, spec, references_used\n )\n elif schema_type == \"array\":\n schema_type = cls._process_array_schema(schema, name, spec, references_used)\n elif schema_type in PRIMITIVE_TYPES:\n # Use the primitive type directly\n pass\n elif schema_type is None:\n # No typing specified/parsed. 
Will map to 'any'\n pass\n else:\n raise ValueError(f\"Unsupported type: {schema_type}\")\n return cls(\n name=name,\n required=required,\n type=schema_type,\n default=schema.default,\n description=schema.description,\n properties=properties,\n references_used=references_used,\n )\nclass APIRequestBody(BaseModel):\n \"\"\"A model for a request body.\"\"\"\n description: Optional[str] = Field(alias=\"description\")\n \"\"\"The description of the request body.\"\"\"\n properties: List[APIRequestBodyProperty] = Field(alias=\"properties\")\n # E.g., application/json - we only support JSON at the moment.\n media_type: str = Field(alias=\"media_type\")\n \"\"\"The media type of the request body.\"\"\"\n @classmethod\n def _process_supported_media_type(\n cls,\n media_type_obj: MediaType,\n spec: OpenAPISpec,\n ) -> List[APIRequestBodyProperty]:\n \"\"\"Process the media type of the request body.\"\"\"\n references_used = []\n schema = media_type_obj.media_type_schema\n if isinstance(schema, Reference):\n references_used.append(schema.ref.split(\"/\")[-1])\n schema = spec.get_referenced_schema(schema)\n if schema is None:\n raise ValueError(\n f\"Could not resolve schema for media type: {media_type_obj}\"\n )\n api_request_body_properties = []", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-7", "text": ")\n api_request_body_properties = []\n required_properties = schema.required or []\n if schema.type == \"object\" and schema.properties:\n for prop_name, prop_schema in schema.properties.items():\n if isinstance(prop_schema, Reference):\n prop_schema = spec.get_referenced_schema(prop_schema)\n api_request_body_properties.append(\n APIRequestBodyProperty.from_schema(\n schema=prop_schema,\n name=prop_name,\n required=prop_name in required_properties,\n spec=spec,\n )\n )\n else:\n api_request_body_properties.append(\n APIRequestBodyProperty(\n name=\"body\",\n required=True,\n 
type=schema.type,\n default=schema.default,\n description=schema.description,\n properties=[],\n references_used=references_used,\n )\n )\n return api_request_body_properties\n @classmethod\n def from_request_body(\n cls, request_body: RequestBody, spec: OpenAPISpec\n ) -> \"APIRequestBody\":\n \"\"\"Instantiate from an OpenAPI RequestBody.\"\"\"\n properties = []\n for media_type, media_type_obj in request_body.content.items():\n if media_type not in _SUPPORTED_MEDIA_TYPES:\n continue\n api_request_body_properties = cls._process_supported_media_type(\n media_type_obj,\n spec,\n )\n properties.extend(api_request_body_properties)\n return cls(\n description=request_body.description,\n properties=properties,\n media_type=media_type,\n )\n[docs]class APIOperation(BaseModel):\n \"\"\"A model for a single API operation.\"\"\"\n operation_id: str = Field(alias=\"operation_id\")\n \"\"\"The unique identifier of the operation.\"\"\"\n description: Optional[str] = Field(alias=\"description\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-8", "text": "description: Optional[str] = Field(alias=\"description\")\n \"\"\"The description of the operation.\"\"\"\n base_url: str = Field(alias=\"base_url\")\n \"\"\"The base URL of the operation.\"\"\"\n path: str = Field(alias=\"path\")\n \"\"\"The path of the operation.\"\"\"\n method: HTTPVerb = Field(alias=\"method\")\n \"\"\"The HTTP method of the operation.\"\"\"\n properties: Sequence[APIProperty] = Field(alias=\"properties\")\n # TODO: Add parse in used components to be able to specify what type of\n # referenced object it is.\n # \"\"\"The properties of the operation.\"\"\"\n # components: Dict[str, BaseModel] = Field(alias=\"components\")\n request_body: Optional[APIRequestBody] = Field(alias=\"request_body\")\n \"\"\"The request body of the operation.\"\"\"\n @staticmethod\n def _get_properties_from_parameters(\n parameters: 
List[Parameter], spec: OpenAPISpec\n ) -> List[APIProperty]:\n \"\"\"Get the properties of the operation.\"\"\"\n properties = []\n for param in parameters:\n if APIProperty.is_supported_location(param.param_in):\n properties.append(APIProperty.from_parameter(param, spec))\n elif param.required:\n raise ValueError(\n INVALID_LOCATION_TEMPL.format(\n location=param.param_in, name=param.name\n )\n )\n else:\n logger.warning(\n INVALID_LOCATION_TEMPL.format(\n location=param.param_in, name=param.name\n )\n + \" Ignoring optional parameter\"\n )\n pass\n return properties\n[docs] @classmethod\n def from_openapi_url(\n cls,\n spec_url: str,\n path: str,\n method: str,\n ) -> \"APIOperation\":", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-9", "text": "path: str,\n method: str,\n ) -> \"APIOperation\":\n \"\"\"Create an APIOperation from an OpenAPI URL.\"\"\"\n spec = OpenAPISpec.from_url(spec_url)\n return cls.from_openapi_spec(spec, path, method)\n[docs] @classmethod\n def from_openapi_spec(\n cls,\n spec: OpenAPISpec,\n path: str,\n method: str,\n ) -> \"APIOperation\":\n \"\"\"Create an APIOperation from an OpenAPI spec.\"\"\"\n operation = spec.get_operation(path, method)\n parameters = spec.get_parameters_for_operation(operation)\n properties = cls._get_properties_from_parameters(parameters, spec)\n operation_id = OpenAPISpec.get_cleaned_operation_id(operation, path, method)\n request_body = spec.get_request_body_for_operation(operation)\n api_request_body = (\n APIRequestBody.from_request_body(request_body, spec)\n if request_body is not None\n else None\n )\n description = operation.description or operation.summary\n if not description and spec.paths is not None:\n description = spec.paths[path].description or spec.paths[path].summary\n return cls(\n operation_id=operation_id,\n description=description,\n base_url=spec.base_url,\n path=path,\n method=method,\n 
properties=properties,\n request_body=api_request_body,\n )\n[docs] @staticmethod\n def ts_type_from_python(type_: SCHEMA_TYPE) -> str:\n if type_ is None:\n # TODO: Handle Nones better. These often result when\n # parsing specs that are < v3\n return \"any\"\n elif isinstance(type_, str):\n return {\n \"str\": \"string\",", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-10", "text": "elif isinstance(type_, str):\n return {\n \"str\": \"string\",\n \"integer\": \"number\",\n \"float\": \"number\",\n \"date-time\": \"string\",\n }.get(type_, type_)\n elif isinstance(type_, tuple):\n return f\"Array<{APIOperation.ts_type_from_python(type_[0])}>\"\n elif isinstance(type_, type) and issubclass(type_, Enum):\n return \" | \".join([f\"'{e.value}'\" for e in type_])\n else:\n return str(type_)\n def _format_nested_properties(\n self, properties: List[APIRequestBodyProperty], indent: int = 2\n ) -> str:\n \"\"\"Format nested properties.\"\"\"\n formatted_props = []\n for prop in properties:\n prop_name = prop.name\n prop_type = self.ts_type_from_python(prop.type)\n prop_required = \"\" if prop.required else \"?\"\n prop_desc = f\"/* {prop.description} */\" if prop.description else \"\"\n if prop.properties:\n nested_props = self._format_nested_properties(\n prop.properties, indent + 2\n )\n prop_type = f\"{{\\n{nested_props}\\n{' ' * indent}}}\"\n formatted_props.append(\n f\"{prop_desc}\\n{' ' * indent}{prop_name}{prop_required}: {prop_type},\"\n )\n return \"\\n\".join(formatted_props)\n[docs] def to_typescript(self) -> str:\n \"\"\"Get typescript string representation of the operation.\"\"\"\n operation_name = self.operation_id\n params = []\n if self.request_body:\n formatted_request_body_props = self._format_nested_properties(\n self.request_body.properties\n )\n params.append(formatted_request_body_props)", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "e31c2c2d80a6-11", "text": "self.request_body.properties\n )\n params.append(formatted_request_body_props)\n for prop in self.properties:\n prop_name = prop.name\n prop_type = self.ts_type_from_python(prop.type)\n prop_required = \"\" if prop.required else \"?\"\n prop_desc = f\"/* {prop.description} */\" if prop.description else \"\"\n params.append(f\"{prop_desc}\\n\\t\\t{prop_name}{prop_required}: {prop_type},\")\n formatted_params = \"\\n\".join(params).strip()\n description_str = f\"/* {self.description} */\" if self.description else \"\"\n typescript_definition = f\"\"\"\n{description_str}\ntype {operation_name} = (_: {{\n{formatted_params}\n}}) => any;\n\"\"\"\n return typescript_definition.strip()\n @property\n def query_params(self) -> List[str]:\n return [\n property.name\n for property in self.properties\n if property.location == APIPropertyLocation.QUERY\n ]\n @property\n def path_params(self) -> List[str]:\n return [\n property.name\n for property in self.properties\n if property.location == APIPropertyLocation.PATH\n ]\n @property\n def body_params(self) -> List[str]:\n if self.request_body is None:\n return []\n return [prop.name for prop in self.request_body.properties]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/openapi/utils/api_models.html"} +{"id": "fbf6bad4eb54-0", "text": "Source code for langchain.tools.google_serper.tool\n\"\"\"Tool for the Serper.dev Google Search API.\"\"\"\nfrom typing import Optional\nfrom pydantic.fields import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.google_serper import GoogleSerperAPIWrapper\n[docs]class GoogleSerperRun(BaseTool):\n \"\"\"Tool that adds the capability to query the Serper.dev Google search API.\"\"\"\n name = 
\"google_serper\"\n    description = (\n        \"A low-cost Google Search API.\"\n        \"Useful for when you need to answer questions about current events.\"\n        \"Input should be a search query.\"\n    )\n    api_wrapper: GoogleSerperAPIWrapper\n    def _run(\n        self,\n        query: str,\n        run_manager: Optional[CallbackManagerForToolRun] = None,\n    ) -> str:\n        \"\"\"Use the tool.\"\"\"\n        return str(self.api_wrapper.run(query))\n    async def _arun(\n        self,\n        query: str,\n        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n    ) -> str:\n        \"\"\"Use the tool asynchronously.\"\"\"\n        return (await self.api_wrapper.arun(query)).__str__()\n[docs]class GoogleSerperResults(BaseTool):\n    \"\"\"Tool that has capability to query the Serper.dev Google Search API\n    and get back json.\"\"\"\n    name = \"Google Serper Results JSON\"\n    description = (\n        \"A low-cost Google Search API.\"\n        \"Useful for when you need to answer questions about current events.\"\n        \"Input should be a search query. Output is a JSON object of the query results\"\n    )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_serper/tool.html"}
+{"id": "fbf6bad4eb54-1", "text": ")\n    api_wrapper: GoogleSerperAPIWrapper = Field(default_factory=GoogleSerperAPIWrapper)\n    def _run(\n        self,\n        query: str,\n        run_manager: Optional[CallbackManagerForToolRun] = None,\n    ) -> str:\n        \"\"\"Use the tool.\"\"\"\n        return str(self.api_wrapper.results(query))\n    async def _arun(\n        self,\n        query: str,\n        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n    ) -> str:\n        \"\"\"Use the tool asynchronously.\"\"\"\n        return (await self.api_wrapper.aresults(query)).__str__()", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/google_serper/tool.html"}
+{"id": "26e51dd8fead-0", "text": "Source code for langchain.tools.jira.tool\n\"\"\"\nThis tool allows agents to interact with the atlassian-python-api library\nand operate on a Jira instance. 
For more information on the\natlassian-python-api library, see https://atlassian-python-api.readthedocs.io/jira.html\nTo use this tool, you must first set as environment variables:\n JIRA_API_TOKEN\n JIRA_USERNAME\n JIRA_INSTANCE_URL\nBelow is a sample script that uses the Jira tool:\n```python\nfrom langchain.agents import AgentType\nfrom langchain.agents import initialize_agent\nfrom langchain.agents.agent_toolkits.jira.toolkit import JiraToolkit\nfrom langchain.llms import OpenAI\nfrom langchain.utilities.jira import JiraAPIWrapper\nllm = OpenAI(temperature=0)\njira = JiraAPIWrapper()\ntoolkit = JiraToolkit.from_jira_api_wrapper(jira)\nagent = initialize_agent(\n toolkit.get_tools(),\n llm,\n agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True\n)\n```\n\"\"\"\nfrom typing import Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.jira import JiraAPIWrapper\n[docs]class JiraAction(BaseTool):\n api_wrapper: JiraAPIWrapper = Field(default_factory=JiraAPIWrapper)\n mode: str\n name = \"\"\n description = \"\"\n def _run(\n self,\n instructions: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Atlassian Jira API to run an operation.\"\"\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/jira/tool.html"} +{"id": "26e51dd8fead-1", "text": "\"\"\"Use the Atlassian Jira API to run an operation.\"\"\"\n return self.api_wrapper.run(self.mode, instructions)\n async def _arun(\n self,\n _: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Atlassian Jira API to run an operation.\"\"\"\n raise NotImplementedError(\"JiraAction does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/jira/tool.html"} +{"id": "de657b342993-0", "text": "Source 
code for langchain.tools.steamship_image_generation.tool\n\"\"\"This tool allows agents to generate images using Steamship.\nSteamship offers access to different third party image generation APIs\nusing a single API key.\nToday the following models are supported:\n- Dall-E\n- Stable Diffusion\nTo use this tool, you must first set as environment variables:\n    STEAMSHIP_API_KEY\n\"\"\"\nfrom __future__ import annotations\nfrom enum import Enum\nfrom typing import TYPE_CHECKING, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n    AsyncCallbackManagerForToolRun,\n    CallbackManagerForToolRun,\n)\nfrom langchain.tools import BaseTool\nfrom langchain.tools.steamship_image_generation.utils import make_image_public\nfrom langchain.utils import get_from_dict_or_env\nif TYPE_CHECKING:\n    pass\nclass ModelName(str, Enum):\n    \"\"\"Supported Image Models for generation.\"\"\"\n    DALL_E = \"dall-e\"\n    STABLE_DIFFUSION = \"stable-diffusion\"\nSUPPORTED_IMAGE_SIZES = {\n    ModelName.DALL_E: (\"256x256\", \"512x512\", \"1024x1024\"),\n    ModelName.STABLE_DIFFUSION: (\"512x512\", \"768x768\"),\n}\n[docs]class SteamshipImageGenerationTool(BaseTool):\n    try:\n        from steamship import Steamship\n    except ImportError:\n        pass\n    \"\"\"Tool used to generate images from a text-prompt.\"\"\"\n    model_name: ModelName\n    size: Optional[str] = \"512x512\"\n    steamship: Steamship\n    return_urls: Optional[bool] = False\n    name = \"GenerateImage\"\n    description = (\n        \"Useful for when you need to generate an image.\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"}
+{"id": "de657b342993-1", "text": "description = (\n        \"Useful for when you need to generate an image.\"\n        \"Input: A detailed text-2-image prompt describing an image\"\n        \"Output: the UUID of a generated image\"\n    )\n    @root_validator(pre=True)\n    def validate_size(cls, values: Dict) -> Dict:\n        if \"size\" in values:\n            size = 
values[\"size\"]\n model_name = values[\"model_name\"]\n if size not in SUPPORTED_IMAGE_SIZES[model_name]:\n raise RuntimeError(f\"size {size} is not supported by {model_name}\")\n return values\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and python package exists in environment.\"\"\"\n steamship_api_key = get_from_dict_or_env(\n values, \"steamship_api_key\", \"STEAMSHIP_API_KEY\"\n )\n try:\n from steamship import Steamship\n except ImportError:\n raise ImportError(\n \"steamship is not installed. \"\n \"Please install it with `pip install steamship`\"\n )\n steamship = Steamship(\n api_key=steamship_api_key,\n )\n values[\"steamship\"] = steamship\n if \"steamship_api_key\" in values:\n del values[\"steamship_api_key\"]\n return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n image_generator = self.steamship.use_plugin(\n plugin_handle=self.model_name.value, config={\"n\": 1, \"size\": self.size}\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"} +{"id": "de657b342993-2", "text": ")\n task = image_generator.generate(text=query, append_output_to_file=True)\n task.wait()\n blocks = task.output.blocks\n if len(blocks) > 0:\n if self.return_urls:\n return make_image_public(self.steamship, blocks[0])\n else:\n return blocks[0].id\n raise RuntimeError(f\"[{self.name}] Tool unable to generate image!\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"GenerateImageTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/steamship_image_generation/tool.html"} +{"id": "f57967c1689d-0", "text": "Source code for 
langchain.tools.azure_cognitive_services.text2speech\nfrom __future__ import annotations\nimport logging\nimport tempfile\nfrom typing import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsText2SpeechTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Text2Speech API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_region: str = \"\" #: :meta private:\n speech_language: str = \"en-US\" #: :meta private:\n speech_config: Any #: :meta private:\n name = \"azure_cognitive_services_text2speech\"\n description = (\n \"A wrapper around Azure Cognitive Services Text2Speech. \"\n \"Useful for when you need to convert text to speech. \"\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )\n azure_cogs_region = get_from_dict_or_env(\n values, \"azure_cogs_region\", \"AZURE_COGS_REGION\"\n )\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"} +{"id": "f57967c1689d-1", "text": ")\n try:\n import azure.cognitiveservices.speech as speechsdk\n values[\"speech_config\"] = speechsdk.SpeechConfig(\n subscription=azure_cogs_key, region=azure_cogs_region\n )\n except ImportError:\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. 
\"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )\n return values\n def _text2speech(self, text: str, speech_language: str) -> str:\n try:\n import azure.cognitiveservices.speech as speechsdk\n except ImportError:\n pass\n self.speech_config.speech_synthesis_language = speech_language\n speech_synthesizer = speechsdk.SpeechSynthesizer(\n speech_config=self.speech_config, audio_config=None\n )\n result = speech_synthesizer.speak_text(text)\n if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:\n stream = speechsdk.AudioDataStream(result)\n with tempfile.NamedTemporaryFile(\n mode=\"wb\", suffix=\".wav\", delete=False\n ) as f:\n stream.save_to_wav_file(f.name)\n return f.name\n elif result.reason == speechsdk.ResultReason.Canceled:\n cancellation_details = result.cancellation_details\n logger.debug(f\"Speech synthesis canceled: {cancellation_details.reason}\")\n if cancellation_details.reason == speechsdk.CancellationReason.Error:\n raise RuntimeError(\n f\"Speech synthesis error: {cancellation_details.error_details}\"\n )\n return \"Speech synthesis canceled.\"\n else:\n return f\"Speech synthesis failed: {result.reason}\"\n def _run(\n self,\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"} +{"id": "f57967c1689d-2", "text": "def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n speech_file = self._text2speech(query, self.speech_language)\n return speech_file\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsText2SpeechTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsText2SpeechTool does not support async\")", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/text2speech.html"} +{"id": "bbeaccf78c48-0", "text": "Source code for langchain.tools.azure_cognitive_services.form_recognizer\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Dict, List, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import detect_file_src_type\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsFormRecognizerTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Form Recognizer API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_endpoint: str = \"\" #: :meta private:\n doc_analysis_client: Any #: :meta private:\n name = \"azure_cognitive_services_form_recognizer\"\n description = (\n \"A wrapper around Azure Cognitive Services Form Recognizer. \"\n \"Useful for when you need to \"\n \"extract text, tables, and key-value pairs from documents. 
\"\n \"Input should be a url to a document.\"\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"} +{"id": "bbeaccf78c48-1", "text": ")\n azure_cogs_endpoint = get_from_dict_or_env(\n values, \"azure_cogs_endpoint\", \"AZURE_COGS_ENDPOINT\"\n )\n try:\n from azure.ai.formrecognizer import DocumentAnalysisClient\n from azure.core.credentials import AzureKeyCredential\n values[\"doc_analysis_client\"] = DocumentAnalysisClient(\n endpoint=azure_cogs_endpoint,\n credential=AzureKeyCredential(azure_cogs_key),\n )\n except ImportError:\n raise ImportError(\n \"azure-ai-formrecognizer is not installed. \"\n \"Run `pip install azure-ai-formrecognizer` to install.\"\n )\n return values\n def _parse_tables(self, tables: List[Any]) -> List[Any]:\n result = []\n for table in tables:\n rc, cc = table.row_count, table.column_count\n _table = [[\"\" for _ in range(cc)] for _ in range(rc)]\n for cell in table.cells:\n _table[cell.row_index][cell.column_index] = cell.content\n result.append(_table)\n return result\n def _parse_kv_pairs(self, kv_pairs: List[Any]) -> List[Any]:\n result = []\n for kv_pair in kv_pairs:\n key = kv_pair.key.content if kv_pair.key else \"\"\n value = kv_pair.value.content if kv_pair.value else \"\"\n result.append((key, value))\n return result\n def _document_analysis(self, document_path: str) -> Dict:\n document_src_type = detect_file_src_type(document_path)\n if document_src_type == \"local\":\n with open(document_path, \"rb\") as document:\n poller = self.doc_analysis_client.begin_analyze_document(\n \"prebuilt-document\", document\n )", "source": 
"https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"} +{"id": "bbeaccf78c48-2", "text": "\"prebuilt-document\", document\n )\n elif document_src_type == \"remote\":\n poller = self.doc_analysis_client.begin_analyze_document_from_url(\n \"prebuilt-document\", document_path\n )\n else:\n raise ValueError(f\"Invalid document path: {document_path}\")\n result = poller.result()\n res_dict = {}\n if result.content is not None:\n res_dict[\"content\"] = result.content\n if result.tables is not None:\n res_dict[\"tables\"] = self._parse_tables(result.tables)\n if result.key_value_pairs is not None:\n res_dict[\"key_value_pairs\"] = self._parse_kv_pairs(result.key_value_pairs)\n return res_dict\n def _format_document_analysis_result(self, document_analysis_result: Dict) -> str:\n formatted_result = []\n if \"content\" in document_analysis_result:\n formatted_result.append(\n f\"Content: {document_analysis_result['content']}\".replace(\"\\n\", \" \")\n )\n if \"tables\" in document_analysis_result:\n for i, table in enumerate(document_analysis_result[\"tables\"]):\n formatted_result.append(f\"Table {i}: {table}\".replace(\"\\n\", \" \"))\n if \"key_value_pairs\" in document_analysis_result:\n for kv_pair in document_analysis_result[\"key_value_pairs\"]:\n formatted_result.append(\n f\"{kv_pair[0]}: {kv_pair[1]}\".replace(\"\\n\", \" \")\n )\n return \"\\n\".join(formatted_result)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"} +{"id": "bbeaccf78c48-3", "text": ") -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n document_analysis_result = self._document_analysis(query)\n if not document_analysis_result:\n return \"No good document analysis result was found\"\n return 
self._format_document_analysis_result(document_analysis_result)\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsFormRecognizerTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsFormRecognizerTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/form_recognizer.html"} +{"id": "be5aef6582e1-0", "text": "Source code for langchain.tools.azure_cognitive_services.image_analysis\nfrom __future__ import annotations\nimport logging\nfrom typing import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import detect_file_src_type\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = logging.getLogger(__name__)\n[docs]class AzureCogsImageAnalysisTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Image Analysis API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_endpoint: str = \"\" #: :meta private:\n vision_service: Any #: :meta private:\n analysis_options: Any #: :meta private:\n name = \"azure_cognitive_services_image_analysis\"\n description = (\n \"A wrapper around Azure Cognitive Services Image Analysis. \"\n \"Useful for when you need to analyze images. 
\"\n \"Input should be a url to an image.\"\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )\n azure_cogs_endpoint = get_from_dict_or_env(\n values, \"azure_cogs_endpoint\", \"AZURE_COGS_ENDPOINT\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"} +{"id": "be5aef6582e1-1", "text": ")\n try:\n import azure.ai.vision as sdk\n values[\"vision_service\"] = sdk.VisionServiceOptions(\n endpoint=azure_cogs_endpoint, key=azure_cogs_key\n )\n values[\"analysis_options\"] = sdk.ImageAnalysisOptions()\n values[\"analysis_options\"].features = (\n sdk.ImageAnalysisFeature.CAPTION\n | sdk.ImageAnalysisFeature.OBJECTS\n | sdk.ImageAnalysisFeature.TAGS\n | sdk.ImageAnalysisFeature.TEXT\n )\n except ImportError:\n raise ImportError(\n \"azure-ai-vision is not installed. 
\"\n \"Run `pip install azure-ai-vision` to install.\"\n )\n return values\n def _image_analysis(self, image_path: str) -> Dict:\n try:\n import azure.ai.vision as sdk\n except ImportError:\n pass\n image_src_type = detect_file_src_type(image_path)\n if image_src_type == \"local\":\n vision_source = sdk.VisionSource(filename=image_path)\n elif image_src_type == \"remote\":\n vision_source = sdk.VisionSource(url=image_path)\n else:\n raise ValueError(f\"Invalid image path: {image_path}\")\n image_analyzer = sdk.ImageAnalyzer(\n self.vision_service, vision_source, self.analysis_options\n )\n result = image_analyzer.analyze()\n res_dict = {}\n if result.reason == sdk.ImageAnalysisResultReason.ANALYZED:\n if result.caption is not None:\n res_dict[\"caption\"] = result.caption.content\n if result.objects is not None:\n res_dict[\"objects\"] = [obj.name for obj in result.objects]\n if result.tags is not None:\n res_dict[\"tags\"] = [tag.name for tag in result.tags]", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"} +{"id": "be5aef6582e1-2", "text": "res_dict[\"tags\"] = [tag.name for tag in result.tags]\n if result.text is not None:\n res_dict[\"text\"] = [line.content for line in result.text.lines]\n else:\n error_details = sdk.ImageAnalysisErrorDetails.from_result(result)\n raise RuntimeError(\n f\"Image analysis failed.\\n\"\n f\"Reason: {error_details.reason}\\n\"\n f\"Details: {error_details.message}\"\n )\n return res_dict\n def _format_image_analysis_result(self, image_analysis_result: Dict) -> str:\n formatted_result = []\n if \"caption\" in image_analysis_result:\n formatted_result.append(\"Caption: \" + image_analysis_result[\"caption\"])\n if (\n \"objects\" in image_analysis_result\n and len(image_analysis_result[\"objects\"]) > 0\n ):\n formatted_result.append(\n \"Objects: \" + \", \".join(image_analysis_result[\"objects\"])\n )\n if \"tags\" in image_analysis_result and 
len(image_analysis_result[\"tags\"]) > 0:\n formatted_result.append(\"Tags: \" + \", \".join(image_analysis_result[\"tags\"]))\n if \"text\" in image_analysis_result and len(image_analysis_result[\"text\"]) > 0:\n formatted_result.append(\"Text: \" + \", \".join(image_analysis_result[\"text\"]))\n return \"\\n\".join(formatted_result)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n image_analysis_result = self._image_analysis(query)\n if not image_analysis_result:\n return \"No good image analysis result was found\"", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"} +{"id": "be5aef6582e1-3", "text": "if not image_analysis_result:\n return \"No good image analysis result was found\"\n return self._format_image_analysis_result(image_analysis_result)\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsImageAnalysisTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsImageAnalysisTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/image_analysis.html"} +{"id": "7a5eb0607494-0", "text": "Source code for langchain.tools.azure_cognitive_services.speech2text\nfrom __future__ import annotations\nimport logging\nimport time\nfrom typing import Any, Dict, Optional\nfrom pydantic import root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.azure_cognitive_services.utils import (\n detect_file_src_type,\n download_audio_from_url,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utils import get_from_dict_or_env\nlogger = 
logging.getLogger(__name__)\n[docs]class AzureCogsSpeech2TextTool(BaseTool):\n \"\"\"Tool that queries the Azure Cognitive Services Speech2Text API.\n In order to set this up, follow instructions at:\n https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python\n \"\"\"\n azure_cogs_key: str = \"\" #: :meta private:\n azure_cogs_region: str = \"\" #: :meta private:\n speech_language: str = \"en-US\" #: :meta private:\n speech_config: Any #: :meta private:\n name = \"azure_cognitive_services_speech2text\"\n description = (\n \"A wrapper around Azure Cognitive Services Speech2Text. \"\n \"Useful for when you need to transcribe audio to text. \"\n \"Input should be a url to an audio file.\"\n )\n @root_validator(pre=True)\n def validate_environment(cls, values: Dict) -> Dict:\n \"\"\"Validate that api key and endpoint exists in environment.\"\"\"\n azure_cogs_key = get_from_dict_or_env(\n values, \"azure_cogs_key\", \"AZURE_COGS_KEY\"\n )", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"} +{"id": "7a5eb0607494-1", "text": ")\n azure_cogs_region = get_from_dict_or_env(\n values, \"azure_cogs_region\", \"AZURE_COGS_REGION\"\n )\n try:\n import azure.cognitiveservices.speech as speechsdk\n values[\"speech_config\"] = speechsdk.SpeechConfig(\n subscription=azure_cogs_key, region=azure_cogs_region\n )\n except ImportError:\n raise ImportError(\n \"azure-cognitiveservices-speech is not installed. 
\"\n \"Run `pip install azure-cognitiveservices-speech` to install.\"\n )\n return values\n def _continuous_recognize(self, speech_recognizer: Any) -> str:\n done = False\n text = \"\"\n def stop_cb(evt: Any) -> None:\n \"\"\"callback that stop continuous recognition\"\"\"\n speech_recognizer.stop_continuous_recognition_async()\n nonlocal done\n done = True\n def retrieve_cb(evt: Any) -> None:\n \"\"\"callback that retrieves the intermediate recognition results\"\"\"\n nonlocal text\n text += evt.result.text\n # retrieve text on recognized events\n speech_recognizer.recognized.connect(retrieve_cb)\n # stop continuous recognition on either session stopped or canceled events\n speech_recognizer.session_stopped.connect(stop_cb)\n speech_recognizer.canceled.connect(stop_cb)\n # Start continuous speech recognition\n speech_recognizer.start_continuous_recognition_async()\n while not done:\n time.sleep(0.5)\n return text\n def _speech2text(self, audio_path: str, speech_language: str) -> str:\n try:\n import azure.cognitiveservices.speech as speechsdk\n except ImportError:\n pass", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"} +{"id": "7a5eb0607494-2", "text": "except ImportError:\n pass\n audio_src_type = detect_file_src_type(audio_path)\n if audio_src_type == \"local\":\n audio_config = speechsdk.AudioConfig(filename=audio_path)\n elif audio_src_type == \"remote\":\n tmp_audio_path = download_audio_from_url(audio_path)\n audio_config = speechsdk.AudioConfig(filename=tmp_audio_path)\n else:\n raise ValueError(f\"Invalid audio path: {audio_path}\")\n self.speech_config.speech_recognition_language = speech_language\n speech_recognizer = speechsdk.SpeechRecognizer(self.speech_config, audio_config)\n return self._continuous_recognize(speech_recognizer)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool.\"\"\"\n try:\n text = 
self._speech2text(query, self.speech_language)\n return text\n except Exception as e:\n raise RuntimeError(f\"Error while running AzureCogsSpeech2TextTool: {e}\")\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the tool asynchronously.\"\"\"\n raise NotImplementedError(\"AzureCogsSpeech2TextTool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/azure_cognitive_services/speech2text.html"} +{"id": "8e4366ac490c-0", "text": "Source code for langchain.tools.sql_database.tool\n# flake8: noqa\n\"\"\"Tools for interacting with a SQL database.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.sql_database.prompt import QUERY_CHECKER\n[docs]class BaseSQLDatabaseTool(BaseModel):\n \"\"\"Base tool for interacting with a SQL database.\"\"\"\n db: SQLDatabase = Field(exclude=True)\n # Override BaseTool.Config to appease mypy\n # See https://github.com/pydantic/pydantic/issues/4173\n class Config(BaseTool.Config):\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n extra = Extra.forbid\n[docs]class QuerySQLDataBaseTool(BaseSQLDatabaseTool, BaseTool):\n \"\"\"Tool for querying a SQL database.\"\"\"\n name = \"sql_db_query\"\n description = \"\"\"\n Input to this tool is a detailed and correct SQL query, output is a result from the database.\n If the query is not correct, an error message will be returned.\n If an error is returned, rewrite the query, check the query, and try again.\n \"\"\"\n 
def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"\n return self.db.run_no_throw(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sql_database/tool.html"} +{"id": "8e4366ac490c-1", "text": "return self.db.run_no_throw(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"QuerySqlDbTool does not support async\")\n[docs]class InfoSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):\n \"\"\"Tool for getting metadata about a SQL database.\"\"\"\n name = \"sql_db_schema\"\n description = \"\"\"\n Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. \n Example Input: \"table1, table2, table3\"\n \"\"\"\n def _run(\n self,\n table_names: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the schema for tables in a comma-separated list.\"\"\"\n return self.db.get_table_info_no_throw(table_names.split(\", \"))\n async def _arun(\n self,\n table_name: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"SchemaSqlDbTool does not support async\")\n[docs]class ListSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):\n \"\"\"Tool for getting tables names.\"\"\"\n name = \"sql_db_list_tables\"\n description = \"Input is an empty string, output is a comma separated list of tables in the database.\"\n def _run(\n self,\n tool_input: str = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the schema for a specific table.\"\"\"\n return \", \".join(self.db.get_usable_table_names())", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sql_database/tool.html"} +{"id": "8e4366ac490c-2", "text": "return \", 
\".join(self.db.get_usable_table_names())\n async def _arun(\n self,\n tool_input: str = \"\",\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"ListTablesSqlDbTool does not support async\")\n[docs]class QuerySQLCheckerTool(BaseSQLDatabaseTool, BaseTool):\n \"\"\"Use an LLM to check if a query is correct.\n Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/\"\"\"\n template: str = QUERY_CHECKER\n llm: BaseLanguageModel\n llm_chain: LLMChain = Field(init=False)\n name = \"sql_db_query_checker\"\n description = \"\"\"\n Use this tool to double check if your query is correct before executing it.\n Always use this tool before executing a query with query_sql_db!\n \"\"\"\n @root_validator(pre=True)\n def initialize_llm_chain(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"llm_chain\" not in values:\n values[\"llm_chain\"] = LLMChain(\n llm=values.get(\"llm\"),\n prompt=PromptTemplate(\n template=QUERY_CHECKER, input_variables=[\"query\", \"dialect\"]\n ),\n )\n if values[\"llm_chain\"].prompt.input_variables != [\"query\", \"dialect\"]:\n raise ValueError(\n \"LLM chain for QueryCheckerTool must have input variables ['query', 'dialect']\"\n )\n return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sql_database/tool.html"} +{"id": "8e4366ac490c-3", "text": "run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the LLM to check the query.\"\"\"\n return self.llm_chain.predict(query=query, dialect=self.db.dialect)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return await self.llm_chain.apredict(query=query, dialect=self.db.dialect)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/sql_database/tool.html"} +{"id": 
"ec3c38294bf7-0", "text": "Source code for langchain.tools.human.tool\n\"\"\"Tool for asking human input.\"\"\"\nfrom typing import Callable, Optional\nfrom pydantic import Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\ndef _print_func(text: str) -> None:\n print(\"\\n\")\n print(text)\n[docs]class HumanInputRun(BaseTool):\n \"\"\"Tool that adds the capability to ask user for input.\"\"\"\n name = \"human\"\n description = (\n \"You can ask a human for guidance when you think you \"\n \"got stuck or you are not sure what to do next. \"\n \"The input should be a question for the human.\"\n )\n prompt_func: Callable[[str], None] = Field(default_factory=lambda: _print_func)\n input_func: Callable = Field(default_factory=lambda: input)\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Human input tool.\"\"\"\n self.prompt_func(query)\n return self.input_func()\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Human tool asynchronously.\"\"\"\n raise NotImplementedError(\"Human tool does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/human/tool.html"} +{"id": "91d2d98ac7c1-0", "text": "Source code for langchain.tools.spark_sql.tool\n# flake8: noqa\n\"\"\"Tools for interacting with Spark SQL.\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import BaseModel, Extra, Field, root_validator\nfrom langchain.base_language import BaseLanguageModel\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.chains.llm import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.utilities.spark_sql import SparkSQL\nfrom langchain.tools.base import BaseTool\nfrom 
langchain.tools.spark_sql.prompt import QUERY_CHECKER\n[docs]class BaseSparkSQLTool(BaseModel):\n \"\"\"Base tool for interacting with Spark SQL.\"\"\"\n db: SparkSQL = Field(exclude=True)\n # Override BaseTool.Config to appease mypy\n # See https://github.com/pydantic/pydantic/issues/4173\n class Config(BaseTool.Config):\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n extra = Extra.forbid\n[docs]class QuerySparkSQLTool(BaseSparkSQLTool, BaseTool):\n \"\"\"Tool for querying a Spark SQL.\"\"\"\n name = \"query_sql_db\"\n description = \"\"\"\n Input to this tool is a detailed and correct SQL query, output is a result from the Spark SQL.\n If the query is not correct, an error message will be returned.\n If an error is returned, rewrite the query, check the query, and try again.\n \"\"\"\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Execute the query, return the results or an error message.\"\"\"\n return self.db.run_no_throw(query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/spark_sql/tool.html"} +{"id": "91d2d98ac7c1-1", "text": "return self.db.run_no_throw(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"QuerySqlDbTool does not support async\")\n[docs]class InfoSparkSQLTool(BaseSparkSQLTool, BaseTool):\n \"\"\"Tool for getting metadata about a Spark SQL.\"\"\"\n name = \"schema_sql_db\"\n description = \"\"\"\n Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n Be sure that the tables actually exist by calling list_tables_sql_db first!\n Example Input: \"table1, table2, table3\"\n \"\"\"\n def _run(\n self,\n table_names: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Get the schema for tables in a comma-separated 
list.\"\"\"\n return self.db.get_table_info_no_throw(table_names.split(\", \"))\n async def _arun(\n self,\n table_name: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"SchemaSqlDbTool does not support async\")\n[docs]class ListSparkSQLTool(BaseSparkSQLTool, BaseTool):\n \"\"\"Tool for getting tables names.\"\"\"\n name = \"list_tables_sql_db\"\n description = \"Input is an empty string, output is a comma separated list of tables in the Spark SQL.\"\n def _run(\n self,\n tool_input: str = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/spark_sql/tool.html"} +{"id": "91d2d98ac7c1-2", "text": ") -> str:\n \"\"\"Get the schema for a specific table.\"\"\"\n return \", \".join(self.db.get_usable_table_names())\n async def _arun(\n self,\n tool_input: str = \"\",\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n raise NotImplementedError(\"ListTablesSqlDbTool does not support async\")\n[docs]class QueryCheckerTool(BaseSparkSQLTool, BaseTool):\n \"\"\"Use an LLM to check if a query is correct.\n Adapted from https://www.patterns.app/blog/2023/01/18/crunchbot-sql-analyst-gpt/\"\"\"\n template: str = QUERY_CHECKER\n llm: BaseLanguageModel\n llm_chain: LLMChain = Field(init=False)\n name = \"query_checker_sql_db\"\n description = \"\"\"\n Use this tool to double check if your query is correct before executing it.\n Always use this tool before executing a query with query_sql_db!\n \"\"\"\n @root_validator(pre=True)\n def initialize_llm_chain(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n if \"llm_chain\" not in values:\n values[\"llm_chain\"] = LLMChain(\n llm=values.get(\"llm\"),\n prompt=PromptTemplate(\n template=QUERY_CHECKER, input_variables=[\"query\"]\n ),\n )\n if values[\"llm_chain\"].prompt.input_variables != [\"query\"]:\n raise ValueError(\n \"LLM chain for 
QueryCheckerTool need to use ['query'] as input_variables \"\n \"for the embedded prompt\"\n )\n return values\n def _run(\n self,\n query: str,", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/spark_sql/tool.html"} +{"id": "91d2d98ac7c1-3", "text": "return values\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the LLM to check the query.\"\"\"\n return self.llm_chain.predict(query=query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n return await self.llm_chain.apredict(query=query)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/spark_sql/tool.html"} +{"id": "b89d05273357-0", "text": "Source code for langchain.tools.interaction.tool\n\"\"\"Tools for interacting with the user.\"\"\"\nimport warnings\nfrom typing import Any\nfrom langchain.tools.human.tool import HumanInputRun\n[docs]def StdInInquireTool(*args: Any, **kwargs: Any) -> HumanInputRun:\n \"\"\"Tool for asking the user for input.\"\"\"\n warnings.warn(\n \"StdInInquireTool will be deprecated in the future. \"\n \"Please use HumanInputRun instead.\",\n DeprecationWarning,\n )\n return HumanInputRun(*args, **kwargs)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/interaction/tool.html"} +{"id": "c4df72b61bb3-0", "text": "Source code for langchain.tools.wikipedia.tool\n\"\"\"Tool for the Wikipedia API.\"\"\"\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.wikipedia import WikipediaAPIWrapper\n[docs]class WikipediaQueryRun(BaseTool):\n \"\"\"Tool that adds the capability to search using the Wikipedia API.\"\"\"\n name = \"Wikipedia\"\n description = (\n \"A wrapper around Wikipedia. 
\"\n \"Useful for when you need to answer general questions about \"\n \"people, places, companies, facts, historical events, or other subjects. \"\n \"Input should be a search query.\"\n )\n api_wrapper: WikipediaAPIWrapper\n def _run(\n self,\n query: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Wikipedia tool.\"\"\"\n return self.api_wrapper.run(query)\n async def _arun(\n self,\n query: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Wikipedia tool asynchronously.\"\"\"\n raise NotImplementedError(\"WikipediaQueryRun does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/wikipedia/tool.html"} +{"id": "ff72ec4c2c4d-0", "text": "Source code for langchain.tools.zapier.tool\n\"\"\"## Zapier Natural Language Actions API\n\\\nFull docs here: https://nla.zapier.com/api/v1/docs\n**Zapier Natural Language Actions** gives you access to the 5k+ apps, 20k+ actions\non Zapier's platform through a natural language API interface.\nNLA supports apps like Gmail, Salesforce, Trello, Slack, Asana, HubSpot, Google Sheets,\nMicrosoft Teams, and thousands more apps: https://zapier.com/apps\nZapier NLA handles ALL the underlying API auth and translation from\nnatural language --> underlying API call --> return simplified output for LLMs\nThe key idea is you, or your users, expose a set of actions via an oauth-like setup\nwindow, which you can then query and execute via a REST API.\nNLA offers both API Key and OAuth for signing NLA API requests.\n1. Server-side (API Key): for quickly getting started, testing, and production scenarios\n where LangChain will only use actions exposed in the developer's Zapier account\n (and will use the developer's connected accounts on Zapier.com)\n2. 
User-facing (Oauth): for production scenarios where you are deploying an end-user\n facing application and LangChain needs access to end-user's exposed actions and\n connected accounts on Zapier.com\nThis quick start will focus on the server-side use case for brevity.\nReview [full docs](https://nla.zapier.com/api/v1/docs) or reach out to\nnla@zapier.com for user-facing oauth developer support.\nTypically, you'd use SequentialChain, here's a basic example:\n 1. Use NLA to find an email in Gmail\n 2. Use LLMChain to generate a draft reply to (1)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} +{"id": "ff72ec4c2c4d-1", "text": "2. Use LLMChain to generate a draft reply to (1)\n 3. Use NLA to send the draft reply (2) to someone in Slack via direct message\nIn code, below:\n```python\nimport os\n# get from https://platform.openai.com/\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"\")\n# get from https://nla.zapier.com/demo/provider/debug\n# (under User Information, after logging in):\nos.environ[\"ZAPIER_NLA_API_KEY\"] = os.environ.get(\"ZAPIER_NLA_API_KEY\", \"\")\nfrom langchain.llms import OpenAI\nfrom langchain.agents import initialize_agent\nfrom langchain.agents.agent_toolkits import ZapierToolkit\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n## step 0. 
expose gmail 'find email' and slack 'send channel message' actions\n# first go here, log in, expose (enable) the two actions:\n# https://nla.zapier.com/demo/start\n# -- for this example, can leave all fields \"Have AI guess\"\n# in an oauth scenario, you'd get your own id (instead of 'demo')\n# which you route your users through first\nllm = OpenAI(temperature=0)\nzapier = ZapierNLAWrapper()\n## To leverage a nla_oauth_access_token you may pass the value to the ZapierNLAWrapper\n## If you do this there is no need to initialize the ZAPIER_NLA_API_KEY env variable\n# zapier = ZapierNLAWrapper(zapier_nla_oauth_access_token=\"TOKEN_HERE\")\ntoolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)\nagent = initialize_agent(\n toolkit.get_tools(),", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} +{"id": "ff72ec4c2c4d-2", "text": "agent = initialize_agent(\n toolkit.get_tools(),\n llm,\n agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True\n)\nagent.run((\"Summarize the last email I received regarding Silicon Valley Bank. \"\n \"Send the summary to the #test-zapier channel in slack.\"))\n```\n\"\"\"\nfrom typing import Any, Dict, Optional\nfrom pydantic import Field, root_validator\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.zapier.prompt import BASE_ZAPIER_TOOL_PROMPT\nfrom langchain.utilities.zapier import ZapierNLAWrapper\n[docs]class ZapierNLARunAction(BaseTool):\n \"\"\"\n Args:\n action_id: a specific action ID (from list actions) of the action to execute\n (the set api_key must be associated with the action owner)\n instructions: a natural language instruction string for using the action\n (eg. \"get the latest email from Mike Knoop\" for \"Gmail: find email\" action)\n params: a dict, optional. 
Any params provided will *override* AI guesses\n from `instructions` (see \"understanding the AI guessing flow\" here:\n https://nla.zapier.com/api/v1/docs)\n \"\"\"\n api_wrapper: ZapierNLAWrapper = Field(default_factory=ZapierNLAWrapper)\n action_id: str\n params: Optional[dict] = None\n base_prompt: str = BASE_ZAPIER_TOOL_PROMPT\n zapier_description: str\n params_schema: Dict[str, str] = Field(default_factory=dict)\n name = \"\"\n description = \"\"\n @root_validator", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} +{"id": "ff72ec4c2c4d-3", "text": "name = \"\"\n description = \"\"\n @root_validator\n def set_name_description(cls, values: Dict[str, Any]) -> Dict[str, Any]:\n zapier_description = values[\"zapier_description\"]\n params_schema = values[\"params_schema\"]\n if \"instructions\" in params_schema:\n del params_schema[\"instructions\"]\n # Ensure base prompt (if overrided) contains necessary input fields\n necessary_fields = {\"{zapier_description}\", \"{params}\"}\n if not all(field in values[\"base_prompt\"] for field in necessary_fields):\n raise ValueError(\n \"Your custom base Zapier prompt must contain input fields for \"\n \"{zapier_description} and {params}.\"\n )\n values[\"name\"] = zapier_description\n values[\"description\"] = values[\"base_prompt\"].format(\n zapier_description=zapier_description,\n params=str(list(params_schema.keys())),\n )\n return values\n def _run(\n self, instructions: str, run_manager: Optional[CallbackManagerForToolRun] = None\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n return self.api_wrapper.run_as_str(self.action_id, instructions, self.params)\n async def _arun(\n self,\n _: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n raise NotImplementedError(\"ZapierNLAListActions does not support 
async\")\nZapierNLARunAction.__doc__ = (\n ZapierNLAWrapper.run.__doc__ + ZapierNLARunAction.__doc__ # type: ignore\n)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} +{"id": "ff72ec4c2c4d-4", "text": ")\n# other useful actions\n[docs]class ZapierNLAListActions(BaseTool):\n \"\"\"\n Args:\n None\n \"\"\"\n name = \"ZapierNLA_list_actions\"\n description = BASE_ZAPIER_TOOL_PROMPT + (\n \"This tool returns a list of the user's exposed actions.\"\n )\n api_wrapper: ZapierNLAWrapper = Field(default_factory=ZapierNLAWrapper)\n def _run(\n self,\n _: str = \"\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n return self.api_wrapper.list_as_str()\n async def _arun(\n self,\n _: str = \"\",\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Zapier NLA tool to return a list of all exposed user actions.\"\"\"\n raise NotImplementedError(\"ZapierNLAListActions does not support async\")\nZapierNLAListActions.__doc__ = (\n ZapierNLAWrapper.list.__doc__ + ZapierNLAListActions.__doc__ # type: ignore\n)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/zapier/tool.html"} +{"id": "d0966f8c858b-0", "text": "Source code for langchain.tools.file_management.read\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass ReadFileInput(BaseModel):\n \"\"\"Input for ReadFileTool.\"\"\"\n file_path: str = Field(..., description=\"name of file\")\n[docs]class ReadFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"read_file\"\n args_schema: Type[BaseModel] = 
ReadFileInput\n description: str = \"Read file from disk\"\n def _run(\n self,\n file_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n read_path = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n if not read_path.exists():\n return f\"Error: no such file or directory: {file_path}\"\n try:\n with read_path.open(\"r\", encoding=\"utf-8\") as f:\n content = f.read()\n return content\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/read.html"} +{"id": "def3d8f551b1-0", "text": "Source code for langchain.tools.file_management.move\nimport shutil\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass FileMoveInput(BaseModel):\n \"\"\"Input for MoveFileTool.\"\"\"\n source_path: str = Field(..., description=\"Path of the file to move\")\n destination_path: str = Field(..., description=\"New path for the moved file\")\n[docs]class MoveFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"move_file\"\n args_schema: Type[BaseModel] = FileMoveInput\n description: str = \"Move or rename a file from one location to another\"\n def _run(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n source_path_ = self.get_relative_path(source_path)\n except FileValidationError:\n return 
INVALID_PATH_TEMPLATE.format(\n arg_name=\"source_path\", value=source_path\n )\n try:\n destination_path_ = self.get_relative_path(destination_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"destination_path_\", value=destination_path_\n )\n if not source_path_.exists():\n return f\"Error: no such file or directory {source_path}\"\n try:\n # shutil.move expects str args in 3.8\n shutil.move(str(source_path_), destination_path_)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/move.html"} +{"id": "def3d8f551b1-1", "text": "shutil.move(str(source_path_), destination_path_)\n return f\"File moved successfully from {source_path} to {destination_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/move.html"} +{"id": "64b9f9eac3a1-0", "text": "Source code for langchain.tools.file_management.file_search\nimport fnmatch\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass FileSearchInput(BaseModel):\n \"\"\"Input for FileSearchTool.\"\"\"\n dir_path: str = Field(\n default=\".\",\n description=\"Subdirectory to search in.\",\n )\n pattern: str = Field(\n ...,\n description=\"Unix shell regex, where * matches everything.\",\n )\n[docs]class FileSearchTool(BaseFileToolMixin, BaseTool):\n name: str = \"file_search\"\n args_schema: Type[BaseModel] = FileSearchInput\n 
description: str = (\n \"Recursively search for files in a subdirectory that match the regex pattern\"\n )\n def _run(\n self,\n pattern: str,\n dir_path: str = \".\",\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n dir_path_ = self.get_relative_path(dir_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"dir_path\", value=dir_path)\n matches = []\n try:\n for root, _, filenames in os.walk(dir_path_):\n for filename in fnmatch.filter(filenames, pattern):\n absolute_path = os.path.join(root, filename)\n relative_path = os.path.relpath(absolute_path, dir_path_)\n matches.append(relative_path)\n if matches:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/file_search.html"} +{"id": "64b9f9eac3a1-1", "text": "matches.append(relative_path)\n if matches:\n return \"\\n\".join(matches)\n else:\n return f\"No files found for pattern {pattern} in directory {dir_path}\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n dir_path: str,\n pattern: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/file_search.html"} +{"id": "e914c818212f-0", "text": "Source code for langchain.tools.file_management.copy\nimport shutil\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass FileCopyInput(BaseModel):\n \"\"\"Input for CopyFileTool.\"\"\"\n source_path: str = Field(..., description=\"Path of the file to copy\")\n destination_path: str = Field(..., 
description=\"Path to save the copied file\")\n[docs]class CopyFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"copy_file\"\n args_schema: Type[BaseModel] = FileCopyInput\n description: str = \"Create a copy of a file in a specified location\"\n def _run(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n source_path_ = self.get_relative_path(source_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"source_path\", value=source_path\n )\n try:\n destination_path_ = self.get_relative_path(destination_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(\n arg_name=\"destination_path\", value=destination_path\n )\n try:\n shutil.copy2(source_path_, destination_path_, follow_symlinks=False)\n return f\"File copied successfully from {source_path} to {destination_path}.\"\n except Exception as e:", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/copy.html"} +{"id": "e914c818212f-1", "text": "except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n source_path: str,\n destination_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/copy.html"} +{"id": "7b654827d10e-0", "text": "Source code for langchain.tools.file_management.write\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass WriteFileInput(BaseModel):\n \"\"\"Input for WriteFileTool.\"\"\"\n file_path: str = 
Field(..., description=\"name of file\")\n text: str = Field(..., description=\"text to write to file\")\n append: bool = Field(\n default=False, description=\"Whether to append to an existing file.\"\n )\n[docs]class WriteFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"write_file\"\n args_schema: Type[BaseModel] = WriteFileInput\n description: str = \"Write file to disk\"\n def _run(\n self,\n file_path: str,\n text: str,\n append: bool = False,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n write_path = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n try:\n write_path.parent.mkdir(exist_ok=True, parents=False)\n mode = \"a\" if append else \"w\"\n with write_path.open(mode, encoding=\"utf-8\") as f:\n f.write(text)\n return f\"File written successfully to {file_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/write.html"} +{"id": "7b654827d10e-1", "text": "except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n text: str,\n append: bool = False,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/write.html"} +{"id": "6d663d21f09b-0", "text": "Source code for langchain.tools.file_management.delete\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass FileDeleteInput(BaseModel):\n \"\"\"Input 
for DeleteFileTool.\"\"\"\n file_path: str = Field(..., description=\"Path of the file to delete\")\n[docs]class DeleteFileTool(BaseFileToolMixin, BaseTool):\n name: str = \"file_delete\"\n args_schema: Type[BaseModel] = FileDeleteInput\n description: str = \"Delete a file\"\n def _run(\n self,\n file_path: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n file_path_ = self.get_relative_path(file_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"file_path\", value=file_path)\n if not file_path_.exists():\n return f\"Error: no such file or directory: {file_path}\"\n try:\n os.remove(file_path_)\n return f\"File deleted successfully: {file_path}.\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n file_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/delete.html"} +{"id": "5ee1497cb5c6-0", "text": "Source code for langchain.tools.file_management.list_dir\nimport os\nfrom typing import Optional, Type\nfrom pydantic import BaseModel, Field\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.tools.file_management.utils import (\n INVALID_PATH_TEMPLATE,\n BaseFileToolMixin,\n FileValidationError,\n)\nclass DirectoryListingInput(BaseModel):\n \"\"\"Input for ListDirectoryTool.\"\"\"\n dir_path: str = Field(default=\".\", description=\"Subdirectory to list.\")\n[docs]class ListDirectoryTool(BaseFileToolMixin, BaseTool):\n name: str = \"list_directory\"\n args_schema: Type[BaseModel] = DirectoryListingInput\n description: str = \"List files and directories in a specified folder\"\n def _run(\n self,\n dir_path: str = \".\",\n run_manager: 
Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n try:\n dir_path_ = self.get_relative_path(dir_path)\n except FileValidationError:\n return INVALID_PATH_TEMPLATE.format(arg_name=\"dir_path\", value=dir_path)\n try:\n entries = os.listdir(dir_path_)\n if entries:\n return \"\\n\".join(entries)\n else:\n return f\"No files found in directory {dir_path}\"\n except Exception as e:\n return \"Error: \" + str(e)\n async def _arun(\n self,\n dir_path: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n # TODO: Add aiofiles method\n raise NotImplementedError", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/file_management/list_dir.html"} +{"id": "e569558fef91-0", "text": "Source code for langchain.tools.graphql.tool\nimport json\nfrom typing import Optional\nfrom langchain.callbacks.manager import (\n AsyncCallbackManagerForToolRun,\n CallbackManagerForToolRun,\n)\nfrom langchain.tools.base import BaseTool\nfrom langchain.utilities.graphql import GraphQLAPIWrapper\n[docs]class BaseGraphQLTool(BaseTool):\n \"\"\"Base tool for querying a GraphQL API.\"\"\"\n graphql_wrapper: GraphQLAPIWrapper\n name = \"query_graphql\"\n description = \"\"\"\\\n Input to this tool is a detailed and correct GraphQL query, output is a result from the API.\n If the query is not correct, an error message will be returned.\n If an error is returned with 'Bad request' in it, rewrite the query and try again.\n If an error is returned with 'Unauthorized' in it, do not try again, but tell the user to change their authentication.\n Example Input: query {{ allUsers {{ id, name, email }} }}\\\n \"\"\" # noqa: E501\n class Config:\n \"\"\"Configuration for this pydantic object.\"\"\"\n arbitrary_types_allowed = True\n def _run(\n self,\n tool_input: str,\n run_manager: Optional[CallbackManagerForToolRun] = None,\n ) -> str:\n result = self.graphql_wrapper.run(tool_input)\n return json.dumps(result, indent=2)\n async def _arun(\n 
self,\n tool_input: str,\n run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n ) -> str:\n \"\"\"Use the Graphql tool asynchronously.\"\"\"\n raise NotImplementedError(\"GraphQLAPIWrapper does not support async\")", "source": "https://api.python.langchain.com/en/latest/_modules/langchain/tools/graphql/tool.html"}