b75952407af0-0
langchain.utilities.arxiv.ArxivAPIWrapper¶ class langchain.utilities.arxiv.ArxivAPIWrapper(*, arxiv_search: Any = None, arxiv_exceptions: Any = None, top_k_results: int = 3, load_max_docs: int = 100, load_all_available_meta: bool = False, doc_content_chars_max: Optional[int] = 4000, ARXIV_MAX_QUERY_LENGTH: int = 300)[source]¶ Bases: BaseModel Wrapper around ArxivAPI. To use, you should have the arxiv python package installed. https://lukasschwab.me/arxiv.py/index.html This wrapper will use the Arxiv API to conduct searches and fetch document summaries. By default, it will return the document summaries of the top-k results. It limits the Document content by doc_content_chars_max. Set doc_content_chars_max=None if you don’t want to limit the content size. Parameters top_k_results – the number of top-scored documents used for the arxiv tool ARXIV_MAX_QUERY_LENGTH – the cut-off limit on the query used for the arxiv tool. load_max_docs – a limit on the number of loaded documents load_all_available_meta – if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result), if False: the metadata gets only the most informative fields. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param arxiv_exceptions: Any = None¶ param doc_content_chars_max: Optional[int] = 4000¶ param load_all_available_meta: bool = False¶ param load_max_docs: int = 100¶ param top_k_results: int = 3¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.arxiv.ArxivAPIWrapper.html
b75952407af0-1
load(query: str) → List[Document][source]¶ Run Arxiv search and get the article texts plus the article meta information. See https://lukasschwab.me/arxiv.py/index.html#Search Returns: a list of documents with the document.page_content in text format run(query: str) → str[source]¶ Run Arxiv search and get the article meta information. See https://lukasschwab.me/arxiv.py/index.html#Search See https://lukasschwab.me/arxiv.py/index.html#Result It uses only the most informative fields of article meta information. validator validate_environment  »  all fields[source]¶ Validate that the python package exists in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.arxiv.ArxivAPIWrapper.html
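The two size limits described above (query cut-off at ARXIV_MAX_QUERY_LENGTH, content cap at doc_content_chars_max, with None disabling the cap) can be sketched in plain Python. The helper names `clip_query` and `clip_content` are illustrative, not part of the LangChain API:

```python
# Sketch of the two limits ArxivAPIWrapper applies, per the docs above.
from typing import Optional

ARXIV_MAX_QUERY_LENGTH = 300  # default cut-off on the query string

def clip_query(query: str, max_len: int = ARXIV_MAX_QUERY_LENGTH) -> str:
    """Truncate an over-long query before sending it to the arxiv API."""
    return query[:max_len]

def clip_content(text: str, doc_content_chars_max: Optional[int] = 4000) -> str:
    """Cap document content; None disables the limit."""
    if doc_content_chars_max is None:
        return text
    return text[:doc_content_chars_max]

print(len(clip_query("q" * 500)))             # 300
print(len(clip_content("x" * 10_000)))        # 4000
print(len(clip_content("x" * 10_000, None)))  # 10000
```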
d4ff1f95cf55-0
langchain.utilities.vertexai.init_vertexai¶ langchain.utilities.vertexai.init_vertexai(project: Optional[str] = None, location: Optional[str] = None, credentials: Optional[Credentials] = None) → None[source]¶ Init vertexai. Parameters project – The default GCP project to use when making Vertex API calls. location – The default location to use when making API calls. credentials – The default custom credentials to use when making API calls. If not provided, credentials will be ascertained from the environment. Raises ImportError – If importing the vertexai SDK did not succeed.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.vertexai.init_vertexai.html
fc397df88532-0
langchain.utilities.searx_search.SearxSearchWrapper¶ class langchain.utilities.searx_search.SearxSearchWrapper(*, searx_host: str = '', unsecure: bool = False, params: dict = None, headers: Optional[dict] = None, engines: Optional[List[str]] = [], categories: Optional[List[str]] = [], query_suffix: Optional[str] = '', k: int = 10, aiosession: Optional[Any] = None)[source]¶ Bases: BaseModel Wrapper for Searx API. To use, you need to provide the searx host by passing the named parameter searx_host or exporting the environment variable SEARX_HOST. In some situations you might want to disable SSL verification, for example if you are running searx locally. You can do this by passing the named parameter unsecure. You can also pass the host url scheme as http to disable SSL. Example from langchain.utilities import SearxSearchWrapper searx = SearxSearchWrapper(searx_host="http://localhost:8888") Example with SSL disabled: from langchain.utilities import SearxSearchWrapper # note the unsecure parameter is not needed if you pass the url scheme as # http searx = SearxSearchWrapper(searx_host="http://localhost:8888", unsecure=True) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param aiosession: Optional[Any] = None¶ param categories: Optional[List[str]] = []¶ param engines: Optional[List[str]] = []¶ param headers: Optional[dict] = None¶ param k: int = 10¶ param params: dict [Optional]¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.searx_search.SearxSearchWrapper.html
fc397df88532-1
param query_suffix: Optional[str] = ''¶ param searx_host: str = ''¶ param unsecure: bool = False¶ async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]¶ Asynchronously query with json results. Uses aiohttp. See results for more info. async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]¶ Asynchronous version of run. validator disable_ssl_warnings  »  unsecure[source]¶ Disable SSL warnings. results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]¶ Run query through Searx API and return the results with metadata. Parameters query – The query to search for. query_suffix – Extra suffix appended to the query. num_results – Limit the number of results to return. engines – List of engines to use for the query. categories – List of categories to use for the query. **kwargs – extra parameters to pass to the searx API. Returns {snippet: The description of the result. title: The title of the result. link: The link to the result. engines: The engines used for the result. category: Searx category of the result. } Return type Dict with the following keys
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.searx_search.SearxSearchWrapper.html
fc397df88532-2
run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]¶ Run query through Searx API and parse results. You can pass any other params to the searx query API. Parameters query – The query to search for. query_suffix – Extra suffix appended to the query. engines – List of engines to use for the query. categories – List of categories to use for the query. **kwargs – extra parameters to pass to the searx API. Returns The result of the query. Return type str Raises ValueError – If an error occurred with the query. Example This will make a query to the qwant engine: from langchain.utilities import SearxSearchWrapper searx = SearxSearchWrapper(searx_host="http://my.searx.host") searx.run("what is the weather in France ?", engine="qwant") # the same result can be achieved using the `!` syntax of searx # to select the engine using `query_suffix` searx.run("what is the weather in France ?", query_suffix="!qwant") validator validate_params  »  all fields[source]¶ Validate that custom searx params are merged with default ones. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.searx_search.SearxSearchWrapper.html
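The query_suffix mechanic above (appending, e.g., "!qwant" to route a query to a specific engine via searx's bang syntax) can be sketched as simple string assembly. `build_query` is an illustrative helper, not part of SearxSearchWrapper:

```python
# Sketch of how a query and its optional suffix are combined, per the
# docstring's note that query_suffix="!qwant" selects the qwant engine.
def build_query(query: str, query_suffix: str = "") -> str:
    """Append the extra suffix (such as a searx engine bang) to the query."""
    if query_suffix:
        return f"{query} {query_suffix}"
    return query

print(build_query("what is the weather in France ?", "!qwant"))
print(build_query("what is the weather in France ?"))
```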
7bd51cb86842-0
langchain.utilities.powerbi.fix_table_name¶ langchain.utilities.powerbi.fix_table_name(table: str) → str[source]¶ Add single quotes around table names that contain spaces.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.fix_table_name.html
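The one-line description above can be sketched directly; this is a minimal illustration of the stated behavior (single-quoting table names that contain spaces), not the actual LangChain implementation:

```python
# Minimal sketch: add single quotes around table names that contain spaces.
def fix_table_name(table: str) -> str:
    if " " in table and not table.startswith("'"):
        return f"'{table}'"
    return table

print(fix_table_name("Sales Data"))  # 'Sales Data'
print(fix_table_name("Sales"))       # Sales
```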
7e16424fd2f8-0
langchain.utilities.zapier.ZapierNLAWrapper¶ class langchain.utilities.zapier.ZapierNLAWrapper(*, zapier_nla_api_key: str, zapier_nla_oauth_access_token: str, zapier_nla_api_base: str = 'https://nla.zapier.com/api/v1/')[source]¶ Bases: BaseModel Wrapper for Zapier NLA. Full docs here: https://nla.zapier.com/start/ This wrapper supports both API Key and OAuth Credential auth methods. API Key is the fastest way to get started using this wrapper. Call this wrapper with either zapier_nla_api_key or zapier_nla_oauth_access_token arguments, or set the ZAPIER_NLA_API_KEY environment variable. If both arguments are set, the Access Token will take precedence. For use-cases where LangChain + Zapier NLA is powering a user-facing application, and LangChain needs access to the end-user’s connected accounts on Zapier.com, you’ll need to use OAuth. Review the full docs above to learn how to create your own provider and generate credentials. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param zapier_nla_api_base: str = 'https://nla.zapier.com/api/v1/'¶ param zapier_nla_api_key: str [Required]¶ param zapier_nla_oauth_access_token: str [Required]¶ async alist() → List[Dict][source]¶ Returns a list of all exposed (enabled) actions associated with the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.zapier.ZapierNLAWrapper.html
7e16424fd2f8-1
The return list can be empty if no actions are exposed. Otherwise it will contain a list of action objects: [{“id”: str, “description”: str, “params”: Dict[str, str] }] params will always contain an instructions key, the only required param. All others are optional and, if provided, will override any AI guesses (see “understanding the AI guessing flow” here: https://nla.zapier.com/api/v1/docs) async alist_as_str() → str[source]¶ Same as list, but returns a stringified version of the JSON for inserting back into an LLM. async apreview(action_id: str, instructions: str, params: Optional[Dict] = None) → Dict[source]¶ Same as run, but instead of executing the action, returns a preview of the params that have been guessed by the AI, in case you need to explicitly review before executing. async apreview_as_str(*args, **kwargs) → str[source]¶ Same as preview, but returns a stringified version of the JSON for inserting back into an LLM. async arun(action_id: str, instructions: str, params: Optional[Dict] = None) → Dict[source]¶ Executes an action identified by action_id, which must be exposed (enabled) by the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return JSON is guaranteed to be less than ~500 words (350 tokens), making it safe to inject into the prompt of another LLM call. async arun_as_str(*args, **kwargs) → str[source]¶ Same as run, but returns a stringified version of the JSON for inserting back into an LLM.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.zapier.ZapierNLAWrapper.html
7e16424fd2f8-2
list() → List[Dict][source]¶ Returns a list of all exposed (enabled) actions associated with the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/ The return list can be empty if no actions are exposed. Otherwise it will contain a list of action objects: [{“id”: str, “description”: str, “params”: Dict[str, str] }] params will always contain an instructions key, the only required param. All others are optional and, if provided, will override any AI guesses (see “understanding the AI guessing flow” here: https://nla.zapier.com/docs/using-the-api#ai-guessing) list_as_str() → str[source]¶ Same as list, but returns a stringified version of the JSON for inserting back into an LLM. preview(action_id: str, instructions: str, params: Optional[Dict] = None) → Dict[source]¶ Same as run, but instead of executing the action, returns a preview of the params that have been guessed by the AI, in case you need to explicitly review before executing. preview_as_str(*args, **kwargs) → str[source]¶ Same as preview, but returns a stringified version of the JSON for inserting back into an LLM. run(action_id: str, instructions: str, params: Optional[Dict] = None) → Dict[source]¶ Executes an action identified by action_id, which must be exposed (enabled) by the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.zapier.ZapierNLAWrapper.html
7e16424fd2f8-3
The return JSON is guaranteed to be less than ~500 words (350 tokens), making it safe to inject into the prompt of another LLM call. run_as_str(*args, **kwargs) → str[source]¶ Same as run, but returns a stringified version of the JSON for inserting back into an LLM. validator validate_environment  »  all fields[source]¶ Validate that the api key exists in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.zapier.ZapierNLAWrapper.html
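The auth precedence documented above (an OAuth access token wins over an API key; the key may also come from the ZAPIER_NLA_API_KEY environment variable) can be sketched in isolation. `choose_auth_header` and the header names here are illustrative assumptions, not ZapierNLAWrapper internals:

```python
# Sketch of the documented credential precedence for ZapierNLAWrapper.
import os
from typing import Optional

def choose_auth_header(api_key: Optional[str], oauth_token: Optional[str]) -> dict:
    """OAuth access token takes precedence; else fall back to the API key
    argument or the ZAPIER_NLA_API_KEY environment variable."""
    if oauth_token:
        return {"Authorization": f"Bearer {oauth_token}"}
    key = api_key or os.environ.get("ZAPIER_NLA_API_KEY", "")
    return {"X-API-Key": key}

print(choose_auth_header("key123", "tok456"))  # the token wins
print(choose_auth_header("key123", None))      # falls back to the key
```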
2cb98161dc83-0
langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper¶ class langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper(*, wolfram_client: Any = None, wolfram_alpha_appid: Optional[str] = None)[source]¶ Bases: BaseModel Wrapper for Wolfram Alpha. Docs for using: Go to Wolfram Alpha and sign up for a developer account Create an app and get your APP ID Save your APP ID into the WOLFRAM_ALPHA_APPID env variable pip install wolframalpha Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param wolfram_alpha_appid: Optional[str] = None¶ run(query: str) → str[source]¶ Run query through WolframAlpha and parse result. validator validate_environment  »  all fields[source]¶ Validate that the api key and python package exist in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper.html
baf3ca8a3527-0
langchain.utilities.bing_search.BingSearchAPIWrapper¶ class langchain.utilities.bing_search.BingSearchAPIWrapper(*, bing_subscription_key: str, bing_search_url: str, k: int = 10)[source]¶ Bases: BaseModel Wrapper for Bing Search API. In order to set this up, follow instructions at: https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param bing_search_url: str [Required]¶ param bing_subscription_key: str [Required]¶ param k: int = 10¶ results(query: str, num_results: int) → List[Dict][source]¶ Run query through BingSearch and return metadata. Parameters query – The query to search for. num_results – The number of results to return. Returns snippet - The description of the result. title - The title of the result. link - The link to the result. Return type A list of dictionaries with the following keys run(query: str) → str[source]¶ Run query through BingSearch and parse result. validator validate_environment  »  all fields[source]¶ Validate that the api key and endpoint exist in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.bing_search.BingSearchAPIWrapper.html
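The snippet/title/link shape that results() is documented to return can be sketched as a parsing step. The raw layout assumed here (a webPages.value list with name/url/snippet fields) matches the Bing Web Search v7 JSON, but treat it and the helper name `parse_bing_results` as assumptions, not the wrapper's actual code:

```python
# Sketch of reducing a raw Bing response to the documented result keys,
# capped at num_results.
from typing import Dict, List

def parse_bing_results(raw: dict, num_results: int) -> List[Dict]:
    hits = raw.get("webPages", {}).get("value", [])[:num_results]
    return [
        {"snippet": h.get("snippet", ""),
         "title": h.get("name", ""),
         "link": h.get("url", "")}
        for h in hits
    ]

raw = {"webPages": {"value": [
    {"name": "A", "url": "http://a", "snippet": "aa"},
    {"name": "B", "url": "http://b", "snippet": "bb"},
]}}
print(parse_bing_results(raw, 1))
```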
dd10ba01e8dc-0
langchain.utilities.twilio.TwilioAPIWrapper¶ class langchain.utilities.twilio.TwilioAPIWrapper(*, client: Any = None, account_sid: Optional[str] = None, auth_token: Optional[str] = None, from_number: Optional[str] = None)[source]¶ Bases: BaseModel Messaging Client using Twilio. To use, you should have the twilio python package installed, and the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and TWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as named parameters to the constructor. Example from langchain.utilities.twilio import TwilioAPIWrapper twilio = TwilioAPIWrapper( account_sid="ACxxx", auth_token="xxx", from_number="+10123456789" ) twilio.run('test', '+12484345508') Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param account_sid: Optional[str] = None¶ Twilio account string identifier. param auth_token: Optional[str] = None¶ Twilio auth token. param from_number: Optional[str] = None¶ A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164) format, an [alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id), or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses) that is enabled for the type of message you want to send. Phone numbers or [short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.twilio.TwilioAPIWrapper.html
dd10ba01e8dc-1
Twilio also work here. You cannot, for example, spoof messages from a private cell phone number. If you are using messaging_service_sid, this parameter must be empty. run(body: str, to: str) → str[source]¶ Run body through Twilio and respond with message sid. Parameters body – The text of the message you want to send. Can be up to 1,600 characters in length. to – The destination phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164) format for SMS/MMS or [Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses) for other 3rd-party channels. validator validate_environment  »  all fields[source]¶ Validate that the api key and python package exist in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = False¶ extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.twilio.TwilioAPIWrapper.html
cb4a62bcbf99-0
langchain.utilities.openapi.OpenAPISpec¶ class langchain.utilities.openapi.OpenAPISpec(*, openapi: str = '3.1.0', info: Info, jsonSchemaDialect: Optional[str] = None, servers: List[Server] = [Server(url='/', description=None, variables=None)], paths: Optional[Dict[str, PathItem]] = None, webhooks: Optional[Dict[str, Union[PathItem, Reference]]] = None, components: Optional[Components] = None, security: Optional[List[Dict[str, List[str]]]] = None, tags: Optional[List[Tag]] = None, externalDocs: Optional[ExternalDocumentation] = None)[source]¶ Bases: OpenAPI OpenAPI Model that removes misformatted parts of the spec. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param components: Optional[openapi_schema_pydantic.v3.v3_1_0.components.Components] = None¶ An element to hold various schemas for the document. param externalDocs: Optional[openapi_schema_pydantic.v3.v3_1_0.external_documentation.ExternalDocumentation] = None¶ Additional external documentation. param info: openapi_schema_pydantic.v3.v3_1_0.info.Info [Required]¶ REQUIRED. Provides metadata about the API. The metadata MAY be used by tooling as required. param jsonSchemaDialect: Optional[str] = None¶ The default value for the $schema keyword within [Schema Objects](#schemaObject) contained within this OAS document. This MUST be in the form of a URI. param openapi: str = '3.1.0'¶ REQUIRED. This string MUST be the [version number](#versions)
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.OpenAPISpec.html
cb4a62bcbf99-1
of the OpenAPI Specification that the OpenAPI document uses. The openapi field SHOULD be used by tooling to interpret the OpenAPI document. This is not related to the API [info.version](#infoVersion) string. param paths: Optional[Dict[str, openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem]] = None¶ The available paths and operations for the API. param security: Optional[List[Dict[str, List[str]]]] = None¶ A declaration of which security mechanisms can be used across the API. The list of values includes alternative security requirement objects that can be used. Only one of the security requirement objects need to be satisfied to authorize a request. Individual operations can override this definition. To make security optional, an empty security requirement ({}) can be included in the array. param servers: List[openapi_schema_pydantic.v3.v3_1_0.server.Server] = [Server(url='/', description=None, variables=None)]¶ An array of Server Objects, which provide connectivity information to a target server. If the servers property is not provided, or is an empty array, the default value would be a [Server Object](#serverObject) with a [url](#serverUrl) value of /. param tags: Optional[List[openapi_schema_pydantic.v3.v3_1_0.tag.Tag]] = None¶ A list of tags used by the document with additional metadata. The order of the tags can be used to reflect on their order by the parsing tools. Not all tags that are used by the [Operation Object](#operationObject) must be declared. The tags that are not declared MAY be organized randomly or based on the tools’ logic. Each tag name in the list MUST be unique.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.OpenAPISpec.html
cb4a62bcbf99-2
param webhooks: Optional[Dict[str, Union[openapi_schema_pydantic.v3.v3_1_0.path_item.PathItem, openapi_schema_pydantic.v3.v3_1_0.reference.Reference]]] = None¶ The incoming webhooks that MAY be received as part of this API and that the API consumer MAY choose to implement. Closely related to the callbacks feature, this section describes requests initiated other than by an API call, for example by an out of band registration. The key name is a unique string to refer to each webhook, while the (optionally referenced) Path Item Object describes a request that may be initiated by the API provider and the expected responses. An [example](../examples/v3.1/webhook-example.yaml) is available. classmethod from_file(path: Union[str, Path]) → OpenAPISpec[source]¶ Get an OpenAPI spec from a file path. classmethod from_spec_dict(spec_dict: dict) → OpenAPISpec[source]¶ Get an OpenAPI spec from a dict. classmethod from_text(text: str) → OpenAPISpec[source]¶ Get an OpenAPI spec from a text. classmethod from_url(url: str) → OpenAPISpec[source]¶ Get an OpenAPI spec from a URL. static get_cleaned_operation_id(operation: Operation, path: str, method: str) → str[source]¶ Get a cleaned operation id from an operation id. get_methods_for_path(path: str) → List[str][source]¶ Return a list of valid methods for the specified path. get_operation(path: str, method: str) → Operation[source]¶ Get the operation object for a given path and HTTP method.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.OpenAPISpec.html
cb4a62bcbf99-3
get_parameters_for_operation(operation: Operation) → List[Parameter][source]¶ Get the components for a given operation. get_parameters_for_path(path: str) → List[Parameter][source]¶ get_referenced_schema(ref: Reference) → Schema[source]¶ Get a schema (or nested reference) or err. get_request_body_for_operation(operation: Operation) → Optional[RequestBody][source]¶ Get the request body for a given operation. get_schema(schema: Union[Reference, Schema]) → Schema[source]¶ classmethod parse_obj(obj: dict) → OpenAPISpec[source]¶ property base_url: str¶ Get the base url. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.openapi.OpenAPISpec.html
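The servers default described above (a missing or empty servers array falls back to a single Server Object with url "/") implies a simple base-url resolution, sketched here as a standalone function. `base_url` as a free function over dicts is an illustration, not the OpenAPISpec.base_url property itself:

```python
# Sketch of the OpenAPI servers default: no servers means base url "/".
from typing import List, Optional

def base_url(servers: Optional[List[dict]]) -> str:
    """Return the first server's url, or the spec-mandated default '/'."""
    if not servers:
        return "/"
    return servers[0]["url"]

print(base_url(None))                                    # /
print(base_url([]))                                      # /
print(base_url([{"url": "https://api.example.com/v1"}]))
```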
f040afbb12fe-0
langchain.utilities.powerbi.json_to_md¶ langchain.utilities.powerbi.json_to_md(json_contents: List[Dict[str, Union[str, int, float]]], table_name: Optional[str] = None) → str[source]¶ Converts a JSON object to a markdown table.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.powerbi.json_to_md.html
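The signature above suggests a straightforward row-to-table conversion. This is a minimal sketch of the described behavior, not the actual LangChain implementation; it assumes every row shares the first row's keys and that a heading style for table_name is acceptable:

```python
# Sketch: convert a list of flat JSON rows to a markdown table.
from typing import Dict, List, Optional, Union

def json_to_md(rows: List[Dict[str, Union[str, int, float]]],
               table_name: Optional[str] = None) -> str:
    headers = list(rows[0])
    lines = []
    if table_name:
        lines.append(f"## {table_name}")
    lines.append("| " + " | ".join(headers) + " |")
    lines.append("| " + " | ".join("---" for _ in headers) + " |")
    for row in rows:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)

print(json_to_md([{"id": 1, "name": "a"}, {"id": 2, "name": "b"}], "t"))
```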
92ebac41992a-0
langchain.utilities.google_serper.GoogleSerperAPIWrapper¶ class langchain.utilities.google_serper.GoogleSerperAPIWrapper(*, k: int = 10, gl: str = 'us', hl: str = 'en', type: Literal['news', 'search', 'places', 'images'] = 'search', tbs: Optional[str] = None, serper_api_key: Optional[str] = None, aiosession: Optional[ClientSession] = None, result_key_for_type: dict = {'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]¶ Bases: BaseModel Wrapper around the Serper.dev Google Search API. You can create a free API key at https://serper.dev. To use, you should have the environment variable SERPER_API_KEY set with your API key, or pass serper_api_key as a named parameter to the constructor. Example from langchain import GoogleSerperAPIWrapper google_serper = GoogleSerperAPIWrapper() Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param aiosession: Optional[aiohttp.client.ClientSession] = None¶ param gl: str = 'us'¶ param hl: str = 'en'¶ param k: int = 10¶ param serper_api_key: Optional[str] = None¶ param tbs: Optional[str] = None¶ param type: Literal['news', 'search', 'places', 'images'] = 'search'¶ async aresults(query: str, **kwargs: Any) → Dict[source]¶ Run query through GoogleSearch.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.google_serper.GoogleSerperAPIWrapper.html
92ebac41992a-1
async arun(query: str, **kwargs: Any) → str[source]¶ Run query through GoogleSearch and parse result async. results(query: str, **kwargs: Any) → Dict[source]¶ Run query through GoogleSearch. run(query: str, **kwargs: Any) → str[source]¶ Run query through GoogleSearch and parse result. validator validate_environment  »  all fields[source]¶ Validate that api key exists in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.google_serper.GoogleSerperAPIWrapper.html
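The result_key_for_type default shown in the signature above routes the raw response by search type: "search" results live under "organic", while the other types use their own name as the key. `extract_hits` is an illustrative helper, not part of the wrapper's API:

```python
# Sketch of how the result_key_for_type mapping selects hits from a
# Serper.dev response for a given search type.
RESULT_KEY_FOR_TYPE = {"images": "images", "news": "news",
                       "places": "places", "search": "organic"}

def extract_hits(response: dict, type_: str = "search") -> list:
    """Pick the list of hits for the given search type; default to []."""
    return response.get(RESULT_KEY_FOR_TYPE[type_], [])

resp = {"organic": [{"title": "hit"}], "news": []}
print(extract_hits(resp))          # [{'title': 'hit'}]
print(extract_hits(resp, "news"))  # []
```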
afd05bfcdca7-0
langchain.utilities.python.PythonREPL¶ class langchain.utilities.python.PythonREPL(*, _globals: Optional[Dict] = None, _locals: Optional[Dict] = None)[source]¶ Bases: BaseModel Simulates a standalone Python REPL. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param globals: Optional[Dict] [Optional] (alias '_globals')¶ param locals: Optional[Dict] [Optional] (alias '_locals')¶ run(command: str) → str[source]¶ Run a command with its own globals/locals and return anything printed.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.python.PythonREPL.html
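The documented behavior of run(), executing a command against a persistent namespace and returning whatever was printed, can be sketched with exec and stdout capture. This mirrors the description above, not the exact LangChain source, and `MiniPythonREPL` is an illustrative name:

```python
# Sketch of a REPL that keeps state across run() calls and captures stdout.
import sys
from io import StringIO

class MiniPythonREPL:
    def __init__(self):
        self.globals: dict = {}  # persists between run() calls

    def run(self, command: str) -> str:
        """Exec the command in the stored namespace; return printed output."""
        old_stdout, sys.stdout = sys.stdout, StringIO()
        try:
            exec(command, self.globals)
            return sys.stdout.getvalue()
        finally:
            sys.stdout = old_stdout

repl = MiniPythonREPL()
repl.run("x = 2 + 2")        # state survives into the next call
print(repl.run("print(x)"))  # 4
```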
be80775b4a84-0
langchain.utilities.bibtex.BibtexparserWrapper¶ class langchain.utilities.bibtex.BibtexparserWrapper[source]¶ Bases: BaseModel Wrapper around bibtexparser. To use, you should have the bibtexparser python package installed. https://bibtexparser.readthedocs.io/en/master/ This wrapper will use bibtexparser to load a collection of references from a bibtex file and fetch document summaries. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. get_metadata(entry: Mapping[str, Any], load_extra: bool = False) → Dict[str, Any][source]¶ Get metadata for the given entry. load_bibtex_entries(path: str) → List[Dict[str, Any]][source]¶ Load bibtex entries from the bibtex file at the given path. validator validate_environment  »  all fields[source]¶ Validate that the python package exists in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.bibtex.BibtexparserWrapper.html
dbf2a00dd48b-0
langchain.utilities.wikipedia.WikipediaAPIWrapper¶ class langchain.utilities.wikipedia.WikipediaAPIWrapper(*, wiki_client: Any = None, top_k_results: int = 3, lang: str = 'en', load_all_available_meta: bool = False, doc_content_chars_max: int = 4000)[source]¶ Bases: BaseModel Wrapper around WikipediaAPI. To use, you should have the wikipedia python package installed. This wrapper will use the Wikipedia API to conduct searches and fetch page summaries. By default, it will return the page summaries of the top-k results. It limits the Document content by doc_content_chars_max. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param doc_content_chars_max: int = 4000¶ param lang: str = 'en'¶ param load_all_available_meta: bool = False¶ param top_k_results: int = 3¶ load(query: str) → List[Document][source]¶ Run Wikipedia search and get the article text plus the meta information. Returns: a list of documents. run(query: str) → str[source]¶ Run Wikipedia search and get page summaries. validator validate_environment  »  all fields[source]¶ Validate that the python package exists in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.wikipedia.WikipediaAPIWrapper.html
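The defaults above (page summaries of the top-k results, with the output capped at doc_content_chars_max characters) can be sketched as a join-and-truncate step. `summarize_pages` is an illustrative helper, not WikipediaAPIWrapper API:

```python
# Sketch of the documented Wikipedia wrapper defaults: keep top_k_results
# summaries and cap the combined text at doc_content_chars_max.
from typing import List

def summarize_pages(summaries: List[str], top_k_results: int = 3,
                    doc_content_chars_max: int = 4000) -> str:
    joined = "\n\n".join(summaries[:top_k_results])
    return joined[:doc_content_chars_max]

pages = [f"Page {i} summary" for i in range(5)]
print(summarize_pages(pages, top_k_results=2))
```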
e974631fce49-0
langchain.utilities.awslambda.LambdaWrapper¶ class langchain.utilities.awslambda.LambdaWrapper(*, lambda_client: Any = None, function_name: Optional[str] = None, awslambda_tool_name: Optional[str] = None, awslambda_tool_description: Optional[str] = None)[source]¶ Bases: BaseModel Wrapper for AWS Lambda SDK. Docs for using: pip install boto3 Create a lambda function using the AWS Console or CLI Run aws configure and enter your AWS credentials Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param awslambda_tool_description: Optional[str] = None¶ param awslambda_tool_name: Optional[str] = None¶ param function_name: Optional[str] = None¶ run(query: str) → str[source]¶ Invoke Lambda function and parse result. validator validate_environment  »  all fields[source]¶ Validate that python package exists in environment. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.awslambda.LambdaWrapper.html
155df39af40f-0
langchain.utilities.scenexplain.SceneXplainAPIWrapper¶ class langchain.utilities.scenexplain.SceneXplainAPIWrapper(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '<object object>', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, scenex_api_key: str, scenex_api_url: str = 'https://us-central1-causal-diffusion.cloudfunctions.net/describe')[source]¶ Bases: BaseSettings, BaseModel Wrapper for SceneXplain API. In order to set this up, you need an API key for the SceneXplain API. You can obtain a key by following the steps below. - Sign up for a free account at https://scenex.jina.ai/. - Navigate to the API Access page (https://scenex.jina.ai/api) and create a new API key. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param scenex_api_key: str [Required]¶ param scenex_api_url: str = 'https://us-central1-causal-diffusion.cloudfunctions.net/describe'¶ run(image: str) → str[source]¶ Run SceneXplain image explainer. validator validate_environment  »  all fields[source]¶ Validate that the api key exists in environment. model Config¶ Bases: BaseConfig getter_dict¶ alias of GetterDict
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.scenexplain.SceneXplainAPIWrapper.html
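A minimal sketch of the request run() is described as making: the API key goes in the headers and the image reference in the JSON body, sent to scenex_api_url. The header and payload field names below are assumptions for illustration, not the documented wire schema.

```python
def build_scenex_request(
    api_key,
    image,
    api_url="https://us-central1-causal-diffusion.cloudfunctions.net/describe",
):
    # Assemble (but do not send) the POST request the wrapper would issue.
    headers = {
        "x-api-key": f"token {api_key}",  # assumed header name
        "content-type": "application/json",
    }
    payload = {"data": [{"image": image, "features": []}]}  # assumed body shape
    return api_url, headers, payload

url, headers, payload = build_scenex_request("MY_KEY", "https://example.com/cat.jpg")
print(url)
```

Sending it would be one requests.post(url, headers=headers, json=payload) call away; the split keeps the sketch runnable offline.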
155df39af40f-1
model Config¶ Bases: BaseConfig getter_dict¶ alias of GetterDict classmethod customise_sources(init_settings: Callable[[BaseSettings], Dict[str, Any]], env_settings: Callable[[BaseSettings], Dict[str, Any]], file_secret_settings: Callable[[BaseSettings], Dict[str, Any]]) → Tuple[Callable[[BaseSettings], Dict[str, Any]], ...]¶ classmethod get_field_info(name: unicode) → Dict[str, Any]¶ Get properties of FieldInfo from the fields property of the config class. json_dumps(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, cls=None, indent=None, separators=None, default=None, sort_keys=False, **kw)¶ Serialize obj to a JSON formatted str. If skipkeys is true then dict keys that are not basic types (str, int, float, bool, None) will be skipped instead of raising a TypeError. If ensure_ascii is false, then the return value can contain non-ASCII characters if they appear in strings contained in obj. Otherwise, all such characters are escaped in JSON strings. If check_circular is false, then the circular reference check for container types will be skipped and a circular reference will result in an RecursionError (or worse). If allow_nan is false, then it will be a ValueError to serialize out of range float values (nan, inf, -inf) in strict compliance of the JSON specification, instead of using the JavaScript equivalents (NaN, Infinity, -Infinity). If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation. If specified, separators should be an (item_separator, key_separator)
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.scenexplain.SceneXplainAPIWrapper.html
155df39af40f-2
representation. If specified, separators should be an (item_separator, key_separator) tuple. The default is (', ', ': ') if indent is None and (',', ': ') otherwise. To get the most compact JSON representation, you should specify (',', ':') to eliminate whitespace. default(obj) is a function that should return a serializable version of obj or raise TypeError. The default simply raises TypeError. If sort_keys is true (default: False), then the output of dictionaries will be sorted by key. To use a custom JSONEncoder subclass (e.g. one that overrides the .default() method to serialize additional types), specify it with the cls kwarg; otherwise JSONEncoder is used. json_loads(*, cls=None, object_hook=None, parse_float=None, parse_int=None, parse_constant=None, object_pairs_hook=None, **kw)¶ Deserialize s (a str, bytes or bytearray instance containing a JSON document) to a Python object. object_hook is an optional function that will be called with the result of any object literal decode (a dict). The return value of object_hook will be used instead of the dict. This feature can be used to implement custom decoders (e.g. JSON-RPC class hinting). object_pairs_hook is an optional function that will be called with the result of any object literal decoded with an ordered list of pairs. The return value of object_pairs_hook will be used instead of the dict. This feature can be used to implement custom decoders. If object_hook is also defined, the object_pairs_hook takes priority. parse_float, if specified, will be called with the string of every JSON float to be decoded. By default this is equivalent to float(num_str). This can be used to use another datatype or parser for JSON floats (e.g. decimal.Decimal).
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.scenexplain.SceneXplainAPIWrapper.html
155df39af40f-3
for JSON floats (e.g. decimal.Decimal). parse_int, if specified, will be called with the string of every JSON int to be decoded. By default this is equivalent to int(num_str). This can be used to use another datatype or parser for JSON integers (e.g. float). parse_constant, if specified, will be called with one of the following strings: -Infinity, Infinity, NaN. This can be used to raise an exception if invalid JSON numbers are encountered. To use a custom JSONDecoder subclass, specify it with the cls kwarg; otherwise JSONDecoder is used. classmethod parse_env_var(field_name: unicode, raw_val: unicode) → Any¶ classmethod prepare_field(field: ModelField) → None¶ Optional hook to check or modify fields during model creation. alias_generator = None¶ allow_inf_nan = True¶ allow_mutation = True¶ allow_population_by_field_name = False¶ anystr_lower = False¶ anystr_strip_whitespace = False¶ anystr_upper = False¶ arbitrary_types_allowed = True¶ case_sensitive = False¶ copy_on_model_validation = 'shallow'¶ env_file = None¶ env_file_encoding = None¶ env_nested_delimiter = None¶ env_prefix = ''¶ error_msg_templates = {}¶ extra = 'forbid'¶ fields = {}¶ frozen = False¶ json_encoders = {}¶ keep_untouched = ()¶ max_anystr_length = None¶ min_anystr_length = 0¶ orm_mode = False¶ post_init_call = 'before_validation'¶ schema_extra = {}¶ secrets_dir = None¶ smart_union = False¶ title = None¶ underscore_attrs_are_private = False¶ use_enum_values = False¶ validate_all = True¶ validate_assignment = False¶
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.scenexplain.SceneXplainAPIWrapper.html
2c4100e21f1c-0
langchain.output_parsers.boolean.BooleanOutputParser¶ class langchain.output_parsers.boolean.BooleanOutputParser(*, true_val: str = 'YES', false_val: str = 'NO')[source]¶ Bases: BaseOutputParser[bool] Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param false_val: str = 'NO'¶ param true_val: str = 'YES'¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → bool[source]¶ Parse the output of an LLM call to a boolean. Parameters text – output of language model Returns boolean parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.boolean.BooleanOutputParser.html
2c4100e21f1c-1
property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.boolean.BooleanOutputParser.html
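The parse() behavior documented above (a completion mapped to a bool via the true_val/false_val parameters) can be sketched as a plain function; the exact normalization the class applies may differ.

```python
def parse_boolean(text, true_val="YES", false_val="NO"):
    # Map a raw completion onto a bool using the documented
    # true_val/false_val parameters; anything else is a parse error.
    cleaned = text.strip()
    if cleaned == true_val:
        return True
    if cleaned == false_val:
        return False
    raise ValueError(f"Expected {true_val!r} or {false_val!r}, got {cleaned!r}")

print(parse_boolean("YES"))   # True
print(parse_boolean(" NO "))  # False
```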
303a5f7006d9-0
langchain.output_parsers.fix.OutputFixingParser¶ class langchain.output_parsers.fix.OutputFixingParser(*, parser: BaseOutputParser[T], retry_chain: LLMChain)[source]¶ Bases: BaseOutputParser[T] Wraps a parser and tries to fix parsing errors. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]¶ param retry_chain: langchain.chains.llm.LLMChain [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. classmethod from_llm(llm: BaseLanguageModel, parser: BaseOutputParser[T], prompt: BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True)) → OutputFixingParser[T][source]¶ get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(completion: str) → T[source]¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model Returns structured output parse_result(result: List[Generation]) → T¶ Parse LLM Result.
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html
303a5f7006d9-1
parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html
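The fix-on-failure flow above (wrapped parser plus a retry chain fed the instructions, completion, and error) can be sketched with plain callables; stub_retry_chain stands in for the LLMChain so the sketch runs offline, and its repair logic is purely illustrative.

```python
def fixing_parse(parse, retry_chain, completion, instructions):
    # Try the wrapped parser first; on failure, hand the instructions,
    # the bad completion, and the error to the retry chain and parse
    # the rewritten completion.
    try:
        return parse(completion)
    except Exception as err:
        fixed = retry_chain(
            instructions=instructions, completion=completion, error=str(err)
        )
        return parse(fixed)

def parse_int(text):
    return int(text.strip())

def stub_retry_chain(instructions, completion, error):
    # A real OutputFixingParser would call an LLM with the prompt
    # template shown above; the stub just strips non-digit noise.
    return "".join(ch for ch in completion if ch.isdigit())

print(fixing_parse(parse_int, stub_retry_chain, "answer: 42", "Return an integer."))
```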
ad350233fbc9-0
langchain.output_parsers.list.CommaSeparatedListOutputParser¶ class langchain.output_parsers.list.CommaSeparatedListOutputParser[source]¶ Bases: ListOutputParser Parse out comma separated lists. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(text: str) → List[str][source]¶ Parse the output of an LLM call. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable.
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.list.CommaSeparatedListOutputParser.html
ad350233fbc9-1
property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.list.CommaSeparatedListOutputParser.html
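The parse() behavior documented above can be sketched in one line; the class's exact split delimiter may differ slightly.

```python
def parse_comma_separated(text):
    # Split the completion on commas and strip whitespace around each item.
    return [item.strip() for item in text.strip().split(",")]

print(parse_comma_separated("red, green, blue"))  # ['red', 'green', 'blue']
```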
e10d746acbc6-0
langchain.output_parsers.list.ListOutputParser¶ class langchain.output_parsers.list.ListOutputParser[source]¶ Bases: BaseOutputParser Class to parse the output of an LLM call to a list. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. abstract parse(text: str) → List[str][source]¶ Parse the output of an LLM call. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable.
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.list.ListOutputParser.html
e10d746acbc6-1
property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.list.ListOutputParser.html
1528f54609be-0
langchain.output_parsers.combining.CombiningOutputParser¶ class langchain.output_parsers.combining.CombiningOutputParser(*, parsers: List[BaseOutputParser])[source]¶ Bases: BaseOutputParser Class to combine multiple output parsers into one. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param parsers: List[langchain.schema.BaseOutputParser] [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(text: str) → Dict[str, Any][source]¶ Parse the output of an LLM call. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_parsers  »  all fields[source]¶ Validate the parsers. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”]
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.combining.CombiningOutputParser.html
1528f54609be-1
eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.combining.CombiningOutputParser.html
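One way to picture combining multiple parsers into one, as described above: give each sub-parser its own segment of the completion and merge the dict outputs. Splitting on blank lines is an assumption for illustration, not necessarily the exact rule the class uses.

```python
def combining_parse(parsers, text):
    # Pair each sub-parser with one segment of the completion and
    # merge every parser's dict output into a single result.
    segments = text.split("\n\n")
    output = {}
    for parse, segment in zip(parsers, segments):
        output.update(parse(segment))
    return output

# Two toy sub-parsers, each returning a one-key dict.
def parse_answer(segment):
    return {"answer": segment.split(":", 1)[1].strip()}

def parse_source(segment):
    return {"source": segment.split(":", 1)[1].strip()}

print(combining_parse([parse_answer, parse_source], "Answer: 42\n\nSource: wikipedia"))
```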
3435fb553124-0
langchain.output_parsers.openai_functions.PydanticAttrOutputFunctionsParser¶ class langchain.output_parsers.openai_functions.PydanticAttrOutputFunctionsParser(*, args_only: bool = True, pydantic_schema: Any = None, attr_name: str)[source]¶ Bases: PydanticOutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_only: bool = True¶ param attr_name: str [Required]¶ param pydantic_schema: Any = None¶ parse_result(result: List[Generation]) → Any[source]¶ Parse LLM Result. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.PydanticAttrOutputFunctionsParser.html
724e0cc9ff6d-0
langchain.output_parsers.regex_dict.RegexDictParser¶ class langchain.output_parsers.regex_dict.RegexDictParser(*, regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?", output_key_to_format: Dict[str, str], no_update_value: Optional[str] = None)[source]¶ Bases: BaseOutputParser Class to parse the output into a dictionary. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param no_update_value: Optional[str] = None¶ param output_key_to_format: Dict[str, str] [Required]¶ param regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → Dict[str, str][source]¶ Parse the output of an LLM call. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.regex_dict.RegexDictParser.html
724e0cc9ff6d-1
constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.regex_dict.RegexDictParser.html
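The documented defaults suggest the following flow: for each output key, the default regex_pattern is formatted with the expected label and matched against the text, skipping matches equal to no_update_value. This sketch is an interpretation of those parameters, not the class's verbatim code.

```python
import re

def parse_regex_dict(text, output_key_to_format,
                     regex_pattern=r"{}:\s?([^.'\n']*)\.?",
                     no_update_value=None):
    # Format the pattern with each expected label and pull the first
    # match for that label out of the text.
    result = {}
    for output_key, expected_format in output_key_to_format.items():
        matches = re.findall(regex_pattern.format(expected_format), text)
        if matches and matches[0] != no_update_value:
            result[output_key] = matches[0]
    return result

text = "Action: search\nAction Input: weather in SF"
print(parse_regex_dict(text, {"action": "Action", "input": "Action Input"}))
```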
940675cc3cec-0
langchain.output_parsers.json.parse_and_check_json_markdown¶ langchain.output_parsers.json.parse_and_check_json_markdown(text: str, expected_keys: List[str]) → dict[source]¶ Parse a JSON string from a Markdown string and check that it contains the expected keys. Parameters text – The Markdown string. expected_keys – The expected keys in the JSON string. Returns The parsed JSON object as a Python dictionary.
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.json.parse_and_check_json_markdown.html
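The described behavior, parsing a JSON string out of a Markdown string and checking its keys, can be sketched like this; the exact fence-handling and error messages of the real function may differ.

```python
import json
import re

def parse_and_check_sketch(text, expected_keys):
    # Pull the fenced ```json block out of the Markdown (falling back
    # to the raw text), parse it, and verify the expected keys exist.
    match = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    json_str = match.group(1) if match else text
    obj = json.loads(json_str.strip())
    for key in expected_keys:
        if key not in obj:
            raise ValueError(f"Expected key {key!r} in JSON output")
    return obj

md = 'Here you go:\n```json\n{"answer": "Paris", "confidence": 0.9}\n```'
print(parse_and_check_sketch(md, ["answer", "confidence"]))
```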
d69240d0110d-0
langchain.output_parsers.loading.load_output_parser¶ langchain.output_parsers.loading.load_output_parser(config: dict) → dict[source]¶ Load output parser.
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.loading.load_output_parser.html
5a1489c7615d-0
langchain.output_parsers.retry.RetryWithErrorOutputParser¶ class langchain.output_parsers.retry.RetryWithErrorOutputParser(*, parser: BaseOutputParser[T], retry_chain: LLMChain)[source]¶ Bases: BaseOutputParser[T] Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt, the completion, AND the error that was raised to another language model and telling it that the completion did not work, and raised the given error. Differs from RetryOutputParser in that this implementation provides the error that was raised back to the LLM, which in theory should give it more information on how to fix it. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]¶ param retry_chain: langchain.chains.llm.LLMChain [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. classmethod from_llm(llm: BaseLanguageModel, parser: BaseOutputParser[T], prompt: BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True)) → RetryWithErrorOutputParser[T][source]¶ get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(completion: str) → T[source]¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model )
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html
5a1489c7615d-1
A method which takes in a string (assumed to be the output of a language model) and parses it into some structure. Parameters text – output of language model Returns structured output parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt_value: PromptValue) → T[source]¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html
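The retry-with-error flow above, passing the prompt, the completion, AND the raised error back to the model, can be sketched with a stub chain. The stub's repair logic is illustrative; a real chain would call an LLM with the template shown in from_llm.

```python
def retry_with_error_parse(parse, retry_chain, completion, prompt):
    # On failure, feed prompt + completion + error text to the retry
    # chain; the extra error detail is what distinguishes this class
    # from RetryOutputParser.
    try:
        return parse(completion)
    except Exception as err:
        retried = retry_chain(prompt=prompt, completion=completion, error=str(err))
        return parse(retried)

def parse_float(text):
    return float(text)

def stub_retry_chain(prompt, completion, error):
    # Stands in for the LLMChain; "fixes" a comma decimal separator.
    return completion.replace(",", ".")

print(retry_with_error_parse(parse_float, stub_retry_chain, "3,14", "Give pi to 2 decimals."))
```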
45e4e1863287-0
langchain.output_parsers.structured.ResponseSchema¶ class langchain.output_parsers.structured.ResponseSchema(*, name: str, description: str, type: str = 'string')[source]¶ Bases: BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param description: str [Required]¶ param name: str [Required]¶ param type: str = 'string'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.ResponseSchema.html
bebfa17e0150-0
langchain.output_parsers.retry.RetryOutputParser¶ class langchain.output_parsers.retry.RetryOutputParser(*, parser: BaseOutputParser[T], retry_chain: LLMChain)[source]¶ Bases: BaseOutputParser[T] Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt and the completion to another LLM, and telling it the completion did not satisfy criteria in the prompt. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]¶ param retry_chain: langchain.chains.llm.LLMChain [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. classmethod from_llm(llm: BaseLanguageModel, parser: BaseOutputParser[T], prompt: BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True)) → RetryOutputParser[T][source]¶ get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(completion: str) → T[source]¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model Returns structured output parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt_value: PromptValue) → T[source]¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryOutputParser.html
bebfa17e0150-1
parse_with_prompt(completion: str, prompt_value: PromptValue) → T[source]¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryOutputParser.html
bf739c9dd3c8-0
langchain.output_parsers.enum.EnumOutputParser¶ class langchain.output_parsers.enum.EnumOutputParser(*, enum: Type[Enum])[source]¶ Bases: BaseOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param enum: Type[enum.Enum] [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(response: str) → Any[source]¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model Returns structured output parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output validator raise_deprecation  »  all fields[source]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”]
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html
bf739c9dd3c8-1
eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html
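Mapping a completion onto a member of the required enum parameter can be sketched as follows; Color is an illustrative enum, not part of the API, and the real parser's error wording may differ.

```python
from enum import Enum

class Color(Enum):  # illustrative enum standing in for the `enum` param
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

def parse_enum(enum_cls, response):
    # Strip the completion and map it onto an enum member by value.
    try:
        return enum_cls(response.strip())
    except ValueError:
        valid = [e.value for e in enum_cls]
        raise ValueError(f"Response {response!r} is not one of {valid}")

print(parse_enum(Color, " red\n"))  # Color.RED
```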
1ddf25188369-0
langchain.output_parsers.openai_functions.OutputFunctionsParser¶ class langchain.output_parsers.openai_functions.OutputFunctionsParser(*, args_only: bool = True)[source]¶ Bases: BaseLLMOutputParser[Any] Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_only: bool = True¶ parse_result(result: List[Generation]) → Any[source]¶ Parse LLM Result. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.OutputFunctionsParser.html
146b7b425ba6-0
langchain.output_parsers.structured.StructuredOutputParser¶ class langchain.output_parsers.structured.StructuredOutputParser(*, response_schemas: List[ResponseSchema])[source]¶ Bases: BaseOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. classmethod from_response_schemas(response_schemas: List[ResponseSchema]) → StructuredOutputParser[source]¶ get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(text: str) → Any[source]¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model Returns structured output parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html
Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
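The parse() contract above (extract the model's fenced JSON block and check it against the response schemas) can be sketched without langchain installed. This is an illustrative stdlib approximation, not the library's actual implementation; the key names are hypothetical:

```python
import json
import re

def parse_structured(text: str, expected_keys: list) -> dict:
    """Extract a fenced json block and check the expected keys
    (illustrative approximation of the documented parse() contract)."""
    match = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    payload = match.group(1) if match else text
    data = json.loads(payload.strip())
    for key in expected_keys:
        if key not in data:
            raise ValueError(f"Got invalid return object. Expected key `{key}`")
    return data

reply = '```json\n{"answer": "Paris", "source": "https://example.com"}\n```'
parse_structured(reply, ["answer", "source"])
```

In the real class the expected keys come from the ResponseSchema objects passed to from_response_schemas.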
langchain.output_parsers.pydantic.PydanticOutputParser¶ class langchain.output_parsers.pydantic.PydanticOutputParser(*, pydantic_object: Type[T])[source]¶ Bases: BaseOutputParser[T] Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(text: str) → T[source]¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model Returns structured output parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object.
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.pydantic.PydanticOutputParser.html
eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
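parse() decodes the completion's JSON and validates it into an instance of pydantic_object. As a rough stdlib analogy (a dataclass standing in for the pydantic model; not the real implementation):

```python
import json
from dataclasses import dataclass

@dataclass
class Joke:
    # Stand-in for a pydantic model (illustration only).
    setup: str
    punchline: str

def parse_into(text: str, cls):
    # PydanticOutputParser validates the parsed JSON against the model;
    # here a dataclass plays that role.
    return cls(**json.loads(text))

joke = parse_into('{"setup": "Why?", "punchline": "Because."}', Joke)
```

The real parser additionally strips a surrounding markdown fence and raises an OutputParserException when validation fails.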
langchain.output_parsers.datetime.DatetimeOutputParser¶ class langchain.output_parsers.datetime.DatetimeOutputParser(*, format: str = '%Y-%m-%dT%H:%M:%S.%fZ')[source]¶ Bases: BaseOutputParser[datetime] Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param format: str = '%Y-%m-%dT%H:%M:%S.%fZ'¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(response: str) → datetime[source]¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model Returns structured output parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object.
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html
eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
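With the default format shown above, parse() amounts to datetime.strptime with that format string; timestamps must carry microseconds and a literal trailing Z:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%fZ"  # the parser's default `format` param

dt = datetime.strptime("2023-07-04T12:30:05.000123Z", FMT)
print(dt.year, dt.microsecond)  # 2023 123
```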
langchain.output_parsers.openai_functions.JsonOutputFunctionsParser¶ class langchain.output_parsers.openai_functions.JsonOutputFunctionsParser(*, args_only: bool = True)[source]¶ Bases: OutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_only: bool = True¶ parse_result(result: List[Generation]) → Any[source]¶ Parse LLM Result. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
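In OpenAI function-calling responses, the function arguments arrive as a JSON-encoded string; this parser JSON-decodes them, returning only the arguments when args_only=True. A stdlib sketch over an assumed message shape (illustrative, not langchain's code):

```python
import json

# Assumed shape of an OpenAI function-calling message (illustrative).
message = {
    "function_call": {
        "name": "search",
        "arguments": '{"query": "langchain", "top_k": 3}',
    }
}

def parse_function_call(message: dict, args_only: bool = True):
    call = message["function_call"]
    args = json.loads(call["arguments"])
    return args if args_only else {"name": call["name"], "arguments": args}

parse_function_call(message)  # {'query': 'langchain', 'top_k': 3}
```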
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonOutputFunctionsParser.html
langchain.output_parsers.rail_parser.GuardrailsOutputParser¶ class langchain.output_parsers.rail_parser.GuardrailsOutputParser(*, guard: Any = None, api: Optional[Callable] = None, args: Any = None, kwargs: Any = None)[source]¶ Bases: BaseOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api: Optional[Callable] = None¶ param args: Any = None¶ param guard: Any = None¶ param kwargs: Any = None¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. classmethod from_pydantic(output_class: Any, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) → GuardrailsOutputParser[source]¶ classmethod from_rail(rail_file: str, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) → GuardrailsOutputParser[source]¶ classmethod from_rail_string(rail_str: str, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) → GuardrailsOutputParser[source]¶ get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(text: str) → Dict[source]¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model Returns structured output parse_result(result: List[Generation]) → T¶ Parse LLM Result.
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.rail_parser.GuardrailsOutputParser.html
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser¶ class langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser(*, args_only: bool = True, key_name: str)[source]¶ Bases: JsonOutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_only: bool = True¶ param key_name: str [Required]¶ parse_result(result: List[Generation]) → Any[source]¶ Parse LLM Result. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html
langchain.output_parsers.regex.RegexParser¶ class langchain.output_parsers.regex.RegexParser(*, regex: str, output_keys: List[str], default_output_key: Optional[str] = None)[source]¶ Bases: BaseOutputParser Class to parse the output into a dictionary. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param default_output_key: Optional[str] = None¶ param output_keys: List[str] [Required]¶ param regex: str [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → Dict[str, str][source]¶ Parse the output of an LLM call. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”]
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.regex.RegexParser.html
property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
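The class maps regex capture groups onto output_keys, falling back to default_output_key when nothing matches. A stdlib sketch of that documented behavior (illustrative, not the library's code):

```python
import re

def regex_parse(regex, output_keys, text, default_output_key=None):
    # Mimic the documented RegexParser.parse behavior (illustrative sketch).
    match = re.search(regex, text)
    if match:
        return {key: match.group(i + 1) for i, key in enumerate(output_keys)}
    if default_output_key is None:
        raise ValueError(f"Could not parse output: {text}")
    return {key: text if key == default_output_key else ""
            for key in output_keys}

regex_parse(r"Score: (\d+)\s+Reason: (.*)", ["score", "reason"],
            "Score: 8 Reason: concise answer")
# -> {'score': '8', 'reason': 'concise answer'}
```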
langchain.output_parsers.json.parse_json_markdown¶ langchain.output_parsers.json.parse_json_markdown(json_string: str) → dict[source]¶ Parse a JSON string from a Markdown string. Parameters json_string – The Markdown string. Returns The parsed JSON object as a Python dictionary.
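The behavior can be approximated in a few stdlib lines: strip an optional markdown code fence, then JSON-decode what remains (an illustrative sketch, not the library's implementation):

```python
import json
import re

def parse_json_markdown_sketch(json_string: str) -> dict:
    # Strip an optional markdown code fence, then decode (illustrative).
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", json_string, re.DOTALL)
    if match:
        json_string = match.group(1)
    return json.loads(json_string)

parse_json_markdown_sketch('```json\n{"ok": true}\n```')  # {'ok': True}
```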
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.json.parse_json_markdown.html
langchain.output_parsers.openai_functions.PydanticOutputFunctionsParser¶ class langchain.output_parsers.openai_functions.PydanticOutputFunctionsParser(*, args_only: bool = True, pydantic_schema: Any = None)[source]¶ Bases: OutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_only: bool = True¶ param pydantic_schema: Any = None¶ parse_result(result: List[Generation]) → Any[source]¶ Parse LLM Result. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.PydanticOutputFunctionsParser.html
langchain.base_language.BaseLanguageModel¶ class langchain.base_language.BaseLanguageModel[source]¶ Bases: Serializable, ABC Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. abstract async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set[source]¶ abstract async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶ Predict text from text. abstract async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶ Predict message from messages. abstract generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int[source]¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int[source]¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int][source]¶ Get the token IDs present in the text. abstract predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶ Predict text from text.
https://api.python.langchain.com/en/latest/base_language/langchain.base_language.BaseLanguageModel.html
abstract predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶ Predict message from messages. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
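The abstract surface above (predict plus token counting) can be illustrated with a toy stand-in class; the real base class additionally wires in callbacks and serialization, so this is only a shape sketch:

```python
from abc import ABC, abstractmethod
from typing import List, Optional, Sequence

class ToyLanguageModel(ABC):
    """Stand-in mirroring the predict/token-counting surface described above."""

    @abstractmethod
    def predict(self, text: str, *, stop: Optional[Sequence[str]] = None) -> str:
        ...

    def get_token_ids(self, text: str) -> List[int]:
        # Toy tokenizer: one "token" per whitespace-separated word.
        return list(range(len(text.split())))

    def get_num_tokens(self, text: str) -> int:
        return len(self.get_token_ids(text))

class EchoModel(ToyLanguageModel):
    def predict(self, text, *, stop=None):
        return text.upper()

EchoModel().predict("hello")  # 'HELLO'
```

Concrete langchain models override get_token_ids with a real tokenizer rather than this word count.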
langchain.document_loaders.url_selenium.SeleniumURLLoader¶ class langchain.document_loaders.url_selenium.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]¶ Bases: BaseLoader Loader that uses Selenium to load a page and unstructured to load the html. This is useful for loading pages that require javascript to render. urls¶ List of URLs to load. Type List[str] continue_on_failure¶ If True, continue loading other URLs on failure. Type bool browser¶ The browser to use, either ‘chrome’ or ‘firefox’. Type str binary_location¶ The location of the browser binary. Type Optional[str] executable_path¶ The path to the browser executable. Type Optional[str] headless¶ If True, the browser will run in headless mode. Type bool arguments¶ List of arguments to pass to the browser. Type List[str] Load a list of URLs using Selenium and unstructured. Methods __init__(urls[, continue_on_failure, ...]) Load a list of URLs using Selenium and unstructured. lazy_load() A lazy loader for document content. load() Load the specified URLs using Selenium and create Document instances. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document][source]¶ Load the specified URLs using Selenium and create Document instances. Returns A list of Document instances with loaded content. Return type List[Document]
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_selenium.SeleniumURLLoader.html
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
langchain.document_loaders.epub.UnstructuredEPubLoader¶ class langchain.document_loaders.epub.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶ Bases: UnstructuredFileLoader Loader that uses unstructured to load epub files. Initialize with file path. Methods __init__(file_path[, mode]) Initialize with file path. lazy_load() A lazy loader for document content. load() Load file. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document]¶ Load file. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html
langchain.document_loaders.url_playwright.PlaywrightURLLoader¶ class langchain.document_loaders.url_playwright.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]¶ Bases: BaseLoader Loader that uses Playwright to load a page and unstructured to load the html. This is useful for loading pages that require javascript to render. urls¶ List of URLs to load. Type List[str] continue_on_failure¶ If True, continue loading other URLs on failure. Type bool headless¶ If True, the browser will run in headless mode. Type bool Load a list of URLs using Playwright and unstructured. Methods __init__(urls[, continue_on_failure, ...]) Load a list of URLs using Playwright and unstructured. lazy_load() A lazy loader for document content. load() Load the specified URLs using Playwright and create Document instances. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document][source]¶ Load the specified URLs using Playwright and create Document instances. Returns A list of Document instances with loaded content. Return type List[Document] load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightURLLoader.html
langchain.document_loaders.parsers.language.language_parser.LanguageParser¶ class langchain.document_loaders.parsers.language.language_parser.LanguageParser(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶ Bases: BaseBlobParser Language parser that splits code using the respective language syntax. Each top-level function and class in the code is loaded into separate documents. Furthermore, an extra document is generated, containing the remaining top-level code that excludes the already segmented functions and classes. This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax. Examples from langchain.text_splitter import Language from langchain.document_loaders.generic import GenericLoader from langchain.document_loaders.parsers import LanguageParser loader = GenericLoader.from_filesystem( "./code", glob="**/*", suffixes=[".py", ".js"], parser=LanguageParser() ) docs = loader.load() Example instantiation to manually select the language: loader = GenericLoader.from_filesystem( "./code", glob="**/*", suffixes=[".py"], parser=LanguageParser(language=Language.PYTHON) ) Example instantiation to set the number-of-lines threshold: loader = GenericLoader.from_filesystem( "./code", glob="**/*", suffixes=[".py"], parser=LanguageParser(parser_threshold=200) ) Language parser that splits code using the respective language syntax. Parameters language – If None (default), it will try to infer language from source.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html
parser_threshold – Minimum lines needed to activate parsing (0 by default). Methods __init__([language, parser_threshold]) Language parser that splits code using the respective language syntax. lazy_parse(blob) Lazy parsing interface. parse(blob) Eagerly parse the blob into a document or documents. lazy_parse(blob: Blob) → Iterator[Document][source]¶ Lazy parsing interface. Subclasses are required to implement this method. Parameters blob – Blob instance Returns Generator of documents parse(blob: Blob) → List[Document]¶ Eagerly parse the blob into a document or documents. This is a convenience method for interactive development environment. Production applications should favor the lazy_parse method instead. Subclasses should generally not over-ride this parse method. Parameters blob – Blob instance Returns List of documents
langchain.document_loaders.git.GitLoader¶ class langchain.document_loaders.git.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]¶ Bases: BaseLoader Loads files from a Git repository into a list of documents. Repository can be local on disk available at repo_path, or remote at clone_url that will be cloned to repo_path. Currently supports only text files. Each document represents one file in the repository. The path points to the local Git repository, and the branch specifies the branch to load files from. By default, it loads from the main branch. Methods __init__(repo_path[, clone_url, branch, ...]) lazy_load() A lazy loader for document content. load() Load data into document objects. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document][source]¶ Load data into document objects. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
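file_filter receives each file path and returns True to keep the file. For example, to load only Python files (paths below are hypothetical):

```python
# A file_filter callable for GitLoader: keep only .py files.
# (Paths are hypothetical; GitLoader calls the filter once per file path.)
file_filter = lambda file_path: file_path.endswith(".py")

paths = ["README.md", "src/app.py", "tests/test_app.py", "docs/conf.txt"]
kept = [p for p in paths if file_filter(p)]
print(kept)  # ['src/app.py', 'tests/test_app.py']
```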
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.git.GitLoader.html
langchain.document_loaders.parsers.pdf.PDFPlumberParser¶ class langchain.document_loaders.parsers.pdf.PDFPlumberParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶ Bases: BaseBlobParser Parse PDFs with PDFPlumber. Initialize the parser. Parameters text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text() Methods __init__([text_kwargs]) Initialize the parser. lazy_parse(blob) Lazily parse the blob. parse(blob) Eagerly parse the blob into a document or documents. lazy_parse(blob: Blob) → Iterator[Document][source]¶ Lazily parse the blob. parse(blob: Blob) → List[Document]¶ Eagerly parse the blob into a document or documents. This is a convenience method for interactive development environment. Production applications should favor the lazy_parse method instead. Subclasses should generally not over-ride this parse method. Parameters blob – Blob instance Returns List of documents
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFPlumberParser.html
langchain.document_loaders.blob_loaders.schema.BlobLoader¶ class langchain.document_loaders.blob_loaders.schema.BlobLoader[source]¶ Bases: ABC Abstract interface for blob loader implementations. Implementers should be able to load raw content from a storage system according to some criteria and return the raw content lazily as a stream of blobs. Methods __init__() yield_blobs() A lazy loader for raw data represented by LangChain's Blob object. abstract yield_blobs() → Iterable[Blob][source]¶ A lazy loader for raw data represented by LangChain’s Blob object. Returns A generator over blobs
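A concrete implementation only needs to generate blobs lazily. A toy in-memory loader, with a minimal stand-in for LangChain's Blob so the sketch stays stdlib-only:

```python
from dataclasses import dataclass
from typing import Iterable, List, Tuple

@dataclass
class Blob:
    # Minimal stand-in for LangChain's Blob (illustration only).
    data: bytes
    path: str

class ListBlobLoader:
    """Toy BlobLoader: yields blobs lazily from an in-memory list,
    mirroring the yield_blobs() contract described above."""

    def __init__(self, items: List[Tuple[str, bytes]]):
        self.items = items

    def yield_blobs(self) -> Iterable[Blob]:
        for path, data in self.items:
            yield Blob(data=data, path=path)

blobs = list(ListBlobLoader([("a.txt", b"hello"), ("b.txt", b"world")]).yield_blobs())
```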
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.BlobLoader.html
langchain.document_loaders.onedrive.OneDriveLoader¶ class langchain.document_loaders.onedrive.OneDriveLoader(*, settings: _OneDriveSettings = None, drive_id: str, folder_path: Optional[str] = None, object_ids: Optional[List[str]] = None, auth_with_token: bool = False)[source]¶ Bases: BaseLoader, BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param auth_with_token: bool = False¶ param drive_id: str [Required]¶ param folder_path: Optional[str] = None¶ param object_ids: Optional[List[str]] = None¶ param settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]¶ lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document][source]¶ Loads all supported document files from the specified OneDrive drive and returns a list of Document objects. Returns A list of Document objects representing the loaded documents. Return type List[Document] Raises ValueError – If the specified drive ID does not correspond to a drive in the OneDrive storage. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive.OneDriveLoader.html
langchain.document_loaders.telegram.TelegramChatFileLoader¶ class langchain.document_loaders.telegram.TelegramChatFileLoader(path: str)[source]¶ Bases: BaseLoader Loader that loads a Telegram chat JSON directory dump. Initialize with path. Methods __init__(path) Initialize with path. lazy_load() A lazy loader for document content. load() Load documents. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document][source]¶ Load documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatFileLoader.html
langchain.document_loaders.merge.MergedDataLoader¶ class langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]¶ Bases: BaseLoader Merge documents from a list of loaders. Initialize with a list of loaders. Methods __init__(loaders) Initialize with a list of loaders. lazy_load() Lazy load docs from each individual loader. load() Load docs. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document][source]¶ Lazy load docs from each individual loader. load() → List[Document][source]¶ Load docs. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
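Merging is effectively chaining each loader's document stream. With stand-in loaders (plain strings in place of Document objects), the lazy behavior can be sketched as:

```python
from itertools import chain

class StaticLoader:
    """Stand-in loader holding pre-made 'documents' (plain strings here)."""

    def __init__(self, docs):
        self._docs = docs

    def lazy_load(self):
        yield from self._docs

def merged_lazy_load(loaders):
    # Mirror MergedDataLoader: stream docs from each loader in turn.
    return chain.from_iterable(loader.lazy_load() for loader in loaders)

docs = list(merged_lazy_load([StaticLoader(["a", "b"]), StaticLoader(["c"])]))
print(docs)  # ['a', 'b', 'c']
```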
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.merge.MergedDataLoader.html
langchain.document_loaders.blackboard.BlackboardLoader¶ class langchain.document_loaders.blackboard.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]¶ Bases: WebBaseLoader Loader that loads all documents from a Blackboard course. This loader is not compatible with all Blackboard courses. It is only compatible with courses that use the new Blackboard interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser’s developer tools. Example from langchain.document_loaders import BlackboardLoader loader = BlackboardLoader( blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1", bbrouter="expires:12345...", ) documents = loader.load() Initialize with blackboard course url. The BbRouter cookie is required for most blackboard courses. Parameters blackboard_course_url – Blackboard course url. bbrouter – BbRouter cookie. load_all_recursively – If True, load all documents recursively. basic_auth – Basic auth credentials. cookies – Cookies. Raises ValueError – If blackboard course url is invalid. Methods __init__(blackboard_course_url, bbrouter[, ...]) Initialize with blackboard course url. aload() Load text from the urls in web_path async into Documents. check_bs4() Check if BeautifulSoup4 is installed. download(path) Download a file from a url. fetch_all(urls)
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
Fetch all urls concurrently with rate limiting. lazy_load() Lazy load text from the url(s) in web_path. load() Load data into document objects. load_and_split([text_splitter]) Load documents and split into chunks. parse_filename(url) Parse the filename from a url. scrape([parser]) Scrape data from webpage and return it in BeautifulSoup format. scrape_all(urls[, parser]) Fetch all urls, then return soups for all results. Attributes bs_get_text_kwargs kwargs for beautifulsoup4 get_text default_parser Default parser to use for BeautifulSoup. raise_for_status Raise an exception if http status code denotes an error. requests_kwargs kwargs for requests requests_per_second Max number of concurrent requests to make. web_path base_url folder_path load_all_recursively aload() → List[Document]¶ Load text from the urls in web_path async into Documents. check_bs4() → None[source]¶ Check if BeautifulSoup4 is installed. Raises ImportError – If BeautifulSoup4 is not installed. download(path: str) → None[source]¶ Download a file from a url. Parameters path – Path to the file. async fetch_all(urls: List[str]) → Any¶ Fetch all urls concurrently with rate limiting. lazy_load() → Iterator[Document]¶ Lazy load text from the url(s) in web_path. load() → List[Document][source]¶ Load data into document objects. Returns List of documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks. parse_filename(url: str) → str[source]¶
Parse the filename from a url. Parameters url – Url to parse the filename from. Returns The filename. scrape(parser: Optional[str] = None) → Any¶ Scrape data from webpage and return it in BeautifulSoup format. scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶ Fetch all urls, then return soups for all results. base_url: str¶ bs_get_text_kwargs: Dict[str, Any] = {}¶ kwargs for beautifulsoup4 get_text default_parser: str = 'html.parser'¶ Default parser to use for BeautifulSoup. folder_path: str¶ load_all_recursively: bool¶ raise_for_status: bool = False¶ Raise an exception if http status code denotes an error. requests_kwargs: Dict[str, Any] = {}¶ kwargs for requests requests_per_second: int = 2¶ Max number of concurrent requests to make. property web_path: str¶ web_paths: List[str]¶
langchain.document_loaders.org_mode.UnstructuredOrgModeLoader¶ class langchain.document_loaders.org_mode.UnstructuredOrgModeLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶ Bases: UnstructuredFileLoader Loader that uses unstructured to load Org-Mode files. Initialize with file path. Methods __init__(file_path[, mode]) Initialize with file path. lazy_load() A lazy loader for document content. load() Load file. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document]¶ Load file. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.org_mode.UnstructuredOrgModeLoader.html
langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader¶ class langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]¶ Bases: BaseLoader Loading logic for loading documents from Azure Blob Storage. Initialize with connection string, container and blob prefix. Methods __init__(conn_str, container[, prefix]) Initialize with connection string, container and blob prefix. lazy_load() A lazy loader for document content. load() Load documents. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document][source]¶ Load documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html
langchain.document_loaders.bigquery.BigQueryLoader¶ class langchain.document_loaders.bigquery.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]¶ Bases: BaseLoader Loads a query result from BigQuery into a list of documents. Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata. Initialize BigQuery document loader. Parameters query – The query to run in BigQuery. project – Optional. The project to run the query in. page_content_columns – Optional. The columns to write into the page_content of the document. metadata_columns – Optional. The columns to write into the metadata of the document. credentials (google.auth.credentials.Credentials, optional) – Credentials for accessing Google APIs. Use this parameter to override default credentials, such as to use Compute Engine (google.auth.compute_engine.Credentials) or Service Account (google.oauth2.service_account.Credentials) credentials directly. Methods __init__(query[, project, ...]) Initialize BigQuery document loader. lazy_load() A lazy loader for document content. load() Load data into document objects. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document][source]¶ Load data into document objects. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bigquery.BigQueryLoader.html
Load documents and split into chunks.
langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader¶ class langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]¶ Bases: BaseLoader Loading logic for loading documents from Azure Blob Storage. Initialize with connection string, container and blob name. Methods __init__(conn_str, container, blob_name) Initialize with connection string, container and blob name. lazy_load() A lazy loader for document content. load() Load documents. load_and_split([text_splitter]) Load documents and split into chunks. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load() → List[Document][source]¶ Load documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader.html
langchain.document_loaders.parsers.audio.OpenAIWhisperParser¶ class langchain.document_loaders.parsers.audio.OpenAIWhisperParser[source]¶ Bases: BaseBlobParser Transcribe and parse audio files. Audio transcription is done with the OpenAI Whisper model. Methods __init__() lazy_parse(blob) Lazily parse the blob. parse(blob) Eagerly parse the blob into a document or documents. lazy_parse(blob: Blob) → Iterator[Document][source]¶ Lazily parse the blob. parse(blob: Blob) → List[Document]¶ Eagerly parse the blob into a document or documents. This is a convenience method for interactive development environments. Production applications should favor the lazy_parse method instead. Subclasses should generally not override this parse method. Parameters blob – Blob instance Returns List of documents
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParser.html
langchain.document_loaders.confluence.ConfluenceLoader¶ class langchain.document_loaders.confluence.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]¶ Bases: BaseLoader Load Confluence pages. Port of https://llamahub.ai/l/confluence This currently supports username/api_key, OAuth2 login, or personal access token authentication. Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned. You can also specify a boolean include_attachments to include attachments; this is set to False by default. If set to True, all attachments will be downloaded and ConfluenceLoader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel. The Confluence API supports different formats of page content. The storage format is the raw XML representation for storage. The view format is the HTML representation for viewing, with macros rendered as they would be viewed by users. You can pass an enum content_format argument to load() to specify the content format; this is set to ContentFormat.STORAGE by default. Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
Example from langchain.document_loaders import ConfluenceLoader loader = ConfluenceLoader( url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345" ) documents = loader.load(space_key="SPACE",limit=50) Parameters url (str) – _description_ api_key (str, optional) – _description_, defaults to None username (str, optional) – _description_, defaults to None oauth2 (dict, optional) – _description_, defaults to {} token (str, optional) – _description_, defaults to None cloud (bool, optional) – _description_, defaults to True number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3 min_retry_seconds (Optional[int], optional) – defaults to 2 max_retry_seconds (Optional[int], optional) – defaults to 10 confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with Raises ValueError – Errors while validating input ImportError – Required dependencies not installed. Methods __init__(url[, api_key, username, oauth2, ...]) is_public_page(page) Check if a page is publicly accessible. lazy_load() A lazy loader for document content. load([space_key, page_ids, label, cql, ...]) param space_key Space key retrieved from a confluence URL, defaults to None load_and_split([text_splitter]) Load documents and split into chunks. paginate_request(retrieval_method, **kwargs) Paginate the various methods to retrieve groups of pages. process_attachment(page_id[, ocr_languages]) process_doc(link) process_image(link[, ocr_languages]) process_page(page, include_attachments, ...)
process_pages(pages, ...[, ocr_languages]) Process a list of pages into a list of documents. process_pdf(link[, ocr_languages]) process_svg(link[, ocr_languages]) process_xls(link) validate_init_args([url, api_key, username, ...]) Validates proper combinations of init arguments is_public_page(page: dict) → bool[source]¶ Check if a page is publicly accessible. lazy_load() → Iterator[Document]¶ A lazy loader for document content. load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, content_format: ContentFormat = ContentFormat.STORAGE, limit: Optional[int] = 50, max_pages: Optional[int] = 1000, ocr_languages: Optional[str] = None) → List[Document][source]¶ Parameters space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None label (Optional[str], optional) – Get all pages with this label, defaults to None cql (Optional[str], optional) – CQL Expression, defaults to None include_restricted_content (bool, optional) – defaults to False include_archived_content (bool, optional) – Whether to include archived content, defaults to False include_attachments (bool, optional) – defaults to False include_comments (bool, optional) – defaults to False
content_format (ContentFormat) – Specify content format, defaults to ContentFormat.STORAGE limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50 max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults to 1000 ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a language, you’ll first need to install the appropriate Tesseract language pack. Raises ValueError – _description_ ImportError – _description_ Returns _description_ Return type List[Document] load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load documents and split into chunks. paginate_request(retrieval_method: Callable, **kwargs: Any) → List[source]¶ Paginate the various methods to retrieve groups of pages. Unfortunately, due to page size, sometimes the Confluence API doesn’t match the limit value. If limit is >100, Confluence seems to cap the response at 100. Also, due to the Atlassian Python package, we don’t get the “next” values from the “_links” key because they only return the value from the results key. So here, the pagination starts from 0 and goes until the max_pages, getting the limit number of pages with each request. We have to manually check if there are more docs based on the length of the returned list of pages, rather than just checking for the presence of a next key in the response like this page would have you do: https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/ Parameters retrieval_method (callable) – Function used to retrieve docs Returns List of documents Return type
List process_attachment(page_id: str, ocr_languages: Optional[str] = None) → List[str][source]¶ process_doc(link: str) → str[source]¶ process_image(link: str, ocr_languages: Optional[str] = None) → str[source]¶ process_page(page: dict, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None) → Document[source]¶ process_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None) → List[Document][source]¶ Process a list of pages into a list of documents. process_pdf(link: str, ocr_languages: Optional[str] = None) → str[source]¶ process_svg(link: str, ocr_languages: Optional[str] = None) → str[source]¶ process_xls(link: str) → str[source]¶ static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) → Optional[List][source]¶ Validates proper combinations of init arguments
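The length-based pagination scheme that paginate_request describes can be sketched generically. This is an illustrative re-implementation, not the ConfluenceLoader source; the fake retrieval method stands in for an Atlassian client call such as get_all_pages_from_space:

```python
from typing import Any, Callable, List

def paginate(retrieval_method: Callable[..., List[Any]],
             limit: int = 50, max_pages: int = 1000) -> List[Any]:
    """Fetch up to `limit` items per request starting at offset 0, stopping when
    a request returns fewer than `limit` items or `max_pages` items are collected."""
    results: List[Any] = []
    start = 0
    while start < max_pages:
        batch = retrieval_method(start=start, limit=limit)
        results.extend(batch)
        if len(batch) < limit:
            break  # the API exposes no "next" link, so a short page signals the end
        start += len(batch)
    return results[:max_pages]

# A fake retrieval method over 120 items: 50 + 50 + 20 (short page ends the loop).
data = list(range(120))
fetched = paginate(lambda start, limit: data[start:start + limit], limit=50)
```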
langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters¶ class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters[source]¶ Bases: TypedDict Parameters for the embaas document extraction API. Methods __init__(*args, **kwargs) clear() copy() fromkeys([value]) Create a new dictionary with keys from iterable and values set to value. get(key[, default]) Return the value for key if key is in the dictionary, else default. items() keys() pop(k[,d]) If the key is not found, return the default if given; otherwise, raise a KeyError. popitem() Remove and return a (key, value) pair as a 2-tuple. setdefault(key[, default]) Insert key with a value of default if key is not in the dictionary. update([E, ]**F) If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() Attributes mime_type The mime type of the document. file_extension The file extension of the document. file_name The file name of the document. should_chunk Whether to chunk the document into pages. chunk_size The maximum size of the text chunks. chunk_overlap The maximum overlap allowed between chunks. chunk_splitter The text splitter class name for creating chunks. separators The separators for chunks. should_embed Whether to create embeddings for the document in the response. model
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html
The model to pass to the Embaas document extraction API. instruction The instruction to pass to the Embaas document extraction API. clear() → None.  Remove all items from D.¶ copy() → a shallow copy of D¶ fromkeys(value=None, /)¶ Create a new dictionary with keys from iterable and values set to value. get(key, default=None, /)¶ Return the value for key if key is in the dictionary, else default. items() → a set-like object providing a view on D's items¶ keys() → a set-like object providing a view on D's keys¶ pop(k[, d]) → v, remove specified key and return the corresponding value.¶ If the key is not found, return the default if given; otherwise, raise a KeyError. popitem()¶ Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty. setdefault(key, default=None, /)¶ Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. update([E, ]**F) → None.  Update D from dict/iterable E and F.¶ If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ chunk_overlap: int¶ The maximum overlap allowed between chunks. chunk_size: int¶ The maximum size of the text chunks. chunk_splitter: str¶ The text splitter class name for creating chunks. file_extension: str¶ The file extension of the document. file_name: str¶ The file name of the document. instruction: str¶ The instruction to pass to the Embaas document extraction API. mime_type: str¶ The mime type of the document. model: str¶ The model to pass to the Embaas document extraction API. separators: List[str]¶ The separators for chunks. should_chunk: bool¶ Whether to chunk the document into pages. should_embed: bool¶ Whether to create embeddings for the document in the response.
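Because EmbaasDocumentExtractionParameters is a TypedDict, at runtime it is an ordinary dict; only the documented keys are meaningful. A sketch with illustrative values:

```python
# Keys come from the documented attributes above; the values are illustrative.
params = {
    "mime_type": "application/pdf",
    "file_name": "report.pdf",
    "should_chunk": True,
    "chunk_size": 1000,
    "chunk_overlap": 100,
    "should_embed": False,
}
assert params["chunk_size"] > params["chunk_overlap"]
```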
langchain.document_loaders.notebook.concatenate_cells¶ langchain.document_loaders.notebook.concatenate_cells(cell: dict, include_outputs: bool, max_output_length: int, traceback: bool) → str[source]¶ Combine a cell’s information into a readable format ready to be used.
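A simplified re-implementation sketch of what such a helper does, assuming the standard .ipynb JSON cell layout. This is not the langchain source (it omits the traceback handling, for instance), only an illustration of the idea:

```python
def concatenate_cells_sketch(cell: dict, include_outputs: bool,
                             max_output_length: int) -> str:
    """Render a notebook cell's source (and optionally its text outputs,
    truncated to max_output_length) as one readable string."""
    cell_type = cell.get("cell_type", "")
    source = "".join(cell.get("source", []))
    parts = [f"'{cell_type}' cell: '{source}'"]
    if include_outputs:
        for output in cell.get("outputs", []):
            text = "".join(output.get("text", []))[:max_output_length]
            if text:
                parts.append(f"with output: '{text}'")
    return ", ".join(parts) + "\n\n"

cell = {"cell_type": "code", "source": ["print('hi')"],
        "outputs": [{"text": ["hi\n"]}]}
rendered = concatenate_cells_sketch(cell, include_outputs=True, max_output_length=20)
```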
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.concatenate_cells.html
langchain.document_loaders.embaas.BaseEmbaasLoader¶ class langchain.document_loaders.embaas.BaseEmbaasLoader(*, embaas_api_key: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/', params: EmbaasDocumentExtractionParameters = {})[source]¶ Bases: BaseModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶ The URL of the embaas document extraction API. param embaas_api_key: Optional[str] = None¶ param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶ Additional parameters to pass to the embaas document extraction API. validator validate_environment  »  all fields[source]¶ Validate that the API key and python package exist in the environment.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html