5e3c7a67af38-0
langchain.document_loaders.geodataframe.GeoDataFrameLoader¶ class langchain.document_loaders.geodataframe.GeoDataFrameLoader(data_frame: Any, page_content_column: str = 'geometry')[source]¶ Load a geopandas DataFrame. Initialize with a geopandas DataFrame. Parameters data_frame – geopandas DataFrame object. page_content_column – Name of the column containing the page content. Defaults to “geometry”. Methods __init__(data_frame[, page_content_column]) Initialize with a geopandas DataFrame. lazy_load() Lazy load records from the dataframe. load() Load the full dataframe. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(data_frame: Any, page_content_column: str = 'geometry')[source]¶ Initialize with a geopandas DataFrame. Parameters data_frame – geopandas DataFrame object. page_content_column – Name of the column containing the page content. Defaults to “geometry”. lazy_load() → Iterator[Document][source]¶ Lazy load records from the dataframe. load() → List[Document][source]¶ Load the full dataframe. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using GeoDataFrameLoader¶ Geopandas
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.geodataframe.GeoDataFrameLoader.html
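A minimal usage sketch, assuming geopandas is installed; the GeoJSON path is hypothetical:

```python
import geopandas as gpd
from langchain.document_loaders import GeoDataFrameLoader

# Read any vector file geopandas understands; the path is a placeholder.
gdf = gpd.read_file("neighborhoods.geojson")

# Each row's "geometry" value becomes a Document's page_content.
loader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")
docs = loader.load()
```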
0bc2cb985d53-0
langchain.document_loaders.parsers.language.python.PythonSegmenter¶ class langchain.document_loaders.parsers.language.python.PythonSegmenter(code: str)[source]¶ The code segmenter for Python. Methods __init__(code) extract_functions_classes() is_valid() simplify_code() __init__(code: str)[source]¶ extract_functions_classes() → List[str][source]¶ is_valid() → bool[source]¶ simplify_code() → str[source]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.python.PythonSegmenter.html
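A short sketch of how the segmenter's methods fit together, on an inline code string:

```python
from langchain.document_loaders.parsers.language.python import PythonSegmenter

code = """\
def greet(name):
    return f"Hello, {name}!"

class Greeter:
    def shout(self, name):
        return greet(name).upper()
"""

segmenter = PythonSegmenter(code)
if segmenter.is_valid():
    # One source string per top-level function or class.
    for segment in segmenter.extract_functions_classes():
        print(segment)
    # The module with the extracted definitions elided.
    print(segmenter.simplify_code())
```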
33cdae6f3d7f-0
langchain.document_loaders.parsers.audio.OpenAIWhisperParser¶ class langchain.document_loaders.parsers.audio.OpenAIWhisperParser(api_key: Optional[str] = None)[source]¶ Transcribe and parse audio files. Audio transcription is performed with the OpenAI Whisper model. Methods __init__([api_key]) lazy_parse(blob) Lazily parse the blob. parse(blob) Eagerly parse the blob into a document or documents. __init__(api_key: Optional[str] = None)[source]¶ lazy_parse(blob: Blob) → Iterator[Document][source]¶ Lazily parse the blob. parse(blob: Blob) → List[Document]¶ Eagerly parse the blob into a document or documents. This is a convenience method for interactive development environments. Production applications should favor the lazy_parse method instead. Subclasses should generally not override this parse method. Parameters blob – Blob instance Returns List of documents Examples using OpenAIWhisperParser¶ Loading documents from a YouTube url
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParser.html
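A hedged sketch; the audio path is hypothetical, and the key may also come from the OPENAI_API_KEY environment variable:

```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.audio import OpenAIWhisperParser

parser = OpenAIWhisperParser(api_key="sk-...")  # placeholder key
blob = Blob.from_path("interview.mp3")          # hypothetical file

# Prefer lazy_parse in production; it yields Documents one at a time.
for doc in parser.lazy_parse(blob):
    print(doc.page_content[:80])
```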
c332b3211273-0
langchain.document_loaders.open_city_data.OpenCityDataLoader¶ class langchain.document_loaders.open_city_data.OpenCityDataLoader(city_id: str, dataset_id: str, limit: int)[source]¶ Loads Open City data. Initialize with dataset_id. Example: https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6 e.g., city_id = data.sfgov.org e.g., dataset_id = vw6y-z8j6 Parameters city_id – The Open City city identifier. dataset_id – The Open City dataset identifier. limit – The maximum number of documents to load. Methods __init__(city_id, dataset_id, limit) Initialize with dataset_id. lazy_load() Lazy load records. load() Load records. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(city_id: str, dataset_id: str, limit: int)[source]¶ Initialize with dataset_id. Example: https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6 e.g., city_id = data.sfgov.org e.g., dataset_id = vw6y-z8j6 Parameters city_id – The Open City city identifier. dataset_id – The Open City dataset identifier. limit – The maximum number of documents to load. lazy_load() → Iterator[Document][source]¶ Lazy load records. load() → List[Document][source]¶ Load records. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.open_city_data.OpenCityDataLoader.html
c332b3211273-1
Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using OpenCityDataLoader¶ Geopandas Open City Data
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.open_city_data.OpenCityDataLoader.html
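Following the Socrata identifiers given in the docstring above, a minimal sketch:

```python
from langchain.document_loaders import OpenCityDataLoader

# San Francisco's dataset vw6y-z8j6 on data.sfgov.org, as in the docstring.
loader = OpenCityDataLoader(
    city_id="data.sfgov.org",
    dataset_id="vw6y-z8j6",
    limit=5,
)
docs = loader.load()
```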
f7b1a950a6de-0
langchain.document_loaders.notebook.concatenate_cells¶ langchain.document_loaders.notebook.concatenate_cells(cell: dict, include_outputs: bool, max_output_length: int, traceback: bool) → str[source]¶ Combine cell information into a readable format ready to be used. Parameters cell – A dictionary with the cell's contents. include_outputs – Whether to include the outputs of the cell. max_output_length – Maximum length of the output to be displayed. traceback – Whether to return a traceback of the error. Returns A string with the cell information.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.concatenate_cells.html
a08213eee1c5-0
langchain.document_loaders.weather.WeatherDataLoader¶ class langchain.document_loaders.weather.WeatherDataLoader(client: OpenWeatherMapAPIWrapper, places: Sequence[str])[source]¶ Weather Reader. Reads the forecast & current weather of any location using OpenWeatherMap’s free API. Check out https://openweathermap.org/appid for more on how to generate a free OpenWeatherMap API key. Initialize with parameters. Methods __init__(client, places) Initialize with parameters. from_params(places, *[, openweathermap_api_key]) lazy_load() Lazily load weather data for the given locations. load() Load weather data for the given locations. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(client: OpenWeatherMapAPIWrapper, places: Sequence[str]) → None[source]¶ Initialize with parameters. classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) → WeatherDataLoader[source]¶ lazy_load() → Iterator[Document][source]¶ Lazily load weather data for the given locations. load() → List[Document][source]¶ Load weather data for the given locations. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using WeatherDataLoader¶ Weather
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.weather.WeatherDataLoader.html
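A minimal sketch using the from_params constructor; the key is a placeholder and may also be supplied via the OPENWEATHERMAP_API_KEY environment variable:

```python
from langchain.document_loaders import WeatherDataLoader

loader = WeatherDataLoader.from_params(
    places=["London", "Tokyo"],
    openweathermap_api_key="your-openweathermap-key",  # placeholder
)
docs = loader.load()  # weather Documents for the given places
```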
5a72470c7966-0
langchain.document_loaders.parsers.registry.get_parser¶ langchain.document_loaders.parsers.registry.get_parser(parser_name: str) → BaseBlobParser[source]¶ Get a parser by parser name.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.registry.get_parser.html
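A one-line sketch; "default" is assumed here to be a registered parser name:

```python
from langchain.document_loaders.parsers.registry import get_parser

# Returns a BaseBlobParser; an unknown name raises an error.
parser = get_parser("default")
```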
18cac18ff750-0
langchain.document_loaders.apify_dataset.ApifyDatasetLoader¶ class langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]¶ Bases: BaseLoader, BaseModel Loads datasets from Apify, a web scraping, crawling, and data extraction platform. For details, see https://docs.apify.com/platform/integrations/langchain Example from langchain.document_loaders import ApifyDatasetLoader from langchain.schema import Document loader = ApifyDatasetLoader( dataset_id="YOUR-DATASET-ID", dataset_mapping_function=lambda dataset_item: Document( page_content=dataset_item["text"], metadata={"source": dataset_item["url"]} ), ) documents = loader.load() Initialize the loader with an Apify dataset ID and a mapping function. Parameters dataset_id (str) – The ID of the dataset on the Apify platform. dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. param apify_client: Any = None¶ An instance of the ApifyClient class from the apify-client Python package. param dataset_id: str [Required]¶ The ID of the dataset on the Apify platform. param dataset_mapping_function: Callable[[Dict], langchain.schema.document.Document] [Required]¶ A custom function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html
18cac18ff750-1
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html
18cac18ff750-2
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html
18cac18ff750-3
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using ApifyDatasetLoader¶ Apify Apify Dataset
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html
91542bba344b-0
langchain.document_loaders.pdf.MathpixPDFLoader¶ class langchain.document_loaders.pdf.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]¶ This class uses the Mathpix service to load PDF files. Initialize with a file path. Parameters file_path – the path of the file to load. processed_file_format – the format of the processed file. Default is “mmd”. max_wait_time_seconds – the maximum time to wait for the response from the server. Default is 500. should_clean_pdf – a flag to clean the PDF file. Default is False. **kwargs – additional keyword arguments. Attributes data headers source url Methods __init__(file_path[, processed_file_format, ...]) Initialize with a file path. clean_pdf(contents) Clean the PDF file. get_processed_pdf(pdf_id) lazy_load() A lazy loader for Documents. load() Load data into Document objects. load_and_split([text_splitter]) Load Documents and split into chunks. send_pdf() wait_for_processing(pdf_id) Wait for processing to complete. __init__(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any) → None[source]¶ Initialize with a file path. Parameters file_path – the path of the file to load. processed_file_format – the format of the processed file. Default is “mmd”. max_wait_time_seconds – the maximum time to wait for the response from the server. Default is 500. should_clean_pdf – a flag to clean the PDF file. Default is False.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html
91542bba344b-1
**kwargs – additional keyword arguments. clean_pdf(contents: str) → str[source]¶ Clean the PDF file. Parameters contents – the PDF file contents. Returns: get_processed_pdf(pdf_id: str) → str[source]¶ lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load data into Document objects. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. send_pdf() → str[source]¶ wait_for_processing(pdf_id: str) → None[source]¶ Wait for processing to complete. Parameters pdf_id – a PDF id. Returns: None
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html
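A hedged sketch; the PDF path is hypothetical, and Mathpix credentials are assumed to be available in the environment (e.g. MATHPIX_API_KEY):

```python
from langchain.document_loaders import MathpixPDFLoader

loader = MathpixPDFLoader(
    "paper.pdf",                   # hypothetical file
    processed_file_format="mmd",   # Mathpix Markdown, the default
    should_clean_pdf=True,
)
docs = loader.load()
```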
a23691935008-0
langchain.document_loaders.etherscan.EtherscanLoader¶ class langchain.document_loaders.etherscan.EtherscanLoader(account_address: str, api_key: str = 'docs-demo', filter: str = 'normal_transaction', page: int = 1, offset: int = 10, start_block: int = 0, end_block: int = 99999999, sort: str = 'desc')[source]¶ Load transactions from an account on Ethereum mainnet. The loader uses the Etherscan API to interact with Ethereum mainnet. The ETHERSCAN_API_KEY environment variable must be set to use this loader. Methods __init__(account_address[, api_key, filter, ...]) getERC1155Tx() getERC20Tx() getERC721Tx() getEthBalance() getInternalTx() getNormTx() lazy_load() Lazy load Documents from table. load() Load transactions from a specific account via Etherscan. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(account_address: str, api_key: str = 'docs-demo', filter: str = 'normal_transaction', page: int = 1, offset: int = 10, start_block: int = 0, end_block: int = 99999999, sort: str = 'desc')[source]¶ getERC1155Tx() → List[Document][source]¶ getERC20Tx() → List[Document][source]¶ getERC721Tx() → List[Document][source]¶ getEthBalance() → List[Document][source]¶ getInternalTx() → List[Document][source]¶ getNormTx() → List[Document][source]¶ lazy_load() → Iterator[Document][source]¶ Lazy load Documents from table.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.etherscan.EtherscanLoader.html
a23691935008-1
load() → List[Document][source]¶ Load transactions from a specific account via Etherscan. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using EtherscanLoader¶ Etherscan Loader
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.etherscan.EtherscanLoader.html
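A sketch under the stated requirement that ETHERSCAN_API_KEY is set; the key and account address are placeholders:

```python
import os
from langchain.document_loaders import EtherscanLoader

os.environ["ETHERSCAN_API_KEY"] = "your-etherscan-key"  # placeholder

loader = EtherscanLoader(
    account_address="0x0000000000000000000000000000000000000000",  # placeholder
    filter="normal_transaction",
    offset=10,
)
docs = loader.load()
```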
ea31d46f5c30-0
langchain.document_loaders.mhtml.MHTMLLoader¶ class langchain.document_loaders.mhtml.MHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶ Loader that uses Beautiful Soup to parse MHTML files. Initialize with path, and optionally, the file encoding to use and any kwargs to pass to the BeautifulSoup object. Parameters file_path – Path to file to load. open_encoding – The encoding to use when opening the file. bs_kwargs – Any kwargs to pass to the BeautifulSoup object. get_text_separator – The separator to use when getting the text from the soup. Methods __init__(file_path[, open_encoding, ...]) Initialize with path, and optionally, the file encoding to use and any kwargs to pass to the BeautifulSoup object. lazy_load() A lazy loader for Documents. load() Load data into Document objects. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '') → None[source]¶ Initialize with path, and optionally, the file encoding to use and any kwargs to pass to the BeautifulSoup object. Parameters file_path – Path to file to load. open_encoding – The encoding to use when opening the file. bs_kwargs – Any kwargs to pass to the BeautifulSoup object. get_text_separator – The separator to use when getting the text from the soup. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load data into Document objects.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html
ea31d46f5c30-1
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using MHTMLLoader¶ mhtml
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html
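A minimal sketch; the file path is hypothetical:

```python
from langchain.document_loaders import MHTMLLoader

# bs_kwargs is forwarded to BeautifulSoup; get_text_separator joins
# the text fragments extracted from the soup.
loader = MHTMLLoader("saved_page.mhtml", get_text_separator="\n")
docs = loader.load()
```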
ac6a33bbd1cd-0
langchain.document_loaders.epub.UnstructuredEPubLoader¶ class langchain.document_loaders.epub.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶ Loader that uses Unstructured to load EPUB files. You can run the loader in one of two modes: “single” and “elements”. If you use “single” mode, the document will be returned as a single langchain Document object. If you use “elements” mode, the unstructured library will split the document into elements such as Title and NarrativeText. You can pass in additional unstructured kwargs after mode to apply different unstructured settings. Examples from langchain.document_loaders import UnstructuredEPubLoader loader = UnstructuredEPubLoader("example.epub", mode="elements", strategy="fast",) docs = loader.load() References https://unstructured-io.github.io/unstructured/bricks.html#partition-epub Initialize with file path. Methods __init__(file_path[, mode]) Initialize with file path. lazy_load() A lazy loader for Documents. load() Load file. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶ Initialize with file path. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document]¶ Load file. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html
ac6a33bbd1cd-1
Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using UnstructuredEPubLoader¶ EPub
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html
3bfb1a3f5907-0
langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader¶ class langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(urls: List[str], save_dir: str)[source]¶ Load YouTube URLs as audio files. Methods __init__(urls, save_dir) yield_blobs() Yield audio blobs for each URL. __init__(urls: List[str], save_dir: str)[source]¶ yield_blobs() → Iterable[Blob][source]¶ Yield audio blobs for each URL. Examples using YoutubeAudioLoader¶ Loading documents from a YouTube url
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader.html
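A sketch; the URL and save directory are placeholders, and downloading requires the yt_dlp package:

```python
from langchain.document_loaders.blob_loaders import YoutubeAudioLoader

loader = YoutubeAudioLoader(
    urls=["https://www.youtube.com/watch?v=..."],  # placeholder URL
    save_dir="./audio",
)

# Each downloaded audio file is yielded as a Blob, ready to hand to a
# parser such as OpenAIWhisperParser (see above).
for blob in loader.yield_blobs():
    print(blob.source)
```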
5fd1c0bd7cba-0
langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader¶ class langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader(url: str, max_depth: Optional[int] = None, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, exclude_dirs: Optional[str] = None, timeout: Optional[int] = None, prevent_outside: Optional[bool] = None)[source]¶ Loads all child links from a given url. Initialize with the URL to crawl and any subdirectories to exclude. Parameters url – The URL to crawl. exclude_dirs – A list of subdirectories to exclude. use_async – Whether to use asynchronous loading. If True, this function will not be lazy, but it will still work in the expected way, just not lazy. extractor – A function to extract the text from the HTML; when the extract function returns an empty string, the document is ignored. max_depth – The max depth of the recursive loading. timeout – The timeout for the requests, in seconds. Methods __init__(url[, max_depth, use_async, ...]) Initialize with the URL to crawl and any subdirectories to exclude. lazy_load() Lazy load web pages. load() Load web pages. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(url: str, max_depth: Optional[int] = None, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, exclude_dirs: Optional[str] = None, timeout: Optional[int] = None, prevent_outside: Optional[bool] = None) → None[source]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html
5fd1c0bd7cba-1
Initialize with the URL to crawl and any subdirectories to exclude. Parameters url – The URL to crawl. exclude_dirs – A list of subdirectories to exclude. use_async – Whether to use asynchronous loading. If True, this function will not be lazy, but it will still work in the expected way, just not lazy. extractor – A function to extract the text from the HTML; when the extract function returns an empty string, the document is ignored. max_depth – The max depth of the recursive loading. timeout – The timeout for the requests, in seconds. lazy_load() → Iterator[Document][source]¶ Lazy load web pages. When use_async is True, this function will not be lazy, but it will still work in the expected way, just not lazy. load() → List[Document][source]¶ Load web pages. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using RecursiveUrlLoader¶ Recursive URL Loader
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html
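A sketch that crawls two levels deep and strips HTML with BeautifulSoup; the target URL is just an example:

```python
from bs4 import BeautifulSoup
from langchain.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://docs.python.org/3.9/",
    max_depth=2,
    # Pages whose extractor output is empty are ignored.
    extractor=lambda html: BeautifulSoup(html, "html.parser").text,
)
docs = loader.load()
```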
6ce154890fd3-0
langchain.document_loaders.psychic.PsychicLoader¶ class langchain.document_loaders.psychic.PsychicLoader(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶ Loads documents from Psychic.dev. Initialize with API key, connector id, and account id. Parameters api_key – The Psychic API key. account_id – The Psychic account id. connector_id – The Psychic connector id. Methods __init__(api_key, account_id[, connector_id]) Initialize with API key, connector id, and account id. lazy_load() A lazy loader for Documents. load() Load documents. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶ Initialize with API key, connector id, and account id. Parameters api_key – The Psychic API key. account_id – The Psychic account id. connector_id – The Psychic connector id. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using PsychicLoader¶ Psychic
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.psychic.PsychicLoader.html
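A minimal sketch; all three values are placeholders taken from a Psychic dashboard:

```python
from langchain.document_loaders import PsychicLoader

loader = PsychicLoader(
    api_key="your-psychic-api-key",
    account_id="your-account-id",
    connector_id="your-connector-id",
)
docs = loader.load()
```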
759cf4a4a2fa-0
langchain.document_loaders.python.PythonLoader¶ class langchain.document_loaders.python.PythonLoader(file_path: str)[source]¶ Load Python files, respecting any non-default encoding if specified. Initialize with a file path. Parameters file_path – The path to the file to load. Methods __init__(file_path) Initialize with a file path. lazy_load() A lazy loader for Documents. load() Load from file path. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(file_path: str)[source]¶ Initialize with a file path. Parameters file_path – The path to the file to load. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document]¶ Load from file path. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.python.PythonLoader.html
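A minimal sketch; the path is hypothetical:

```python
from langchain.document_loaders import PythonLoader

# Reads the .py file as text, respecting a non-default encoding
# declaration if the file carries one.
loader = PythonLoader("my_script.py")
docs = loader.load()
```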
755f243d2b47-0
langchain.document_loaders.discord.DiscordChatLoader¶ class langchain.document_loaders.discord.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]¶ Load Discord chat logs. Initialize with a Pandas DataFrame containing chat logs. Parameters chat_log – Pandas DataFrame containing chat logs. user_id_col – Name of the column containing the user ID. Defaults to “ID”. Methods __init__(chat_log[, user_id_col]) Initialize with a Pandas DataFrame containing chat logs. lazy_load() A lazy loader for Documents. load() Load all chat messages. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]¶ Initialize with a Pandas DataFrame containing chat logs. Parameters chat_log – Pandas DataFrame containing chat logs. user_id_col – Name of the column containing the user ID. Defaults to “ID”. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load all chat messages. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using DiscordChatLoader¶ Discord
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.discord.DiscordChatLoader.html
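A sketch with a stand-in DataFrame; the column layout beyond the user-id column is an assumption here, since the loader accepts whatever columns the chat export has:

```python
import pandas as pd
from langchain.document_loaders import DiscordChatLoader

chat_log = pd.DataFrame(
    {"ID": ["alice", "bob"], "content": ["hi there", "hello!"]}
)
loader = DiscordChatLoader(chat_log, user_id_col="ID")
docs = loader.load()
```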
e67f6f5b23d5-0
langchain.document_loaders.airbyte.AirbyteTypeformLoader¶ class langchain.document_loaders.airbyte.AirbyteTypeformLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶ Methods __init__(config, stream_name[, ...]) lazy_load() A lazy loader for Documents. load() Load data into Document objects. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶ lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document]¶ Load data into Document objects. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteTypeformLoader.html
a95a8531fbc7-0
langchain.document_loaders.roam.RoamLoader¶ class langchain.document_loaders.roam.RoamLoader(path: str)[source]¶ Loads Roam files from disk. Initialize with a path. Methods __init__(path) Initialize with a path. lazy_load() A lazy loader for Documents. load() Load documents. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(path: str)[source]¶ Initialize with a path. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using RoamLoader¶ Roam
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.roam.RoamLoader.html
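A minimal sketch pointing at a directory of exported Roam files; the path is hypothetical:

```python
from langchain.document_loaders import RoamLoader

loader = RoamLoader("Roam_DB")  # directory of exported Roam pages
docs = loader.load()
```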
01a20fde4033-0
langchain.document_loaders.youtube.GoogleApiClient¶ class langchain.document_loaders.youtube.GoogleApiClient(credentials_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: Path = PosixPath('/home/docs/.credentials/token.json'))[source]¶ A Generic Google Api Client. To use, you should have the google_auth_oauthlib, youtube_transcript_api, and google Python packages installed. As the Google API expects credentials, you need to set up a Google account and register your service: https://developers.google.com/docs/api/quickstart/python Example from langchain.document_loaders import GoogleApiClient google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) Attributes credentials_path service_account_path token_path Methods __init__([credentials_path, ...]) validate_channel_or_videoIds_is_set(values) Validate that either folder_id or document_ids is set, but not both. __init__(credentials_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: Path = PosixPath('/home/docs/.credentials/token.json')) → None¶ classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]¶ Validate that either folder_id or document_ids is set, but not both. Examples using GoogleApiClient¶ YouTube transcripts
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiClient.html
3906a9c2dde3-0
langchain.document_loaders.parsers.grobid.ServerUnavailableException¶ class langchain.document_loaders.parsers.grobid.ServerUnavailableException[source]¶ Exception raised when the GROBID server is unavailable.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.ServerUnavailableException.html
348546c68ee0-0
langchain.document_loaders.markdown.UnstructuredMarkdownLoader¶ class langchain.document_loaders.markdown.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶ Loader that uses Unstructured to load markdown files. You can run the loader in one of two modes: “single” and “elements”. If you use “single” mode, the document will be returned as a single langchain Document object. If you use “elements” mode, the unstructured library will split the document into elements such as Title and NarrativeText. You can pass in additional unstructured kwargs after mode to apply different unstructured settings. Examples from langchain.document_loaders import UnstructuredMarkdownLoader loader = UnstructuredMarkdownLoader("example.md", mode="elements", strategy="fast",) docs = loader.load() References https://unstructured-io.github.io/unstructured/bricks.html#partition-md Initialize with file path. Methods __init__(file_path[, mode]) Initialize with file path. lazy_load() A lazy loader for Documents. load() Load file. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶ Initialize with file path. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document]¶ Load file. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html
348546c68ee0-1
List of Documents. Examples using UnstructuredMarkdownLoader¶ StarRocks
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html
71c094b5b30b-0
langchain.document_loaders.ifixit.IFixitLoader¶ class langchain.document_loaders.ifixit.IFixitLoader(web_path: str)[source]¶ Load iFixit repair guides, device wikis and answers. iFixit is the largest open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY. This loader will allow you to download the text of a repair guide, text of Q&As and wikis from devices on iFixit using their open APIs and web scraping. Initialize with a web path. Methods __init__(web_path) Initialize with a web path. lazy_load() A lazy loader for Documents. load() Load data into Document objects. load_and_split([text_splitter]) Load Documents and split into chunks. load_device([url_override, include_guides]) Loads a device. load_guide([url_override]) Load a guide. load_questions_and_answers([url_override]) Load a list of questions and answers. load_suggestions([query, doc_type]) Load suggestions. __init__(web_path: str)[source]¶ Initialize with a web path. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load data into Document objects. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html
71c094b5b30b-1
load_device(url_override: Optional[str] = None, include_guides: bool = True) → List[Document][source]¶ Loads a device. Parameters url_override – A URL to override the default URL. include_guides – Whether to include guides linked to from the device. Defaults to True. Returns: load_guide(url_override: Optional[str] = None) → List[Document][source]¶ Load a guide. Parameters url_override – A URL to override the default URL. Returns: List[Document] load_questions_and_answers(url_override: Optional[str] = None) → List[Document][source]¶ Load a list of questions and answers. Parameters url_override – A URL to override the default URL. Returns: List[Document] static load_suggestions(query: str = '', doc_type: str = 'all') → List[Document][source]¶ Load suggestions. Parameters query – A query string doc_type – The type of document to search for. Can be one of “all”, “device”, “guide”, “teardown”, “answer”, “wiki”. Returns: Examples using IFixitLoader¶ iFixit
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html
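A sketch using a public iFixit page plus the static suggestion search:

```python
from langchain.document_loaders import IFixitLoader

# Any iFixit device, guide, or answers URL works as the web path.
loader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")
docs = loader.load()

# Search-based loading needs no web path.
suggestions = IFixitLoader.load_suggestions("banana", doc_type="device")
```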
93f27bb2a47b-0
langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters¶ class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters[source]¶ Parameters for the embaas document extraction API. Attributes mime_type The mime type of the document. file_extension The file extension of the document. file_name The file name of the document. should_chunk Whether to chunk the document into pages. chunk_size The maximum size of the text chunks. chunk_overlap The maximum overlap allowed between chunks. chunk_splitter The text splitter class name for creating chunks. separators The separators for chunks. should_embed Whether to create embeddings for the document in the response. model The model to pass to the Embaas document extraction API. instruction The instruction to pass to the Embaas document extraction API. Methods __init__(*args, **kwargs) clear() copy() fromkeys([value]) Create a new dictionary with keys from iterable and values set to value. get(key[, default]) Return the value for key if key is in the dictionary, else default. items() keys() pop(k[,d]) If the key is not found, return the default if given; otherwise, raise a KeyError. popitem() Remove and return a (key, value) pair as a 2-tuple. setdefault(key[, default]) Insert key with a value of default if key is not in the dictionary. update([E, ]**F)
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html
93f27bb2a47b-1
update([E, ]**F) If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() __init__(*args, **kwargs)¶ clear() → None.  Remove all items from D.¶ copy() → a shallow copy of D¶ fromkeys(value=None, /)¶ Create a new dictionary with keys from iterable and values set to value. get(key, default=None, /)¶ Return the value for key if key is in the dictionary, else default. items() → a set-like object providing a view on D's items¶ keys() → a set-like object providing a view on D's keys¶ pop(k[, d]) → v, remove specified key and return the corresponding value.¶ If the key is not found, return the default if given; otherwise, raise a KeyError. popitem()¶ Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty. setdefault(key, default=None, /)¶ Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. update([E, ]**F) → None.  Update D from dict/iterable E and F.¶ If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html
93f27bb2a47b-2
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() → an object providing a view on D's values¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html
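Since this class behaves like a dict of API parameters, a sketch of building one; the field values here are purely illustrative:

```python
from langchain.document_loaders.embaas import EmbaasDocumentExtractionParameters

params = EmbaasDocumentExtractionParameters(
    mime_type="application/pdf",
    should_chunk=True,
    chunk_size=1000,
    chunk_overlap=100,
)
```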
b2bc01c3c09f-0
langchain.document_loaders.googledrive.GoogleDriveLoader¶ class langchain.document_loaders.googledrive.GoogleDriveLoader[source]¶ Bases: BaseLoader, BaseModel Loads Google Docs from Google Drive. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')¶ Path to the credentials file. param document_ids: Optional[List[str]] = None¶ The document ids to load from. param file_ids: Optional[List[str]] = None¶ The file ids to load from. param file_loader_cls: Any = None¶ The file loader class to use. param file_loader_kwargs: Dict[str, Any] = {}¶ The file loader kwargs to use. param file_types: Optional[Sequence[str]] = None¶ The file types to load. Only applies when folder_id is given. param folder_id: Optional[str] = None¶ The folder id to load from. param load_trashed_files: bool = False¶ Whether to load trashed files. Only applies when folder_id is given. param recursive: bool = False¶ Whether to load recursively. Only applies when folder_id is given. param service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')¶ Path to the service account key file. param token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')¶ Path to the token file. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html
b2bc01c3c09f-1
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html
b2bc01c3c09f-2
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html
b2bc01c3c09f-3
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using GoogleDriveLoader¶ Google Drive
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html
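A minimal sketch; the folder id is a placeholder, and credential paths default to the locations listed above:

```python
from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="your-google-drive-folder-id",
    recursive=False,
)
docs = loader.load()
```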
26db59d14daa-0
langchain.document_loaders.onedrive_file.OneDriveFileLoader¶ class langchain.document_loaders.onedrive_file.OneDriveFileLoader[source]¶ Bases: BaseLoader, BaseModel Loads a file from OneDrive. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param file: File [Required]¶ The file to load. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html
26db59d14daa-1
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load Documents load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html
26db59d14daa-2
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html
a86b5e167d9b-0
langchain.document_loaders.obs_file.OBSFileLoader¶ class langchain.document_loaders.obs_file.OBSFileLoader(bucket: str, key: str, client: Any = None, endpoint: str = '', config: Optional[dict] = None)[source]¶ Loader for Huawei OBS file. Initialize the OBSFileLoader with the specified settings. Parameters bucket (str) – The name of the OBS bucket to be used. key (str) – The name of the object in the OBS bucket. client (ObsClient, optional) – An instance of the ObsClient to connect to OBS. endpoint (str, optional) – The endpoint URL of your OBS bucket. This parameter is mandatory if client is not provided. config (dict, optional) – The parameters for connecting to OBS, provided as a dictionary. This parameter is ignored if client is provided. The dictionary could have the following keys: - “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read). - “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read). - “token” (str, optional): Your security token (required if using temporary credentials). - “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored. Raises ValueError – If the esdk-obs-python package is not installed. TypeError – If the provided client is not an instance of ObsClient. ValueError – If client is not provided, but endpoint is missing. Note
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_file.OBSFileLoader.html
a86b5e167d9b-1
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials. Example To create a new OBSFileLoader with a new client: ``` config = { "ak": "your-access-key", "sk": "your-secret-key" } obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", config=config) ``` To create a new OBSFileLoader with an existing client: ``` from obs import ObsClient # Assuming you have an existing ObsClient object 'obs_client' obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client) ``` To create a new OBSFileLoader without an existing client: ` obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint="your-endpoint-url") ` Methods __init__(bucket, key[, client, endpoint, config]) Initialize the OBSFileLoader with the specified settings. lazy_load() A lazy loader for Documents. load() Load documents. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(bucket: str, key: str, client: Any = None, endpoint: str = '', config: Optional[dict] = None) → None[source]¶ Initialize the OBSFileLoader with the specified settings. Parameters bucket (str) – The name of the OBS bucket to be used. key (str) – The name of the object in the OBS bucket.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_file.OBSFileLoader.html
a86b5e167d9b-2
client (ObsClient, optional) – An instance of the ObsClient to connect to OBS. endpoint (str, optional) – The endpoint URL of your OBS bucket. This parameter is mandatory if client is not provided. config (dict, optional) – The parameters for connecting to OBS, provided as a dictionary. This parameter is ignored if client is provided. The dictionary could have the following keys: - “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read). - “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read). - “token” (str, optional): Your security token (required if using temporary credentials). - “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored. Raises ValueError – If the esdk-obs-python package is not installed. TypeError – If the provided client is not an instance of ObsClient. ValueError – If client is not provided, but endpoint is missing. Note Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials. Example To create a new OBSFileLoader with a new client: ``` config = { "ak": "your-access-key", "sk": "your-secret-key" } obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", config=config) ```
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_file.OBSFileLoader.html
a86b5e167d9b-3
To create a new OBSFileLoader with an existing client: ``` from obs import ObsClient # Assuming you have an existing ObsClient object 'obs_client' obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client) ``` To create a new OBSFileLoader without an existing client: ` obs_loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint="your-endpoint-url") ` lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_file.OBSFileLoader.html
d05a5b3f685e-0
langchain.document_loaders.pdf.PDFMinerLoader¶ class langchain.document_loaders.pdf.PDFMinerLoader(file_path: str)[source]¶ Loader that uses PDFMiner to load PDF files. Initialize with file path. Attributes source Methods __init__(file_path) Initialize with file path. lazy_load() Lazily load documents. load() Eagerly load the content. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(file_path: str) → None[source]¶ Initialize with file path. lazy_load() → Iterator[Document][source]¶ Lazily load documents. load() → List[Document][source]¶ Eagerly load the content. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents.
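A short usage sketch (the file path is a placeholder; the pdfminer.six package must be installed):

```
from langchain.document_loaders.pdf import PDFMinerLoader

loader = PDFMinerLoader("example.pdf")  # hypothetical local PDF
docs = loader.load()                    # eager load
for doc in loader.lazy_load():          # or stream documents lazily
    print(doc.metadata["source"])
```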
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerLoader.html
05b8d4036e01-0
langchain.document_loaders.datadog_logs.DatadogLogsLoader¶ class langchain.document_loaders.datadog_logs.DatadogLogsLoader(query: str, api_key: str, app_key: str, from_time: Optional[int] = None, to_time: Optional[int] = None, limit: int = 100)[source]¶ Loads a query result from Datadog into a list of documents. Logs are written into the page_content and into the metadata. Initialize Datadog document loader. Requirements: Must have datadog_api_client installed. Install with pip install datadog_api_client. Parameters query – The query to run in Datadog. api_key – The Datadog API key. app_key – The Datadog APP key. from_time – Optional. The start of the time range to query. Supports date math and regular timestamps (milliseconds) like ‘1688732708951’ Defaults to 20 minutes ago. to_time – Optional. The end of the time range to query. Supports date math and regular timestamps (milliseconds) like ‘1688732708951’ Defaults to now. limit – The maximum number of logs to return. Defaults to 100. Methods __init__(query, api_key, app_key[, ...]) Initialize Datadog document loader. lazy_load() A lazy loader for Documents. load() Get logs from Datadog. load_and_split([text_splitter]) Load Documents and split into chunks. parse_log(log) Create Document objects from Datadog log items. __init__(query: str, api_key: str, app_key: str, from_time: Optional[int] = None, to_time: Optional[int] = None, limit: int = 100) → None[source]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html
05b8d4036e01-1
Initialize Datadog document loader. Requirements: Must have datadog_api_client installed. Install with pip install datadog_api_client. Parameters query – The query to run in Datadog. api_key – The Datadog API key. app_key – The Datadog APP key. from_time – Optional. The start of the time range to query. Supports date math and regular timestamps (milliseconds) like ‘1688732708951’ Defaults to 20 minutes ago. to_time – Optional. The end of the time range to query. Supports date math and regular timestamps (milliseconds) like ‘1688732708951’ Defaults to now. limit – The maximum number of logs to return. Defaults to 100. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document][source]¶ Get logs from Datadog. Returns A list of Document objects. page_content metadata id service status tags timestamp load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. parse_log(log: dict) → Document[source]¶ Create Document objects from Datadog log items. Examples using DatadogLogsLoader¶ Datadog Logs
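A hedged sketch of pulling recent error logs (the query and both keys are placeholders):

```
from langchain.document_loaders.datadog_logs import DatadogLogsLoader

# requires: pip install datadog_api_client
loader = DatadogLogsLoader(
    query="service:agent status:error",  # hypothetical Datadog query
    api_key="YOUR_DD_API_KEY",
    app_key="YOUR_DD_APP_KEY",
    limit=50,
)
documents = loader.load()  # content in page_content; id, service, status, tags, timestamp in metadata
```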
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html
cf6ce09267d7-0
langchain.document_loaders.rtf.UnstructuredRTFLoader¶ class langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶ Loader that uses unstructured to load RTF files. You can run the loader in one of two modes: “single” and “elements”. If you use “single” mode, the document will be returned as a single langchain Document object. If you use “elements” mode, the unstructured library will split the document into elements such as Title and NarrativeText. You can pass in additional unstructured kwargs after mode to apply different unstructured settings. Examples from langchain.document_loaders import UnstructuredRTFLoader loader = UnstructuredRTFLoader("example.rtf", mode="elements", strategy="fast") docs = loader.load() References https://unstructured-io.github.io/unstructured/bricks.html#partition-rtf Initialize with a file path. Parameters file_path – The path to the file to load. mode – The mode to use for partitioning. See unstructured for details. Defaults to “single”. **unstructured_kwargs – Additional keyword arguments to pass to unstructured. Methods __init__(file_path[, mode]) Initialize with a file path. lazy_load() A lazy loader for Documents. load() Load file. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶ Initialize with a file path. Parameters file_path – The path to the file to load. mode – The mode to use for partitioning. See unstructured for details. Defaults to “single”.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html
cf6ce09267d7-1
Defaults to “single”. **unstructured_kwargs – Additional keyword arguments to pass to unstructured. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document]¶ Load file. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html
444d3a0c1369-0
langchain.document_loaders.word_document.UnstructuredWordDocumentLoader¶ class langchain.document_loaders.word_document.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶ Loader that uses unstructured to load word documents. Works with both .docx and .doc files. You can run the loader in one of two modes: “single” and “elements”. If you use “single” mode, the document will be returned as a single langchain Document object. If you use “elements” mode, the unstructured library will split the document into elements such as Title and NarrativeText. You can pass in additional unstructured kwargs after mode to apply different unstructured settings. Examples from langchain.document_loaders import UnstructuredWordDocumentLoader loader = UnstructuredWordDocumentLoader("example.docx", mode="elements", strategy="fast") docs = loader.load() References https://unstructured-io.github.io/unstructured/bricks.html#partition-docx Initialize with file path. Methods __init__(file_path[, mode]) Initialize with file path. lazy_load() A lazy loader for Documents. load() Load file. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)¶ Initialize with file path. lazy_load() → Iterator[Document]¶ A lazy loader for Documents. load() → List[Document]¶ Load file. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html
444d3a0c1369-1
Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. Examples using UnstructuredWordDocumentLoader¶ Microsoft Word
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html
b6f1d8d36d48-0
langchain.document_loaders.blackboard.BlackboardLoader¶ class langchain.document_loaders.blackboard.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)[source]¶ Loads all documents from a Blackboard course. This loader is not compatible with all Blackboard courses. It is only compatible with courses that use the new Blackboard interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser’s developer tools. Example from langchain.document_loaders import BlackboardLoader loader = BlackboardLoader( blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1", bbrouter="expires:12345...", ) documents = loader.load() Initialize with blackboard course url. The BbRouter cookie is required for most blackboard courses. Parameters blackboard_course_url – Blackboard course url. bbrouter – BbRouter cookie. load_all_recursively – If True, load all documents recursively. basic_auth – Basic auth credentials. cookies – Cookies. continue_on_failure – whether to continue loading the course if an error occurs loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust, but also may result in missing data. Default: False Raises ValueError – If blackboard course url is invalid. Attributes bs_get_text_kwargs kwargs for BeautifulSoup4 get_text default_parser
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
b6f1d8d36d48-1
Attributes bs_get_text_kwargs kwargs for BeautifulSoup4 get_text default_parser Default parser to use for BeautifulSoup. raise_for_status Raise an exception if http status code denotes an error. requests_kwargs kwargs for requests requests_per_second Max number of concurrent requests to make. web_path base_url Base url of the blackboard course. folder_path Path to the folder containing the documents. load_all_recursively If True, load all documents recursively. Methods __init__(blackboard_course_url, bbrouter[, ...]) Initialize with blackboard course url. aload() Load text from the urls in web_path async into Documents. check_bs4() Check if BeautifulSoup4 is installed. download(path) Download a file from a URL. fetch_all(urls) Fetch all urls concurrently with rate limiting. lazy_load() Lazy load text from the url(s) in web_path. load() Load data into Document objects. load_and_split([text_splitter]) Load Documents and split into chunks. parse_filename(url) Parse the filename from a URL. scrape([parser]) Scrape data from webpage and return it in BeautifulSoup format. scrape_all(urls[, parser]) Fetch all urls, then return soups for all results. __init__(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)[source]¶ Initialize with blackboard course url. The BbRouter cookie is required for most blackboard courses. Parameters blackboard_course_url – Blackboard course url. bbrouter – BbRouter cookie.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
b6f1d8d36d48-2
bbrouter – BbRouter cookie. load_all_recursively – If True, load all documents recursively. basic_auth – Basic auth credentials. cookies – Cookies. continue_on_failure – whether to continue loading the course if an error occurs loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust, but also may result in missing data. Default: False Raises ValueError – If blackboard course url is invalid. aload() → List[Document]¶ Load text from the urls in web_path async into Documents. check_bs4() → None[source]¶ Check if BeautifulSoup4 is installed. Raises ImportError – If BeautifulSoup4 is not installed. download(path: str) → None[source]¶ Download a file from a URL. Parameters path – Path to the file. async fetch_all(urls: List[str]) → Any¶ Fetch all urls concurrently with rate limiting. lazy_load() → Iterator[Document]¶ Lazy load text from the url(s) in web_path. load() → List[Document][source]¶ Load data into Document objects. Returns List of Documents. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents. parse_filename(url: str) → str[source]¶ Parse the filename from a URL. Parameters url – URL to parse the filename from. Returns The filename. scrape(parser: Optional[str] = None) → Any¶ Scrape data from webpage and return it in BeautifulSoup format.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
b6f1d8d36d48-3
Scrape data from webpage and return it in BeautifulSoup format. scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶ Fetch all urls, then return soups for all results. Examples using BlackboardLoader¶ Blackboard
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
b868f10f2e80-0
langchain.document_loaders.airbyte.AirbyteCDKLoader¶ class langchain.document_loaders.airbyte.AirbyteCDKLoader(config: Mapping[str, Any], source_class: Any, stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶ Loads records using an Airbyte source connector implemented using the CDK. Methods __init__(config, source_class, stream_name) lazy_load() A lazy loader for Documents. load() Load data into Document objects. load_and_split([text_splitter]) Load Documents and split into chunks. __init__(config: Mapping[str, Any], source_class: Any, stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶ lazy_load() → Iterator[Document][source]¶ A lazy loader for Documents. load() → List[Document][source]¶ Load data into Document objects. load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ Load Documents and split into chunks. Chunks are returned as Documents. Parameters text_splitter – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter. Returns List of Documents.
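A heavily hedged sketch; the CDK source package, its config keys, and the stream name below are all placeholders for whichever Airbyte connector you have installed:

```
from langchain.document_loaders.airbyte import AirbyteCDKLoader
from source_github import SourceGithub  # hypothetical Airbyte CDK source package

config = {
    "credentials": {"personal_access_token": "..."},  # placeholder credentials
    "repositories": ["your-org/your-repo"],           # placeholder repository
}
loader = AirbyteCDKLoader(config=config, source_class=SourceGithub, stream_name="issues")
docs = loader.load()  # one Document per record in the stream
```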
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteCDKLoader.html
d5fadc275a86-0
langchain.document_loaders.confluence.ContentFormat¶ class langchain.document_loaders.confluence.ContentFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶ Enumerator of the content formats of Confluence page. STORAGE = 'body.storage'¶ VIEW = 'body.view'¶
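The enum values map directly to the Confluence REST API’s body representations. A tiny sketch (loaders such as ConfluenceLoader are the typical consumers; check your installed version for the exact parameter they accept):

```
from langchain.document_loaders.confluence import ContentFormat

print(ContentFormat.STORAGE.value)  # 'body.storage'
print(ContentFormat.VIEW.value)     # 'body.view'
```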
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html
589c91238bf5-0
langchain_experimental.pal_chain.base.PALValidation¶ class langchain_experimental.pal_chain.base.PALValidation(solution_expression_name: Optional[str] = None, solution_expression_type: Optional[type] = None, allow_imports: bool = False, allow_command_exec: bool = False)[source]¶ Initialize a PALValidation instance. Parameters solution_expression_name (str) – Name of the expected solution expression. If passed, solution_expression_type must be passed as well. solution_expression_type (type) – AST type of the expected solution expression. If passed, solution_expression_name must be passed as well. Must be one of PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION, PALValidation.SOLUTION_EXPRESSION_TYPE_VARIABLE. allow_imports (bool) – Allow import statements. allow_command_exec (bool) – Allow using known command execution functions. Methods __init__([solution_expression_name, ...]) Initialize a PALValidation instance. __init__(solution_expression_name: Optional[str] = None, solution_expression_type: Optional[type] = None, allow_imports: bool = False, allow_command_exec: bool = False)[source]¶ Initialize a PALValidation instance. Parameters solution_expression_name (str) – Name of the expected solution expression. If passed, solution_expression_type must be passed as well. solution_expression_type (type) – AST type of the expected solution expression. If passed, solution_expression_name must be passed as well. Must be one of PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION, PALValidation.SOLUTION_EXPRESSION_TYPE_VARIABLE. allow_imports (bool) – Allow import statements. allow_command_exec (bool) – Allow using known command execution functions.
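A hedged construction sketch, assuming the generated program exposes its answer as a solution() function (the class constants named above supply the expected AST types):

```
from langchain_experimental.pal_chain.base import PALValidation

validation = PALValidation(
    solution_expression_name="solution",
    solution_expression_type=PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION,
    allow_imports=False,       # reject generated code containing import statements
    allow_command_exec=False,  # reject known command-execution functions
)
```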
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALValidation.html
39fced3ab897-0
langchain_experimental.pal_chain.base.PALChain¶ class langchain_experimental.pal_chain.base.PALChain[source]¶ Bases: Chain Implements Program-Aided Language Models (PAL). This class implements the Program-Aided Language Models (PAL) for generating code solutions. PAL is a technique described in the paper “Program-Aided Language Models” (https://arxiv.org/pdf/2211.10435.pdf). Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param code_validations: PALValidation [Optional]¶ Validations to perform on the generated code. param get_answer_expr: str = 'print(solution())'¶ Expression to use to get the answer from the generated code. param llm_chain: LLMChain [Required]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-1
Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param python_globals: Optional[Dict[str, Any]] = None¶ Python globals to use when executing the generated code. param python_locals: Optional[Dict[str, Any]] = None¶ Python locals to use when executing the generated code. param return_intermediate_steps: bool = False¶ Whether to return the generated code as an intermediate step. param stop: str = '\n\n'¶ Stop token to use when generating code. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param timeout: Optional[int] = 10¶ Timeout in seconds for the generated code to execute. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-2
only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶ async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-3
Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶ apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-4
with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." async astream(input: Input, config: Optional[RunnableConfig] = None) → AsyncIterator[Output]¶ batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, max_concurrency: Optional[int] = None) → List[Output]¶ bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-5
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_colored_object_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶ Load PAL from colored object prompt. Parameters llm (BaseLanguageModel) – The language model to use for generating code. Returns An instance of PALChain. Return type PALChain
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-6
Returns An instance of PALChain. Return type PALChain classmethod from_math_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶ Load PAL from math prompt. Parameters llm (BaseLanguageModel) – The language model to use for generating code. Returns An instance of PALChain. Return type PALChain classmethod from_orm(obj: Any) → Model¶ invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None) → Dict[str, Any]¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-7
Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Convenience method for executing chain. The main difference between this method and Chain.__call__ is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-8
addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None) → Iterator[Output]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
39fced3ab897-9
classmethod validate(value: Any) → Model¶ classmethod validate_code(code: str, code_validations: PALValidation) → None[source]¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable.
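Putting from_math_prompt together with run, a hedged end-to-end sketch (the OpenAI LLM is an assumption; any BaseLanguageModel works, and since the chain executes model-generated Python, configure code_validations and timeout to taste):

```
from langchain.llms import OpenAI  # assumed LLM for illustration
from langchain_experimental.pal_chain.base import PALChain

llm = OpenAI(temperature=0)
pal_chain = PALChain.from_math_prompt(llm)
# the model writes a small Python program, which is validated and executed
answer = pal_chain.run(
    "Olivia has $23 and buys five bagels at $3 each. How much money does she have left?"
)
# expected answer: "8"
```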
https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html
0ccd11fe3a76-0
langchain._api.deprecation.suppress_langchain_deprecation_warning¶ langchain._api.deprecation.suppress_langchain_deprecation_warning() → Generator[None, None, None][source]¶ Context manager to suppress LangChainDeprecationWarning.
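A minimal usage sketch; the body of the with-block is where your own deprecated calls would go:

```
from langchain._api.deprecation import suppress_langchain_deprecation_warning

with suppress_langchain_deprecation_warning():
    # deprecated LangChain APIs called here emit no LangChainDeprecationWarning
    pass
```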
https://api.python.langchain.com/en/latest/_api/langchain._api.deprecation.suppress_langchain_deprecation_warning.html
c9b128928ebf-0
langchain._api.deprecation.deprecated¶ langchain._api.deprecation.deprecated(since: str, *, message: str = '', name: str = '', alternative: str = '', pending: bool = False, obj_type: str = '', addendum: str = '', removal: str = '') → Callable[[T], T][source]¶ Decorator to mark a function, a class, or a property as deprecated. When deprecating a classmethod, a staticmethod, or a property, the @deprecated decorator should go under @classmethod and @staticmethod (i.e., deprecated should directly decorate the underlying callable), but over @property. When deprecating a class C intended to be used as a base class in a multiple inheritance hierarchy, C must define an __init__ method (if C instead inherited its __init__ from its own base class, then @deprecated would mess up __init__ inheritance when installing its own (deprecation-emitting) C.__init__). Parameters are the same as for warn_deprecated, except that obj_type defaults to ‘class’ if decorating a class, ‘attribute’ if decorating a property, and ‘function’ otherwise. Parameters since – str The release at which this API became deprecated. message – str, optional Override the default deprecation message. The %(since)s, %(name)s, %(alternative)s, %(obj_type)s, %(addendum)s, and %(removal)s format specifiers will be replaced by the values of the respective arguments passed to this function. name – str, optional The name of the deprecated object. alternative – str, optional An alternative API that the user may use in place of the deprecated API. The deprecation warning will tell the user about this alternative if provided. pending – bool, optional
https://api.python.langchain.com/en/latest/_api/langchain._api.deprecation.deprecated.html
c9b128928ebf-1
about this alternative if provided. pending – bool, optional If True, uses a PendingDeprecationWarning instead of a DeprecationWarning. Cannot be used together with removal. obj_type – str, optional The object type being deprecated. addendum – str, optional Additional text appended directly to the final message. removal – str, optional The expected removal version. With the default (an empty string), a removal version is automatically computed from since. Set to other Falsy values to not schedule a removal date. Cannot be used together with pending. Examples

@deprecated('1.4.0')
def the_function_to_deprecate():
    pass
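A slightly fuller hedged sketch showing the common keyword arguments (the function name and version strings are placeholders):

```
from langchain._api.deprecation import deprecated

@deprecated("0.0.200", alternative="new_helper", removal="0.1.0")
def old_helper():
    # calling this emits a DeprecationWarning pointing users at new_helper
    ...
```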
https://api.python.langchain.com/en/latest/_api/langchain._api.deprecation.deprecated.html
2cf383a45648-0
langchain._api.deprecation.LangChainDeprecationWarning¶ class langchain._api.deprecation.LangChainDeprecationWarning[source]¶ A class for issuing deprecation warnings for LangChain users.
https://api.python.langchain.com/en/latest/_api/langchain._api.deprecation.LangChainDeprecationWarning.html
b24494378615-0
langchain.indexes.vectorstore.VectorstoreIndexCreator¶ class langchain.indexes.vectorstore.VectorstoreIndexCreator[source]¶ Bases: BaseModel Logic for creating indexes. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param embedding: langchain.embeddings.base.Embeddings [Optional]¶ param text_splitter: langchain.text_splitter.TextSplitter [Optional]¶ param vectorstore_cls: Type[langchain.vectorstores.base.VectorStore] = <class 'langchain.vectorstores.chroma.Chroma'>¶ param vectorstore_kwargs: dict [Optional]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html
b24494378615-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. from_documents(documents: List[Document]) → VectorStoreIndexWrapper[source]¶ Create a vectorstore index from documents. from_loaders(loaders: List[BaseLoader]) → VectorStoreIndexWrapper[source]¶ Create a vectorstore index from loaders. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html
b24494378615-2
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using VectorstoreIndexCreator¶ Apify HuggingFace dataset Spreedly Image captions Figma Apify Dataset Iugu Stripe Modern Treasury Question Answering Benchmarking: State of the Union Address Question Answering Benchmarking: Paul Graham Essay Agent VectorDB Question Answering Benchmarking QA over Documents
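A hedged end-to-end sketch (the text file is a placeholder; the defaults resolve to OpenAIEmbeddings and Chroma, so the openai and chromadb packages must be available):

```
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("state_of_the_union.txt")  # hypothetical local file
index = VectorstoreIndexCreator().from_loaders([loader])
print(index.query("What was the main topic?"))
```

The returned object is the VectorStoreIndexWrapper documented next.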
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorstoreIndexCreator.html
70fa463f6729-0
langchain.indexes.vectorstore.VectorStoreIndexWrapper¶ class langchain.indexes.vectorstore.VectorStoreIndexWrapper[source]¶ Bases: BaseModel Wrapper around a vectorstore for easy access. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param vectorstore: langchain.vectorstores.base.VectorStore [Required]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorStoreIndexWrapper.html
70fa463f6729-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorStoreIndexWrapper.html
70fa463f6729-2
query(question: str, llm: Optional[BaseLanguageModel] = None, retriever_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Any) → str[source]¶ Query the vectorstore. query_with_sources(question: str, llm: Optional[BaseLanguageModel] = None, retriever_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Any) → dict[source]¶ Query the vectorstore and get back sources. classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
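A short sketch of the two query helpers, assuming index is the wrapper produced by VectorstoreIndexCreator above (with no llm passed, a default LLM is assumed):

```
answer = index.query("Summarize the document.")  # plain string answer

# answer plus provenance: a dict with "question", "answer", and "sources" keys
result = index.query_with_sources("Summarize the document.")
print(result["answer"], result["sources"])
```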
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.vectorstore.VectorStoreIndexWrapper.html
2221929c85dc-0
langchain.indexes.graph.GraphIndexCreator¶ class langchain.indexes.graph.GraphIndexCreator[source]¶ Bases: BaseModel Functionality to create graph index. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param graph_type: Type[langchain.graphs.networkx_graph.NetworkxEntityGraph] = <class 'langchain.graphs.networkx_graph.NetworkxEntityGraph'>¶ param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html
2221929c85dc-1
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶ async afrom_text(text: str, prompt: BasePromptTemplate = PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the text. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nIt's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nI'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nOh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nEXAMPLE\n{text}Output:", template_format='f-string', validate_template=True)) → NetworkxEntityGraph[source]¶ Create graph index from text asynchronously. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html
2221929c85dc-2
Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html
2221929c85dc-3
classmethod from_orm(obj: Any) → Model¶ from_text(text: str, prompt: BasePromptTemplate = PromptTemplate(input_variables=['text'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the text. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nIt's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nI'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nOh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nEXAMPLE\n{text}Output:", template_format='f-string', validate_template=True)) → NetworkxEntityGraph[source]¶ Create graph index from text.
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html
2221929c85dc-4
Create graph index from text. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using GraphIndexCreator¶ Graph QA
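A hedged sketch of building a small knowledge graph (the OpenAI LLM is an assumption; networkx must be installed for the default graph type):

```
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI

index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))
graph = index_creator.from_text("Nevada is a state in the US and produces gold.")
print(graph.get_triples())  # the knowledge triples extracted from the text
```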
https://api.python.langchain.com/en/latest/indexes/langchain.indexes.graph.GraphIndexCreator.html
2601b9370086-0
langchain.load.load.load¶ langchain.load.load.load(obj: Any, *, secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → Any[source]¶ Revive a LangChain class from a JSON object. Use this if you already have a parsed JSON object, eg. from json.load or orjson.loads. Parameters obj – The object to load. secrets_map – A map of secrets to load. valid_namespaces – A list of additional namespaces (modules) to allow to be deserialized. Returns Revived LangChain objects.
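A round-trip sketch pairing load with dumps (documented further below); PromptTemplate is one LangChain class that is serializable:

```
import json

from langchain.load.dump import dumps
from langchain.load.load import load
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
parsed = json.loads(dumps(prompt))  # the kind of dict json.load/orjson.loads gives you
revived = load(parsed)              # an equivalent PromptTemplate instance
```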
https://api.python.langchain.com/en/latest/load/langchain.load.load.load.html
e395f068c4a9-0
langchain.load.serializable.SerializedSecret¶ class langchain.load.serializable.SerializedSecret[source]¶ Serialized secret. lc: int¶ id: List[str]¶ type: Literal['secret']¶
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedSecret.html
0807339c6742-0
langchain.load.dump.dumps¶ langchain.load.dump.dumps(obj: Any, *, pretty: bool = False) → str[source]¶ Return a json string representation of an object.
https://api.python.langchain.com/en/latest/load/langchain.load.dump.dumps.html
64841c341ba6-0
langchain.load.serializable.SerializedConstructor¶ class langchain.load.serializable.SerializedConstructor[source]¶ Serialized constructor. lc: int¶ id: List[str]¶ type: Literal['constructor']¶ kwargs: Dict[str, Any]¶
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedConstructor.html
d23697b1fec8-0
langchain.load.serializable.Serializable¶ class langchain.load.serializable.Serializable[source]¶ Bases: BaseModel, ABC Serializable base class. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.Serializable.html
d23697b1fec8-1
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented][source]¶ to_json_not_implemented() → SerializedNotImplemented[source]¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns.
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.Serializable.html
d23697b1fec8-2
Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable.
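A hedged sketch of what a concrete subclass typically overrides (MyComponent and the MY_API_KEY secret id are hypothetical):

```
from typing import Dict

from langchain.load.serializable import Serializable

class MyComponent(Serializable):
    api_key: str
    model: str = "demo-model"

    @property
    def lc_serializable(self) -> bool:
        return True  # opt in to serialization

    @property
    def lc_secrets(self) -> Dict[str, str]:
        # serialize api_key as a secret id instead of its plaintext value
        return {"api_key": "MY_API_KEY"}
```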
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.Serializable.html
82c0eb7af6f7-0
langchain.load.serializable.SerializedNotImplemented¶ class langchain.load.serializable.SerializedNotImplemented[source]¶ Serialized not implemented. lc: int¶ id: List[str]¶ type: Literal['not_implemented']¶
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedNotImplemented.html
7d6b6769f9ad-0
langchain.load.serializable.BaseSerialized¶ class langchain.load.serializable.BaseSerialized[source]¶ Base class for serialized objects. lc: int¶ id: List[str]¶
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.BaseSerialized.html
6725b9b86e31-0
langchain.load.serializable.to_json_not_implemented¶ langchain.load.serializable.to_json_not_implemented(obj: object) → SerializedNotImplemented[source]¶ Serialize a “not implemented” object. Parameters obj – object to serialize Returns SerializedNotImplemented
https://api.python.langchain.com/en/latest/load/langchain.load.serializable.to_json_not_implemented.html
2492e156f966-0
langchain.load.load.Reviver¶ class langchain.load.load.Reviver(secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None)[source]¶ Reviver for JSON objects. Methods __init__([secrets_map, valid_namespaces]) __init__(secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → None[source]¶
https://api.python.langchain.com/en/latest/load/langchain.load.load.Reviver.html
1dc280765494-0
langchain.load.dump.dumpd¶ langchain.load.dump.dumpd(obj: Any) → Dict[str, Any][source]¶ Return a json dict representation of an object.
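dumpd pairs with dumps: one returns a dict, the other a JSON string of the same payload. A tiny sketch (the prompt is a placeholder):

```
from langchain.load.dump import dumpd, dumps
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Hello {name}")
as_dict = dumpd(prompt)               # dict, suitable for load() or json.dump
as_text = dumps(prompt, pretty=True)  # the equivalent indented JSON string
```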
https://api.python.langchain.com/en/latest/load/langchain.load.dump.dumpd.html
3f64df6e8b08-0
langchain.load.dump.default¶ langchain.load.dump.default(obj: Any) → Any[source]¶ Return a default value for a Serializable object or a SerializedNotImplemented object.
https://api.python.langchain.com/en/latest/load/langchain.load.dump.default.html