langchain.document_loaders.snowflake_loader.SnowflakeLoader¶
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Initialize Snowflake document loader.
Parameters
query – The query to run in Snowflake.
user – Snowflake user.
password – Snowflake password.
account – Snowflake account.
warehouse – Snowflake warehouse.
role – Snowflake role.
database – Snowflake database.
schema – Snowflake schema.
parameters – Optional. Parameters to pass to the query.
page_content_columns – Optional. Columns written to Document page_content.
metadata_columns – Optional. Columns written to Document metadata.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SnowflakeLoader¶
Snowflake
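A minimal usage sketch (not part of the original reference); the query, connection values, and column names are placeholders:
from langchain.document_loaders import SnowflakeLoader

loader = SnowflakeLoader(
    query="SELECT text, survey_id FROM my_table LIMIT 10",  # placeholder query
    user="<user>",
    password="<password>",
    account="<account>",
    warehouse="<warehouse>",
    role="<role>",
    database="<database>",
    schema="<schema>",
    page_content_columns=["text"],      # columns copied into page_content
    metadata_columns=["survey_id"],     # columns copied into metadata
)
docs = loader.load()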
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html
langchain.document_loaders.cube_semantic.CubeSemanticLoader¶
class langchain.document_loaders.cube_semantic.CubeSemanticLoader(cube_api_url: str, cube_api_token: str, load_dimension_values: bool = True, dimension_values_limit: int = 10000, dimension_values_max_retries: int = 10, dimension_values_retry_delay: int = 3)[source]¶
Load Cube semantic layer metadata.
Parameters
cube_api_url – REST API endpoint.
Use the REST API of your Cube’s deployment.
Please find out more information here:
https://cube.dev/docs/http-api/rest#configuration-base-path
cube_api_token – Cube API token.
Authentication tokens are generated based on your Cube’s API secret.
Please find out more information here:
https://cube.dev/docs/security#generating-json-web-tokens-jwt
load_dimension_values – Whether to load dimension values for every string
dimension or not.
dimension_values_limit – Maximum number of dimension values to load.
dimension_values_max_retries – Maximum number of retries to load dimension
values.
dimension_values_retry_delay – Delay between retries to load dimension values.
Methods
__init__(cube_api_url, cube_api_token[, ...])
lazy_load()
A lazy loader for Documents.
load()
Makes a call to Cube's REST API metadata endpoint.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(cube_api_url: str, cube_api_token: str, load_dimension_values: bool = True, dimension_values_limit: int = 10000, dimension_values_max_retries: int = 10, dimension_values_retry_delay: int = 3)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Makes a call to Cube’s REST API metadata endpoint.
Returns
A list of documents with the following attributes:
page_content = column_title + column_description
metadata: table_name, column_name, column_data_type, column_member_type, column_title, column_description, column_values
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using CubeSemanticLoader¶
Cube Semantic Layer
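A minimal usage sketch (not part of the original reference); the deployment URL and token are placeholders:
from langchain.document_loaders import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://<deployment>.cubecloudapp.dev/cubejs-api/v1/meta",  # placeholder REST endpoint
    cube_api_token="<jwt-token>",  # generated from your Cube API secret
)
documents = loader.load()
Each returned Document carries the table and column details described above in its metadata.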
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html
langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader¶
class langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader(conf: Any, bucket: str, key: str)[source]¶
Loader for Tencent Cloud COS file.
Initialize with COS config, bucket and key name.
Parameters
conf – COS config (CosConfig).
bucket – COS bucket.
key – COS file key.
Methods
__init__(conf, bucket, key)
Initialize with COS config, bucket and key name.
lazy_load()
Load documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conf: Any, bucket: str, key: str)[source]¶
Initialize with COS config, bucket and key name.
Parameters
conf – COS config (CosConfig).
bucket – COS bucket.
key – COS file key.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TencentCOSFileLoader¶
Tencent COS File
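A minimal usage sketch (not part of the original reference), assuming CosConfig from the cos-python-sdk-v5 package; the region and credentials are placeholders:
from qcloud_cos import CosConfig
from langchain.document_loaders import TencentCOSFileLoader

conf = CosConfig(
    Region="ap-guangzhou",      # placeholder region
    SecretId="<secret-id>",
    SecretKey="<secret-key>",
)
loader = TencentCOSFileLoader(conf=conf, bucket="<bucket>", key="<file-key>")
docs = loader.load()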
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader.html
langchain.document_loaders.airbyte.AirbyteSalesforceLoader¶
class langchain.document_loaders.airbyte.AirbyteSalesforceLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Methods
__init__(config, stream_name[, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
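A minimal usage sketch (not part of the original reference); the config mapping must follow the Airbyte Salesforce source spec, and the stream name is an illustrative placeholder:
from langchain.document_loaders.airbyte import AirbyteSalesforceLoader

config = {
    # Connector configuration as required by the Airbyte Salesforce source
    # (OAuth credentials, start_date, etc.); see the Airbyte spec.
}
loader = AirbyteSalesforceLoader(config=config, stream_name="Account")
docs = loader.load()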
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteSalesforceLoader.html
langchain.document_loaders.pdf.AmazonTextractPDFLoader¶
class langchain.document_loaders.pdf.AmazonTextractPDFLoader(file_path: str, textract_features: Optional[Sequence[str]] = None, client: Optional[Any] = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, endpoint_url: Optional[str] = None)[source]¶
Loads a PDF document from local file system, HTTP or S3.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Amazon Textract service.
Initialize the loader.
Parameters
file_path – A file, url or s3 path for input file
textract_features – Features to be used for extraction, each feature
should be passed as a str that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client (Optional)
credentials_profile_name – AWS profile name, if not default (Optional)
region_name – AWS region, eg us-east-1 (Optional)
endpoint_url – endpoint url for the textract service (Optional)
Attributes
source
Methods
__init__(file_path[, textract_features, ...])
Initialize the loader.
lazy_load()
Lazy load documents
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, textract_features: Optional[Sequence[str]] = None, client: Optional[Any] = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, endpoint_url: Optional[str] = None) → None[source]¶
Initialize the loader.
Parameters
file_path – A file, url or s3 path for input file
textract_features – Features to be used for extraction, each feature
should be passed as a str that conforms to the enum
Textract_Features, see amazon-textract-caller pkg
client – boto3 textract client (Optional)
credentials_profile_name – AWS profile name, if not default (Optional)
region_name – AWS region, eg us-east-1 (Optional)
endpoint_url – endpoint url for the textract service (Optional)
lazy_load() → Iterator[Document][source]¶
Lazy load documents
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
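A minimal usage sketch (not part of the original reference); file names, bucket, profile, and region are placeholders:
from langchain.document_loaders import AmazonTextractPDFLoader

# Local file; an HTTP URL or an s3:// path also works.
loader = AmazonTextractPDFLoader("example.pdf")
documents = loader.load()

# S3 input with an explicit credentials profile and region.
loader = AmazonTextractPDFLoader(
    "s3://my-bucket/my-file.pdf",
    credentials_profile_name="default",
    region_name="us-east-1",
)
documents = loader.load()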
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.AmazonTextractPDFLoader.html
langchain.document_loaders.modern_treasury.ModernTreasuryLoader¶
class langchain.document_loaders.modern_treasury.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]¶
Loader that fetches data from Modern Treasury.
Parameters
resource – The Modern Treasury resource to load.
organization_id – The Modern Treasury organization ID. It can also be
specified via the environment variable
“MODERN_TREASURY_ORGANIZATION_ID”.
api_key – The Modern Treasury API key. It can also be specified via
the environment variable “MODERN_TREASURY_API_KEY”.
Methods
__init__(resource[, organization_id, api_key])
param resource
The Modern Treasury resource to load.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None) → None[source]¶
Parameters
resource – The Modern Treasury resource to load.
organization_id – The Modern Treasury organization ID. It can also be
specified via the environment variable
“MODERN_TREASURY_ORGANIZATION_ID”.
api_key – The Modern Treasury API key. It can also be specified via
the environment variable “MODERN_TREASURY_API_KEY”.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ModernTreasuryLoader¶
Modern Treasury
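A minimal usage sketch (not part of the original reference); "payment_orders" is one example resource name:
from langchain.document_loaders import ModernTreasuryLoader

# Credentials are read from MODERN_TREASURY_ORGANIZATION_ID and
# MODERN_TREASURY_API_KEY when not passed explicitly.
loader = ModernTreasuryLoader("payment_orders")
docs = loader.load()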
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.modern_treasury.ModernTreasuryLoader.html
langchain.document_loaders.xorbits.XorbitsLoader¶
class langchain.document_loaders.xorbits.XorbitsLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Load Xorbits DataFrame.
Initialize with dataframe object.
Requirements: Must have xorbits installed. You can install it with pip install xorbits.
Parameters
data_frame – Xorbits DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Requirements: Must have xorbits installed. You can install it with pip install xorbits.
Parameters
data_frame – Xorbits DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document][source]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using XorbitsLoader¶
Xorbits Pandas DataFrame
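A minimal usage sketch (not part of the original reference); the DataFrame contents are placeholders:
import xorbits.pandas as xpd
from langchain.document_loaders.xorbits import XorbitsLoader

df = xpd.DataFrame({"text": ["first document", "second document"], "source": ["a.txt", "b.txt"]})
loader = XorbitsLoader(df, page_content_column="text")
docs = loader.load()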
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xorbits.XorbitsLoader.html
langchain.document_loaders.blob_loaders.schema.BlobLoader¶
class langchain.document_loaders.blob_loaders.schema.BlobLoader[source]¶
Abstract interface for blob loader implementations.
Implementer should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
Methods
__init__()
yield_blobs()
A lazy loader for raw data represented by LangChain's Blob object.
__init__()¶
abstract yield_blobs() → Iterable[Blob][source]¶
A lazy loader for raw data represented by LangChain’s Blob object.
Returns
A generator over blobs
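A minimal sketch of a concrete implementation (not part of the original reference); the class below is hypothetical and simply yields one Blob per file under a directory:
from pathlib import Path
from typing import Iterable

from langchain.document_loaders.blob_loaders import Blob, BlobLoader

class SimpleDirectoryBlobLoader(BlobLoader):
    """Hypothetical loader: one Blob per file under a directory."""

    def __init__(self, path: str) -> None:
        self.path = Path(path)

    def yield_blobs(self) -> Iterable[Blob]:
        for file_path in self.path.rglob("*"):
            if file_path.is_file():
                yield Blob.from_path(file_path)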
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.BlobLoader.html
langchain.document_loaders.pdf.PyPDFium2Loader¶
class langchain.document_loaders.pdf.PyPDFium2Loader(file_path: str)[source]¶
Loads a PDF with pypdfium2 and chunks at character level.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
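A minimal usage sketch (not part of the original reference); requires the pypdfium2 package, and the file name is a placeholder:
from langchain.document_loaders import PyPDFium2Loader

loader = PyPDFium2Loader("example.pdf")
pages = loader.load()            # one Document per page
for page in loader.lazy_load():  # or stream pages lazily
    print(page.metadata)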
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFium2Loader.html
langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader¶
class langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Load PySpark DataFrames.
Initialize with a Spark DataFrame object.
Parameters
spark_session – The SparkSession object.
df – The Spark DataFrame object.
page_content_column – The name of the column containing the page content.
Defaults to “text”.
fraction_of_memory – The fraction of memory to use. Defaults to 0.1.
Methods
__init__([spark_session, df, ...])
Initialize with a Spark DataFrame object.
get_num_rows()
Gets the number of "feasible" rows for the DataFrame
lazy_load()
A lazy loader for document content.
load()
Load from the dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Initialize with a Spark DataFrame object.
Parameters
spark_session – The SparkSession object.
df – The Spark DataFrame object.
page_content_column – The name of the column containing the page content.
Defaults to “text”.
fraction_of_memory – The fraction of memory to use. Defaults to 0.1.
get_num_rows() → Tuple[int, int][source]¶
Gets the number of “feasible” rows for the DataFrame
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load from the dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PySparkDataFrameLoader¶
PySpark DataFrame Loader
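A minimal usage sketch (not part of the original reference); the CSV path and column name are placeholders:
from pyspark.sql import SparkSession
from langchain.document_loaders import PySparkDataFrameLoader

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("example.csv", header=True)

loader = PySparkDataFrameLoader(spark, df, page_content_column="text")
docs = loader.load()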
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html
langchain.document_loaders.toml.TomlLoader¶
class langchain.document_loaders.toml.TomlLoader(source: Union[str, Path])[source]¶
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
Initialize the TomlLoader with a source file or directory.
Methods
__init__(source)
Initialize the TomlLoader with a source file or directory.
lazy_load()
Lazily load the TOML documents from the source file or directory.
load()
Load and return all documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(source: Union[str, Path])[source]¶
Initialize the TomlLoader with a source file or directory.
lazy_load() → Iterator[Document][source]¶
Lazily load the TOML documents from the source file or directory.
load() → List[Document][source]¶
Load and return all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TomlLoader¶
TOML
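A minimal usage sketch (not part of the original reference); the paths are placeholders:
from langchain.document_loaders import TomlLoader

loader = TomlLoader("example.toml")   # a single file ...
loader = TomlLoader("config/")        # ... or a directory of TOML files
docs = loader.load()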
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.toml.TomlLoader.html
langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader¶
class langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]¶
Loading Documents from Azure Blob Storage.
Initialize with connection string, container and blob prefix.
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
prefix
Prefix for blob names.
Methods
__init__(conn_str, container[, prefix])
Initialize with connection string, container and blob prefix.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conn_str: str, container: str, prefix: str = '')[source]¶
Initialize with connection string, container and blob prefix.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AzureBlobStorageContainerLoader¶
Azure Blob Storage
Azure Blob Storage Container
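A minimal usage sketch (not part of the original reference); the connection string, container, and prefix are placeholders:
from langchain.document_loaders import AzureBlobStorageContainerLoader

loader = AzureBlobStorageContainerLoader(
    conn_str="<connection-string>",
    container="<container-name>",
    prefix="reports/",   # optional: only load blobs whose names start with this prefix
)
docs = loader.load()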
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html
langchain.document_loaders.mastodon.MastodonTootsLoader¶
class langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Mastodon toots loader.
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account. Defaults to 100.
exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “https://mastodon.social”.
Methods
__init__(mastodon_accounts[, number_toots, ...])
Instantiate Mastodon toots loader.
lazy_load()
A lazy loader for Documents.
load()
Load toots into documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account. Defaults to 100.
exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “https://mastodon.social”.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load toots into documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MastodonTootsLoader¶
Mastodon
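A minimal usage sketch (not part of the original reference); the account handle is a placeholder:
from langchain.document_loaders import MastodonTootsLoader

loader = MastodonTootsLoader(
    mastodon_accounts=["@user@mastodon.social"],
    number_toots=50,
    exclude_replies=True,
)
docs = loader.load()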
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html
langchain.document_loaders.generic.GenericLoader¶
class langchain.document_loaders.generic.GenericLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser)[source]¶
A generic document loader.
A generic document loader that allows combining an arbitrary blob loader with
a blob parser.
Examples
from langchain.document_loaders import GenericLoader
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader

loader = GenericLoader.from_filesystem(
    path="path/to/directory",
    glob="**/[!.]*",
    suffixes=[".pdf"],
    show_progress=True,
)
docs = loader.lazy_load()
next(docs)
Example instantiations to change which files are loaded:
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="*")
Example instantiations to change which parser is used:
from langchain.document_loaders.parsers.pdf import PyPDFParser
# Recursively load all PDF files in a directory and parse them with PyPDFParser.
loader = GenericLoader.from_filesystem(
    "/path/to/dir",
    glob="**/*.pdf",
    parser=PyPDFParser(),
)
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser)
A generic document loader.
from_filesystem(path, *[, glob, suffixes, ...])
Create a generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into sentences.
__init__(blob_loader: BlobLoader, blob_parser: BaseBlobParser) → None[source]¶
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default') → GenericLoader[source]¶
Create a generic document loader using a filesystem blob loader.
Parameters
path – The path to the directory to load documents from.
glob – The glob pattern to use to find documents.
suffixes – The suffixes to use to filter documents. If None, all files
matching the glob will be loaded.
show_progress – Whether to show a progress bar or not (requires tqdm).
Proxies to the file system loader.
parser – A blob parser which knows how to parse blobs into documents
Returns
A generic document loader.
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load all documents and split them into sentences.
Examples using GenericLoader¶
Grobid
Loading documents from a YouTube url
Source Code
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html
langchain.document_loaders.unstructured.UnstructuredFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load files.
The file loader uses the
unstructured partition function and will automatically detect the file
type. You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader(
    "example.pdf", mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileLoader¶
Unstructured
Unstructured File
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html
langchain.document_loaders.trello.TrelloLoader¶
class langchain.document_loaders.trello.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]¶
Trello loader. Reads all cards from a Trello board.
Initialize Trello loader.
Parameters
client – Trello API client.
board_name – The name of the Trello board.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
Methods
__init__(client, board_name, *[, ...])
Initialize Trello loader.
from_credentials(board_name, *[, api_key, token])
Convenience constructor that builds TrelloClient init param for you.
lazy_load()
A lazy loader for Documents.
load()
Loads all cards from the specified Trello board.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]¶
Initialize Trello loader.
Parameters
client – Trello API client.
board_name – The name of the Trello board.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
classmethod from_credentials(board_name: str, *, api_key: Optional[str] = None, token: Optional[str] = None, **kwargs: Any) → TrelloLoader[source]¶
Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name – The name of the Trello board.
api_key – Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token – Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns
A list of documents, one for each card in the board.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TrelloLoader¶
Trello
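A minimal usage sketch (not part of the original reference); the board name is a placeholder, and TRELLO_API_KEY / TRELLO_TOKEN are expected in the environment:
from langchain.document_loaders import TrelloLoader

loader = TrelloLoader.from_credentials(
    "My Board",
    card_filter="open",
    extra_metadata=("list", "labels"),
)
docs = loader.load()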
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html
langchain.document_loaders.email.OutlookMessageLoader¶
class langchain.document_loaders.email.OutlookMessageLoader(file_path: str)[source]¶
Loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message file.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message file.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OutlookMessageLoader¶
Email
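A minimal usage sketch (not part of the original reference); requires the extract_msg package, and the path is a placeholder:
from langchain.document_loaders import OutlookMessageLoader

loader = OutlookMessageLoader("example_data/message.msg")
docs = loader.load()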
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.OutlookMessageLoader.html
langchain.document_loaders.parsers.grobid.GrobidParser¶
class langchain.document_loaders.parsers.grobid.GrobidParser(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument')[source]¶
Loader that uses Grobid to load article PDF files.
Methods
__init__(segment_sentences[, grobid_server])
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
process_xml(file_path, xml_data, ...)
Process the XML file from Grobid.
__init__(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument') → None[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
process_xml(file_path: str, xml_data: str, segment_sentences: bool) → Iterator[Document][source]¶
Process the XML file from Grobid.
Examples using GrobidParser¶
Grobid
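A minimal usage sketch (not part of the original reference), assuming a Grobid server is reachable at the default address above; the directory path is a placeholder:
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.grobid import GrobidParser

loader = GenericLoader.from_filesystem(
    "./papers",
    glob="*",
    suffixes=[".pdf"],
    parser=GrobidParser(segment_sentences=False),
)
docs = loader.load()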
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.GrobidParser.html
langchain.document_loaders.parsers.pdf.PyMuPDFParser¶
class langchain.document_loaders.parsers.pdf.PyMuPDFParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Parse PDFs with PyMuPDF.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
Methods
__init__([text_kwargs])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(text_kwargs: Optional[Mapping[str, Any]] = None) → None[source]¶
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
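A minimal usage sketch (not part of the original reference); requires the PyMuPDF package, and the file name is a placeholder:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyMuPDFParser

parser = PyMuPDFParser()
blob = Blob.from_path("example.pdf")
docs = parser.parse(blob)   # or parser.lazy_parse(blob) for an iterator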
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyMuPDFParser.html
langchain.document_loaders.embaas.BaseEmbaasLoader¶
class langchain.document_loaders.embaas.BaseEmbaasLoader[source]¶
Bases: BaseModel
Base class for embedding a model into an Embaas document extraction API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the embaas document extraction API.
param embaas_api_key: Optional[str] = None¶
The API key for the embaas document extraction API.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the embaas document extraction API.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html
langchain.document_loaders.directory.DirectoryLoader¶
class langchain.document_loaders.directory.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: ~typing.Union[~typing.Type[~langchain.document_loaders.unstructured.UnstructuredFileLoader], ~typing.Type[~langchain.document_loaders.text.TextLoader], ~typing.Type[~langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: ~typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)[source]¶
Load documents from a directory.
Initialize with a path to directory and how to glob over it.
Parameters
path – Path to directory.
glob – Glob pattern to use to find files. Defaults to “**/[!.]*”
(all files except hidden).
silent_errors – Whether to silently ignore errors. Defaults to False.
load_hidden – Whether to load hidden files. Defaults to False.
loader_cls – Loader class to use for loading files.
Defaults to UnstructuredFileLoader.
loader_kwargs – Keyword arguments to pass to loader_cls. Defaults to None.
recursive – Whether to recursively search for files. Defaults to False.
show_progress – Whether to show a progress bar. Defaults to False.
use_multithreading – Whether to use multithreading. Defaults to False.
max_concurrency – The maximum number of threads to use. Defaults to 4.
Methods
__init__(path[, glob, silent_errors, ...])
Initialize with a path to directory and how to glob over it.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_file(item, path, docs, pbar)
Load a file.
__init__(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: ~typing.Union[~typing.Type[~langchain.document_loaders.unstructured.UnstructuredFileLoader], ~typing.Type[~langchain.document_loaders.text.TextLoader], ~typing.Type[~langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: ~typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)[source]¶
Initialize with a path to directory and how to glob over it.
Parameters
path – Path to directory.
glob – Glob pattern to use to find files. Defaults to “**/[!.]*”
(all files except hidden).
silent_errors – Whether to silently ignore errors. Defaults to False.
load_hidden – Whether to load hidden files. Defaults to False.
loader_cls – Loader class to use for loading files.
Defaults to UnstructuredFileLoader.
loader_kwargs – Keyword arguments to pass to loader_cls. Defaults to None.
recursive – Whether to recursively search for files. Defaults to False.
show_progress – Whether to show a progress bar. Defaults to False.
use_multithreading – Whether to use multithreading. Defaults to False.
max_concurrency – The maximum number of threads to use. Defaults to 4.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_file(item: Path, path: Path, docs: List[Document], pbar: Optional[Any]) → None[source]¶
Load a file.
Parameters
item – File path.
path – Directory path.
docs – List of documents to append to.
pbar – Progress bar. Defaults to None.
Examples using DirectoryLoader¶
StarRocks
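A minimal usage sketch (not part of the original reference); the directory and glob are placeholders:
from langchain.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader(
    "./docs",
    glob="**/*.md",
    loader_cls=TextLoader,   # use TextLoader instead of the default UnstructuredFileLoader
    show_progress=True,      # requires tqdm
)
docs = loader.load()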
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.directory.DirectoryLoader.html
langchain.document_loaders.arxiv.ArxivLoader¶
class langchain.document_loaders.arxiv.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]¶
Loads a query result from arxiv.org into a list of Documents.
The loader converts the original PDF format into text.
Attributes
query
The query to be passed to the arxiv.org API.
load_max_docs
The maximum number of documents to load.
load_all_available_meta
Whether to load all available metadata.
Methods
__init__(query[, load_max_docs, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ArxivLoader¶
Arxiv
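A minimal usage sketch (not part of the original reference); the arXiv identifier is a placeholder, and a free-text query also works:
from langchain.document_loaders import ArxivLoader

loader = ArxivLoader(query="2005.14165", load_max_docs=2)
docs = loader.load()
print(docs[0].metadata)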
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arxiv.ArxivLoader.html
langchain.document_loaders.airbyte.AirbyteStripeLoader¶
class langchain.document_loaders.airbyte.AirbyteStripeLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Methods
__init__(config, stream_name[, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteStripeLoader.html
langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to re-use
a parser independent of how the blob was originally loaded.
Methods
__init__()
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
abstract lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document][source]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
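A minimal sketch of a custom parser (not part of the original reference); the class below is hypothetical and emits one Document per non-empty line of text:
from typing import Iterator

from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
from langchain.schema import Document

class LineParser(BaseBlobParser):
    """Hypothetical parser: one Document per non-empty line."""

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        for line in blob.as_string().splitlines():
            if line.strip():
                yield Document(page_content=line, metadata={"source": blob.source})
Because the eager parse() method is inherited, LineParser can be composed with GenericLoader or any blob loader.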
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html
langchain.document_loaders.chatgpt.concatenate_rows¶
langchain.document_loaders.chatgpt.concatenate_rows(message: dict, title: str) → str[source]¶
Combine message information in a readable format ready to be used.
Parameters
message – Message to be concatenated.
title – Title of the conversation.
Returns
Concatenated message
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.concatenate_rows.html
langchain.document_loaders.parsers.language.language_parser.LanguageParser¶
class langchain.document_loaders.parsers.language.language_parser.LanguageParser(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Language parser that splits code using the respective language syntax.
Each top-level function and class in the code is loaded into separate documents.
Furthermore, an extra document is generated, containing the remaining top-level code
that excludes the already segmented functions and classes.
This approach can potentially improve the accuracy of QA models over source code.
Currently, the supported languages for code parsing are Python and JavaScript.
The language used for parsing can be configured, along with the minimum number of
lines required to activate the splitting based on syntax.
Examples
from langchain.text_splitter import Language
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser

loader = GenericLoader.from_filesystem(
    "./code",
    glob="**/*",
    suffixes=[".py", ".js"],
    parser=LanguageParser(),
)
docs = loader.load()
Example instantiations to manually select the language:
from langchain.text_splitter import Language

loader = GenericLoader.from_filesystem(
    "./code",
    glob="**/*",
    suffixes=[".py"],
    parser=LanguageParser(language=Language.PYTHON),
)
Example instantiations to set the number of lines threshold:
loader = GenericLoader.from_filesystem(
    "./code",
    glob="**/*",
    suffixes=[".py"],
    parser=LanguageParser(parser_threshold=200),
)
Language parser that splits code using the respective language syntax.
Parameters
language – If None (default), it will try to infer language from source.
parser_threshold – Minimum lines needed to activate parsing (0 by default).
Methods
__init__([language, parser_threshold])
Language parser that splits code using the respective language syntax.
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Language parser that splits code using the respective language syntax.
Parameters
language – If None (default), it will try to infer language from source.
parser_threshold – Minimum lines needed to activate parsing (0 by default).
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
Examples using LanguageParser¶
Source Code
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html
langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader¶
class langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader(dataset_name: str, split_name: str, load_max_docs: Optional[int] = 100, sample_to_document_function: Optional[Callable[[Dict], Document]] = None)[source]¶
Loads from TensorFlow Datasets into a list of Documents.
dataset_name¶
the name of the dataset to load
split_name¶
the name of the split to load.
load_max_docs¶
a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function¶
a function that converts a dataset sample
into a Document
Example
from langchain.document_loaders import TensorflowDatasetLoader
def mlqaen_example_to_document(example: dict) -> Document:
    return Document(
        page_content=decode_to_str(example["context"]),
        metadata={
            "id": decode_to_str(example["id"]),
            "title": decode_to_str(example["title"]),
            "question": decode_to_str(example["question"]),
            "answer": decode_to_str(example["answers"]["text"][0]),
        },
    )

tsds_client = TensorflowDatasetLoader(
    dataset_name="mlqa/en",
    split_name="test",
    load_max_docs=100,
    sample_to_document_function=mlqaen_example_to_document,
)
Initialize the TensorflowDatasetLoader.
Parameters
dataset_name – the name of the dataset to load
split_name – the name of the split to load.
load_max_docs – a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function – a function that converts a dataset sample
into a Document.
Attributes
load_max_docs
The maximum number of documents to load.
sample_to_document_function
Custom function that transforms a dataset sample into a Document.
Methods
__init__(dataset_name, split_name[, ...])
Initialize the TensorflowDatasetLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(dataset_name: str, split_name: str, load_max_docs: Optional[int] = 100, sample_to_document_function: Optional[Callable[[Dict], Document]] = None)[source]¶
Initialize the TensorflowDatasetLoader.
Parameters
dataset_name – the name of the dataset to load
split_name – the name of the split to load.
load_max_docs – a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function – a function that converts a dataset sample
into a Document.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader.html
langchain.document_loaders.unstructured.UnstructuredFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load files.
The file loader
uses the unstructured partition function and will automatically detect the file
type. You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredFileIOLoader
with open("example.pdf", "rb") as f:
    loader = UnstructuredFileIOLoader(f, mode="elements", strategy="fast")
    docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(file[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html
|
98ede2cf8e5e-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileIOLoader¶
Google Drive
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html
|
0c9990d29cce-0
|
langchain.document_loaders.json_loader.JSONLoader¶
class langchain.document_loaders.json_loader.JSONLoader(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Loads a JSON file using a jq schema.
Example
[{"text": …}, {"text": …}, {"text": …}] -> schema = .[].text
{"key": [{"text": …}, {"text": …}, {"text": …}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON or JSON Lines file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results in a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format, default to True.
json_lines (bool) – Boolean flag to indicate whether the input is in
JSON Lines format.
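Example (a minimal sketch; example.json is a hypothetical file containing a list of objects with a "text" field, and the jq package must be installed):
from langchain.document_loaders import JSONLoader

loader = JSONLoader(file_path="example.json", jq_schema=".[].text")
docs = loader.load()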
Methods
__init__(file_path, jq_schema[, ...])
Initialize the JSONLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the JSON file.
load_and_split([text_splitter])
Load Documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html
|
0c9990d29cce-1
|
__init__(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON or JSON Lines file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results in a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format, default to True.
json_lines (bool) – Boolean flag to indicate whether the input is in
JSON Lines format.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the JSON file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html
|
ef2f48785333-0
|
langchain.document_loaders.parsers.pdf.PyPDFParser¶
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None)[source]¶
Loads a PDF with pypdf and chunks at character level.
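Example (a minimal sketch of using the parser directly; example.pdf is a hypothetical file and pypdf must be installed):
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyPDFParser

parser = PyPDFParser()
blob = Blob.from_path("example.pdf")
docs = parser.parse(blob)  # one Document per page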
Methods
__init__([password])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(password: Optional[Union[str, bytes]] = None)[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFParser.html
|
9bc5f3211142-0
|
langchain.document_loaders.helpers.detect_file_encodings¶
langchain.document_loaders.helpers.detect_file_encodings(file_path: str, timeout: int = 5) → List[FileEncoding][source]¶
Try to detect the file encoding.
Returns a list of FileEncoding tuples with the detected encodings ordered
by confidence.
Parameters
file_path – The path to the file to detect the encoding for.
timeout – The timeout in seconds for the encoding detection.
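Example (a minimal sketch; example.txt is a hypothetical file and the chardet package must be installed):
from langchain.document_loaders.helpers import detect_file_encodings

encodings = detect_file_encodings("example.txt")
print(encodings)  # FileEncoding results, ordered by confidence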
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.detect_file_encodings.html
|
ee8fc24ccde9-0
|
langchain.document_loaders.telegram.concatenate_rows¶
langchain.document_loaders.telegram.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.concatenate_rows.html
|
4d8fb390a4aa-0
|
langchain.document_loaders.rss.RSSFeedLoader¶
class langchain.document_loaders.rss.RSSFeedLoader(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any)[source]¶
Loader that uses newspaper to load news articles from RSS feeds.
Parameters
urls – URLs for RSS feeds to load. Each article in the feed is loaded into its own document.
opml – OPML file to load feed urls from. Only one of urls or opml should be provided. The value
can be a URL string, or OPML markup contents as a byte string or string.
continue_on_failure – If True, continue loading documents even if
loading fails for a particular URL.
show_progress_bar – If True, use tqdm to show a loading progress bar. Requires
tqdm to be installed, pip install tqdm.
**newsloader_kwargs – Any additional named arguments to pass to
NewsURLLoader.
Example
from langchain.document_loaders import RSSFeedLoader
loader = RSSFeedLoader(
urls=["<url-1>", "<url-2>"],
)
docs = loader.load()
The loader uses feedparser to parse RSS feeds. The feedparser library is not installed by default so you should
install it if using this loader:
https://pythonhosted.org/feedparser/
If you use OPML, you should also install listparser:
https://pythonhosted.org/listparser/
Finally, newspaper is used to process each article:
https://newspaper.readthedocs.io/en/latest/
Initialize with urls or OPML.
Methods
__init__([urls, opml, continue_on_failure, ...])
Initialize with urls or OPML.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html
|
4d8fb390a4aa-1
|
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any) → None[source]¶
Initialize with urls or OPML.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html
|
3d3c93ab5df9-0
|
langchain.document_loaders.parsers.pdf.PDFPlumberParser¶
class langchain.document_loaders.parsers.pdf.PDFPlumberParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Parse PDFs with PDFPlumber.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text()
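Example (a minimal sketch; example.pdf is a hypothetical file, pdfplumber must be installed, and the text_kwargs value is just an illustration of a pdfplumber extract_text() option):
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PDFPlumberParser

parser = PDFPlumberParser(text_kwargs={"x_tolerance": 1})
docs = parser.parse(Blob.from_path("example.pdf"))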
Methods
__init__([text_kwargs])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(text_kwargs: Optional[Mapping[str, Any]] = None) → None[source]¶
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text()
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFPlumberParser.html
|
e9955be377fb-0
|
langchain.document_loaders.gcs_directory.GCSDirectoryLoader¶
class langchain.document_loaders.gcs_directory.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '', loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Loads Documents from GCS.
Initialize with bucket and key name.
Parameters
project_name – The name of the project for the GCS bucket.
bucket – The name of the GCS bucket.
prefix – The prefix of the GCS bucket.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the GCSFileLoader
would use its default loader.
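Example (a minimal sketch; the project, bucket and prefix are placeholders, and google-cloud-storage credentials are assumed to be configured):
from langchain.document_loaders import GCSDirectoryLoader

loader = GCSDirectoryLoader(project_name="my-project", bucket="my-bucket", prefix="reports/")
docs = loader.load()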
Methods
__init__(project_name, bucket[, prefix, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(project_name: str, bucket: str, prefix: str = '', loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Initialize with bucket and key name.
Parameters
project_name – The name of the project for the GCS bucket.
bucket – The name of the GCS bucket.
prefix – The prefix of the GCS bucket.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the GCSFileLoader
would use its default loader.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html
|
e9955be377fb-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSDirectoryLoader¶
Google Cloud Storage
Google Cloud Storage Directory
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html
|
4a4369c5dc0f-0
|
langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter¶
class langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter(code: str)[source]¶
The abstract class for the code segmenter.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
abstract extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
abstract simplify_code() → str[source]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter.html
|
0954e1676d91-0
|
langchain.document_loaders.unstructured.satisfies_min_unstructured_version¶
langchain.document_loaders.unstructured.satisfies_min_unstructured_version(min_version: str) → bool[source]¶
Checks to see if the installed unstructured version exceeds the minimum version
for the feature in question.
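Example (a minimal sketch of guarding a version-dependent code path; the version string is only an illustration):
from langchain.document_loaders.unstructured import satisfies_min_unstructured_version

if satisfies_min_unstructured_version("0.5.4"):
    ...  # rely on the newer unstructured feature here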
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.satisfies_min_unstructured_version.html
|
119feba2e229-0
|
langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader¶
class langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader(path: str)[source]¶
Loads WhatsApp messages text file.
Initialize with path.
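Example (a minimal sketch; whatsapp_chat.txt is a hypothetical exported chat file):
from langchain.document_loaders import WhatsAppChatLoader

loader = WhatsAppChatLoader("whatsapp_chat.txt")
docs = loader.load()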
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using WhatsAppChatLoader¶
WhatsApp
WhatsApp Chat
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader.html
|
f845db557b04-0
|
langchain.document_loaders.csv_loader.UnstructuredCSVLoader¶
class langchain.document_loaders.csv_loader.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load CSV files. Like other
Unstructured loaders, UnstructuredCSVLoader can be used in both
“single” and “elements” mode. In “elements” mode, the CSV file is
loaded as a single Unstructured Table element, and an HTML
representation of the table is available under the “text_as_html”
key in the document metadata.
Examples
from langchain.document_loaders.csv_loader import UnstructuredCSVLoader
loader = UnstructuredCSVLoader("stanley-cups.csv", mode="elements")
docs = loader.load()
Parameters
file_path – The path to the CSV file.
mode – The mode to use when loading the CSV file.
Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the CSV file.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the CSV file.
mode – The mode to use when loading the CSV file.
Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html
|
f845db557b04-1
|
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredCSVLoader¶
CSV
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html
|
237d0f2d3b05-0
|
langchain.document_loaders.diffbot.DiffbotLoader¶
class langchain.document_loaders.diffbot.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]¶
Loads Diffbot JSON results for the given URLs.
Initialize with API token, ids, and key.
Parameters
api_token – Diffbot API token.
urls – List of URLs to load.
continue_on_failure – Whether to continue loading other URLs if one fails.
Defaults to True.
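Example (a minimal sketch; the token and URL are placeholders):
from langchain.document_loaders import DiffbotLoader

loader = DiffbotLoader(api_token="DIFFBOT_API_TOKEN", urls=["https://example.com/article"])
docs = loader.load()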
Methods
__init__(api_token, urls[, continue_on_failure])
Initialize with API token, ids, and key.
lazy_load()
A lazy loader for Documents.
load()
Extract text from Diffbot on all the URLs and return Documents
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]¶
Initialize with API token, ids, and key.
Parameters
api_token – Diffbot API token.
urls – List of URLs to load.
continue_on_failure – Whether to continue loading other URLs if one fails.
Defaults to True.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Extract text from Diffbot on all the URLs and return Documents
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DiffbotLoader¶
Diffbot
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.diffbot.DiffbotLoader.html
|
b47ae3ba2b28-0
|
langchain.document_loaders.reddit.RedditPostsLoader¶
class langchain.document_loaders.reddit.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Reddit posts loader.
Read posts on a subreddit.
First, you need to go to
https://www.reddit.com/prefs/apps/
and create your application
Initialize with client_id, client_secret, user_agent, search_queries, mode, categories, number_posts.
Example: https://www.reddit.com/r/learnpython/
Parameters
client_id – Reddit client id.
client_secret – Reddit client secret.
user_agent – Reddit user agent.
search_queries – The search queries.
mode – The mode.
categories – The categories. Default: [“new”]
number_posts – The number of posts. Default: 10
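Example (a minimal sketch; the credentials are placeholders, the praw package is required, and the mode value "subreddit" is an assumption about the expected mode string):
from langchain.document_loaders import RedditPostsLoader

loader = RedditPostsLoader(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="my-app",
    search_queries=["langchain"],
    mode="subreddit",
    categories=["new"],
    number_posts=10,
)
docs = loader.load()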
Methods
__init__(client_id, client_secret, ...[, ...])
Initialize with client_id, client_secret, user_agent, search_queries, mode,
lazy_load()
A lazy loader for Documents.
load()
Load reddits.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Initialize with client_id, client_secret, user_agent, search_queries, mode, categories, number_posts.
Example: https://www.reddit.com/r/learnpython/
Parameters
client_id – Reddit client id.
client_secret – Reddit client secret.
user_agent – Reddit user agent.
search_queries – The search queries.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html
|
b47ae3ba2b28-1
|
mode – The mode.
categories – The categories. Default: [“new”]
number_posts – The number of posts. Default: 10
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load reddits.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RedditPostsLoader¶
Reddit
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html
|
dcd47283cbbc-0
|
langchain.document_loaders.imsdb.IMSDbLoader¶
class langchain.document_loaders.imsdb.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)[source]¶
Loads IMSDb webpages.
Initialize with webpage path.
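Example (a minimal sketch; the script URL is only an illustration):
from langchain.document_loaders import IMSDbLoader

loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")
docs = loader.load()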
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpage.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)¶
Initialize with webpage path.
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html
|
dcd47283cbbc-1
|
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using IMSDbLoader¶
IMSDb
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html
|
2c3ee1ca900f-0
|
langchain.document_loaders.airbyte_json.AirbyteJSONLoader¶
class langchain.document_loaders.airbyte_json.AirbyteJSONLoader(file_path: str)[source]¶
Loads local airbyte json files.
Initialize with a file path. This should start with ‘/tmp/airbyte_local/’.
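Example (a minimal sketch; the file name is a hypothetical Airbyte local JSON destination output):
from langchain.document_loaders import AirbyteJSONLoader

loader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_example.jsonl")
docs = loader.load()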
Attributes
file_path
Path to the directory containing the json files.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path. This should start with ‘/tmp/airbyte_local/’.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteJSONLoader¶
Airbyte
Airbyte JSON
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte_json.AirbyteJSONLoader.html
|
6fb746617570-0
|
langchain.document_loaders.docugami.DocugamiLoader¶
class langchain.document_loaders.docugami.DocugamiLoader[source]¶
Bases: BaseLoader, BaseModel
Loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: Optional[str] = None¶
The Docugami API access token to use.
param api: str = 'https://api.docugami.com/v1preview1'¶
The Docugami API endpoint to use.
param docset_id: Optional[str] = None¶
The Docugami API docset ID to use.
param document_ids: Optional[Sequence[str]] = None¶
The Docugami API document IDs to use.
param file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None¶
The local file paths to use.
param min_chunk_size: int = 32¶
The minimum chunk size to use when parsing DGML. Defaults to 32.
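Example (a minimal sketch; the access token and docset ID are placeholders from your Docugami workspace, and lxml must be installed):
from langchain.document_loaders import DocugamiLoader

loader = DocugamiLoader(access_token="DOCUGAMI_API_KEY", docset_id="YOUR_DOCSET_ID")
docs = loader.load()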
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html
|
6fb746617570-1
|
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html
|
6fb746617570-2
|
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using DocugamiLoader¶
Docugami
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html
|
8663329536db-0
|
langchain.document_loaders.confluence.ConfluenceLoader¶
class langchain.document_loaders.confluence.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]¶
Load Confluence pages.
Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, Oauth2 login or personal access token
authentication.
Specify a list page_ids and/or space_key to load in the corresponding pages into
Document objects, if both are specified the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments, this
is set to False by default, if set to True all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is the
raw XML representation for storage. The view format is the HTML representation for
viewing, with macros rendered as they would appear to users. You can pass
an enum content_format argument to load() to specify the content format; this is
set to ContentFormat.STORAGE by default.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
8663329536db-1
|
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE", limit=50)
Parameters
url (str) – Base URL of the Confluence site, e.g. "https://yoursite.atlassian.com/wiki"
api_key (str, optional) – Confluence API key, defaults to None
username (str, optional) – Confluence username, defaults to None
oauth2 (dict, optional) – OAuth2 credentials, defaults to {}
token (str, optional) – Personal access token, defaults to None
cloud (bool, optional) – Whether the instance is a Confluence Cloud deployment, defaults to True
number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) – defaults to 2
max_retry_seconds (Optional[int], optional) – defaults to 10
confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
Methods
__init__(url[, api_key, username, oauth2, ...])
is_public_page(page)
Check if a page is publicly accessible.
lazy_load()
A lazy loader for Documents.
load([space_key, page_ids, label, cql, ...])
param space_key
Space key retrieved from a confluence URL, defaults to None
load_and_split([text_splitter])
Load Documents and split into chunks.
paginate_request(retrieval_method, **kwargs)
Paginate the various methods to retrieve groups of pages.
process_attachment(page_id[, ocr_languages])
process_doc(link)
process_image(link[, ocr_languages])
process_page(page, include_attachments, ...)
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
8663329536db-2
|
process_pages(pages, ...[, ocr_languages, ...])
Process a list of pages into a list of documents.
process_pdf(link[, ocr_languages])
process_svg(link[, ocr_languages])
process_xls(link)
validate_init_args([url, api_key, username, ...])
Validates proper combinations of init arguments
__init__(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]¶
is_public_page(page: dict) → bool[source]¶
Check if a page is publicly accessible.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, content_format: ContentFormat = ContentFormat.STORAGE, limit: Optional[int] = 50, max_pages: Optional[int] = 1000, ocr_languages: Optional[str] = None, keep_markdown_format: bool = False) → List[Document][source]¶
Parameters
space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
8663329536db-3
|
page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None
label (Optional[str], optional) – Get all pages with this label, defaults to None
cql (Optional[str], optional) – CQL Expression, defaults to None
include_restricted_content (bool, optional) – defaults to False
include_archived_content (bool, optional) – Whether to include archived content,
defaults to False
include_attachments (bool, optional) – defaults to False
include_comments (bool, optional) – defaults to False
content_format (ContentFormat) – Specify content format, defaults to ContentFormat.STORAGE
limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults to 1000
ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a
language, you’ll first need to install the appropriate
Tesseract language pack.
keep_markdown_format (bool) – Whether to keep the markdown format, defaults to
False
Raises
ValueError – Errors while validating input.
ImportError – Required dependencies not installed.
Returns
List of loaded Documents.
Return type
List[Document]
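Example (a minimal sketch of loading specific pages in view format; the page ID is a placeholder and personal-access-token auth is one of the supported options):
from langchain.document_loaders import ConfluenceLoader
from langchain.document_loaders.confluence import ContentFormat

loader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="YOUR_PAT")
docs = loader.load(page_ids=["123456"], content_format=ContentFormat.VIEW)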
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
paginate_request(retrieval_method: Callable, **kwargs: Any) → List[source]¶
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn’t match the limit value. If limit is >100 confluence
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
8663329536db-4
|
seems to cap the response to 100. Also, due to the Atlassian Python
package, we don’t get the “next” values from the “_links” key because
they only return the value from the result key. So here, the pagination
starts from 0 and goes until the max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) – Function used to retrieve docs
Returns
List of documents
Return type
List
process_attachment(page_id: str, ocr_languages: Optional[str] = None) → List[str][source]¶
process_doc(link: str) → str[source]¶
process_image(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_page(page: dict, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None, keep_markdown_format: Optional[bool] = False) → Document[source]¶
process_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool, content_format: ContentFormat, ocr_languages: Optional[str] = None, keep_markdown_format: Optional[bool] = False) → List[Document][source]¶
Process a list of pages into a list of documents.
process_pdf(link: str, ocr_languages: Optional[str] = None) → str[source]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
8663329536db-5
|
process_svg(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_xls(link: str) → str[source]¶
static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) → Optional[List][source]¶
Validates proper combinations of init arguments
Examples using ConfluenceLoader¶
Confluence
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
|
b90f057d2da3-0
|
langchain.document_loaders.blockchain.BlockchainType¶
class langchain.document_loaders.blockchain.BlockchainType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the supported blockchains.
ETH_MAINNET = 'eth-mainnet'¶
ETH_GOERLI = 'eth-goerli'¶
POLYGON_MAINNET = 'polygon-mainnet'¶
POLYGON_MUMBAI = 'polygon-mumbai'¶
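A minimal sketch of selecting a network value (the printed string is the enum's value):
from langchain.document_loaders.blockchain import BlockchainType

chain = BlockchainType.POLYGON_MAINNET
print(chain.value)  # "polygon-mainnet"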
Examples using BlockchainType¶
Blockchain
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainType.html
|
f96ab2eba2bb-0
|
langchain.document_loaders.excel.UnstructuredExcelLoader¶
class langchain.document_loaders.excel.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load Excel files. Like other
Unstructured loaders, UnstructuredExcelLoader can be used in both
“single” and “elements” mode. In “elements” mode, each sheet in the
Excel file becomes an Unstructured Table element, and an HTML
representation of the table is available under the “text_as_html”
key in the document metadata.
Examples
from langchain.document_loaders.excel import UnstructuredExcelLoader
loader = UnstructuredExcelLoader("stanley-cups.xlsx", mode="elements")
docs = loader.load()
Parameters
file_path – The path to the Microsoft Excel file.
mode – The mode to use when partitioning the file. See unstructured docs
for more info. Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the Microsoft Excel file.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the Microsoft Excel file.
mode – The mode to use when partitioning the file. See unstructured docs
for more info. Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html
|
f96ab2eba2bb-1
|
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredExcelLoader¶
Microsoft Excel
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html
|
5845a9fa6aeb-0
|
langchain.document_loaders.slack_directory.SlackDirectoryLoader¶
class langchain.document_loaders.slack_directory.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Loads documents from a Slack directory dump.
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to the Slack directory dump zip file.
workspace_url (Optional[str]) – The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
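Example (a minimal sketch; the export zip and workspace URL are placeholders):
from langchain.document_loaders import SlackDirectoryLoader

loader = SlackDirectoryLoader("slack_export.zip", workspace_url="https://my-workspace.slack.com")
docs = loader.load()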
Methods
__init__(zip_path[, workspace_url])
Initialize the SlackDirectoryLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the Slack directory dump.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to the Slack directory dump zip file.
workspace_url (Optional[str]) – The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the Slack directory dump.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SlackDirectoryLoader¶
Slack
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.slack_directory.SlackDirectoryLoader.html
|
1d8c6a4d106a-0
|
langchain.document_loaders.parsers.html.bs4.BS4HTMLParser¶
class langchain.document_loaders.parsers.html.bs4.BS4HTMLParser(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any)[source]¶
Parser that uses beautiful soup to parse HTML files.
Initialize a bs4 based HTML parser.
Methods
__init__(*[, features, get_text_separator])
Initialize a bs4 based HTML parser.
lazy_parse(blob)
Load HTML document into document objects.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any) → None[source]¶
Initialize a bs4 based HTML parser.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Load HTML document into document objects.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.html.bs4.BS4HTMLParser.html
|
20f892d75722-0
|
langchain.document_loaders.facebook_chat.concatenate_rows¶
langchain.document_loaders.facebook_chat.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used.
Parameters
row – dictionary containing message information.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.concatenate_rows.html
|
dba3aa50fb9e-0
|
langchain.document_loaders.hn.HNLoader¶
class langchain.document_loaders.hn.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)[source]¶
Load Hacker News data from either main page results or the comments page.
Initialize with webpage path.
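Example (a minimal sketch; the item URL is only an illustration):
from langchain.document_loaders import HNLoader

loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
docs = loader.load()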
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Get important HN webpage information.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_comments(soup_info)
Load comments from a HN post.
load_results(soup)
Load items from an HN page.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html
|
dba3aa50fb9e-1
|
Initialize with webpage path.
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Get important HN webpage information.
HN webpage components are:
title
content
source url,
time of post
author of the post
number of comments
rank of the post
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_comments(soup_info: Any) → List[Document][source]¶
Load comments from a HN post.
load_results(soup: Any) → List[Document][source]¶
Load items from an HN page.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using HNLoader¶
Hacker News
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html
|
8d7ac3433847-0
|
langchain.document_loaders.pdf.OnlinePDFLoader¶
class langchain.document_loaders.pdf.OnlinePDFLoader(file_path: str)[source]¶
Loads online PDFs.
Initialize with a file path.
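Example (a minimal sketch; the PDF URL is a placeholder, and the file is fetched over HTTP before parsing):
from langchain.document_loaders import OnlinePDFLoader

loader = OnlinePDFLoader("https://example.com/paper.pdf")
docs = loader.load()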
Attributes
source
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.OnlinePDFLoader.html
|
b9874fba7908-0
|
langchain.document_loaders.stripe.StripeLoader¶
class langchain.document_loaders.stripe.StripeLoader(resource: str, access_token: Optional[str] = None)[source]¶
Loader that fetches data from Stripe.
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
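Example (a minimal sketch; "charges" is an assumed resource name and the token is a placeholder):
from langchain.document_loaders import StripeLoader

loader = StripeLoader("charges", access_token="STRIPE_ACCESS_TOKEN")
docs = loader.load()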
Methods
__init__(resource[, access_token])
Initialize with a resource and an access token.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(resource: str, access_token: Optional[str] = None) → None[source]¶
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using StripeLoader¶
Stripe
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.stripe.StripeLoader.html
|
a3930e7b3628-0
|
langchain.document_loaders.unstructured.UnstructuredBaseLoader¶
class langchain.document_loaders.unstructured.UnstructuredBaseLoader(mode: str = 'single', post_processors: List[Callable] = [], **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load files.
Initialize with file path.
Methods
__init__([mode, post_processors])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(mode: str = 'single', post_processors: List[Callable] = [], **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredBaseLoader.html
|
1980fd1d2383-0
|
langchain.document_loaders.merge.MergedDataLoader¶
class langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]¶
Merge documents from a list of loaders
Initialize with a list of loaders
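Example (a minimal sketch combining two other loaders; the file and URL are placeholders):
from langchain.document_loaders import TextLoader, WebBaseLoader
from langchain.document_loaders.merge import MergedDataLoader

loader_all = MergedDataLoader(loaders=[TextLoader("notes.txt"), WebBaseLoader("https://example.com")])
docs = loader_all.load()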
Methods
__init__(loaders)
Initialize with a list of loaders
lazy_load()
Lazy load docs from each individual loader.
load()
Load docs.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(loaders: List)[source]¶
Initialize with a list of loaders
lazy_load() → Iterator[Document][source]¶
Lazy load docs from each individual loader.
load() → List[Document][source]¶
Load docs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MergedDataLoader¶
MergeDocLoader
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.merge.MergedDataLoader.html
|
52ebb595960d-0
|
langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
Transcribe and parse audio files.
Audio transcription with an OpenAI Whisper model run locally via transformers.
Parameters:
device - device to use.
NOTE: by default the GPU is used if available;
to force CPU, set device = "cpu".
lang_model - Whisper model to use, for example "openai/whisper-medium".
forced_decoder_ids - forced decoder token ids for the multilingual model,
usage example:
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
Methods
__init__([device, lang_model, ...])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html
|
0743b92475ba-0
|
langchain.document_loaders.embaas.EmbaasBlobLoader¶
class langchain.document_loaders.embaas.EmbaasBlobLoader[source]¶
Bases: BaseEmbaasLoader, BaseBlobParser
Embaas’s document byte loader.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the embaas document extraction API.
param embaas_api_key: Optional[str] = None¶
The API key for the embaas document extraction API.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the embaas document extraction API.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html
|
0743b92475ba-1
|
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html
|
0743b92475ba-2
|
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Parses the blob lazily.
Parameters
blob – The blob to parse.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
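For illustration, a minimal sketch of the lazy path (the file name is a placeholder and EMBAAS_API_KEY is assumed to be set in the environment); lazy_parse streams Documents one at a time, while parse materialises the same results as a list:
# A minimal usage sketch, assuming EMBAAS_API_KEY is set and "report.pdf" exists locally.
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.embaas import EmbaasBlobLoader

loader = EmbaasBlobLoader()
blob = Blob.from_path("report.pdf")  # hypothetical example file

# Stream documents lazily instead of collecting them all at once.
for doc in loader.lazy_parse(blob):
    print(len(doc.page_content))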
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html
|
0743b92475ba-3
|
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using EmbaasBlobLoader¶
Embaas
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html
|
436b9e3e1436-0
|
langchain.document_loaders.notebook.NotebookLoader¶
class langchain.document_loaders.notebook.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Loads .ipynb notebook files.
Initialize with path.
Parameters
path – The path to load the notebook from.
include_outputs – Whether to include the outputs of the cell.
Defaults to False.
max_output_length – Maximum length of the output to be displayed.
Defaults to 10.
remove_newline – Whether to remove newlines from the notebook.
Defaults to False.
traceback – Whether to return a traceback of the error.
Defaults to False.
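Example (a minimal sketch; the notebook path is a placeholder, not part of this reference):
# A minimal usage sketch, assuming "analysis.ipynb" exists in the working directory.
from langchain.document_loaders import NotebookLoader

loader = NotebookLoader(
    "analysis.ipynb",        # hypothetical notebook path
    include_outputs=True,    # also include each cell's outputs
    max_output_length=20,    # truncate long outputs
    remove_newline=True,
)
docs = loader.load()
print(docs[0].page_content[:200])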
Methods
__init__(path[, include_outputs, ...])
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Initialize with path.
Parameters
path – The path to load the notebook from.
include_outputs – Whether to include the outputs of the cell.
Defaults to False.
max_output_length – Maximum length of the output to be displayed.
Defaults to 10.
remove_newline – Whether to remove newlines from the notebook.
Defaults to False.
traceback – Whether to return a traceback of the error.
Defaults to False.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html
|
436b9e3e1436-1
|
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
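As a short, hedged sketch of load_and_split (the notebook path and splitter settings are illustrative assumptions): the loader first runs load() and then chunks the resulting Documents with the given splitter.
# A minimal usage sketch, assuming "analysis.ipynb" exists.
from langchain.document_loaders import NotebookLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = NotebookLoader("analysis.ipynb")  # hypothetical notebook path
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = loader.load_and_split(text_splitter=splitter)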
Examples using NotebookLoader¶
Jupyter Notebook
Notebook
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html
|
ae3b1ccbd10e-0
|
langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader¶
class langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False)[source]¶
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
print(blob)
Initialize with path to directory and how to glob over it.
Parameters
path – Path to directory to load from
glob – Glob pattern relative to the specified path
by default set to pick up all non-hidden files
suffixes – Provide to keep only files with these suffixes
Useful when wanting to keep files with different suffixes
Suffixes must include the dot, e.g. “.txt”
show_progress – If true, will show a progress bar as the files are loaded.
This forces an iteration through all matching files
to count them prior to loading them.
Examples:
# Recursively load all text files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = FileSystemBlobLoader("/path/to/directory", glob="*")
Methods
__init__(path, *[, glob, suffixes, ...])
Initialize with path to directory and how to glob over it.
count_matching_files()
Count files that match the pattern without loading them.
yield_blobs()
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html
|
ae3b1ccbd10e-1
|
Count files that match the pattern without loading them.
yield_blobs()
Yield blobs that match the requested pattern.
__init__(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False) → None[source]¶
Initialize with path to directory and how to glob over it.
Parameters
path – Path to directory to load from
glob – Glob pattern relative to the specified path
by default set to pick up all non-hidden files
suffixes – Provide to keep only files with these suffixes
Useful when wanting to keep files with different suffixes
Suffixes must include the dot, e.g. “.txt”
show_progress – If true, will show a progress bar as the files are loaded.
This forces an iteration through all matching files
to count them prior to loading them.
Examples:
# Recursively load all text files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = FileSystemBlobLoader("/path/to/directory", glob="*")
count_matching_files() → int[source]¶
Count files that match the pattern without loading them.
yield_blobs() → Iterable[Blob][source]¶
Yield blobs that match the requested pattern.
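To tie the two methods together, a minimal sketch (the directory path and suffix filter are assumptions, not part of this reference):
# A minimal usage sketch, assuming "/data/docs" exists and contains .md files.
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader

loader = FileSystemBlobLoader("/data/docs", suffixes=[".md"], show_progress=True)

# count_matching_files walks the glob without reading file contents.
print(f"{loader.count_matching_files()} markdown files found")

# yield_blobs produces Blob objects that can be handed to any BaseBlobParser.
for blob in loader.yield_blobs():
    print(blob.source)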
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html
|
ea7c7ff1fbf3-0
|
langchain.document_loaders.srt.SRTLoader¶
class langchain.document_loaders.srt.SRTLoader(file_path: str)[source]¶
Loader for .srt (subtitle) files.
Initialize with a file path.
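Example (a minimal sketch; the subtitle file name is a placeholder and the pysrt dependency is assumed to be installed):
# A minimal usage sketch, assuming "movie.srt" is a local subtitle file.
from langchain.document_loaders import SRTLoader

loader = SRTLoader("movie.srt")  # hypothetical example file
docs = loader.load()
print(docs[0].page_content[:200])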
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load using pysrt file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load using pysrt file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SRTLoader¶
Subtitle
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.srt.SRTLoader.html
|
33ebb8691b9d-0
|
langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Loader that fetches data from the Spreedly API.
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
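Example (a minimal sketch; the environment variable and resource name below are placeholders, and valid values depend on your Spreedly account):
import os

from langchain.document_loaders import SpreedlyLoader

# A minimal usage sketch, assuming SPREEDLY_ACCESS_TOKEN is set in the environment
# and that "gateways_options" is a resource your account can read.
loader = SpreedlyLoader(
    access_token=os.environ["SPREEDLY_ACCESS_TOKEN"],  # hypothetical env var
    resource="gateways_options",                       # hypothetical resource name
)
docs = loader.load()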
Methods
__init__(access_token, resource)
Initialize with an access token and a resource.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(access_token: str, resource: str) → None[source]¶
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SpreedlyLoader¶
Spreedly
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html
|
e86696fe5062-0
|
langchain.document_loaders.rst.UnstructuredRSTLoader¶
class langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load RST files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredRSTLoader
loader = UnstructuredRSTLoader("example.rst", mode="elements", strategy="fast")
docs = loader.load()
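By contrast, a minimal sketch of the default “single” mode (the file name is a placeholder): the whole file comes back as one Document rather than one Document per element.
from langchain.document_loaders import UnstructuredRSTLoader
loader = UnstructuredRSTLoader("example.rst")  # mode="single" is the default
docs = loader.load()
print(len(docs))  # a single Document for the whole file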
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-rst
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
Methods
__init__(file_path[, mode])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html
|
e86696fe5062-1
|
Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredRSTLoader¶
RST
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html
|
1c6bc2cffb0c-0
|
langchain.document_loaders.pdf.BasePDFLoader¶
class langchain.document_loaders.pdf.BasePDFLoader(file_path: str)[source]¶
Base loader class for PDF files.
By default it checks for a local file, but if the file is a web path, it will download it
to a temporary file, use it, and then clean up the temporary file after completion.
Initialize with a file path.
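Because load() is abstract, concrete loaders subclass BasePDFLoader and implement it. A simplified, hedged sketch follows (the pypdf usage is an illustration under stated assumptions, not the actual implementation of any bundled loader):
# A minimal subclass sketch, assuming pypdf is installed.
# BasePDFLoader handles downloading web paths to a temporary file and
# exposes the resulting local path as self.file_path.
from typing import List

from pypdf import PdfReader

from langchain.docstore.document import Document
from langchain.document_loaders.pdf import BasePDFLoader


class SimplePDFLoader(BasePDFLoader):
    def load(self) -> List[Document]:
        reader = PdfReader(self.file_path)
        return [
            Document(
                page_content=page.extract_text() or "",
                metadata={"source": self.source, "page": i},
            )
            for i, page in enumerate(reader.pages)
        ]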
Attributes
source
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html
|