| id | text | source |
|---|---|---|
3f9d03633dfa-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] =... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html |
0b0f7c8b6eba-0 | langchain.document_loaders.cube_semantic.CubeSemanticLoader¶
class langchain.document_loaders.cube_semantic.CubeSemanticLoader(cube_api_url: str, cube_api_token: str, load_dimension_values: bool = True, dimension_values_limit: int = 10000, dimension_values_max_retries: int = 10, dimension_values_retry_delay: int = 3)[s... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html |
0b0f7c8b6eba-1 | Makes a call to Cube’s REST API metadata endpoint.
Returns
page_content=column_title + column_description
metadata
table_name
column_name
column_data_type
column_member_type
column_title
column_description
column_values
Return type
A list of documents with attributes
load_and_split(text_splitter: Optional[TextSplitter]... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.cube_semantic.CubeSemanticLoader.html |
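The row above says each returned document uses column_title plus column_description as page_content, with the listed fields as metadata. A minimal sketch of that document shape, using plain dicts in place of langchain Documents (the `, ` separator and `make_cube_document` helper are assumptions, not the real implementation):

```python
# Hypothetical sketch of the document shape CubeSemanticLoader returns:
# page_content combines the column title and description (separator assumed),
# and the metadata fields listed in the row above ride along.
def make_cube_document(column: dict) -> dict:
    """Build a Document-like dict from one Cube metadata column."""
    return {
        "page_content": f"{column['column_title']}, {column['column_description']}",
        "metadata": {
            key: column[key]
            for key in (
                "table_name",
                "column_name",
                "column_data_type",
                "column_member_type",
                "column_title",
                "column_description",
                "column_values",
            )
        },
    }

doc = make_cube_document({
    "table_name": "orders",
    "column_name": "status",
    "column_data_type": "string",
    "column_member_type": "dimension",
    "column_title": "Order Status",
    "column_description": "Lifecycle state of the order",
    "column_values": ["new", "shipped"],
})
```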
02fcf735db9d-0 | langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader¶
class langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader(conf: Any, bucket: str, key: str)[source]¶
Loader for Tencent Cloud COS file.
Initialize with COS config, bucket and key name.
:param conf(CosConfig): COS config.
:param bucket(str): ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader.html |
c62c4578922a-0 | langchain.document_loaders.airbyte.AirbyteSalesforceLoader¶
class langchain.document_loaders.airbyte.AirbyteSalesforceLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Methods
__init__(config, stream_name[... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteSalesforceLoader.html |
a0987e3944d3-0 | langchain.document_loaders.pdf.AmazonTextractPDFLoader¶
class langchain.document_loaders.pdf.AmazonTextractPDFLoader(file_path: str, textract_features: Optional[Sequence[str]] = None, client: Optional[Any] = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, endpoint_url: Optional[... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.AmazonTextractPDFLoader.html |
a0987e3944d3-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, textract_features: Optional[Sequence[str]] = None, client: Optional[Any] = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, endpoint_url: Optional[str] = None) → None[source]¶
Initializ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.AmazonTextractPDFLoader.html |
a1df2b6582ad-0 | langchain.document_loaders.modern_treasury.ModernTreasuryLoader¶
class langchain.document_loaders.modern_treasury.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]¶
Loader that fetches data from Modern Treasury.
Parameters
resource – The Modern Treasury re... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.modern_treasury.ModernTreasuryLoader.html |
a1df2b6582ad-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ModernTreasuryLoader¶
Modern Treasury | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.modern_treasury.ModernTreasuryLoader.html |
0c9f190d020d-0 | langchain.document_loaders.xorbits.XorbitsLoader¶
class langchain.document_loaders.xorbits.XorbitsLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Load Xorbits DataFrame.
Initialize with dataframe object.
Requirements: Must have xorbits installed. You can install with pip install xorbits.
Parameters
d... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xorbits.XorbitsLoader.html |
a539e1b67db9-0 | langchain.document_loaders.blob_loaders.schema.BlobLoader¶
class langchain.document_loaders.blob_loaders.schema.BlobLoader[source]¶
Abstract interface for blob loader implementations.
Implementers should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a s... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.BlobLoader.html |
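The contract described above is small: an implementation pulls raw content from some storage system and yields it lazily. A sketch of that interface (the `Blob` dataclass and in-memory loader here are illustrative stand-ins, not langchain's actual classes, though `yield_blobs` is the method name the real interface uses):

```python
# Minimal sketch of a BlobLoader-style interface: load raw content from a
# storage system according to some criteria and yield it lazily, one blob
# at a time. Blob and InMemoryBlobLoader are stand-ins for illustration.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Iterator


@dataclass
class Blob:
    data: bytes
    source: str


class BlobLoader(ABC):
    @abstractmethod
    def yield_blobs(self) -> Iterator[Blob]:
        """Yield blobs matching the loader's criteria, lazily."""


class InMemoryBlobLoader(BlobLoader):
    """Trivial implementation backed by a dict, for demonstration only."""

    def __init__(self, items: Dict[str, bytes]):
        self.items = items

    def yield_blobs(self) -> Iterator[Blob]:
        for source, data in self.items.items():
            yield Blob(data=data, source=source)


blobs = list(InMemoryBlobLoader({"a.txt": b"hello"}).yield_blobs())
```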
9b0d7a74a039-0 | langchain.document_loaders.pdf.PyPDFium2Loader¶
class langchain.document_loaders.pdf.PyPDFium2Loader(file_path: str)[source]¶
Loads a PDF with pypdfium2 and chunks at character level.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
Lazy load given path... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFium2Loader.html |
b3f9664657e1-0 | langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader¶
class langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Load PySpark DataFrames
Init... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html |
b3f9664657e1-1 | A lazy loader for document content.
load() → List[Document][source]¶
Load from the dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting docum... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html |
3cf70b9c1c4f-0 | langchain.document_loaders.toml.TomlLoader¶
class langchain.document_loaders.toml.TomlLoader(source: Union[str, Path])[source]¶
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
Initialize the TomlL... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.toml.TomlLoader.html |
b70b50dbe1c8-0 | langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader¶
class langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]¶
Loads Documents from Azure Blob Storage.
Initialize with connection string, c... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html |
ec425bcfb8ba-0 | langchain.document_loaders.mastodon.MastodonTootsLoader¶
class langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Mastodon ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html |
ec425bcfb8ba-1 | exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html |
e970c2e0be75-0 | langchain.document_loaders.generic.GenericLoader¶
class langchain.document_loaders.generic.GenericLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser)[source]¶
A generic document loader.
A generic document loader that allows combining an arbitrary blob loader with
a blob parser.
Examples
from langchain.document... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html |
e970c2e0be75-1 | from_filesystem(path, *[, glob, suffixes, ...])
Create a generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into sentences.
__init__(blob_loader: BlobLoader, blob_parser: BaseBlobParser) ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html |
959ec3b312f1-0 | langchain.document_loaders.unstructured.UnstructuredFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load files.
The file loader uses the
unstructured partition f... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html |
959ec3b312f1-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileLoader¶
Unstructured
Unstructured File | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html |
4e8ee19eb2bf-0 | langchain.document_loaders.trello.TrelloLoader¶
class langchain.document_loaders.trello.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html |
4e8ee19eb2bf-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'l... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html |
4e8ee19eb2bf-2 | include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
lazy_load() → Iter... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html |
ed38b935c7aa-0 | langchain.document_loaders.email.OutlookMessageLoader¶
class langchain.document_loaders.email.OutlookMessageLoader(file_path: str)[source]¶
Loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.OutlookMessageLoader.html |
1933f448eac4-0 | langchain.document_loaders.parsers.grobid.GrobidParser¶
class langchain.document_loaders.parsers.grobid.GrobidParser(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument')[source]¶
Loader that uses Grobid to load article PDF files.
Methods
__init__(segment_sentences[, grobid_... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.GrobidParser.html |
fc4618a131f4-0 | langchain.document_loaders.parsers.pdf.PyMuPDFParser¶
class langchain.document_loaders.parsers.pdf.PyMuPDFParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Parse PDFs with PyMuPDF.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
Methods
__init__([text_kw... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyMuPDFParser.html |
64231f264c6d-0 | langchain.document_loaders.embaas.BaseEmbaasLoader¶
class langchain.document_loaders.embaas.BaseEmbaasLoader[source]¶
Bases: BaseModel
Base class for embedding a model into an Embaas document extraction API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the in... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html |
64231f264c6d-1 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[boo... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html |
64231f264c6d-2 | classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html |
31442c1fcce9-0 | langchain.document_loaders.directory.DirectoryLoader¶
class langchain.document_loaders.directory.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: ~typing.Union[~typing.Type[~langchain.document_loaders.unstructured.UnstructuredFileLoader], ~typing.Typ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.directory.DirectoryLoader.html |
31442c1fcce9-1 | lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_file(item, path, docs, pbar)
Load a file.
__init__(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: ~typing.Union[~typing.Type[... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.directory.DirectoryLoader.html |
31442c1fcce9-2 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use f... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.directory.DirectoryLoader.html |
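DirectoryLoader's default glob is '**/[!.]*': recursive, but names beginning with a dot are excluded. A pure-pathlib demonstration of what that pattern matches (assuming, as seems likely, it is applied the same way `Path.glob` applies it):

```python
# Demonstrates DirectoryLoader's default glob pattern '**/[!.]*':
# '**' recurses into subdirectories and '[!.]*' rejects dotfile names.
# Pure stdlib; no langchain needed.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "notes.txt").write_text("visible")
    (root / ".hidden").write_text("skipped: name starts with a dot")
    (root / "sub").mkdir()
    (root / "sub" / "more.md").write_text("also visible, via '**'")

    # glob also yields the 'sub' directory itself, hence the is_file() filter
    matched = sorted(p.name for p in root.glob("**/[!.]*") if p.is_file())
# matched == ["more.md", "notes.txt"]
```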
e75196f17d26-0 | langchain.document_loaders.arxiv.ArxivLoader¶
class langchain.document_loaders.arxiv.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]¶
Loads a query result from arxiv.org into a list of Documents.
The loader converts the original PDF format into the te... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arxiv.ArxivLoader.html |
af2d5b92e7cc-0 | langchain.document_loaders.airbyte.AirbyteStripeLoader¶
class langchain.document_loaders.airbyte.AirbyteStripeLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Methods
__init__(config, stream_name[, ...])
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteStripeLoader.html |
0c5b00808656-0 | langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to re-use
a parser i... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html |
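The parser interface described above turns the raw data in a blob into one or more documents, and composes with any blob loader. A hedged sketch, mirroring the lazy_parse/parse split that appears elsewhere on these pages (the `Blob` and `LineParser` classes are stand-ins, not langchain's implementations):

```python
# Illustrative blob-parser interface: lazy_parse yields Document-like dicts
# one at a time; parse() simply materializes them. Composing this with any
# blob loader reuses the same parsing logic across storage systems.
from abc import ABC, abstractmethod
from typing import Iterator, List


class Blob:
    def __init__(self, data: bytes, source: str):
        self.data, self.source = data, source


class BaseBlobParser(ABC):
    @abstractmethod
    def lazy_parse(self, blob: Blob) -> Iterator[dict]:
        """Lazily parse the blob into Document-like dicts."""

    def parse(self, blob: Blob) -> List[dict]:
        """Eagerly parse the blob by exhausting lazy_parse."""
        return list(self.lazy_parse(blob))


class LineParser(BaseBlobParser):
    """One document per non-empty line: a trivially composable example parser."""

    def lazy_parse(self, blob: Blob) -> Iterator[dict]:
        for line in blob.data.decode().splitlines():
            if line.strip():
                yield {"page_content": line, "metadata": {"source": blob.source}}


docs = LineParser().parse(Blob(b"one\n\ntwo\n", "demo.txt"))
```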
cee78d0f5390-0 | langchain.document_loaders.chatgpt.concatenate_rows¶
langchain.document_loaders.chatgpt.concatenate_rows(message: dict, title: str) → str[source]¶
Combine message information in a readable format ready to be used.
:param message: Message to be concatenated
:param title: Title of the conversation
Returns
Concatenated me... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.concatenate_rows.html |
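concatenate_rows combines a message dict and the conversation title into one readable string. The exact field names and layout are not shown in the fragment above, so the following is a guessed layout, not the real implementation:

```python
# Hedged sketch of a concatenate_rows-style helper: fold a message's sender,
# date, and text together with the conversation title into a readable line.
# The field names and the output format here are assumptions.
def concatenate_rows(message: dict, title: str) -> str:
    sender = message.get("sender", "unknown")
    date = message.get("date", "")
    text = message.get("text", "")
    return f"{title} - {sender} on {date}: {text}\n"


row = concatenate_rows({"sender": "user", "date": "2023-01-01", "text": "hi"}, "Chat")
```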
8cfcb1be7d34-0 | langchain.document_loaders.parsers.language.language_parser.LanguageParser¶
class langchain.document_loaders.parsers.language.language_parser.LanguageParser(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Language parser that splits code using the respective language syntax.
Each top-level funct... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
8cfcb1be7d34-1 | Parameters
language – If None (default), it will try to infer language from source.
parser_threshold – Minimum lines needed to activate parsing (0 by default).
Methods
__init__([language, parser_threshold])
Language parser that splits code using the respective language syntax.
lazy_parse(blob)
Lazy parsing interface.
pa... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
89ba76df28e0-0 | langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader¶
class langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader(dataset_name: str, split_name: str, load_max_docs: Optional[int] = 100, sample_to_document_function: Optional[Callable[[Dict], Document]] = None)[source]¶
Loads from Tenso... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader.html |
89ba76df28e0-1 | Attributes
load_max_docs
The maximum number of documents to load.
sample_to_document_function
Custom function that transforms a dataset sample into a Document.
Methods
__init__(dataset_name, split_name[, ...])
Initialize the TensorflowDatasetLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader.html |
98ede2cf8e5e-0 | langchain.document_loaders.unstructured.UnstructuredFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load files.
The file loader
uses the unstructured partition ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html |
98ede2cf8e5e-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredFileIOLoader¶
Google Drive | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html |
0c9990d29cce-0 | langchain.document_loaders.json_loader.JSONLoader¶
class langchain.document_loaders.json_loader.JSONLoader(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Loads a J... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html |
0c9990d29cce-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Initialize the JSONLoader.
Parameters... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html |
ef2f48785333-0 | langchain.document_loaders.parsers.pdf.PyPDFParser¶
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None)[source]¶
Loads a PDF with pypdf and chunks at character level.
Methods
__init__([password])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blo... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFParser.html |
9bc5f3211142-0 | langchain.document_loaders.helpers.detect_file_encodings¶
langchain.document_loaders.helpers.detect_file_encodings(file_path: str, timeout: int = 5) → List[FileEncoding][source]¶
Try to detect the file encoding.
Returns a list of FileEncoding tuples with the detected encodings ordered
by confidence.
Parameters
file_pat... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.detect_file_encodings.html |
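detect_file_encodings returns candidate encodings ordered by confidence; the real helper relies on a detection library under a timeout. A stdlib-only approximation simply tries a fixed candidate list and keeps the codecs that decode cleanly, with no confidence scoring (purely illustrative; `guess_encodings` is not the langchain API):

```python
# Stdlib approximation of encoding detection: try candidate codecs in order
# and keep those that decode the bytes without error. Unlike the real helper
# there is no confidence score or timeout; result order is just list order.
from typing import List


def guess_encodings(data: bytes, candidates=("utf-8", "utf-16", "latin-1")) -> List[str]:
    viable = []
    for enc in candidates:
        try:
            data.decode(enc)
        except (UnicodeDecodeError, UnicodeError):
            continue
        viable.append(enc)
    return viable


encs = guess_encodings("héllo".encode("utf-8"))
```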
ee8fc24ccde9-0 | langchain.document_loaders.telegram.concatenate_rows¶
langchain.document_loaders.telegram.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.concatenate_rows.html |
4d8fb390a4aa-0 | langchain.document_loaders.rss.RSSFeedLoader¶
class langchain.document_loaders.rss.RSSFeedLoader(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any)[source]¶
Loader that uses newspaper to load news articles from R... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |
4d8fb390a4aa-1 | Initialize with urls or OPML.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = Fals... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |
3d3c93ab5df9-0 | langchain.document_loaders.parsers.pdf.PDFPlumberParser¶
class langchain.document_loaders.parsers.pdf.PDFPlumberParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Parse PDFs with PDFPlumber.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to pdfplumber.Page.extract_text()
Methods... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFPlumberParser.html |
e9955be377fb-0 | langchain.document_loaders.gcs_directory.GCSDirectoryLoader¶
class langchain.document_loaders.gcs_directory.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '', loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Loads Documents from GCS.
Initialize with bucket and key name.
Parameters
p... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
e9955be377fb-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSDirectoryLoader¶
Google Cloud Storage
Google Cloud Storage Directory | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
4a4369c5dc0f-0 | langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter¶
class langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter(code: str)[source]¶
The abstract class for the code segmenter.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter.html |
0954e1676d91-0 | langchain.document_loaders.unstructured.satisfies_min_unstructured_version¶
langchain.document_loaders.unstructured.satisfies_min_unstructured_version(min_version: str) → bool[source]¶
Checks to see if the installed unstructured version exceeds the minimum version
for the feature in question. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.satisfies_min_unstructured_version.html |
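The check above gates features on the installed package version meeting a minimum. A hedged sketch of the comparison step using integer-tuple comparison over dotted version strings (it ignores pre-release/dev suffixes, which real packaging-version logic handles; `satisfies_min_version` and its arguments are illustrative):

```python
# Sketch of a minimum-version gate: split dotted version strings into integer
# tuples and compare lexicographically. Pre-release suffixes like '0.10.0rc1'
# are NOT handled; real packaging metadata logic is more careful.
def satisfies_min_version(installed_version: str, min_version: str) -> bool:
    def as_tuple(v: str):
        return tuple(int(part) for part in v.split("."))

    return as_tuple(installed_version) >= as_tuple(min_version)


ok = satisfies_min_version("0.10.2", "0.9")
```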
119feba2e229-0 | langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader¶
class langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader(path: str)[source]¶
Loads WhatsApp messages from a text file.
Initialize with path.
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_an... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader.html |
f845db557b04-0 | langchain.document_loaders.csv_loader.UnstructuredCSVLoader¶
class langchain.document_loaders.csv_loader.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load CSV files. Like other
Unstructured loaders, UnstructuredCSVLoader can be used in... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html |
f845db557b04-1 | A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCh... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html |
237d0f2d3b05-0 | langchain.document_loaders.diffbot.DiffbotLoader¶
class langchain.document_loaders.diffbot.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]¶
Loads Diffbot JSON files.
Initialize with API token, ids, and key.
Parameters
api_token – Diffbot API token.
urls – List of URLs to load.
co... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.diffbot.DiffbotLoader.html |
b47ae3ba2b28-0 | langchain.document_loaders.reddit.RedditPostsLoader¶
class langchain.document_loaders.reddit.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Reddit posts loader.
Read posts on... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html |
b47ae3ba2b28-1 | user_agent – Reddit user agent.
search_queries – The search queries.
mode – The mode.
categories – The categories. Default: [“new”]
number_posts – The number of posts. Default: 10
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load reddits.
load_and_split(text_splitter: ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html |
dcd47283cbbc-0 | langchain.document_loaders.imsdb.IMSDbLoader¶
class langchain.document_loaders.imsdb.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)[source]¶
Loads IMSDb webpages.
Initial... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
dcd47283cbbc-1 | Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documen... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
2c3ee1ca900f-0 | langchain.document_loaders.airbyte_json.AirbyteJSONLoader¶
class langchain.document_loaders.airbyte_json.AirbyteJSONLoader(file_path: str)[source]¶
Loads local Airbyte JSON files.
Initialize with a file path. This should start with ‘/tmp/airbyte_local/’.
Attributes
file_path
Path to the directory containing the json fi... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte_json.AirbyteJSONLoader.html |
6fb746617570-0 | langchain.document_loaders.docugami.DocugamiLoader¶
class langchain.document_loaders.docugami.DocugamiLoader[source]¶
Bases: BaseLoader, BaseModel
Loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Create a new model by parsing and validating input data from keyword arguments... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
6fb746617570-1 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
6fb746617570-2 | load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplit... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
8663329536db-0 | langchain.document_loaders.confluence.ConfluenceLoader¶
class langchain.document_loaders.confluence.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, mi... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
8663329536db-1 | Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) – Base URL for the Confluence site
api_key (str, optional) – API key used to authenticate, defaults to... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html
8663329536db-2 | process_image(link[, ocr_languages])
process_page(page, include_attachments, ...)
process_pages(pages, ...[, ocr_languages, ...])
Process a list of pages into a list of documents.
process_pdf(link[, ocr_languages])
process_svg(link[, ocr_languages])
process_xls(link)
validate_init_args([url, api_key, username, ...])
Va... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
8663329536db-3 | page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None
label (Optional[str], optional) – Get all pages with this label, defaults to None
cql (Optional[str], optional) – CQL Expression, defaults to None
include_restricted_content (bool, optional) – defaults to False
include_archiv... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
8663329536db-4 | doesn’t match the limit value. If limit is >100 confluence
seems to cap the response to 100. Also, due to the Atlassian Python
package, we don’t get the “next” values from the “_links” key because
they only return the value from the result key. So here, the pagination
starts from 0 and goes until the max_pages, getting... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
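The pagination scheme described above can be sketched generically: start at offset 0, advance by the number of results actually returned, and stop when a page comes back shorter than the requested limit. The `fetch` callable here is a hypothetical stand-in for the API call:

```python
from typing import Callable, List


def paginate(
    fetch: Callable[[int, int], list],
    limit: int = 100,
    max_pages: int = 1000,
) -> list:
    """Fetch pages starting at offset 0, advancing by the number of
    results actually returned, until a short (or empty) page arrives."""
    results: list = []
    start = 0
    for _ in range(max_pages):
        batch = fetch(start, limit)
        results.extend(batch)
        if len(batch) < limit:  # server returned fewer than requested: done
            break
        start += len(batch)
    return results
```

Advancing by `len(batch)` rather than by `limit` keeps the loop correct even when the server silently caps the page size.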
8663329536db-5 | process_svg(link: str, ocr_languages: Optional[str] = None) → str[source]¶
process_xls(link: str) → str[source]¶
static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) → Optional[List][source]¶
Valid... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
b90f057d2da3-0 | langchain.document_loaders.blockchain.BlockchainType¶
class langchain.document_loaders.blockchain.BlockchainType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the supported blockchains.
ETH_MAINNET = 'eth-mainnet'¶
ETH_GOERLI = 'eth-goerli'¶
POLYGON_MAINNET ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainType.html |
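The two fully visible members above can be mirrored with a str-valued Enum (the real class may be a plain Enum; the str mixin is an illustrative choice). Only the members whose values appear above are included; the rest follow the same naming pattern:

```python
from enum import Enum


class BlockchainType(str, Enum):
    """Subset of the supported blockchains listed above."""
    ETH_MAINNET = "eth-mainnet"
    ETH_GOERLI = "eth-goerli"
```

A value lookup like `BlockchainType("eth-mainnet")` returns the matching member, which is convenient when the chain name arrives as a plain string.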
f96ab2eba2bb-0 | langchain.document_loaders.excel.UnstructuredExcelLoader¶
class langchain.document_loaders.excel.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load Excel files. Like other
Unstructured loaders, UnstructuredExcelLoader can be used in b... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html |
f96ab2eba2bb-1 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html |
5845a9fa6aeb-0 | langchain.document_loaders.slack_directory.SlackDirectoryLoader¶
class langchain.document_loaders.slack_directory.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Loads documents from a Slack directory dump.
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to th... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.slack_directory.SlackDirectoryLoader.html |
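Reading a Slack export zip can be sketched with only the standard library. The file layout assumed here (`<channel>/<date>.json`, each file a JSON list of message objects) is based on Slack's export format, not taken from the loader's source:

```python
import json
import zipfile
from pathlib import Path
from typing import Dict, Iterator, Tuple


def iter_slack_messages(zip_path: str) -> Iterator[Tuple[str, Dict]]:
    """Yield (channel_name, message_dict) pairs from a Slack export zip.

    Assumes the usual export layout: <channel>/<date>.json files, each
    holding a JSON list of message objects.
    """
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            path = Path(name)
            if path.suffix != ".json" or len(path.parts) < 2:
                continue  # skip non-JSON entries and top-level metadata files
            channel = path.parts[0]
            for message in json.loads(zf.read(name)):
                yield channel, message
```

The channel name comes from the directory component, so it can be attached to each document's metadata.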
1d8c6a4d106a-0 | langchain.document_loaders.parsers.html.bs4.BS4HTMLParser¶
class langchain.document_loaders.parsers.html.bs4.BS4HTMLParser(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any)[source]¶
Parser that uses Beautiful Soup to parse HTML files.
Initialize a bs4 based HTML parser.
Methods
__init__(*[, featur... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.html.bs4.BS4HTMLParser.html |
20f892d75722-0 | langchain.document_loaders.facebook_chat.concatenate_rows¶
langchain.document_loaders.facebook_chat.concatenate_rows(row: dict) → str[source]¶
Combine message information into a readable string ready for use.
Parameters
row – dictionary containing message information. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.concatenate_rows.html |
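A plausible re-implementation of this helper, for illustration only. The field names (`sender_name`, `timestamp_ms`, `content`) are assumptions based on Facebook's chat-export JSON, not confirmed from the loader's source:

```python
from datetime import datetime


def concatenate_rows(row: dict) -> str:
    """Combine one exported message into a single readable line.

    Field names (sender_name, timestamp_ms, content) follow Facebook's
    chat-export JSON; adjust if your export differs.
    """
    sender = row["sender_name"]
    text = row["content"]
    # timestamp_ms is milliseconds since the epoch
    date = datetime.fromtimestamp(row["timestamp_ms"] / 1000).strftime("%Y-%m-%d %H:%M:%S")
    return f"{sender} on {date}: {text}\n\n"
```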
dba3aa50fb9e-0 | langchain.document_loaders.hn.HNLoader¶
class langchain.document_loaders.hn.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)[source]¶
Load Hacker News data from either main pa... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html |
dba3aa50fb9e-1 | Initialize with webpage path.
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Get imp... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html |
8d7ac3433847-0 | langchain.document_loaders.pdf.OnlinePDFLoader¶
class langchain.document_loaders.pdf.OnlinePDFLoader(file_path: str)[source]¶
Loads online PDFs.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_an... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.OnlinePDFLoader.html |
b9874fba7908-0 | langchain.document_loaders.stripe.StripeLoader¶
class langchain.document_loaders.stripe.StripeLoader(resource: str, access_token: Optional[str] = None)[source]¶
Loader that fetches data from Stripe.
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
Meth... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.stripe.StripeLoader.html |
a3930e7b3628-0 | langchain.document_loaders.unstructured.UnstructuredBaseLoader¶
class langchain.document_loaders.unstructured.UnstructuredBaseLoader(mode: str = 'single', post_processors: List[Callable] = [], **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load files.
Initialize with file path.
Methods
__init__([... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredBaseLoader.html |
1980fd1d2383-0 | langchain.document_loaders.merge.MergedDataLoader¶
class langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]¶
Merge documents from a list of loaders.
Initialize with a list of loaders.
Methods
__init__(loaders)
Initialize with a list of loaders.
lazy_load()
Lazy load docs from each individual loader.
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.merge.MergedDataLoader.html |
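Lazily merging loaders amounts to chaining their iterators. A sketch under the assumption that each wrapped loader exposes a `lazy_load()` iterator (dummy loaders stand in for real ones):

```python
from itertools import chain
from typing import Iterator, List


class MergedLoader:
    """Merge documents from a list of loaders, lazily."""

    def __init__(self, loaders: List) -> None:
        self.loaders = loaders

    def lazy_load(self) -> Iterator:
        # Chain each loader's iterator so nothing is materialised early.
        return chain.from_iterable(loader.lazy_load() for loader in self.loaders)

    def load(self) -> List:
        return list(self.lazy_load())
```

`chain.from_iterable` preserves loader order, so documents appear in the same order as the loaders in the list.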
52ebb595960d-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
Transcribe and parse audio files.
Audio transcription with OpenAI Whi... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
0743b92475ba-0 | langchain.document_loaders.embaas.EmbaasBlobLoader¶
class langchain.document_loaders.embaas.EmbaasBlobLoader[source]¶
Bases: BaseEmbaasLoader, BaseBlobParser
Embaas’s document byte loader.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the const... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html |
0743b92475ba-1 | Additional parameters to pass to the embaas document extraction API.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html |
0743b92475ba-2 | classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_n... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html |
0743b92475ba-3 | classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples usi... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html |
436b9e3e1436-0 | langchain.document_loaders.notebook.NotebookLoader¶
class langchain.document_loaders.notebook.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Loads .ipynb notebook files.
Initialize with path.
Parameters
path – The pat... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html |
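Since an .ipynb file is plain JSON, the core of a notebook loader can be sketched with the standard library. This is a rough approximation; the real loader also adds cell-type markers, traceback handling, and optional newline removal:

```python
import json
from typing import List


def load_notebook(path: str, include_outputs: bool = False, max_output_length: int = 10) -> str:
    """Flatten an .ipynb file into plain text, one cell per block.

    When include_outputs is set, each text output is appended, truncated
    to max_output_length characters.
    """
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    parts: List[str] = []
    for cell in nb.get("cells", []):
        parts.append("".join(cell.get("source", [])))
        if include_outputs:
            for out in cell.get("outputs", []):
                text = "".join(out.get("text", []))
                parts.append(text[:max_output_length])
    return "\n\n".join(parts)
```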
436b9e3e1436-1 | load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplit... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html |
ae3b1ccbd10e-0 | langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader¶
class langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False)[source]¶
Blob loader for the local file syste... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html |
ae3b1ccbd10e-1 | Count files that match the pattern without loading them.
yield_blobs()
Yield blobs that match the requested pattern.
__init__(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False) → None[source]¶
Initialize with path to directory and how to glob over i... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html |
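The default glob pattern `**/[!.]*` recurses into every directory and skips names starting with a dot. Counting matches without loading them, as described above, can be sketched with pathlib:

```python
from pathlib import Path
from typing import Iterator


def yield_paths(root: str, glob: str = "**/[!.]*") -> Iterator[Path]:
    """Yield files under root matching the glob.

    The default pattern '**/[!.]*' recurses into every directory and
    skips names starting with a dot (hidden files).
    """
    for p in Path(root).glob(glob):
        if p.is_file():  # the pattern also matches directories; drop them
            yield p


def count_matching_files(root: str, glob: str = "**/[!.]*") -> int:
    """Count files that match the pattern without reading their contents."""
    return sum(1 for _ in yield_paths(root, glob))
```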
ea7c7ff1fbf3-0 | langchain.document_loaders.srt.SRTLoader¶
class langchain.document_loaders.srt.SRTLoader(file_path: str)[source]¶
Loader for .srt (subtitle) files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load using pysrt file.
load_and_split(... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.srt.SRTLoader.html |
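The loader above delegates to pysrt; the essence of SRT parsing, though, is small enough to sketch with the standard library alone. Each block is a sequence number, a timestamp range, then one or more text lines:

```python
import re
from typing import List


def load_srt_text(srt: str) -> str:
    """Extract subtitle text from SRT content, dropping indices and timestamps.

    A stdlib stand-in for the pysrt-based loader referenced above.
    """
    lines: List[str] = []
    for block in re.split(r"\n\s*\n", srt.strip()):
        parts = block.splitlines()
        # parts[0] is the sequence number, parts[1] the timestamp range.
        lines.extend(parts[2:])
    return " ".join(lines)
```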
33ebb8691b9d-0 | langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Loader that fetches data from the Spreedly API.
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
Methods... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html |
e86696fe5062-0 | langchain.document_loaders.rst.UnstructuredRSTLoader¶
class langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load RST files.
You can run the loader in one of two modes: “single” and “elements”.
If you use “si... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html |
e86696fe5062-1 | Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chun... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html |
1c6bc2cffb0c-0 | langchain.document_loaders.pdf.BasePDFLoader¶
class langchain.document_loaders.pdf.BasePDFLoader(file_path: str)[source]¶
Base loader class for PDF files.
By default it expects a local file, but if the file path is a web path, it will
download the file to a temporary location, use it, and then clean up the temporary file after completion
I... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html |
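The local-versus-web decision described above can be sketched as follows. The helper names and the download step are illustrative assumptions, not the loader's actual implementation:

```python
import os
import tempfile
from urllib.parse import urlparse


def is_web_path(file_path: str) -> bool:
    """True when the path is an http(s) URL rather than a local file."""
    parsed = urlparse(file_path)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)


def resolve_pdf_path(file_path: str) -> str:
    """Return a local path for file_path, downloading web paths first.

    The caller is responsible for removing the temporary file after use,
    mirroring the clean-up behaviour described above.
    """
    if not is_web_path(file_path):
        if not os.path.isfile(file_path):
            raise ValueError(f"File path {file_path} is not a valid file")
        return file_path
    import urllib.request  # network access: deferred import
    fd, tmp_path = tempfile.mkstemp(suffix=".pdf")
    with os.fdopen(fd, "wb") as out, urllib.request.urlopen(file_path) as resp:
        out.write(resp.read())
    return tmp_path
```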