| id | text | source |
|---|---|---|
43ef13d6a5de-0 | langchain.document_loaders.whatsapp_chat.concatenate_rows¶
langchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) → str[source]¶
Combine message information in a readable format ready to be used. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html |
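The `concatenate_rows` helper above can be sketched in a few lines; the exact output format below is an illustrative assumption, not necessarily the library's actual formatting:

```python
def concatenate_rows(date: str, sender: str, text: str) -> str:
    """Combine one chat message's fields into a single readable block."""
    # Hypothetical format: "<sender> on <date>: <text>" plus a blank line.
    return f"{sender} on {date}: {text}\n\n"
```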
bfdea50d3d49-0 | langchain.document_loaders.joplin.JoplinLoader¶
class langchain.document_loaders.joplin.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]¶
Load notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web C... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
bfdea50d3d49-1 | load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveC... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
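The `load_and_split` behavior above can be sketched with a stand-in `Document` class and a fixed-size character splitter (the real default splitter is RecursiveCharacterTextSplitter; the fixed-size split here is a simplifying assumption):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

def load_and_split(docs: List[Document], chunk_size: int = 20) -> List[Document]:
    # Split each loaded Document into fixed-size character chunks and
    # return the chunks themselves as Documents.
    chunks: List[Document] = []
    for doc in docs:
        text = doc.page_content
        for i in range(0, len(text), chunk_size):
            chunks.append(Document(text[i:i + chunk_size], dict(doc.metadata)))
    return chunks
```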
95e20f33dfb6-0 | langchain.document_loaders.parsers.pdf.PyPDFParser¶
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None)[source]¶
Load PDF using pypdf and chunk at character level.
Methods
__init__([password])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob i... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFParser.html |
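The `lazy_parse`/`parse` pairing above follows a common pattern: eager parsing simply drains the lazy iterator. A minimal sketch, with a page list standing in for real `Blob` handling:

```python
from dataclasses import dataclass
from typing import Iterator, List

@dataclass
class Document:
    page_content: str

def lazy_parse(pages: List[str]) -> Iterator[Document]:
    # Yield one Document per page without materializing the whole list.
    for page in pages:
        yield Document(page)

def parse(pages: List[str]) -> List[Document]:
    # Eager parsing is just the lazy iterator collected into a list.
    return list(lazy_parse(pages))
```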
920b66bc62b4-0 | langchain.document_loaders.parsers.pdf.DocumentIntelligenceParser¶
class langchain.document_loaders.parsers.pdf.DocumentIntelligenceParser(client: Any, model: str)[source]¶
Loads a PDF with Azure Document Intelligence
(formerly Form Recognizer) and chunks at character level.
Methods
__init__(client, model)
lazy_parse(... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.DocumentIntelligenceParser.html |
d1d065d98f99-0 | langchain.document_loaders.github.GitHubIssuesLoader¶
class langchain.document_loaders.github.GitHubIssuesLoader[source]¶
Bases: BaseGitHubLoader
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
d1d065d98f99-1 | param sort: Optional[Literal['created', 'updated', 'comments']] = None¶
What to sort results by. Can be one of: ‘created’, ‘updated’, ‘comments’.
Default is ‘created’.
param state: Optional[Literal['open', 'closed', 'all']] = None¶
Filter on issue state. Can be one of: ‘open’, ‘closed’, ‘all’.
classmethod construct(_fi... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
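The `sort` and `state` parameters above can be validated and turned into API query parameters along these lines; a hypothetical sketch, not the loader's actual code:

```python
from typing import Optional

_SORT_OPTIONS = {"created", "updated", "comments"}
_STATE_OPTIONS = {"open", "closed", "all"}

def build_issue_params(sort: Optional[str] = None,
                       state: Optional[str] = None) -> dict:
    # Mimic the Literal-typed fields: reject anything outside the
    # allowed option sets, omit parameters left as None.
    params: dict = {}
    if sort is not None:
        if sort not in _SORT_OPTIONS:
            raise ValueError(f"sort must be one of {_SORT_OPTIONS}, got {sort!r}")
        params["sort"] = sort
    if state is not None:
        if state not in _STATE_OPTIONS:
            raise ValueError(f"state must be one of {_STATE_OPTIONS}, got {state!r}")
        params["state"] = state
    return params
```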
d1d065d98f99-2 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, ex... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
d1d065d98f99-3 | Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
96ff100ec0de-0 | langchain.document_loaders.rss.RSSFeedLoader¶
class langchain.document_loaders.rss.RSSFeedLoader(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any)[source]¶
Load news articles from RSS feeds using Unstructured.
P... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |
96ff100ec0de-1 | Initialize with urls or OPML.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = Fals... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |
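"Initialize with urls or OPML" above implies the two inputs are mutually exclusive; a minimal sketch of that check (the OPML handling here, pulling `xmlUrl` attributes from `outline` elements, is a simplified assumption):

```python
import xml.etree.ElementTree as ET
from typing import List, Optional, Sequence

def resolve_feed_sources(urls: Optional[Sequence[str]] = None,
                         opml: Optional[str] = None) -> List[str]:
    # Exactly one of urls / opml must be provided.
    if (urls is None) == (opml is None):
        raise ValueError("Provide either urls or opml, but not both.")
    if urls is not None:
        return list(urls)
    root = ET.fromstring(opml)
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
```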
8b2e788cb6fc-0 | langchain.document_loaders.news.NewsURLLoader¶
class langchain.document_loaders.news.NewsURLLoader(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any)[source]¶
Load news articles from URLs using Unstructured.
Parameters
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
8b2e788cb6fc-1 | Initialize with file path.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Param... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
fcf318d0f5e4-0 | langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser¶
class langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser(textract_features: Optional[Sequence[int]] = None, client: Optional[Any] = None)[source]¶
Send PDF files to Amazon Textract and parse them.
For parsing multi-page PDFs, they have to resid... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser.html |
fcf318d0f5e4-1 | parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser.html |
b140d872b03c-0 | langchain.document_loaders.fauna.FaunaLoader¶
class langchain.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
Load from FaunaDB.
query¶
The FQL query string to execute.
Type
str
page_content_field¶
The field that contains the... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.fauna.FaunaLoader.html |
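The `page_content_field` / `metadata_fields` attributes above suggest a per-record mapping: each query result becomes one Document. A hypothetical sketch with a stand-in `Document` class:

```python
from dataclasses import dataclass, field
from typing import Optional, Sequence

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

def record_to_document(record: dict, page_content_field: str,
                       metadata_fields: Optional[Sequence[str]] = None) -> Document:
    # The named field becomes the page content; any requested extra
    # fields that exist on the record go into metadata.
    meta = {k: record[k] for k in (metadata_fields or []) if k in record}
    return Document(page_content=str(record[page_content_field]), metadata=meta)
```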
6f3743fa6d62-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
Transcribe and parse audio files with OpenAI Whisper model.
Audio tra... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
6f3743fa6d62-1 | Initialize the parser.
Parameters
device – device to use.
lang_model – whisper model to use, for example “openai/whisper-medium”.
Defaults to None.
forced_decoder_ids – id states for decoder in a multilanguage model.
Defaults to None.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blo... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
d76fbebe4124-0 | langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to re-use
a parser i... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html |
31af7c8ba374-0 | langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader¶
class langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader(dataset_name: str, split_name: str, load_max_docs: Optional[int] = 100, sample_to_document_function: Optional[Callable[[Dict], Document]] = None)[source]¶
Load from Tensor... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader.html |
31af7c8ba374-1 | sample_to_document_function
Custom function that transforms a dataset sample into a Document.
Methods
__init__(dataset_name, split_name[, ...])
Initialize the TensorflowDatasetLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tensorflow_datasets.TensorflowDatasetLoader.html |
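The `sample_to_document_function` hook above can be sketched as follows; the default behavior of reading a `"text"` key is an illustrative assumption:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

def default_sample_to_document(sample: Dict) -> Document:
    # Hypothetical default: treat the sample's "text" key as content.
    return Document(page_content=str(sample.get("text", "")))

def load_samples(samples: List[Dict], load_max_docs: int = 100,
                 sample_to_document_function:
                 Optional[Callable[[Dict], Document]] = None) -> List[Document]:
    # Convert up to load_max_docs samples, using the custom transform
    # when one is supplied.
    convert = sample_to_document_function or default_sample_to_document
    return [convert(s) for s in samples[:load_max_docs]]
```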
e6b20484b744-0 | langchain.document_loaders.pdf.PyPDFLoader¶
class langchain.document_loaders.pdf.PyPDFLoader(file_path: str, password: Optional[Union[str, bytes]] = None, headers: Optional[Dict] = None)[source]¶
Load PDF using pypdf and chunks at character level.
Loader also stores page numbers in metadata.
Initialize with a file pat... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFLoader.html |
2e29783af51d-0 | langchain.document_loaders.pdf.PyPDFDirectoryLoader¶
class langchain.document_loaders.pdf.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]¶
Load a directory with PDF files using pypdf and chunks at character level.
Loade... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFDirectoryLoader.html |
6f6e5deb9c79-0 | langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters[source]¶
Parameters for the embaas document extraction API.
Attributes
mime_type
The mime type of the document.
file_extension
The file extension of the document.
file_name
Th... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
6f6e5deb9c79-1 | update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
6f6e5deb9c79-2 | If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
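The `update([E, ]**F)` contract quoted above is the standard mapping interface these parameter classes expose; demonstrated here on a plain dict (the field names are illustrative):

```python
# Start from one existing parameter.
d = {"mime_type": "application/pdf"}

# E present with a .keys() method: for k in E: D[k] = E[k]
d.update({"file_extension": ".pdf"})

# E present without a .keys() method: for k, v in E: D[k] = v
d.update([("file_name", "report.pdf")])

# In either case, keyword pairs follow: for k in F: D[k] = F[k]
d.update(chunk_size=1000)
```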
a077c26e1d8b-0 | langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload[source]¶
Payload for the Embaas document extraction API.
Attributes
bytes
The base64 encoded bytes of the document to extract text from.
Methods
__init__(*args, **kwargs)
clear()
co... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html |
a077c26e1d8b-1 | items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pa... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html |
88c29c9d346a-0 | langchain.document_loaders.sitemap.SitemapLoader¶
class langchain.document_loaders.sitemap.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = Fal... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
88c29c9d346a-1 | lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load sitemap.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_sitemap(soup)
Parse sitemap xml and load into a list of dicts.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parse... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
88c29c9d346a-2 | may result in missing data. Default: False
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][sou... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
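"Parse sitemap xml and load into a list of dicts" above can be sketched with the standard library; this assumes the usual sitemaps.org namespace and keeps each `<url>` entry's child tags as dict keys:

```python
import xml.etree.ElementTree as ET
from typing import List

_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def parse_sitemap(xml_text: str) -> List[dict]:
    # One dict per <url> element; child tags (loc, lastmod, ...) become keys.
    root = ET.fromstring(xml_text)
    entries = []
    for url in root.iter(f"{_NS}url"):
        entries.append({child.tag.replace(_NS, ""): (child.text or "")
                        for child in url})
    return entries
```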
33580b722a55-0 | langchain.document_loaders.blockchain.BlockchainDocumentLoader¶
class langchain.document_loaders.blockchain.BlockchainDocumentLoader(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Opt... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html |
33580b722a55-1 | __init__(contract_address[, blockchainType, ...])
param contract_address
The address of the smart contract.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(contract_address: str, blockchainType: BlockchainTyp... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html |
645e4571c06a-0 | langchain.document_loaders.text.TextLoader¶
class langchain.document_loaders.text.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Load text file.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
enco... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
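The `encoding` / `autodetect_encoding` behavior above can be sketched on raw bytes; the fixed fallback list is a deliberate simplification of real charset detection:

```python
from typing import Optional

def decode_text(data: bytes, encoding: Optional[str] = None,
                autodetect_encoding: bool = False) -> str:
    # Decode with the requested encoding (UTF-8 when None); on failure,
    # if autodetect_encoding is set, try a short list of fallbacks.
    try:
        return data.decode(encoding or "utf-8")
    except UnicodeDecodeError:
        if not autodetect_encoding:
            raise
        for candidate in ("utf-8", "latin-1"):
            try:
                return data.decode(candidate)
            except UnicodeDecodeError:
                continue
        raise
```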
645e4571c06a-1 | Dingo
Zilliz
SingleStoreDB
Annoy
Typesense
Atlas
Activeloop Deep Lake
Neo4j Vector Index
Tair
Chroma
Alibaba Cloud OpenSearch
StarRocks
scikit-learn
Tencent Cloud VectorDB
DocArray HnswSearch
MyScale
ClickHouse
Qdrant
Tigris
AwaDB
Supabase (Postgres)
OpenSearch
Pinecone
BagelDB
Azure Cognitive Search
Cassandra
USearch
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
347edfc98b2b-0 | langchain.document_loaders.max_compute.MaxComputeLoader¶
class langchain.document_loaders.max_compute.MaxComputeLoader(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]¶
Load from Alibaba Cloud MaxCompute tab... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html |
347edfc98b2b-1 | If unspecified, all columns not added to page_content will be written.
classmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) → MaxComputeLoader[source]¶
Convenience constructor that builds the MaxCompute API wrapper f... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html |
d5f318ee4f54-0 | langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader¶
class langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader(data_frame: Any, *, page_content_column: str = 'text')[source]¶
Load Polars DataFrame.
Initialize with dataframe object.
Parameters
data_frame – Polars DataFrame object.
page_conten... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader.html |
4c3bb0959f6e-0 | langchain.document_loaders.readthedocs.ReadTheDocsLoader¶
class langchain.document_loaders.readthedocs.ReadTheDocsLoader(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]¶
Load ReadTheDocs documentat... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
4c3bb0959f6e-1 | Initialize ReadTheDocsLoader
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
124f70a1a3fe-0 | langchain.document_loaders.parsers.language.language_parser.LanguageParser¶
class langchain.document_loaders.parsers.language.language_parser.LanguageParser(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Parse using the respective programming language syntax.
Each top-level function and class ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
124f70a1a3fe-1 | Parameters
language – If None (default), it will try to infer language from source.
parser_threshold – Minimum lines needed to activate parsing (0 by default).
Methods
__init__([language, parser_threshold])
Language parser that splits code using the respective language syntax.
lazy_parse(blob)
Lazy parsing interface.
pa... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
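The `parser_threshold` parameter above gates syntax-aware parsing on source length; a one-function sketch of that check:

```python
def should_parse(source: str, parser_threshold: int = 0) -> bool:
    # Syntax-aware splitting only activates once the source has at least
    # parser_threshold lines; shorter files stay as plain text.
    return len(source.splitlines()) >= parser_threshold
```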
8862ef7f7259-0 | langchain.document_loaders.tomarkdown.ToMarkdownLoader¶
class langchain.document_loaders.tomarkdown.ToMarkdownLoader(url: str, api_key: str)[source]¶
Load HTML using 2markdown API.
Initialize with url and api key.
Methods
__init__(url, api_key)
Initialize with url and api key.
lazy_load()
Lazily load the file.
load()
L... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tomarkdown.ToMarkdownLoader.html |
5826b16b763b-0 | langchain.document_loaders.onedrive_file.OneDriveFileLoader¶
class langchain.document_loaders.onedrive_file.OneDriveFileLoader[source]¶
Bases: BaseLoader, BaseModel
Load a file from Microsoft OneDrive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input da... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
5826b16b763b-1 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, ex... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
5826b16b763b-2 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, byt... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html |
93e158d1b2dc-0 | langchain.document_loaders.gutenberg.GutenbergLoader¶
class langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]¶
Load from Gutenberg.org.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_spl... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html |
2053eda6b566-0 | langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Load fi... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html |
2053eda6b566-1 | Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs:... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html |
19361ea7ed1b-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParser¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParser(api_key: Optional[str] = None)[source]¶
Transcribe and parse audio files.
Audio transcription is with OpenAI Whisper model.
Methods
__init__([api_key])
lazy_parse(blob)
Lazily parse the blob.... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParser.html |
9d1fc3ae284c-0 | langchain.document_loaders.parsers.docai.DocAIParsingResults¶
class langchain.document_loaders.parsers.docai.DocAIParsingResults(source_path: str, parsed_path: str)[source]¶
A dataclass to store DocAI parsing results.
Attributes
source_path
parsed_path
Methods
__init__(source_path, parsed_path)
__init__(source_path: st... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.docai.DocAIParsingResults.html |
5c07866f186e-0 | langchain.document_loaders.apify_dataset.ApifyDatasetLoader¶
class langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]¶
Bases: BaseLoader, BaseModel
Load datasets from the Apify web scraping, crawling, and data extraction platform.
For details, see https://docs.apify.com/platform/integrations/langchain
Exam... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
5c07866f186e-1 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
5c07866f186e-2 | classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_n... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
5c07866f186e-3 | classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples usi... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html |
e6b6ff3cd687-0 | langchain.document_loaders.wikipedia.WikipediaLoader¶
class langchain.document_loaders.wikipedia.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Load from Wikipedia.
The hard limit on... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
e6b6ff3cd687-1 | Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
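The `doc_content_chars_max` parameter above caps each page's content length; a minimal sketch of the truncation (treating `None` as "no limit" is an assumption):

```python
from typing import Optional

def clip_content(text: str, doc_content_chars_max: Optional[int] = 4000) -> str:
    # Truncate page content to the configured character budget.
    if doc_content_chars_max is None:
        return text
    return text[:doc_content_chars_max]
```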
ee50e72e062d-0 | langchain.document_loaders.parsers.pdf.PyPDFium2Parser¶
class langchain.document_loaders.parsers.pdf.PyPDFium2Parser[source]¶
Parse PDF with PyPDFium2.
Initialize the parser.
Methods
__init__()
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html |
2431e7b167fb-0 | langchain.document_loaders.word_document.Docx2txtLoader¶
class langchain.document_loaders.word_document.Docx2txtLoader(file_path: str)[source]¶
Load DOCX file using docx2txt and chunks at character level.
Defaults to check for local file, but if the file is a web path, it will download it
to a temporary file, and use t... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.Docx2txtLoader.html |
9dd8768cade6-0 | langchain.document_loaders.parsers.html.bs4.BS4HTMLParser¶
class langchain.document_loaders.parsers.html.bs4.BS4HTMLParser(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any)[source]¶
Parse HTML files using Beautiful Soup.
Initialize a bs4 based HTML parser.
Methods
__init__(*[, features, get_text_... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.html.bs4.BS4HTMLParser.html |
cb1ec02e5c38-0 | langchain.document_loaders.mastodon.MastodonTootsLoader¶
class langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Load the ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html |
cb1ec02e5c38-1 | exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html |
de6183041981-0 | langchain.document_loaders.merge.MergedDataLoader¶
class langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]¶
Merge documents from a list of loaders.
Initialize with a list of loaders.
Methods
__init__(loaders)
Initialize with a list of loaders.
lazy_load()
Lazy load docs from each individual loader.
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.merge.MergedDataLoader.html |
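"Lazy load docs from each individual loader" above amounts to chaining the wrapped loaders' document streams; a minimal lazy sketch:

```python
from itertools import chain
from typing import Iterable, Iterator, List

def merged_lazy_load(loaders: List[Iterable]) -> Iterator:
    # Yield documents from each wrapped loader in turn, without
    # materializing any loader's full output up front.
    return chain.from_iterable(loaders)
```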
e9e78363c6ce-0 | langchain.document_loaders.airbyte.AirbyteHubspotLoader¶
class langchain.document_loaders.airbyte.AirbyteHubspotLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Hubspot using an Airbyte source c... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteHubspotLoader.html |
e9e78363c6ce-1 | load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacter... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteHubspotLoader.html |
8e452074ec00-0 | langchain.document_loaders.pdf.PDFPlumberLoader¶
class langchain.document_loaders.pdf.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, headers: Optional[Dict] = None)[source]¶
Load PDF files using pdfplumber.
Initialize with a file path.
Attributes
source
Methods
_... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFPlumberLoader.html |
567e3ae8f5c1-0 | langchain.document_loaders.rtf.UnstructuredRTFLoader¶
class langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load RTF files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, th... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
567e3ae8f5c1-1 | Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chun... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
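The “single” vs “elements” modes described above for Unstructured-based loaders can be sketched as follows (the join separator is an assumption):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Document:
    page_content: str

def docs_from_elements(elements: List[str], mode: str = "single") -> List[Document]:
    # "single": one Document holding all extracted elements joined together.
    # "elements": one Document per extracted element.
    if mode == "single":
        return [Document("\n\n".join(elements))]
    if mode == "elements":
        return [Document(e) for e in elements]
    raise ValueError(f"mode must be 'single' or 'elements', got {mode!r}")
```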
14bece9fb24b-0 | langchain.document_loaders.html.UnstructuredHTMLLoader¶
class langchain.document_loaders.html.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load HTML files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If yo... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html.UnstructuredHTMLLoader.html |
ebdb5da3acd3-0 | langchain.document_loaders.airbyte.AirbyteCDKLoader¶
class langchain.document_loaders.airbyte.AirbyteCDKLoader(config: Mapping[str, Any], source_class: Any, stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load with an Airbyte source conn... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteCDKLoader.html |
ebdb5da3acd3-1 | state – The state to pass to the source connector. Defaults to None.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunk... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteCDKLoader.html |
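The `record_handler` parameter above maps each raw stream record to a Document. A minimal sketch of that contract, assuming dict-based records and documents (the handler signature mirrors the constructor's `Callable[[Any, Optional[str]], Document]`; the internals here are illustrative, not the loader's):

```python
import json
from typing import Any, Callable, List, Optional

def records_to_docs(
    records: List[dict],
    record_handler: Optional[Callable[[Any, Optional[str]], dict]] = None,
) -> List[dict]:
    """Map raw stream records to documents via an optional custom handler."""
    def default(record: Any, _id: Optional[str]) -> dict:
        # Default: serialize the whole record as the page content.
        return {"page_content": json.dumps(record), "metadata": {}}

    handler = record_handler or default
    return [handler(record, None) for record in records]
```

Passing a custom handler lets you pick specific fields for `page_content` and route the rest into metadata.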
40b338d0d7e7-0 | langchain.document_loaders.url.UnstructuredURLLoader¶
class langchain.document_loaders.url.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Load files from remote URLs using Unstructured.
Use the unstruct... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html |
40b338d0d7e7-1 | load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html |
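The `continue_on_failure` flag above controls whether one bad URL aborts the whole load. A sketch of that error-handling pattern, with a `fetch` callable standing in for the real HTTP-and-partition step (hypothetical; documents are plain dicts for brevity):

```python
from typing import Callable, List

def load_urls(
    urls: List[str],
    fetch: Callable[[str], str],
    continue_on_failure: bool = True,
) -> List[dict]:
    """Fetch each URL, optionally skipping failures instead of raising."""
    docs = []
    for url in urls:
        try:
            docs.append({"page_content": fetch(url), "metadata": {"source": url}})
        except Exception as exc:
            if not continue_on_failure:
                raise
            # Log and move on to the next URL.
            print(f"Error fetching {url}, skipping: {exc}")
    return docs
```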
d4ef4f01818c-0 | langchain.document_loaders.unstructured.validate_unstructured_version¶
langchain.document_loaders.unstructured.validate_unstructured_version(min_unstructured_version: str) → None[source]¶
Raise an error if the installed Unstructured version is below the
specified minimum. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.validate_unstructured_version.html |
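A minimum-version check like the one above amounts to comparing dotted version strings. A simplified sketch (production code should prefer `packaging.version.parse` over naive tuple comparison, which mishandles pre-release suffixes):

```python
def validate_min_version(installed: str, minimum: str) -> None:
    """Raise if `installed` is older than `minimum` (dotted versions)."""
    def as_tuple(version: str):
        # "0.10.12" -> (0, 10, 12); tuples compare element-wise.
        return tuple(int(part) for part in version.split("."))

    if as_tuple(installed) < as_tuple(minimum):
        raise ValueError(
            f"unstructured>={minimum} is required, found {installed}."
        )
```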
fe2875ce1290-0 | langchain.document_loaders.mediawikidump.MWDumpLoader¶
class langchain.document_loaders.mediawikidump.MWDumpLoader(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
Load MediaW... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html |
fe2875ce1290-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html |
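The `skip_redirects` and `stop_on_error` flags above can be sketched as follows. The `(title, text)` tuple shape is hypothetical (not mwxml's API), and a `None` text simulates a page that fails to parse:

```python
from typing import List, Optional, Tuple

def load_pages(
    pages: List[Tuple[str, Optional[str]]],
    skip_redirects: bool = False,
    stop_on_error: bool = True,
) -> List[dict]:
    """Iterate wiki pages, honoring the redirect and error flags."""
    docs = []
    for title, text in pages:
        try:
            if text is None:
                raise ValueError(f"could not parse page {title!r}")
            # MediaWiki redirect pages start with "#REDIRECT".
            if skip_redirects and text.lstrip().upper().startswith("#REDIRECT"):
                continue
            docs.append({"page_content": text, "metadata": {"title": title}})
        except ValueError:
            if stop_on_error:
                raise
    return docs
```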
63413646c3b6-0 | langchain.document_loaders.browserless.BrowserlessLoader¶
class langchain.document_loaders.browserless.BrowserlessLoader(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]¶
Load webpages with Browserless /content endpoint.
Initialize with API token and the URLs to scrape
Attributes
api_toke... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.browserless.BrowserlessLoader.html |
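The `urls: Union[str, List[str]]` parameter above is a common loader convention: accept either one URL or many, and normalize to a list internally. A sketch of that pattern:

```python
from typing import List, Union

def normalize_urls(urls: Union[str, List[str]]) -> List[str]:
    """Accept a single URL or a list of URLs; always return a list."""
    return [urls] if isinstance(urls, str) else list(urls)
```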
a3f6404aedaa-0 | langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Load from Spreedly API.
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
Methods
__init__(access_tok... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html |
3a78c6bbfa29-0 | langchain.document_loaders.rst.UnstructuredRSTLoader¶
class langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load RST files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, th... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html |
3a78c6bbfa29-1 | Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chun... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html |
63e515587574-0 | langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader¶
class langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Microsoft PowerPoint files using Unstructured.
Works with both .ppt and .pptx fil... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
63e515587574-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredPowerPointLoader¶
Microsoft PowerPoint | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
b2a0e0e1cc7d-0 | langchain.document_loaders.parsers.language.python.PythonSegmenter¶
class langchain.document_loaders.parsers.language.python.PythonSegmenter(code: str)[source]¶
Code segmenter for Python.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
extract_functions_classes... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.python.PythonSegmenter.html |
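Segmenting Python source into functions and classes can be done with the standard-library `ast` module. This is a sketch of what such a segmenter can do, not necessarily PythonSegmenter's internals:

```python
import ast
from typing import List

def extract_functions_classes(code: str) -> List[str]:
    """Return the source of each top-level function/class in `code`."""
    tree = ast.parse(code)
    segments = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # get_source_segment (Python 3.8+) recovers the exact source span.
            segments.append(ast.get_source_segment(code, node))
    return segments
```

A complementary `simplify_code` step would keep only module-level statements, replacing each extracted definition with a stub or comment.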
d7e11dbffa39-0 | langchain.document_loaders.telegram.text_to_docs¶
langchain.document_loaders.telegram.text_to_docs(text: Union[str, List[str]]) → List[Document][source]¶
Convert a string or list of strings to a list of Documents with metadata. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.text_to_docs.html |
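The conversion above can be sketched as fixed-size chunking with positional metadata. The `Document` dataclass and the `chunk_size=800` default are illustrative choices, not the library's values:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

def text_to_docs(text: Union[str, List[str]], chunk_size: int = 800) -> List[Document]:
    """Split each input string into fixed-size chunks with page/chunk metadata."""
    pages = [text] if isinstance(text, str) else text
    docs = []
    for page_num, page in enumerate(pages, start=1):
        for i in range(0, len(page), chunk_size):
            docs.append(
                Document(
                    page_content=page[i:i + chunk_size],
                    metadata={"page": page_num, "chunk": i // chunk_size},
                )
            )
    return docs
```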
b9c9a598834e-0 | langchain.document_loaders.telegram.TelegramChatApiLoader¶
class langchain.document_loaders.telegram.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]¶
Load Telegram... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
b9c9a598834e-1 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use f... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
bbea7b828556-0 | langchain.document_loaders.gcs_file.GCSFileLoader¶
class langchain.document_loaders.gcs_file.GCSFileLoader(project_name: str, bucket: str, blob: str, loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Load from GCS file.
Initialize with bucket and key name.
Parameters
project_name – The name of the pro... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_file.GCSFileLoader.html |
bbea7b828556-1 | file_path argument. If nothing is provided, the
UnstructuredFileLoader is used.
Examples
To use an alternative PDF loader:
>> from langchain.document_loaders import PyPDFLoader
>> loader = GCSFileLoader(…, loader_func=PyPDFLoader)
To use UnstructuredFileLoader with additional arguments:
>> loader = GCSFileLoader(…... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_file.GCSFileLoader.html |
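The `loader_func` parameter above is a dependency-injection hook: any callable taking a file path and returning a loader. A sketch of that shape, with `DefaultLoader` standing in for UnstructuredFileLoader (hypothetical classes, not the real API):

```python
from typing import Callable, Optional

class DefaultLoader:
    """Stand-in for the default UnstructuredFileLoader."""
    def __init__(self, file_path: str):
        self.file_path = file_path

def make_loader(
    file_path: str,
    loader_func: Optional[Callable[[str], object]] = None,
):
    """Instantiate the injected loader factory, falling back to the default."""
    return (loader_func or DefaultLoader)(file_path)
```

Because a class constructor is itself a callable of one string argument, passing a loader class directly (as with `PyPDFLoader` above) works; a lambda or `functools.partial` can bind extra keyword arguments.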
fd8504bdfd58-0 | langchain.document_loaders.pdf.BasePDFLoader¶
class langchain.document_loaders.pdf.BasePDFLoader(file_path: str, *, headers: Optional[Dict] = None)[source]¶
Base Loader class for PDF files.
If the file is a web path, it will download it to a temporary file, use it, then clean up the temporary file after completion.
Init... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html |
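The download-to-temp-file-then-clean-up behavior described above follows a standard pattern. A sketch with a `fetch_bytes` callable standing in for the HTTP download (hypothetical helper names):

```python
import os
import tempfile
from typing import Callable

def with_local_copy(
    path_or_url: str,
    fetch_bytes: Callable[[str], bytes],
    process: Callable[[str], object],
):
    """If given a web path, download to a temp file, process it, then delete it.

    Local paths are passed through to `process` unchanged."""
    if path_or_url.startswith(("http://", "https://")):
        fd, tmp = tempfile.mkstemp(suffix=".pdf")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(fetch_bytes(path_or_url))
            return process(tmp)
        finally:
            # Guarantee cleanup even if `process` raises.
            os.remove(tmp)
    return process(path_or_url)
```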
635d02e3d80a-0 | langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader¶
class langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader(url: str, max_depth: Optional[int] = 2, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, metadata_extractor: Optional[Callable[[str, str], str]]... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
635d02e3d80a-1 | check_response_status – If True, check HTTP response status and skip
URLs with error responses (400-599).
Methods
__init__(url[, max_depth, use_async, ...])
Initialize with URL to crawl and any subdirectories to exclude. :param url: The URL to crawl. :param max_depth: The max depth of the recursive loading. :param use_... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
635d02e3d80a-2 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(url: str, max_depth: Optional[int] = 2, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, metadata_extractor: Optional[Callable[[str, str], str]] = None, exclude_dirs: Optional[Sequence[str]] = (), timeout: ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
635d02e3d80a-3 | lazy_load() → Iterator[Document][source]¶
Lazy load web pages.
When use_async is True, this function will not be lazy,
but it will still work in the expected way, just not lazy.
load() → List[Document][source]¶
Load web pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
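The `max_depth` and `exclude_dirs` parameters above bound a recursive crawl. A sketch of that traversal, with `get_links` standing in for fetching a page and extracting its child URLs (hypothetical; the real loader also extracts page content and supports async and timeouts):

```python
from typing import Callable, List, Sequence, Set

def recursive_load(
    url: str,
    get_links: Callable[[str], List[str]],
    max_depth: int = 2,
    exclude_dirs: Sequence[str] = (),
) -> List[str]:
    """Depth-limited crawl over child links, skipping excluded prefixes."""
    visited: Set[str] = set()

    def visit(u: str, depth: int):
        if depth > max_depth or u in visited:
            return
        if any(u.startswith(d) for d in exclude_dirs):
            return
        visited.add(u)
        for child in get_links(u):
            visit(child, depth + 1)

    visit(url, 1)
    return sorted(visited)
```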
2326dac25046-0 | langchain.document_loaders.gcs_directory.GCSDirectoryLoader¶
class langchain.document_loaders.gcs_directory.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '', loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Load from GCS directory.
Initialize with bucket and key name.
Parameters
pr... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
2326dac25046-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSDirectoryLoader¶
Google Cloud Storage
Google Cloud Storage Directory | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html |
e3e1a3d94104-0 | langchain.document_loaders.rocksetdb.default_joiner¶
langchain.document_loaders.rocksetdb.default_joiner(docs: List[Tuple[str, Any]]) → str[source]¶
Default joiner for content columns. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.default_joiner.html |
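The joiner's contract — turning `(column_name, value)` pairs into one content string — can be sketched as follows (the exact formatting the library uses may differ):

```python
from typing import Any, List, Tuple

def default_joiner(docs: List[Tuple[str, Any]]) -> str:
    """Join (column_name, value) pairs into one text block, one value per line."""
    return "\n".join(str(value) for _, value in docs)
```

A custom joiner passed as `content_columns_joiner` could instead include the column names, e.g. `f"{name}: {value}"` per line.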
99f58e26a813-0 | langchain.document_loaders.rocksetdb.RocksetLoader¶
class langchain.document_loaders.rocksetdb.RocksetLoader(client: ~typing.Any, query: ~typing.Any, content_keys: ~typing.List[str], metadata_keys: ~typing.Optional[~typing.List[str]] = None, content_columns_joiner: ~typing.Callable[[~typing.List[~typing.Tuple[str, ~typ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
99f58e26a813-1 | line. This method is only relevant if there are multiple content_keys.
Methods
__init__(client, query, content_keys[, ...])
Initialize with Rockset client.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(clie... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
99f58e26a813-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RocksetLoader¶
Rockset | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.RocksetLoader.html |
47ae88142a5c-0 | langchain.document_loaders.embaas.BaseEmbaasLoader¶
class langchain.document_loaders.embaas.BaseEmbaasLoader[source]¶
Bases: BaseModel
Base loader for Embaas document extraction API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.BaseEmbaasLoader.html |