| id | text | source |
|---|---|---|
d69fed5fe3a9-0 | langchain.document_loaders.odt.UnstructuredODTLoader¶
class langchain.document_loaders.odt.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load OpenOffice ODT files.
You can run the loader in one of two modes: “single” and “elements”.
If ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.odt.UnstructuredODTLoader.html |
d69fed5fe3a9-1 | mode – The mode to use when loading the file. Can be one of “single”
or “elements”. Default is “single”.
**unstructured_kwargs – Any kwargs to pass to the unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSpli... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.odt.UnstructuredODTLoader.html |
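The “single” vs “elements” distinction above recurs across the Unstructured-based loaders. A minimal sketch of the idea, using a simplified stand-in `Document` class rather than the real langchain/unstructured types:

```python
# Illustrative sketch of the "single" vs "elements" mode distinction used by
# the Unstructured-based loaders. `Document` and `docs_from_elements` are
# simplified stand-ins, not the real langchain/unstructured APIs.
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class Document:
    page_content: str
    metadata: Dict[str, Any] = field(default_factory=dict)


def docs_from_elements(elements: List[str], mode: str = "single") -> List[Document]:
    if mode == "single":
        # One Document holding all partitioned text.
        return [Document(page_content="\n\n".join(elements))]
    if mode == "elements":
        # One Document per partitioned element.
        return [Document(page_content=el) for el in elements]
    raise ValueError(f"mode must be 'single' or 'elements', got {mode!r}")
```

The real loaders obtain `elements` by partitioning the file with unstructured; only the mode handling is sketched here.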
45b9b6547be4-0 | langchain.document_loaders.rocksetdb.ColumnNotFoundError¶
class langchain.document_loaders.rocksetdb.ColumnNotFoundError(missing_key: str, query: str)[source]¶
Column not found error. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.ColumnNotFoundError.html |
a61cbf18b322-0 | langchain.document_loaders.youtube.YoutubeLoader¶
class langchain.document_loaders.youtube.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]¶
Loads YouTube transcripts.
Initialize with YouTube video ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.YoutubeLoader.html |
0025be90cba0-0 | langchain.document_loaders.s3_directory.S3DirectoryLoader¶
class langchain.document_loaders.s3_directory.S3DirectoryLoader(bucket: str, prefix: str = '')[source]¶
Loads documents from an AWS S3 bucket.
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
prefix – The prefix o... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
28010a374d55-0 | langchain.document_loaders.twitter.TwitterTweetLoader¶
class langchain.document_loaders.twitter.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]¶
Twitter tweets loader.
Reads tweets for the given Twitter handles.
First you need ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html |
28010a374d55-1 | load() → List[Document][source]¶
Load tweets.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html |
7fdd89a73dac-0 | langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader¶
class langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load PowerPoint files.
Works with both .ppt and .pptx... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
7fdd89a73dac-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredPowerPointLoader¶
Microsoft PowerPoint | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
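The `load_and_split` contract described above (load everything, then hand the documents to a splitter, with chunks returned as documents) can be sketched with a naive fixed-width chunker standing in for `RecursiveCharacterTextSplitter`:

```python
# Minimal sketch of the load_and_split() pattern that recurs across loaders.
# The splitter is a naive fixed-width chunker, NOT the real
# RecursiveCharacterTextSplitter; documents are plain strings here.
from typing import List


def split_text(text: str, chunk_size: int = 10) -> List[str]:
    return [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]


def load_and_split(docs: List[str], chunk_size: int = 10) -> List[str]:
    # Chunks are returned as documents in their own right.
    chunks: List[str] = []
    for doc in docs:
        chunks.extend(split_text(doc, chunk_size))
    return chunks
```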
23f44904a546-0 | langchain.document_loaders.iugu.IuguLoader¶
class langchain.document_loaders.iugu.IuguLoader(resource: str, api_token: Optional[str] = None)[source]¶
Loader that fetches data from IUGU.
Initialize the IUGU resource.
Parameters
resource – The name of the resource to fetch.
api_token – The IUGU API token to use.
Methods
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.iugu.IuguLoader.html |
0e59a30a7f0d-0 | langchain.document_loaders.unstructured.get_elements_from_api¶
langchain.document_loaders.unstructured.get_elements_from_api(file_path: Optional[Union[str, List[str]]] = None, file: Optional[Union[IO, Sequence[IO]]] = None, api_url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructur... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.get_elements_from_api.html |
b0653925e790-0 | langchain.document_loaders.pdf.PyPDFDirectoryLoader¶
class langchain.document_loaders.pdf.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]¶
Loads a directory of PDF files with pypdf and chunks at the character level.
Loade... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFDirectoryLoader.html |
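The default glob `'**/[!.]*.pdf'` matches `.pdf` files at any depth whose names do not begin with a dot, which is why hidden files are skipped unless `load_hidden` is set. A quick `pathlib` demonstration (the file names are made up for illustration):

```python
# Demonstrates which files the default PyPDFDirectoryLoader glob pattern
# "**/[!.]*.pdf" matches: non-hidden .pdf files at any depth.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "sub").mkdir()
    for name in ("a.pdf", ".hidden.pdf", "sub/b.pdf", "notes.txt"):
        (root / name).touch()
    matched = sorted(
        p.relative_to(root).as_posix() for p in root.glob("**/[!.]*.pdf")
    )

# matched contains "a.pdf" and "sub/b.pdf"; the hidden PDF and the
# non-PDF file are excluded by the pattern.
```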
30e93d04c999-0 | langchain.document_loaders.larksuite.LarkSuiteDocLoader¶
class langchain.document_loaders.larksuite.LarkSuiteDocLoader(domain: str, access_token: str, document_id: str)[source]¶
Loads LarkSuite (FeiShu) document.
Initialize with domain, access_token (tenant / user), and document_id.
Parameters
domain – The domain to lo... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.larksuite.LarkSuiteDocLoader.html |
30e93d04c999-1 | Returns
List of Documents.
Examples using LarkSuiteDocLoader¶
LarkSuite (FeiShu) | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.larksuite.LarkSuiteDocLoader.html |
47291d148cd8-0 | langchain.document_loaders.mediawikidump.MWDumpLoader¶
class langchain.document_loaders.mediawikidump.MWDumpLoader(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
Load MediaW... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html |
47291d148cd8-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, Path], encoding: Optional[str] = 'utf8', namespaces: Optional[Sequence[int]] = None, skip_redirects: Optional[bool] = False, stop_on_error: Optional[bool] = True)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html |
233c00f23a9a-0 | langchain.document_loaders.airbyte.AirbyteShopifyLoader¶
class langchain.document_loaders.airbyte.AirbyteShopifyLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Methods
__init__(config, stream_name[, ...]... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteShopifyLoader.html |
06d48cd91af0-0 | langchain.document_loaders.whatsapp_chat.concatenate_rows¶
langchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) → str[source]¶
Combine message information in a readable format ready to be used. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html |
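A plausible implementation sketch of `concatenate_rows`; the exact output template of the real helper is not shown above, so the format below is an assumption:

```python
# Hypothetical sketch of whatsapp_chat.concatenate_rows. The signature
# matches the docs above; the "{sender} on {date}: {text}" template is an
# assumption for illustration, not the library's verbatim format.
def concatenate_rows(date: str, sender: str, text: str) -> str:
    """Combine message information in a readable format."""
    return f"{sender} on {date}: {text}\n\n"
```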
7ad33d557147-0 | langchain.document_loaders.image.UnstructuredImageLoader¶
class langchain.document_loaders.image.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load PNG and JPG files.
You can run the loader in one of two modes: “sing... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image.UnstructuredImageLoader.html |
7ad33d557147-1 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredImageLoader¶
Images | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image.UnstructuredImageLoader.html |
f8315e0f5a0d-0 | langchain.document_loaders.nuclia.NucliaLoader¶
class langchain.document_loaders.nuclia.NucliaLoader(path: str, nuclia_tool: NucliaUnderstandingAPI)[source]¶
Extract text from any file type.
Methods
__init__(path, nuclia_tool)
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.nuclia.NucliaLoader.html |
750c0971f93a-0 | langchain.document_loaders.rocksetdb.default_joiner¶
langchain.document_loaders.rocksetdb.default_joiner(docs: List[Tuple[str, Any]]) → str[source]¶
Default joiner for content columns. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rocksetdb.default_joiner.html |
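An illustrative sketch of a default joiner over `(column_name, value)` tuples; the real function's separator is not shown above, so newline-joining is an assumption:

```python
# Hypothetical sketch of rocksetdb.default_joiner: flatten the content-column
# (name, value) tuples into one string. Newline separation is an assumption.
from typing import Any, List, Tuple


def default_joiner(docs: List[Tuple[str, Any]]) -> str:
    return "\n".join(str(value) for _, value in docs)
```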
ffd3e18035e2-0 | langchain.document_loaders.url_selenium.SeleniumURLLoader¶
class langchain.document_loaders.url_selenium.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = T... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_selenium.SeleniumURLLoader.html |
ffd3e18035e2-1 | Load a list of URLs using Selenium and unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
load_and_split(t... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_selenium.SeleniumURLLoader.html |
b5ba825c8dc5-0 | langchain.document_loaders.brave_search.BraveSearchLoader¶
class langchain.document_loaders.brave_search.BraveSearchLoader(query: str, api_key: str, search_kwargs: Optional[dict] = None)[source]¶
Loads a query result from the Brave Search engine into a list of Documents.
Initializes the BraveSearchLoader.
Parameters
query – The ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.brave_search.BraveSearchLoader.html |
132264150f7f-0 | langchain.document_loaders.github.GitHubIssuesLoader¶
class langchain.document_loaders.github.GitHubIssuesLoader[source]¶
Bases: BaseGitHubLoader
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
132264150f7f-1 | param sort: Optional[Literal['created', 'updated', 'comments']] = None¶
What to sort results by. Can be one of: ‘created’, ‘updated’, ‘comments’.
Default is ‘created’.
param state: Optional[Literal['open', 'closed', 'all']] = None¶
Filter on issue state. Can be one of: ‘open’, ‘closed’, ‘all’.
classmethod construct(_fi... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
132264150f7f-2 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, ex... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
132264150f7f-3 | Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html |
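The attribute list above maps naturally onto the GitHub REST API issue object. A hedged sketch of that mapping (the helper name `issue_to_document` and the plain-tuple return are illustrative, not the loader's real internals):

```python
# Illustrative mapping from a GitHub REST API issue object to a
# (page_content, metadata) pair carrying some of the attributes listed above.
# The metadata keys follow the docs; the GitHub field names ("html_url",
# "user", "labels", ...) are standard REST API fields.
from typing import Any, Dict, Tuple


def issue_to_document(issue: Dict[str, Any]) -> Tuple[str, Dict[str, Any]]:
    metadata = {
        "url": issue["html_url"],
        "title": issue["title"],
        "creator": issue["user"]["login"],
        "created_at": issue["created_at"],
        "comments": issue["comments"],
        "state": issue["state"],
        "labels": [label["name"] for label in issue["labels"]],
        "number": issue["number"],
        # Issues that are pull requests carry a "pull_request" key.
        "is_pull_request": "pull_request" in issue,
    }
    return issue.get("body") or "", metadata
```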
d7b2dc79cc9c-0 | langchain.document_loaders.parsers.txt.TextParser¶
class langchain.document_loaders.parsers.txt.TextParser[source]¶
Parser for text blobs.
Methods
__init__()
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
lazy_parse(blob: Blob) → Iterator[Document][s... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.txt.TextParser.html |
d6a16697f4f6-0 | langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter¶
class langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter(code: str)[source]¶
The code segmenter for JavaScript.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
e... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter.html |
27bb085732c7-0 | langchain.document_loaders.max_compute.MaxComputeLoader¶
class langchain.document_loaders.max_compute.MaxComputeLoader(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]¶
Loads a query result from Alibaba Clou... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html |
27bb085732c7-1 | If unspecified, all columns not added to page_content will be written.
classmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) → MaxComputeLoader[source]¶
Convenience constructor that builds the MaxCompute API wrapper f... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html |
449b3a1ca562-0 | langchain.document_loaders.pubmed.PubMedLoader¶
class langchain.document_loaders.pubmed.PubMedLoader(query: str, load_max_docs: Optional[int] = 3)[source]¶
Loads a query result from the PubMed biomedical library into a list of Documents.
query¶
The query to be passed to the PubMed API.
load_max_docs¶
The maximum number of ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pubmed.PubMedLoader.html |
e3a3c2be7661-0 | langchain.document_loaders.csv_loader.CSVLoader¶
class langchain.document_loaders.csv_loader.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]¶
Loads a CSV file into a list of documents.
Each document represents one row of the CSV fil... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html |
e3a3c2be7661-1 | Parameters
file_path – The path to the CSV file.
source_column – The name of the column in the CSV file to use as the source.
Optional. Defaults to None.
csv_args – A dictionary of arguments to pass to the csv.DictReader.
Optional. Defaults to None.
encoding – The encoding of the CSV file. Optional. Defaults to None.
l... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html |
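The per-row behavior described above can be sketched with the standard library's `csv.DictReader`; the exact page-content template and metadata keys are approximations of the real loader:

```python
# Sketch of the CSVLoader behavior: each csv.DictReader row becomes one
# document whose content lists "column: value" pairs, with the optional
# source_column feeding the "source" metadata. An approximation, not the
# loader's verbatim output format.
import csv
import io
from typing import Any, Dict, List, Optional, Tuple


def load_csv(
    text: str,
    source_column: Optional[str] = None,
    csv_args: Optional[Dict[str, Any]] = None,
) -> List[Tuple[str, Dict[str, Any]]]:
    docs: List[Tuple[str, Dict[str, Any]]] = []
    reader = csv.DictReader(io.StringIO(text), **(csv_args or {}))
    for i, row in enumerate(reader):
        content = "\n".join(f"{k}: {v}" for k, v in row.items())
        source = row[source_column] if source_column else f"row {i}"
        docs.append((content, {"source": source, "row": i}))
    return docs
```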
9c030e957492-0 | langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader¶
class langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Methods
__init__(config, stre... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader.html |
05ca5479de27-0 | langchain.document_loaders.dropbox.DropboxLoader¶
class langchain.document_loaders.dropbox.DropboxLoader[source]¶
Bases: BaseLoader, BaseModel
Loads files from Dropbox.
In addition to common files such as text and PDF files, it also supports
Dropbox Paper files.
Create a new model by parsing and validating input data f... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
05ca5479de27-1 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[boo... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
05ca5479de27-2 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, byt... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dropbox.DropboxLoader.html |
98c423e826c9-0 | langchain.document_loaders.notebook.remove_newlines¶
langchain.document_loaders.notebook.remove_newlines(x: Any) → Any[source]¶
Recursively removes newlines from any nested data structure in which they are stored. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.remove_newlines.html |
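A sketch of what such recursive newline removal can look like for the common container types (the real function may handle more types):

```python
# Illustrative sketch of notebook.remove_newlines: strip "\n" from strings
# wherever they sit inside nested lists/dicts, leaving other values as-is.
from typing import Any


def remove_newlines(x: Any) -> Any:
    if isinstance(x, str):
        return x.replace("\n", "")
    if isinstance(x, list):
        return [remove_newlines(item) for item in x]
    if isinstance(x, dict):
        return {k: remove_newlines(v) for k, v in x.items()}
    return x  # numbers, None, etc. are left untouched
```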
063917237600-0 | langchain.document_loaders.parsers.pdf.PDFMinerParser¶
class langchain.document_loaders.parsers.pdf.PDFMinerParser[source]¶
Parse PDFs with PDFMiner.
Methods
__init__()
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
lazy_parse(blob: Blob) → Iterator[... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFMinerParser.html |
f59e139a63a9-0 | langchain.document_loaders.async_html.AsyncHtmlLoader¶
class langchain.document_loaders.async_html.AsyncHtmlLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, requests_per_second: int = 2, requests_kwargs: Dict[str, Any] = {... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.async_html.AsyncHtmlLoader.html |
f59e139a63a9-1 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AsyncHtmlLoader¶
html2text
AsyncHtmlLoader | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.async_html.AsyncHtmlLoader.html |
065e39812c4e-0 | langchain.document_loaders.gitbook.GitbookLoader¶
class langchain.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: Optional[bool] = False)[source]¶
Load GitBook data.
load from either a single page, o... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
065e39812c4e-1 | lazy_load()
Lazy load text from the url(s) in web_path.
load()
Fetch text from one single GitBook page.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
065e39812c4e-2 | Fetch text from one single GitBook page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Ret... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
32c899e809f1-0 | langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader¶
class langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader(conf: Any, bucket: str, prefix: str = '')[source]¶
Loader for Tencent Cloud COS directory.
Initialize with COS config, bucket and prefix.
:param conf(CosConfig): C... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader.html |
411b0a8eebd6-0 | langchain.document_loaders.youtube.GoogleApiYoutubeLoader¶
class langchain.document_loaders.youtube.GoogleApiYoutubeLoader(google_api_client: GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiYoutubeLoader.html |
411b0a8eebd6-1 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use f... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiYoutubeLoader.html |
b6fdd3ffa7a1-0 | langchain.document_loaders.parsers.generic.MimeTypeBasedParser¶
class langchain.document_loaders.parsers.generic.MimeTypeBasedParser(handlers: Mapping[str, BaseBlobParser], *, fallback_parser: Optional[BaseBlobParser] = None)[source]¶
A parser that uses mime-types to determine how to parse a blob.
This parser is useful... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.generic.MimeTypeBasedParser.html |
b6fdd3ffa7a1-1 | and return a document.
fallback_parser – A fallback_parser parser to use if the mime-type is not
found in the handlers. If provided, this parser will be
used to parse blobs with all mime-types not found in
the handlers.
If not provided, a ValueError will be raised if the
mime-type is not found in the handlers.
lazy_par... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.generic.MimeTypeBasedParser.html |
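The dispatch-plus-fallback contract described above can be sketched with plain callables standing in for `BaseBlobParser` instances:

```python
# Sketch of the MimeTypeBasedParser contract: pick a handler by mime-type,
# fall back to fallback_parser if one was given, else raise ValueError.
# Handlers here are plain callables, not real BaseBlobParser objects.
from typing import Callable, Mapping, Optional


def parse_by_mimetype(
    data: bytes,
    mimetype: str,
    handlers: Mapping[str, Callable[[bytes], str]],
    fallback_parser: Optional[Callable[[bytes], str]] = None,
) -> str:
    handler = handlers.get(mimetype, fallback_parser)
    if handler is None:
        raise ValueError(f"No handler for mime-type {mimetype!r}")
    return handler(data)
```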
ef681deb9b47-0 | langchain.document_loaders.evernote.EverNoteLoader¶
class langchain.document_loaders.evernote.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]¶
EverNote Loader.
Loads an EverNote notebook export file (e.g. my_notebook.enex) into Documents.
Instructions on producing this file can be found at
https... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html |
ef681deb9b47-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using EverNoteLoader¶
EverNote | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html |
b43e66b706e4-0 | langchain.document_loaders.unstructured.UnstructuredAPIFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Loa... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileLoader.html |
b43e66b706e4-1 | lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Init... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileLoader.html |
15a2dc1d830a-0 | langchain.document_loaders.html_bs.BSHTMLLoader¶
class langchain.document_loaders.html_bs.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Loader that uses Beautiful Soup to parse HTML files.
Initialize with path, and optionally, ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html |
15a2dc1d830a-1 | load() → List[Document][source]¶
Load HTML document into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to R... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html |
d832507761b6-0 | langchain.document_loaders.parsers.pdf.PyPDFium2Parser¶
class langchain.document_loaders.parsers.pdf.PyPDFium2Parser[source]¶
Parse PDFs with PyPDFium2.
Initialize the parser.
Methods
__init__()
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or document... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html |
a8e2aea47f21-0 | langchain.document_loaders.web_base.WebBaseLoader¶
class langchain.document_loaders.web_base.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)[source]¶
Loader that uses ur... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
a8e2aea47f21-1 | Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any[source]¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document][source]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load text from the url(s) in web_path.
load_and... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
2bedab5ab11a-0 | langchain.document_loaders.dataframe.DataFrameLoader¶
class langchain.document_loaders.dataframe.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Load Pandas DataFrame.
Initialize with dataframe object.
Parameters
data_frame – Pandas DataFrame object.
page_content_column – Name of the column... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dataframe.DataFrameLoader.html |
19e4c9212a24-0 | langchain.document_loaders.base.BaseLoader¶
class langchain.document_loaders.base.BaseLoader[source]¶
Interface for loading Documents.
Implementations should implement the lazy-loading method using generators
to avoid loading all Documents into memory at once.
The load method will remain as is for backwards compatibili... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseLoader.html |
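That contract, a generator-based `lazy_load` with `load` kept as a list-returning wrapper for backwards compatibility, can be sketched with a toy line-based loader:

```python
# Toy illustration of the BaseLoader contract: lazy_load() is a generator
# that streams documents one at a time; load() keeps its list-returning
# signature by draining the generator. Documents are plain strings here.
from typing import Iterator, List


class LineLoader:
    def __init__(self, text: str) -> None:
        self.text = text

    def lazy_load(self) -> Iterator[str]:
        # Yield one "document" per line instead of materializing them all.
        for line in self.text.splitlines():
            yield line

    def load(self) -> List[str]:
        return list(self.lazy_load())
```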
40b14b4bcd3a-0 | langchain.document_loaders.chatgpt.ChatGPTLoader¶
class langchain.document_loaders.chatgpt.ChatGPTLoader(log_file: str, num_logs: int = - 1)[source]¶
Load conversations from exported ChatGPT data.
Initialize a class object.
Parameters
log_file – Path to the log file
num_logs – Number of logs to load. If 0, load all log... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.ChatGPTLoader.html |
c4b06677db99-0 | langchain.document_loaders.bibtex.BibtexLoader¶
class langchain.document_loaders.bibtex.BibtexLoader(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]¶
Lo... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html |
c4b06677db99-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]¶
Initialize the BibtexLoa... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html |
c4b06677db99-2 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BibtexLoader¶
BibTeX | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html |
c74f16fda89e-0 | langchain.document_loaders.news.NewsURLLoader¶
class langchain.document_loaders.news.NewsURLLoader(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any)[source]¶
Loader that uses newspaper to load news articles from URLs.
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
c74f16fda89e-1 | Initialize with URLs to load.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Param... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
d3a194889543-0 | langchain.document_loaders.telegram.TelegramChatApiLoader¶
class langchain.document_loaders.telegram.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]¶
Loads Telegra... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
d3a194889543-1 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use f... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
d3d579ff40df-0 | langchain.document_loaders.college_confidential.CollegeConfidentialLoader¶
class langchain.document_loaders.college_confidential.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Opti... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html |
d3d579ff40df-1 | async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpages as Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html |
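The async `fetch_all` method above fetches URLs concurrently with rate limiting. One common way to bound concurrency is an `asyncio.Semaphore`; the sketch below shows that pattern with a fake fetcher standing in for the real HTTP call (the web loaders use aiohttp), so it stays self-contained:

```python
import asyncio
from typing import List


async def fake_fetch(url: str) -> str:
    # Stand-in for the network round trip.
    await asyncio.sleep(0)
    return f"<html>{url}</html>"


async def fetch_all(urls: List[str], max_concurrency: int = 2) -> List[str]:
    # Sketch of concurrency-limited fetching, not the loader's actual code.
    semaphore = asyncio.Semaphore(max_concurrency)

    async def bounded(url: str) -> str:
        async with semaphore:  # at most max_concurrency requests in flight
            return await fake_fetch(url)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(u) for u in urls))


pages = asyncio.run(fetch_all(["https://a.example", "https://b.example"]))
print(pages[0])  # <html>https://a.example</html>
```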
e63381c4dd8d-0 | langchain.document_loaders.obsidian.ObsidianLoader¶
class langchain.document_loaders.obsidian.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Loads Obsidian files from disk.
Initialize with a path.
Parameters
path – Path to the directory containing the Obsidian files.
encoding... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obsidian.ObsidianLoader.html |
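ObsidianLoader's `collect_metadata` flag controls whether a note's YAML front matter is collected into Document metadata. A rough sketch of that kind of extraction, using a regex and flat `key: value` parsing rather than a real YAML parser (hypothetical helper, not the loader's actual code):

```python
import re
from typing import Dict, Tuple

# Front matter in Obsidian notes is fenced by '---' lines at the top.
FRONT_MATTER_RE = re.compile(r"^---\n(.*?)\n---\n", re.DOTALL)


def split_front_matter(text: str) -> Tuple[Dict[str, str], str]:
    """Separate a note's '---'-fenced front matter from its body.

    Hypothetical helper: only flat 'key: value' lines are supported here.
    """
    match = FRONT_MATTER_RE.match(text)
    if not match:
        return {}, text
    metadata: Dict[str, str] = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            metadata[key.strip()] = value.strip()
    return metadata, text[match.end():]


note = "---\ntags: daily\nmood: good\n---\n# Today\nWrote docs."
meta, body = split_front_matter(note)
print(meta)  # {'tags': 'daily', 'mood': 'good'}
```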
d47ff00c8e63-0 | langchain.document_loaders.html.UnstructuredHTMLLoader¶
class langchain.document_loaders.html.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load HTML files.
You can run the loader in one of two modes: “single” and “el... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html.UnstructuredHTMLLoader.html |
d47ff00c8e63-1 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html.UnstructuredHTMLLoader.html |
8464546ee988-0 | langchain.document_loaders.obs_directory.OBSDirectoryLoader¶
class langchain.document_loaders.obs_directory.OBSDirectoryLoader(bucket: str, endpoint: str, config: Optional[dict] = None, prefix: str = '')[source]¶
Loads documents from Huawei OBS.
Initialize the OBSDirectoryLoader with the specified s... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
8464546ee988-1 | Methods
__init__(bucket, endpoint[, config, prefix])
Initialize the OBSDirectoryLoader with the specified settings.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(bucket: str, endpoint: str, config: Optional[dict] = None, pr... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
8464546ee988-2 | ```
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key"
}
directory_loader = OBSDirectoryLoader("your-bucket-name", "your-endpoint", config, "your-prefix")
```
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[Tex... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
40ef9c463798-0 | langchain.document_loaders.figma.FigmaFileLoader¶
class langchain.document_loaders.figma.FigmaFileLoader(access_token: str, ids: str, key: str)[source]¶
Loads Figma file json.
Initialize with access token, ids, and key.
Parameters
access_token – The access token for the Figma REST API.
ids – The ids of the Figma file.
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.figma.FigmaFileLoader.html |
408338ee7ccf-0 | langchain.document_loaders.pdf.PDFPlumberLoader¶
class langchain.document_loaders.pdf.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Loader that uses pdfplumber to load PDF files.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path[, text_kwargs])
Initia... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFPlumberLoader.html |
942cc9f62749-0 | langchain.document_loaders.pdf.UnstructuredPDFLoader¶
class langchain.document_loaders.pdf.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load PDF files.
You can run the loader in one of two modes: “single” and “element... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.UnstructuredPDFLoader.html |
942cc9f62749-1 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.UnstructuredPDFLoader.html |
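The "single" vs "elements" distinction applies across the Unstructured-backed loaders: "single" joins every partitioned element into one Document, while "elements" keeps one Document per element with its category in the metadata. A schematic sketch — the element list below is fake, and the real partitioning comes from the unstructured library:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class Document:
    # Simplified stand-in for langchain.schema.Document.
    page_content: str
    metadata: Dict[str, Any] = field(default_factory=dict)


def to_documents(elements: List[Dict[str, str]], mode: str = "single") -> List[Document]:
    # Sketch of mode handling, not the loader's actual implementation.
    if mode == "elements":
        return [
            Document(page_content=el["text"], metadata={"category": el["category"]})
            for el in elements
        ]
    if mode == "single":
        return [Document(page_content="\n\n".join(el["text"] for el in elements))]
    raise ValueError(f"mode must be 'single' or 'elements', got {mode!r}")


fake_elements = [
    {"text": "Title", "category": "Title"},
    {"text": "Body paragraph.", "category": "NarrativeText"},
]
print(len(to_documents(fake_elements, mode="single")))    # 1
print(len(to_documents(fake_elements, mode="elements")))  # 2
```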
9e0bcad37d3f-0 | langchain.document_loaders.fauna.FaunaLoader¶
class langchain.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
FaunaDB Loader.
query¶
The FQL query string to execute.
Type
str
page_content_field¶
The field that contains the co... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.fauna.FaunaLoader.html |
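FaunaLoader's `page_content_field` / `metadata_fields` split is a common way to map query results onto Documents: one field becomes the text, selected others become metadata. A sketch over plain dicts standing in for FQL query results (illustrative, not the loader's real code):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Sequence


@dataclass
class Document:
    page_content: str
    metadata: Dict[str, Any] = field(default_factory=dict)


def rows_to_documents(
    rows: List[Dict[str, Any]],
    page_content_field: str,
    metadata_fields: Optional[Sequence[str]] = None,
) -> List[Document]:
    # Sketch: pick one field as the text, copy the selected fields as metadata.
    docs = []
    for row in rows:
        metadata = {k: row[k] for k in (metadata_fields or []) if k in row}
        docs.append(Document(page_content=str(row[page_content_field]), metadata=metadata))
    return docs


rows = [{"body": "hello", "author": "ada", "ts": 1}]
docs = rows_to_documents(rows, "body", metadata_fields=["author"])
print(docs[0].page_content, docs[0].metadata)  # hello {'author': 'ada'}
```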
e2cc60795dc4-0 | langchain.document_loaders.telegram.TelegramChatFileLoader¶
class langchain.document_loaders.telegram.TelegramChatFileLoader(path: str)[source]¶
Loads Telegram chat json directory dump.
Initialize with a path.
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatFileLoader.html |
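A Telegram export from the desktop client is a JSON file with a top-level "messages" array; service messages and stickers carry non-string "text" values. A sketch of turning such a dump into plain text lines — the key names match Telegram's export format, but the combining logic is illustrative, not the loader's exact code:

```python
import json
from typing import List


def messages_from_dump(raw: str) -> List[str]:
    # Telegram's JSON export stores chat messages under the "messages" key.
    data = json.loads(raw)
    lines = []
    for message in data.get("messages", []):
        text = message.get("text")
        if isinstance(text, str) and text:
            lines.append(f'{message.get("from", "?")}: {text}')
    return lines


dump = json.dumps({
    "name": "demo chat",
    "messages": [
        {"from": "alice", "text": "hi"},
        {"from": "bob", "text": ""},
        {"from": "carol", "text": "bye"},
    ],
})
print(messages_from_dump(dump))  # ['alice: hi', 'carol: bye']
```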
929e05472513-0 | langchain.document_loaders.git.GitLoader¶
class langchain.document_loaders.git.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]¶
Loads files from a Git repository into a list of documents.
The repository can be local ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.git.GitLoader.html
929e05472513-1 | file_filter – Optional. A function that takes a file path and returns
a boolean indicating whether to load the file. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = N... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.git.GitLoader.html |
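The `file_filter` parameter above is a predicate on file paths; `None` means load everything. A sketch of how such a predicate narrows the walk — the file tree is faked with a path list, whereas the real loader iterates the Git tree:

```python
from typing import Callable, List, Optional


def select_files(
    paths: List[str],
    file_filter: Optional[Callable[[str], bool]] = None,
) -> List[str]:
    # Mirrors the GitLoader contract: no filter means load every file.
    if file_filter is None:
        return list(paths)
    return [p for p in paths if file_filter(p)]


tree = ["README.md", "src/app.py", "src/app.pyc", "docs/guide.md"]
python_only = select_files(tree, file_filter=lambda p: p.endswith(".py"))
print(python_only)  # ['src/app.py']
```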
a900c0808446-0 | langchain.document_loaders.url.UnstructuredURLLoader¶
class langchain.document_loaders.url.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Loader that uses Unstructured to load files from remote URLs.
Use... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html |
a900c0808446-1 | A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to Rec... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html |
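`continue_on_failure` in UnstructuredURLLoader trades a raised exception for a logged warning, so one bad URL does not abort the whole batch. A sketch of that control flow with a fake fetcher (illustrative policy, not the loader's real internals):

```python
import logging
from typing import Callable, List

logger = logging.getLogger(__name__)


def load_urls(
    urls: List[str],
    fetch: Callable[[str], str],
    continue_on_failure: bool = True,
) -> List[str]:
    # Sketch of the error-handling policy described above.
    results = []
    for url in urls:
        try:
            results.append(fetch(url))
        except Exception as exc:
            if not continue_on_failure:
                raise
            logger.warning("fetching %s raised %s, skipping", url, exc)
    return results


def flaky_fetch(url: str) -> str:
    if "bad" in url:
        raise ConnectionError("unreachable")
    return f"content of {url}"


pages = load_urls(["https://ok.example", "https://bad.example"], flaky_fetch)
print(pages)  # ['content of https://ok.example']
```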
4a30e3012878-0 | langchain.document_loaders.azlyrics.AZLyricsLoader¶
class langchain.document_loaders.azlyrics.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, continue_on_failure: Optional[bool] = False)[source]¶
Loads AZLyrics we... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html |
4a30e3012878-1 | async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpages into Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documen... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html |
e86cfe15631d-0 | langchain.document_loaders.sitemap.SitemapLoader¶
class langchain.document_loaders.sitemap.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = Fal... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
e86cfe15631d-1 | web_path
Methods
__init__(web_path[, filter_urls, ...])
Initialize with webpage path and optional filter URLs.
aload()
Load text from the urls in web_path asynchronously into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load sitemap.
lo... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
e86cfe15631d-2 | is_local – whether the sitemap is a local file. Default: False
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
aload(... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html |
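SitemapLoader's `filter_urls` parameter is a list of regex patterns; only sitemap entries matching at least one pattern are fetched. A sketch of that filtering step — the matching semantics here (any pattern found anywhere in the URL, empty list keeps everything) are an assumption, not the loader's exact behavior:

```python
import re
from typing import List, Optional


def filter_sitemap_urls(
    urls: List[str],
    filter_urls: Optional[List[str]] = None,
) -> List[str]:
    # Assumed semantics: keep a URL if any pattern matches it;
    # no patterns means keep everything (hedged sketch).
    if not filter_urls:
        return list(urls)
    patterns = [re.compile(p) for p in filter_urls]
    return [u for u in urls if any(p.search(u) for p in patterns)]


sitemap = [
    "https://example.com/blog/post-1",
    "https://example.com/docs/intro",
    "https://example.com/blog/post-2",
]
print(filter_sitemap_urls(sitemap, [r"/blog/"]))
# ['https://example.com/blog/post-1', 'https://example.com/blog/post-2']
```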
b9fc2766020a-0 | langchain.document_loaders.org_mode.UnstructuredOrgModeLoader¶
class langchain.document_loaders.org_mode.UnstructuredOrgModeLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load Org-Mode files.
You can run the loader in one of two modes: “single” and “el... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.org_mode.UnstructuredOrgModeLoader.html |
b9fc2766020a-1 | **unstructured_kwargs – Any additional keyword arguments to pass
to the unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returne... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.org_mode.UnstructuredOrgModeLoader.html |
86a2b2a3a666-0 | langchain.document_loaders.email.UnstructuredEmailLoader¶
class langchain.document_loaders.email.UnstructuredEmailLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Loader that uses unstructured to load email files. Works with both
.eml and .msg files. You can process attachments in addit... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html |
86a2b2a3a666-1 | Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredEmailLoader¶
Email | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html |
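Independent of unstructured, Python's stdlib `email` module shows what an .eml file contains; a minimal sketch of pulling the subject and body from a single-part plain-text message (stdlib only — this is not what UnstructuredEmailLoader does internally):

```python
from email import message_from_string
from email.message import Message

RAW_EML = """\
From: alice@example.com
To: bob@example.com
Subject: Quarterly report
Content-Type: text/plain; charset=utf-8

Numbers attached. Let's review Monday.
"""


def eml_to_text(raw: str) -> str:
    msg: Message = message_from_string(raw)
    # For a single-part message, get_payload() is simply the body text.
    body = msg.get_payload()
    return f"Subject: {msg['Subject']}\n\n{body}"


summary = eml_to_text(RAW_EML)
print(summary)
```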
e30990cd7c9a-0 | langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser¶
class langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser(textract_features: Optional[Sequence[int]] = None, client: Optional[Any] = None)[source]¶
Sends PDF files to Amazon Textract and parses them to generate Documents.
For parsing multi-page ... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser.html |
e30990cd7c9a-1 | parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
... | https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.AmazonTextractPDFParser.html |
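The parse/lazy_parse relationship described above is mechanical: `parse` eagerly collects what `lazy_parse` yields from a Blob. A sketch with stand-in `Blob` and `Document` types (simplified, not langchain's real classes):

```python
from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class Blob:
    # Simplified stand-in for langchain's Blob (raw bytes plus origin).
    data: bytes
    source: str


@dataclass
class Document:
    page_content: str


class SketchParser:
    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        # One Document per line, yielded lazily.
        for line in blob.data.decode().splitlines():
            yield Document(page_content=line)

    def parse(self, blob: Blob) -> List[Document]:
        # Eager convenience wrapper, as the docstring above describes.
        return list(self.lazy_parse(blob))


docs = SketchParser().parse(Blob(b"page one\npage two", "memory"))
print([d.page_content for d in docs])  # ['page one', 'page two']
```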