| id | text | source |
|---|---|---|
cb487ba8a490-0 | langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Load from Spreedly API.
Initialize with an access token and a resource.
Parameters
access_token – The access token.
resource – The resource.
Methods
__init__(access_tok... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html |
fe1c46fd6831-0 | langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters[source]¶
Parameters for the embaas document extraction API.
Attributes
mime_type
The mime type of the document.
file_extension
The file extension of the document.
file_name
Th... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
fe1c46fd6831-1 | update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
fe1c46fd6831-2 | If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html |
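The `update([E, ]**F)` semantics quoted in the row above are Python's standard `MutableMapping.update` contract, which this parameters class inherits. A quick stdlib demonstration (the key names are purely illustrative, echoing the attributes listed above):

```python
# Demonstrates the documented dict.update semantics: E with .keys(),
# E without .keys() (an iterable of key/value pairs), and keyword args F.
d = {}

# E has .keys(): for k in E: D[k] = E[k]
d.update({"mime_type": "application/pdf"})

# E lacks .keys(): for k, v in E: D[k] = v
d.update([("file_extension", ".pdf")])

# In either case, this is followed by: for k in F: D[k] = F[k]
d.update(file_name="report.pdf")

print(d)
# {'mime_type': 'application/pdf', 'file_extension': '.pdf', 'file_name': 'report.pdf'}
```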
8212f1616d9d-0 | langchain.document_loaders.readthedocs.ReadTheDocsLoader¶
class langchain.document_loaders.readthedocs.ReadTheDocsLoader(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, patterns: Sequence[str] = ('*.htm', '*.html'), exclude_links_... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
8212f1616d9d-1 | Initialize ReadTheDocsLoader
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, patterns: ... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
8212f1616d9d-2 | lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance ... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html |
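The `lazy_load` / `load` / `load_and_split` trio documented here recurs for most loaders in this listing. A toy loader can sketch how the three relate; the `Document` class and the fixed-width splitter below are simplified stand-ins (the real default is `RecursiveCharacterTextSplitter`), not LangChain's actual implementations:

```python
from dataclasses import dataclass, field
from typing import Iterator, List

@dataclass
class Document:
    # Simplified stand-in for langchain.schema.Document.
    page_content: str
    metadata: dict = field(default_factory=dict)

class TinyLoader:
    """Illustrative loader exposing the documented method trio."""

    def __init__(self, texts: List[str]):
        self.texts = texts

    def lazy_load(self) -> Iterator[Document]:
        # A lazy loader: yield Documents one at a time.
        for text in self.texts:
            yield Document(page_content=text)

    def load(self) -> List[Document]:
        # Eager variant: materialize the full list.
        return list(self.lazy_load())

    def load_and_split(self, chunk_size: int = 20) -> List[Document]:
        # Naive fixed-width chunker standing in for a TextSplitter;
        # chunks are returned as Documents, as documented.
        return [
            Document(page_content=doc.page_content[i:i + chunk_size],
                     metadata=dict(doc.metadata))
            for doc in self.load()
            for i in range(0, len(doc.page_content), chunk_size)
        ]

chunks = TinyLoader(["a" * 45]).load_and_split(chunk_size=20)
print([len(c.page_content) for c in chunks])  # [20, 20, 5]
```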
314228b6f6d3-0 | langchain.document_loaders.parsers.pdf.PyPDFium2Parser¶
class langchain.document_loaders.parsers.pdf.PyPDFium2Parser(extract_images: bool = False)[source]¶
Parse PDF with PyPDFium2.
Initialize the parser.
Methods
__init__([extract_images])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eager... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html |
0d0ab5fde645-0 | langchain.document_loaders.python.PythonLoader¶
class langchain.document_loaders.python.PythonLoader(file_path: str)[source]¶
Load Python files, respecting any non-default encoding if specified.
Initialize with a file path.
Parameters
file_path – The path to the file to load.
Methods
__init__(file_path)
Initialize with... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.python.PythonLoader.html |
c7e027cec286-0 | langchain.document_loaders.telegram.TelegramChatFileLoader¶
class langchain.document_loaders.telegram.TelegramChatFileLoader(path: str)[source]¶
Load from Telegram chat dump.
Initialize with a path.
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_s... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatFileLoader.html |
655bbaf49e7a-0 | langchain.document_loaders.telegram.TelegramChatApiLoader¶
class langchain.document_loaders.telegram.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]¶
Load Telegram... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
655bbaf49e7a-1 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use f... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html |
ffd4a07effda-0 | langchain.document_loaders.text.TextLoader¶
class langchain.document_loaders.text.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Load text file.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
enco... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
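A rough sketch of the `TextLoader` behavior documented above: the file is read into a single Document, with a fallback path when decoding fails and `autodetect_encoding` is set. The `Document` class is a simplified stand-in, and the fixed fallback list is an assumption; the real loader uses charset detection rather than this hard-coded sequence:

```python
import os
import tempfile
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Document:
    # Simplified stand-in for langchain.schema.Document.
    page_content: str
    metadata: dict = field(default_factory=dict)

def load_text(file_path: str, encoding: Optional[str] = None,
              autodetect_encoding: bool = False) -> Document:
    """Sketch of TextLoader: read one file into a single Document."""
    try:
        with open(file_path, encoding=encoding) as f:
            text = f.read()
    except UnicodeDecodeError:
        if not autodetect_encoding:
            raise
        # Fallback sketch: try a couple of common encodings in turn.
        for enc in ("utf-8", "latin-1"):
            try:
                with open(file_path, encoding=enc) as f:
                    text = f.read()
                break
            except UnicodeDecodeError:
                continue
        else:
            raise
    return Document(page_content=text, metadata={"source": file_path})

# Quick check against a throwaway file.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("hello world")
doc = load_text(path)
os.remove(path)
```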
ffd4a07effda-1 | Zilliz
SingleStoreDB
Annoy
Typesense
Atlas
Activeloop Deep Lake
Neo4j Vector Index
Tair
Chroma
Alibaba Cloud OpenSearch
Baidu Cloud VectorSearch
StarRocks
scikit-learn
Tencent Cloud VectorDB
DocArray HnswSearch
MyScale
ClickHouse
Qdrant
Tigris
AwaDB
Supabase (Postgres)
OpenSearch
Pinecone
BagelDB
Azure Cognitive Search... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html |
a5221828fdbc-0 | langchain.document_loaders.airbyte.AirbyteStripeLoader¶
class langchain.document_loaders.airbyte.AirbyteStripeLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Stripe using an Airbyte source conn... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteStripeLoader.html |
a5221828fdbc-1 | load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacter... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteStripeLoader.html |
bf9b26600add-0 | langchain.document_loaders.datadog_logs.DatadogLogsLoader¶
class langchain.document_loaders.datadog_logs.DatadogLogsLoader(query: str, api_key: str, app_key: str, from_time: Optional[int] = None, to_time: Optional[int] = None, limit: int = 100)[source]¶
Load Datadog logs.
Logs are written into the page_content and into... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html |
bf9b26600add-1 | Initialize Datadog document loader.
Requirements:
Must have datadog_api_client installed. Install with pip install datadog_api_client.
Parameters
query – The query to run in Datadog.
api_key – The Datadog API key.
app_key – The Datadog APP key.
from_time – Optional. The start of the time range to query.
Supports date m... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.datadog_logs.DatadogLogsLoader.html |
0126710426a6-0 | langchain.document_loaders.epub.UnstructuredEPubLoader¶
class langchain.document_loaders.epub.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load EPub files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If yo... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html |
0126710426a6-1 | Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredEPubLoader¶
EPub | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.epub.UnstructuredEPubLoader.html |
ac4ce66f8e3f-0 | langchain.document_loaders.html.UnstructuredHTMLLoader¶
class langchain.document_loaders.html.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load HTML files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If yo... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html.UnstructuredHTMLLoader.html |
7820cdcdf6fb-0 | langchain.document_loaders.rtf.UnstructuredRTFLoader¶
class langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load RTF files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, th... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
7820cdcdf6fb-1 | Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chun... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html |
a7e5edfee4d8-0 | langchain.document_loaders.airbyte.AirbyteTypeformLoader¶
class langchain.document_loaders.airbyte.AirbyteTypeformLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Typeform using an Airbyte sourc... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteTypeformLoader.html |
a7e5edfee4d8-1 | load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacter... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteTypeformLoader.html |
1b46cf9ee7ad-0 | langchain.document_loaders.bilibili.BiliBiliLoader¶
class langchain.document_loaders.bilibili.BiliBiliLoader(video_urls: List[str])[source]¶
Load BiliBili video transcripts.
Initialize with bilibili url.
Parameters
video_urls – List of bilibili urls.
Methods
__init__(video_urls)
Initialize with bilibili url.
lazy_load(... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bilibili.BiliBiliLoader.html |
e6b729064fa9-0 | langchain.document_loaders.pdf.MathpixPDFLoader¶
class langchain.document_loaders.pdf.MathpixPDFLoader(file_path: str, processed_file_format: str = 'md', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]¶
Load PDF files using Mathpix service.
Initialize with a file path.
Parameter... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html |
e6b729064fa9-1 | **kwargs – additional keyword arguments.
clean_pdf(contents: str) → str[source]¶
Clean the PDF file.
Parameters
contents – a PDF file contents.
Returns:
get_processed_pdf(pdf_id: str) → str[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document o... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html |
ff80065efdf2-0 | langchain.document_loaders.youtube.GoogleApiClient¶
class langchain.document_loaders.youtube.GoogleApiClient(credentials_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: Path = PosixPath('/home/docs/.crede... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiClient.html |
3908e70ec594-0 | langchain.document_loaders.parsers.pdf.PyMuPDFParser¶
class langchain.document_loaders.parsers.pdf.PyMuPDFParser(text_kwargs: Optional[Mapping[str, Any]] = None, extract_images: bool = False)[source]¶
Parse PDF using PyMuPDF.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_tex... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyMuPDFParser.html |
4d7d34bd3e4b-0 | langchain.document_loaders.news.NewsURLLoader¶
class langchain.document_loaders.news.NewsURLLoader(urls: List[str], text_mode: bool = True, nlp: bool = False, continue_on_failure: bool = True, show_progress_bar: bool = False, **newspaper_kwargs: Any)[source]¶
Load news articles from URLs using Unstructured.
Parameters
... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
4d7d34bd3e4b-1 | Initialize with file path.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Param... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.news.NewsURLLoader.html |
96ec8655892f-0 | langchain.document_loaders.xml.UnstructuredXMLLoader¶
class langchain.document_loaders.xml.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load XML file using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xml.UnstructuredXMLLoader.html |
1836ab36e67e-0 | langchain.document_loaders.whatsapp_chat.concatenate_rows¶
langchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) → str[source]¶
Combine message information in a readable format ready to be used. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html |
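A minimal sketch of what `concatenate_rows` promises for a WhatsApp chat line. The exact output layout here is an assumption; the docstring above only guarantees "a readable format ready to be used":

```python
def concatenate_rows(date: str, sender: str, text: str) -> str:
    # Hypothetical layout: "sender on date: text". The real helper's
    # formatting may differ; the signature matches the one documented.
    return f"{sender} on {date}: {text}\n\n"

line = concatenate_rows("2023-01-01", "alice", "see you at noon")
```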
5e038de32947-0 | langchain.document_loaders.parsers.pdf.PDFPlumberParser¶
class langchain.document_loaders.parsers.pdf.PDFPlumberParser(text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, extract_images: bool = False)[source]¶
Parse PDF with PDFPlumber.
Initialize the parser.
Parameters
text_kwargs – Keyword argument... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFPlumberParser.html |
203654ebefb3-0 | langchain.document_loaders.weather.WeatherDataLoader¶
class langchain.document_loaders.weather.WeatherDataLoader(client: OpenWeatherMapAPIWrapper, places: Sequence[str])[source]¶
Load weather data with Open Weather Map API.
Reads the forecast & current weather of any location using OpenWeatherMap’s free
API. Check out ‘... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.weather.WeatherDataLoader.html |
8759c4e0e27b-0 | langchain.document_loaders.snowflake_loader.SnowflakeLoader¶
class langchain.document_loaders.snowflake_loader.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]]... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html |
8759c4e0e27b-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] =... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html |
1107835b4654-0 | langchain.document_loaders.notiondb.NotionDBLoader¶
class langchain.document_loaders.notiondb.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]¶
Load from Notion DB.
Reads content from pages within a Notion Database.
:param integration_token: Notion integration to... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html |
1107835b4654-1 | Read a page.
Parameters
page_summary – Page summary from Notion API.
Examples using NotionDBLoader¶
Notion DB
Notion DB 2/2 | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html |
ddbf3a85a89d-0 | langchain.document_loaders.sharepoint.SharePointLoader¶
class langchain.document_loaders.sharepoint.SharePointLoader[source]¶
Bases: O365BaseLoader
Load from SharePoint.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a v... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html |
ddbf3a85a89d-1 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep co... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html |
ddbf3a85a89d-2 | load() → List[Document][source]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextS... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html |
f67554232a0b-0 | langchain.document_loaders.markdown.UnstructuredMarkdownLoader¶
class langchain.document_loaders.markdown.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Markdown files using Unstructured.
You can run the loader in one of two modes: “single” a... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html |
f67554232a0b-1 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredMarkdownLoader¶
StarRocks | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html |
0b06b498702e-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParser¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParser(api_key: Optional[str] = None)[source]¶
Transcribe and parse audio files.
Audio transcription uses the OpenAI Whisper model.
Methods
__init__([api_key])
lazy_parse(blob)
Lazily parse the blob.... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParser.html |
b505d77cd250-0 | langchain.document_loaders.notebook.remove_newlines¶
langchain.document_loaders.notebook.remove_newlines(x: Any) → Any[source]¶
Recursively remove newlines, no matter the data structure they are stored in. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.remove_newlines.html |
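The documented behavior of `remove_newlines` ("recursively ... no matter the data structure") might look roughly like the sketch below, assuming the usual nesting of strings, lists, and dicts found in notebook JSON; other container types are passed through unchanged here:

```python
from typing import Any

def remove_newlines(x: Any) -> Any:
    """Sketch: strip newline characters recursively from nested data."""
    if isinstance(x, str):
        return x.replace("\n", "")
    if isinstance(x, list):
        return [remove_newlines(item) for item in x]
    if isinstance(x, dict):
        return {k: remove_newlines(v) for k, v in x.items()}
    # Anything else (numbers, None, ...) is returned as-is.
    return x

cleaned = remove_newlines({"cells": ["a\nb", {"source": "c\nd"}], "n": 3})
print(cleaned)  # {'cells': ['ab', {'source': 'cd'}], 'n': 3}
```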
0b011c86cb06-0 | langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader¶
class langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]¶
Load from Azure Blob Storage container.
Initialize with connection string, cont... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html |
7e4a760dbb12-0 | langchain.document_loaders.gitbook.GitbookLoader¶
class langchain.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: bool = False)[source]¶
Load GitBook data.
load from either a single page, or
load all... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
7e4a760dbb12-1 | scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main', continue_on_failure: bool = False)[source]¶
Initialize with web page and whether to load all paths.
Parameters
web_page – Th... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
7e4a760dbb12-2 | List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using GitbookLoader¶
GitBook | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html |
095dcdf9e8fe-0 | langchain.document_loaders.joplin.JoplinLoader¶
class langchain.document_loaders.joplin.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]¶
Load notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web C... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
095dcdf9e8fe-1 | load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveC... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html |
1eaa1db52fe0-0 | langchain.document_loaders.pdf.PyMuPDFLoader¶
class langchain.document_loaders.pdf.PyMuPDFLoader(file_path: str, *, headers: Optional[Dict] = None, extract_images: bool = False, **kwargs: Any)[source]¶
Load PDF files using PyMuPDF.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path, *[, headers, ... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyMuPDFLoader.html |
4de4580e645e-0 | langchain.document_loaders.confluence.ConfluenceLoader¶
class langchain.document_loaders.confluence.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, session: Optional[Session] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, numbe... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-1 | and ContentFormat.VIEW.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-2 | Check if a page is publicly accessible.
lazy_load()
A lazy loader for Documents.
load([space_key, page_ids, label, cql, ...])
param space_key
Space key retrieved from a Confluence URL, defaults to None
load_and_split([text_splitter])
Load Documents and split into chunks.
paginate_request(retrieval_method, **kwargs)
Pag... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-3 | lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-4 | ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a
language, you’ll first need to install the appropriate
Tesseract language pack.
keep_markdown_format (bool) – Whether to keep the markdown format, defaults to
False
keep_newlines (bool) – Whether to keep the newlines format, defaults... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
4de4580e645e-5 | Parameters
retrieval_method (callable) – Function used to retrieve docs
Returns
List of documents
Return type
List
process_attachment(page_id: str, ocr_languages: Optional[str] = None) → List[str][source]¶
process_doc(link: str) → str[source]¶
process_image(link: str, ocr_languages: Optional[str] = None) → str[source]¶... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ConfluenceLoader.html |
c578d916a44a-0 | langchain.document_loaders.mongodb.MongodbLoader¶
class langchain.document_loaders.mongodb.MongodbLoader(connection_string: str, db_name: str, collection_name: str, *, filter_criteria: Optional[Dict] = None)[source]¶
Load MongoDB documents.
Methods
__init__(connection_string, db_name, ...[, ...])
aload()
Load data into... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mongodb.MongodbLoader.html |
b6d4f70eff57-0 | langchain.document_loaders.quip.QuipLoader¶
class langchain.document_loaders.quip.QuipLoader(api_url: str, access_token: str, request_timeout: Optional[int] = 60)[source]¶
Load Quip pages.
Port of https://github.com/quip/quip-api/tree/master/samples/baqup
Parameters
api_url – https://platform.quip.com
access_token – to... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.quip.QuipLoader.html |
b6d4f70eff57-1 | Process a list of threads into a list of documents.
__init__(api_url: str, access_token: str, request_timeout: Optional[int] = 60)[source]¶
Parameters
api_url – https://platform.quip.com
access_token – token to access the Quip API. Please refer to:
https://quip.com/dev/automation/documentation/current#section/Authentication/... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.quip.QuipLoader.html |
b6d4f70eff57-2 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
process_thread(thread_id: str, include_images: bool, include_messages: bool) → Optional[Document][source]¶
process_thread_images(tree: ElementTree) → str[source]¶
process_thread_messages(thread_id: str) → str[source]¶
process_threads(thread_ids: Seq... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.quip.QuipLoader.html |
443d64fdbc76-0 | langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader¶
class langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Microsoft PowerPoint files using Unstructured.
Works with both .ppt and .pptx fil... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
443d64fdbc76-1 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredPowerPointLoader¶
Microsoft PowerPoint | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html |
06ab36eb9fc6-0 | langchain.document_loaders.psychic.PsychicLoader¶
class langchain.document_loaders.psychic.PsychicLoader(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶
Load from Psychic.dev.
Initialize with API key, connector id, and account id.
Parameters
api_key – The Psychic API key.
account_id – The Ps... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.psychic.PsychicLoader.html |
125b4eb4d8fb-0 | langchain.document_loaders.csv_loader.CSVLoader¶
class langchain.document_loaders.csv_loader.CSVLoader(file_path: str, source_column: Optional[str] = None, metadata_columns: Sequence[str] = (), csv_args: Optional[Dict] = None, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Load a CSV file i... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html |
125b4eb4d8fb-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, source_column: Optional[str] = None, metadata_columns: Sequence[str] = (), csv_args: Optional[Dict] = None, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Parameters
file_path – The path to the CS... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html |
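A simplified sketch of the `CSVLoader` parameters above: one Document per row, with `source_column` (when given) overriding the file path in the `source` metadata. The `Document` class and the "column: value" page-content layout are stand-ins, not the exact LangChain output:

```python
import csv
import os
import tempfile
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Document:
    # Simplified stand-in for langchain.schema.Document.
    page_content: str
    metadata: dict = field(default_factory=dict)

def load_csv(file_path: str, source_column: Optional[str] = None,
             encoding: Optional[str] = None) -> List[Document]:
    """Sketch: each CSV row becomes one Document."""
    docs = []
    with open(file_path, newline="", encoding=encoding) as f:
        for i, row in enumerate(csv.DictReader(f)):
            content = "\n".join(f"{k}: {v}" for k, v in row.items())
            source = row[source_column] if source_column else file_path
            docs.append(Document(content, {"source": source, "row": i}))
    return docs

# Quick check against a throwaway CSV file.
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w", newline="") as f:
    f.write("name,team\nalice,infra\nbob,data\n")
docs = load_csv(path, source_column="name")
os.remove(path)
```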
a6ef57175b7f-0 | langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to reuse
a parser in... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html |
eccf581cc030-0 | langchain.document_loaders.blob_loaders.schema.Blob¶
class langchain.document_loaders.blob_loaders.schema.Blob[source]¶
Bases: BaseModel
Blob represents raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
eccf581cc030-1 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
eccf581cc030-2 | Parameters
path – path like object to file to be read
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
guess_type – If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
json(*,... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
eccf581cc030-3 | classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property source: Optional[str]¶
The source location of the blob as string if known otherwise none.
Examples using Blob¶
docai.md
Embaas | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html |
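The Blob chunks above document `from_path` (with `encoding`, `mime_type`, and `guess_type` parameters) and a `source` property. A minimal stand-in showing the documented guessing behavior (the `SimpleBlob` class is hypothetical; the real Blob is a pydantic model with more fields):

```python
import mimetypes
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimpleBlob:
    """Toy stand-in for Blob: raw data by value or by path reference."""
    data: Optional[bytes] = None
    path: Optional[str] = None
    mimetype: Optional[str] = None

    @classmethod
    def from_path(cls, path, mime_type=None, guess_type=True):
        # Per the docs: guess the mimetype from the file extension
        # only when an explicit mime_type was not provided.
        if mime_type is None and guess_type:
            mime_type = mimetypes.guess_type(str(path))[0]
        return cls(path=str(path), mimetype=mime_type)

    @property
    def source(self) -> Optional[str]:
        # Source location of the blob if known, otherwise None.
        return self.path

blob = SimpleBlob.from_path("report.html")
```

An explicit `mime_type` argument takes precedence: `SimpleBlob.from_path("report.html", mime_type="text/plain")` keeps `text/plain` rather than guessing.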
0d993cedb2e3-0 | langchain.document_loaders.tsv.UnstructuredTSVLoader¶
class langchain.document_loaders.tsv.UnstructuredTSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load TSV files using Unstructured.
Like other
Unstructured loaders, UnstructuredTSVLoader can be used in both
“single” and “elements... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tsv.UnstructuredTSVLoader.html |
89706734a71e-0 | langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator¶
class langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator(remove_selectors: Optional[List[str]] = None)[source]¶
Evaluates the page HTML content using the unstructured library.
Initialize UnstructuredHtmlEvaluator.
Methods
__init__([re... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.UnstructuredHtmlEvaluator.html |
b8988843deba-0 | langchain.document_loaders.baiducloud_bos_file.BaiduBOSFileLoader¶
class langchain.document_loaders.baiducloud_bos_file.BaiduBOSFileLoader(conf: Any, bucket: str, key: str)[source]¶
Load from Baidu Cloud BOS file.
Initialize with BOS config, bucket and key name.
:param conf(BceClientConfiguration): BOS config.
:param b... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.baiducloud_bos_file.BaiduBOSFileLoader.html |
e3f2b098eec9-0 | langchain.document_loaders.stripe.StripeLoader¶
class langchain.document_loaders.stripe.StripeLoader(resource: str, access_token: Optional[str] = None)[source]¶
Load from Stripe API.
Initialize with a resource and an access token.
Parameters
resource – The resource.
access_token – The access token.
Methods
__init__(res... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.stripe.StripeLoader.html |
2d15289e50ef-0 | langchain.document_loaders.imsdb.IMSDbLoader¶
class langchain.document_loaders.imsdb.IMSDbLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
2d15289e50ef-1 | scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = No... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
2d15289e50ef-2 | Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → ... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html |
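The IMSDbLoader chunk above describes `scrape`, which fetches a page and returns it in BeautifulSoup format for text extraction. A stdlib-only sketch of that extraction step (using `html.parser` instead of BeautifulSoup, so it runs without dependencies; `scrape_text` is a hypothetical helper):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Minimal stand-in for the BeautifulSoup step:
    collect the visible text nodes of a page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def scrape_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

text = scrape_text("<html><body><h1>Title</h1><p>Body text.</p></body></html>")
```

The real loader additionally handles headers, SSL verification, proxies, and encoding autodetection, per the `__init__` signature shown above.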
4fe84d4fb681-0 | langchain.document_loaders.parsers.pdf.PyPDFParser¶
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None, extract_images: bool = False)[source]¶
Load PDF using pypdf
Methods
__init__([password, extract_images])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly ... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFParser.html |
69ad40d378f5-0 | langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader¶
class langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader(path: str)[source]¶
Load WhatsApp messages text file.
Initialize with path.
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader.html |
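The WhatsAppChatLoader chunk above loads a WhatsApp messages text file. A hedged sketch of the parsing involved, assuming a typical export line shape like `18/02/2023, 11:57 - Alice: Hello` (date formats vary by locale, so the timestamp pattern here is deliberately loose and illustrative):

```python
import re

# One exported line: "<timestamp> - <sender>: <message>".
LINE = re.compile(r"^(?P<ts>[\d/.,: ]+?) - (?P<sender>[^:]+): (?P<text>.*)$")

def parse_chat(text: str):
    """Return (sender, message) pairs for lines matching the
    export format; unmatched lines (system notices) are skipped."""
    messages = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            messages.append((m.group("sender"), m.group("text")))
    return messages

msgs = parse_chat(
    "18/02/2023, 11:57 - Alice: Hello\n"
    "18/02/2023, 11:58 - Bob: Hi"
)
```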
1dcfeab517eb-0 | langchain.document_loaders.gutenberg.GutenbergLoader¶
class langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]¶
Load from Gutenberg.org.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_spl... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html |
cec534b0ac17-0 | langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader¶
class langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader(url: str, max_depth: Optional[int] = 2, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, metadata_extractor: Optional[Callable[[str, str], str]]... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
cec534b0ac17-1 | Initialize with URL to crawl and any subdirectories to exclude.
Parameters
url – The URL to crawl.
max_depth – The max depth of the recursive loading.
use_async – Whether to use asynchronous loading.
If True, this function will not be lazy, but it will still work in the
expected way, just not lazy.
extractor – A functi... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
cec534b0ac17-2 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(url: str, max_depth: Optional[int] = 2, use_async: Optional[bool] = None, extractor: Optional[Callable[[str], str]] = None, metadata_extractor: Optional[Callable[[str, str], str]] = None, exclude_dirs: Optional[Sequence[str]] = (), timeout: ... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
cec534b0ac17-3 | lazy_load() → Iterator[Document][source]¶
Lazy load web pages.
When use_async is True, this function will not be lazy,
but it will still work in the expected way, just not lazy.
load() → List[Document][source]¶
Load web pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html |
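The RecursiveUrlLoader chunk above documents `url`, `max_depth`, `extractor`, and `exclude_dirs` parameters. A breadth-first sketch of that crawl logic with the fetching and link discovery injected as functions, so it stays runnable offline (`crawl`, `fetch`, and `get_links` are hypothetical names for illustration):

```python
from collections import deque

def crawl(start, fetch, get_links, max_depth=2, exclude_dirs=()):
    """Visit pages breadth-first up to max_depth levels, skipping
    excluded path prefixes and already-seen URLs."""
    seen, pages = {start}, []
    queue = deque([(start, 0)])
    while queue:
        url, depth = queue.popleft()
        pages.append((url, fetch(url)))        # extractor would run here
        if depth + 1 >= max_depth:
            continue
        for link in get_links(url):
            if link in seen or any(link.startswith(d) for d in exclude_dirs):
                continue
            seen.add(link)
            queue.append((link, depth + 1))
    return pages

# Tiny in-memory "site" standing in for real HTTP fetches.
site = {"/": ["/a", "/b"], "/a": ["/b", "/x/secret"], "/b": []}
pages = crawl("/", fetch=lambda u: f"<html>{u}</html>",
              get_links=lambda u: site.get(u, []),
              max_depth=2, exclude_dirs=("/x",))
```

The dedup via `seen` is why `/b` is loaded once even though both `/` and `/a` link to it, and `/x/secret` is never queued because of `exclude_dirs`.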
5d4802b1044c-0 | langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader¶
class langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader(config: Mapping[str, Any], stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load from Zendesk Support usi... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader.html |
5d4802b1044c-1 | load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacter... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteZendeskSupportLoader.html |
0233306da3e9-0 | langchain.document_loaders.mhtml.MHTMLLoader¶
class langchain.document_loaders.mhtml.MHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Parse MHTML files with BeautifulSoup.
Initialise with path, and optionally, file encoding to use,... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html |
0233306da3e9-1 | load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveC... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html |
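The MHTMLLoader chunk above parses MHTML files. MHTML is a MIME multipart archive, so the extraction step can be sketched with the stdlib `email` parser before any BeautifulSoup processing (`extract_html` is a hypothetical helper; the real loader goes on to strip tags and apply `get_text_separator`):

```python
import email

MHTML = """\
MIME-Version: 1.0
Content-Type: multipart/related; boundary="B"

--B
Content-Type: text/html

<html><body>Hello</body></html>
--B--
"""

def extract_html(mhtml_text: str):
    """Walk the MIME parts of an MHTML archive and return the
    first text/html payload, or None if there is none."""
    msg = email.message_from_string(mhtml_text)
    for part in msg.walk():
        if part.get_content_type() == "text/html":
            return part.get_payload()
    return None

html = extract_html(MHTML)
```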
cec4b30479b8-0 | langchain.document_loaders.wikipedia.WikipediaLoader¶
class langchain.document_loaders.wikipedia.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Load from Wikipedia.
The hard limit on... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
cec4b30479b8-1 | Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load ... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html |
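The WikipediaLoader chunk above describes two limits: `load_max_docs` caps how many documents are loaded and `doc_content_chars_max` caps each document's content length. A trivial sketch of how those two parameters interact (the `cap_results` helper is hypothetical; the real loader applies these limits while querying the `wikipedia` package):

```python
def cap_results(results, load_max_docs=100, doc_content_chars_max=4000):
    """Apply both documented limits: keep at most load_max_docs
    results, and truncate each to doc_content_chars_max characters."""
    return [text[:doc_content_chars_max] for text in results[:load_max_docs]]

docs = cap_results(["x" * 10, "y" * 10, "z" * 10],
                   load_max_docs=2, doc_content_chars_max=4)
```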
2f29d672671f-0 | langchain.document_loaders.tomarkdown.ToMarkdownLoader¶
class langchain.document_loaders.tomarkdown.ToMarkdownLoader(url: str, api_key: str)[source]¶
Load HTML using 2markdown API.
Initialize with url and api key.
Methods
__init__(url, api_key)
Initialize with url and api key.
lazy_load()
Lazily load the file.
load()
L... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tomarkdown.ToMarkdownLoader.html |
69939151e61e-0 | langchain.document_loaders.url_playwright.PlaywrightEvaluator¶
class langchain.document_loaders.url_playwright.PlaywrightEvaluator[source]¶
Abstract base class for all evaluators.
Each evaluator should take a page, a browser instance, and a response
object, process the page as necessary, and return the resulting text.
... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightEvaluator.html |
11125c1caaa1-0 | langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal¶
class langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal(device: str = '0', lang_model: Optional[str] = None, forced_decoder_ids: Optional[Tuple[Dict]] = None)[source]¶
Transcribe and parse audio files with OpenAI Whisper model.
Audio tra... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
11125c1caaa1-1 | Initialize the parser.
Parameters
device – device to use.
lang_model – whisper model to use, for example “openai/whisper-medium”.
Defaults to None.
forced_decoder_ids – id states for decoder in a multilanguage model.
Defaults to None.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blo... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.audio.OpenAIWhisperParserLocal.html |
bc6fb583710a-0 | langchain.document_loaders.rss.RSSFeedLoader¶
class langchain.document_loaders.rss.RSSFeedLoader(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = False, **newsloader_kwargs: Any)[source]¶
Load news articles from RSS feeds using Unstructured.
P... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |
bc6fb583710a-1 | Initialize with urls or OPML.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: Optional[Sequence[str]] = None, opml: Optional[str] = None, continue_on_failure: bool = True, show_progress_bar: bool = Fals... | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rss.RSSFeedLoader.html |