langchain.document_loaders.github.GitHubIssuesLoader¶
class langchain.document_loaders.github.GitHubIssuesLoader(*, repo: str, access_token: str, include_prs: bool = True, milestone: Optional[Union[int, Literal['*', 'none']]] = None, state: Optional[Literal['open', 'closed', 'all']] = None, assignee: Optional[str] = None, creator: Optional[str] = None, mentioned: Optional[str] = None, labels: Optional[List[str]] = None, sort: Optional[Literal['created', 'updated', 'comments']] = None, direction: Optional[Literal['asc', 'desc']] = None, since: Optional[str] = None)[source]¶
Bases: BaseGitHubLoader
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param assignee: Optional[str] = None¶
Filter on assigned user. Pass ‘none’ for no user and ‘*’ for any user.
param creator: Optional[str] = None¶
Filter on the user that created the issue.
param direction: Optional[Literal['asc', 'desc']] = None¶
The direction to sort the results by. Can be one of: ‘asc’, ‘desc’.
param include_prs: bool = True¶
If True, include pull requests in results; otherwise, ignore them.
param labels: Optional[List[str]] = None¶
Label names to filter on. Example: bug,ui,@high.
param mentioned: Optional[str] = None¶
Filter on a user that’s mentioned in the issue.
param milestone: Optional[Union[int, Literal['*', 'none']]] = None¶
If integer is passed, it should be a milestone’s number field.
If the string ‘*’ is passed, issues with any milestone are accepted.
If the string ‘none’ is passed, issues without milestones are returned.
param repo: str [Required]¶
Name of the repository.
param since: Optional[str] = None¶
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
param sort: Optional[Literal['created', 'updated', 'comments']] = None¶
What to sort results by. Can be one of: ‘created’, ‘updated’, ‘comments’.
Default is ‘created’.
param state: Optional[Literal['open', 'closed', 'all']] = None¶
Filter on issue state. Can be one of: ‘open’, ‘closed’, ‘all’.
lazy_load() → Iterator[Document][source]¶
Get issues of a GitHub repository.
Returns
Documents with two attributes:
page_content
metadata: url, title, creator, created_at, last_update_time, closed_time,
number of comments, state, labels, assignee, assignees, milestone, locked,
number, is_pull_request
Return type
Iterator[Document]
load() → List[Document][source]¶
Get issues of a GitHub repository.
Returns
Documents with two attributes:
page_content
metadata: url, title, creator, created_at, last_update_time, closed_time,
number of comments, state, labels, assignee, assignees, milestone, locked,
number, is_pull_request
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
parse_issue(issue: dict) → Document[source]¶
Create a Document object from a single GitHub issue.
validator validate_environment » all fields¶
Validate that access token exists in environment.
validator validate_since » since[source]¶
property headers: Dict[str, str]¶
property query_params: str¶
property url: str¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html
langchain.document_loaders.embaas.EmbaasLoader¶
class langchain.document_loaders.embaas.EmbaasLoader(*, embaas_api_key: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/', params: EmbaasDocumentExtractionParameters = {}, file_path: str, blob_loader: Optional[EmbaasBlobLoader] = None)[source]¶
Bases: BaseEmbaasLoader, BaseLoader
Wrapper around embaas’s document loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()

# Custom API parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
    file_path="example.pdf",
    params={
        "should_embed": True,
        "model": "e5-large-v2",
        "chunk_size": 256,
        "chunk_splitter": "CharacterTextSplitter",
    },
)
documents = loader.load()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the embaas document extraction API.
param blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None¶
The blob loader to use. If not provided, a default one will be created.
param embaas_api_key: Optional[str] = None¶
param file_path: str [Required]¶
The path to the file to load.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the embaas document extraction API.
lazy_load() → Iterator[Document][source]¶
Load the documents from the file path lazily.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load documents and split into chunks.
validator validate_blob_loader » blob_loader[source]¶
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html
langchain.document_loaders.pdf.PDFPlumberLoader¶
class langchain.document_loaders.pdf.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Bases: BasePDFLoader
Loader that uses pdfplumber to load PDF files.
Initialize with file path.
Methods
__init__(file_path[, text_kwargs])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
property source: str¶
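The lazy_load / load pairing above recurs throughout these loaders: lazy_load yields documents one at a time, while load materializes the whole list. A toy sketch of the pattern (LineLoader and its string “documents” are illustrative, not part of LangChain):

```python
from typing import Iterator, List

class LineLoader:
    """Toy loader: each line of the input text stands in for a Document."""

    def __init__(self, text: str):
        self.text = text

    def lazy_load(self) -> Iterator[str]:
        # Yield one "document" at a time; nothing is held in memory up front.
        for line in self.text.splitlines():
            yield line

    def load(self) -> List[str]:
        # Eager variant: simply materialize the lazy iterator.
        return list(self.lazy_load())
```

The eager load is typically just list(lazy_load()), which is why the lazy variant is preferred for large inputs.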
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFPlumberLoader.html
langchain.document_loaders.blob_loaders.schema.Blob¶
class langchain.document_loaders.blob_loaders.schema.Blob(*, data: Optional[Union[bytes, str]] = None, mimetype: Optional[str] = None, encoding: str = 'utf-8', path: Optional[Union[str, PurePath]] = None)[source]¶
Bases: BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
helps to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param data: Optional[Union[bytes, str]] = None¶
param encoding: str = 'utf-8'¶
param mimetype: Optional[str] = None¶
param path: Optional[Union[str, pathlib.PurePath]] = None¶
as_bytes() → bytes[source]¶
Read data as bytes.
as_bytes_io() → Generator[Union[BytesIO, BufferedReader], None, None][source]¶
Read data as a byte stream.
as_string() → str[source]¶
Read data as a string.
validator check_blob_is_valid » all fields[source]¶
Verify that either data or path is provided.
classmethod from_data(data: Union[str, bytes], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, path: Optional[str] = None) → Blob[source]¶
Initialize the blob from in-memory data.
Parameters
data – the in-memory data associated with the blob
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
path – if provided, will be set as the source from which the data came
Returns
Blob instance
classmethod from_path(path: Union[str, PurePath], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, guess_type: bool = True) → Blob[source]¶
Load the blob from a path like object.
Parameters
path – path like object to file to be read
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
guess_type – If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
property source: Optional[str]¶
The source location of the blob as string if known otherwise none.
model Config[source]¶
Bases: object
arbitrary_types_allowed = True¶
frozen = True¶
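The by-reference / by-value split described above can be sketched with a frozen dataclass. MiniBlob is a simplified stand-in, not the real Blob class, but it mirrors the documented from_data / from_path constructors and as_bytes / as_string accessors:

```python
from dataclasses import dataclass
from pathlib import PurePath
from typing import Optional, Union

@dataclass(frozen=True)  # mirrors frozen = True in the Config above
class MiniBlob:
    """Minimal sketch of the Blob pattern: raw data by value or by reference."""
    data: Optional[Union[bytes, str]] = None
    encoding: str = "utf-8"
    path: Optional[Union[str, PurePath]] = None

    @classmethod
    def from_data(cls, data, *, encoding="utf-8", path=None):
        # By value: the data lives in memory on the blob itself.
        return cls(data=data, encoding=encoding, path=path)

    @classmethod
    def from_path(cls, path, *, encoding="utf-8"):
        # By reference: the file is only read when a representation is requested.
        return cls(path=path, encoding=encoding)

    def as_bytes(self) -> bytes:
        if isinstance(self.data, bytes):
            return self.data
        if isinstance(self.data, str):
            return self.data.encode(self.encoding)
        with open(self.path, "rb") as f:
            return f.read()

    def as_string(self) -> str:
        return self.as_bytes().decode(self.encoding)
```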
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html
langchain.document_loaders.apify_dataset.ApifyDatasetLoader¶
class langchain.document_loaders.apify_dataset.ApifyDatasetLoader(dataset_id: str, dataset_mapping_function: Callable[[Dict], Document])[source]¶
Bases: BaseLoader, BaseModel
Logic for loading documents from Apify datasets.
Initialize the loader with an Apify dataset ID and a mapping function.
Parameters
dataset_id (str) – The ID of the dataset on the Apify platform.
dataset_mapping_function (Callable) – A function that takes a single
dictionary (an Apify dataset item) and converts it to an instance
of the Document class.
param apify_client: Any = None¶
param dataset_id: str [Required]¶
The ID of the dataset on the Apify platform.
param dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]¶
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
validator validate_environment » all fields[source]¶
Validate environment.
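A dataset_mapping_function is just a plain callable from one dataset item (a dict) to a Document. A sketch with a stand-in Document class; the "text" and "url" keys are assumptions, since the actual field names depend on the Apify actor that produced the dataset:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    """Stand-in for langchain.schema.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def dataset_mapping_function(item: dict) -> Doc:
    # Pick the fields of interest from one dataset item; "text" and "url"
    # are hypothetical keys chosen for illustration.
    return Doc(page_content=item.get("text", ""),
               metadata={"source": item.get("url", "")})
```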
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html
langchain.document_loaders.larksuite.LarkSuiteDocLoader¶
class langchain.document_loaders.larksuite.LarkSuiteDocLoader(domain: str, access_token: str, document_id: str)[source]¶
Bases: BaseLoader
Loader that loads LarkSuite (FeiShu) document.
Initialize with domain, access_token (tenant / user), and document_id.
Methods
__init__(domain, access_token, document_id)
Initialize with domain, access_token (tenant / user), and document_id.
lazy_load()
Lazy load LarkSuite (FeiShu) document.
load()
Load LarkSuite (FeiShu) document.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load LarkSuite (FeiShu) document.
load() → List[Document][source]¶
Load LarkSuite (FeiShu) document.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.larksuite.LarkSuiteDocLoader.html
langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter¶
class langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter(code: str)[source]¶
Bases: ABC
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
abstract extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
abstract simplify_code() → str[source]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter.html
langchain.document_loaders.tomarkdown.ToMarkdownLoader¶
class langchain.document_loaders.tomarkdown.ToMarkdownLoader(url: str, api_key: str)[source]¶
Bases: BaseLoader
Loader that loads HTML to markdown using 2markdown.
Initialize with url and api key.
Methods
__init__(url, api_key)
Initialize with url and api key.
lazy_load()
Lazily load the file.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazily load the file.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tomarkdown.ToMarkdownLoader.html
langchain.document_loaders.word_document.Docx2txtLoader¶
class langchain.document_loaders.word_document.Docx2txtLoader(file_path: str)[source]¶
Bases: BaseLoader, ABC
Loads a DOCX file with docx2txt and chunks it at the character level.
By default the file path is treated as local; if it is a web path, the file is
downloaded to a temporary file, loaded from there, and the temporary file is
cleaned up after completion.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load given path as single page.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load given path as single page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
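load_and_split hands the loaded documents to a TextSplitter. Character-level chunking, in its most naive stdlib form, looks like the sketch below; the real CharacterTextSplitter is more careful about separators and whitespace:

```python
def split_by_characters(text: str, chunk_size: int = 100, overlap: int = 0):
    """Naive character-level chunking: fixed-size windows with optional overlap."""
    step = chunk_size - overlap
    # Each window starts `step` characters after the previous one; the final
    # chunk may be shorter than chunk_size.
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```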
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.Docx2txtLoader.html
langchain.document_loaders.evernote.EverNoteLoader¶
class langchain.document_loaders.evernote.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]¶
Bases: BaseLoader
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document. Any non-content metadata tags on the note (e.g. ‘author’,
‘created’, ‘updated’, but not ‘content-raw’ or ‘resource’) are extracted and
stored as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata
preserved is the ‘source’, which contains the file name of the export.
Initialize with file path.
Methods
__init__(file_path[, load_single_document])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load documents from EverNote export file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents from EverNote export file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
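An .enex export is XML, so the extraction described above can be approximated with the stdlib. This sketch keeps only each note’s title and tag-stripped plain text; the regex-based tag stripping is a crude stand-in for the loader’s actual parsing:

```python
import re
import xml.etree.ElementTree as ET

def extract_notes(enex_xml: str):
    """Pull (title, plain_text) pairs out of an ENEX export string."""
    notes = []
    for note in ET.fromstring(enex_xml).iter("note"):
        title = note.findtext("title", default="")
        content = note.findtext("content", default="")
        # ENML content is markup; strip the tags to keep plain text only.
        plain = re.sub(r"<[^>]+>", "", content).strip()
        notes.append((title, plain))
    return notes

sample = """<en-export>
  <note>
    <title>Groceries</title>
    <content>&lt;en-note&gt;milk and eggs&lt;/en-note&gt;</content>
  </note>
</en-export>"""
```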
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.evernote.EverNoteLoader.html
langchain.document_loaders.trello.TrelloLoader¶
class langchain.document_loaders.trello.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]¶
Bases: BaseLoader
Trello loader. Reads all cards from a Trello board.
Initialize Trello loader.
Parameters
client – Trello API client.
board_name – The name of the Trello board.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
Methods
__init__(client, board_name, *[, ...])
Initialize Trello loader.
from_credentials(board_name, *[, api_key, token])
Convenience constructor that builds TrelloClient init param for you.
lazy_load()
A lazy loader for document content.
load()
Loads all cards from the specified Trello board.
load_and_split([text_splitter])
Load documents and split into chunks.
classmethod from_credentials(board_name: str, *, api_key: Optional[str] = None, token: Optional[str] = None, **kwargs: Any) → TrelloLoader[source]¶
Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name – The name of the Trello board.
api_key – Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token – Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Loads all cards from the specified Trello board.
You can filter the cards, metadata, and text included by using the optional
parameters.
Returns
A list of documents, one for each card in the board.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
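from_credentials documents an explicit-argument-first, environment-variable-fallback resolution for the API key and token. That resolution order can be sketched as follows (resolve_trello_credentials is illustrative, not the loader’s code):

```python
import os
from typing import Optional

def resolve_trello_credentials(api_key: Optional[str] = None,
                               token: Optional[str] = None):
    """Explicit arguments win; otherwise fall back to the documented
    TRELLO_API_KEY / TRELLO_TOKEN environment variables."""
    api_key = api_key or os.environ.get("TRELLO_API_KEY")
    token = token or os.environ.get("TRELLO_TOKEN")
    if not api_key or not token:
        raise ValueError("Trello credentials missing: pass api_key/token "
                         "or set TRELLO_API_KEY and TRELLO_TOKEN")
    return api_key, token
```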
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.trello.TrelloLoader.html
langchain.document_loaders.csv_loader.UnstructuredCSVLoader¶
class langchain.document_loaders.csv_loader.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load CSV files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html
langchain.document_loaders.docugami.DocugamiLoader¶
class langchain.document_loaders.docugami.DocugamiLoader(*, api: str = 'https://api.docugami.com/v1preview1', access_token: Optional[str] = None, docset_id: Optional[str] = None, document_ids: Optional[Sequence[str]] = None, file_paths: Optional[Sequence[Union[Path, str]]] = None, min_chunk_size: int = 32)[source]¶
Bases: BaseLoader, BaseModel
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: Optional[str] = None¶
param api: str = 'https://api.docugami.com/v1preview1'¶
param docset_id: Optional[str] = None¶
param document_ids: Optional[Sequence[str]] = None¶
param file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None¶
param min_chunk_size: int = 32¶
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
validator validate_local_or_remote » all fields[source]¶
Validate that either local file paths are given, or remote API docset ID.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html
langchain.document_loaders.chatgpt.concatenate_rows¶
langchain.document_loaders.chatgpt.concatenate_rows(message: dict, title: str) → str[source]¶
Combine message information in a readable format ready to be used.
Parameters
message – Message to be concatenated
title – Title of the conversation
Returns
Concatenated message
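As a rough illustration of such a helper, the sketch below renders one message under its conversation title; the "sender" and "text" keys are assumptions, not the exact export schema the real function reads:

```python
def concatenate_rows_sketch(message: dict, title: str) -> str:
    """Illustrative version: render one exported message under its
    conversation title, in a readable single-line format."""
    sender = message.get("sender", "unknown")
    text = message.get("text", "")
    return f"{title} - {sender}: {text}\n\n"
```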
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.concatenate_rows.html
langchain.document_loaders.chatgpt.ChatGPTLoader¶
class langchain.document_loaders.chatgpt.ChatGPTLoader(log_file: str, num_logs: int = -1)[source]¶
Bases: BaseLoader
Loader that loads conversations from exported ChatGPT data.
Methods
__init__(log_file[, num_logs])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.ChatGPTLoader.html
langchain.document_loaders.azlyrics.AZLyricsLoader¶
class langchain.document_loaders.azlyrics.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loader that loads AZLyrics webpages.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpage.
load_and_split([text_splitter])
Load documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html
langchain.document_loaders.csv_loader.CSVLoader¶
class langchain.document_loaders.csv_loader.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]¶
Bases: BaseLoader
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and output on a new line in the document’s page_content.
The source for each document loaded from the CSV is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
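The row-to-page_content conversion above can be reproduced with the stdlib csv module; rows_to_page_content is illustrative, not the loader’s actual method:

```python
import csv
import io

def rows_to_page_content(csv_text: str):
    """Render each CSV row as 'key: value' lines, one document per row,
    matching the output example above."""
    reader = csv.DictReader(io.StringIO(csv_text))
    # One string per row: each column becomes "name: value" on its own line.
    return ["\n".join(f"{k}: {v}" for k, v in row.items()) for row in reader]
```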
Methods
__init__(file_path[, source_column, ...])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.CSVLoader.html
langchain.document_loaders.hn.HNLoader¶
class langchain.document_loaders.hn.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Load Hacker News data from either main page results or the comments page.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Get important HN webpage information.
load_and_split([text_splitter])
Load documents and split into chunks.
load_comments(soup_info)
Load comments from a HN post.
load_results(soup)
Load items from an HN page.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Get important HN webpage information.
Components are:
title
content
source url
time of post
author of the post
number of comments
rank of the post
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
load_comments(soup_info: Any) → List[Document][source]¶
Load comments from a HN post.
load_results(soup: Any) → List[Document][source]¶
Load items from an HN page.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html
langchain.document_loaders.pdf.UnstructuredPDFLoader¶
class langchain.document_loaders.pdf.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load PDF files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.UnstructuredPDFLoader.html
langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader¶
class langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader(conf: Any, bucket: str, key: str)[source]¶
Bases: BaseLoader
Loader for documents from Tencent Cloud COS.
Initialize with COS config, bucket, and key name.
Parameters
conf (CosConfig) – COS config.
bucket (str) – COS bucket.
key (str) – COS file key.
Methods
__init__(conf, bucket, key)
Initialize with COS config, bucket and key name.
lazy_load()
Load documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_file.TencentCOSFileLoader.html
langchain.document_loaders.discord.DiscordChatLoader¶
class langchain.document_loaders.discord.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]¶
Bases: BaseLoader
Load Discord chat logs.
Initialize with a Pandas DataFrame containing chat logs.
Methods
__init__(chat_log[, user_id_col])
Initialize with a Pandas DataFrame containing chat logs.
lazy_load()
A lazy loader for document content.
load()
Load all chat messages.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load all chat messages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.discord.DiscordChatLoader.html
langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader¶
class langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader(path: str)[source]¶
Bases: BaseLoader
Loader that loads a WhatsApp messages text file.
Initialize with path.
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
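The parsing such a loader performs can be sketched with the standard library, assuming the common "date, time - sender: message" export line shape. Real exports vary by locale, and this regex is illustrative, not the loader's actual pattern:

```python
import re
from typing import Optional, Tuple

# One common WhatsApp export line shape: "1/23/23, 3:45 PM - Alice: hi there"
LINE_RE = re.compile(
    r"^(?P<date>[\d/.-]+),\s(?P<time>[\d:]+\s?(?:AM|PM)?)\s-\s"
    r"(?P<sender>[^:]+):\s(?P<text>.*)$"
)

def parse_line(line: str) -> Optional[Tuple[str, str, str]]:
    """Return (timestamp, sender, text), or None for system notices."""
    m = LINE_RE.match(line)
    if not m:
        return None
    timestamp = f"{m.group('date')} {m.group('time')}"
    return (timestamp, m.group("sender"), m.group("text"))
```

System lines such as encryption notices carry no sender and are skipped, which is why the function returns None rather than raising.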
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.WhatsAppChatLoader.html
langchain.document_loaders.confluence.ContentFormat¶
class langchain.document_loaders.confluence.ContentFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: str, Enum
Enumeration of the content formats of a Confluence page.
Methods
get_content(page)
__init__(*args, **kwds)
capitalize()
Return a capitalized version of the string.
casefold()
Return a version of the string suitable for caseless comparisons.
center(width[, fillchar])
Return a centered string of length width.
count(sub[, start[, end]])
Return the number of non-overlapping occurrences of substring sub in string S[start:end].
encode([encoding, errors])
Encode the string using the codec registered for encoding.
endswith(suffix[, start[, end]])
Return True if S ends with the specified suffix, False otherwise.
expandtabs([tabsize])
Return a copy where all tab characters are expanded using spaces.
find(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
format(*args, **kwargs)
Return a formatted version of S, using substitutions from args and kwargs.
format_map(mapping)
Return a formatted version of S, using substitutions from mapping.
index(sub[, start[, end]])
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end].
isalnum()
Return True if the string is an alpha-numeric string, False otherwise.
isalpha()
Return True if the string is an alphabetic string, False otherwise.
isascii()
Return True if all characters in the string are ASCII, False otherwise.
isdecimal()
Return True if the string is a decimal string, False otherwise.
isdigit()
Return True if the string is a digit string, False otherwise.
isidentifier()
Return True if the string is a valid Python identifier, False otherwise.
islower()
Return True if the string is a lowercase string, False otherwise.
isnumeric()
Return True if the string is a numeric string, False otherwise.
isprintable()
Return True if the string is printable, False otherwise.
isspace()
Return True if the string is a whitespace string, False otherwise.
istitle()
Return True if the string is a title-cased string, False otherwise.
isupper()
Return True if the string is an uppercase string, False otherwise.
join(iterable, /)
Concatenate any number of strings.
ljust(width[, fillchar])
Return a left-justified string of length width.
lower()
Return a copy of the string converted to lowercase.
lstrip([chars])
Return a copy of the string with leading whitespace removed.
maketrans
Return a translation table usable for str.translate().
partition(sep, /)
Partition the string into three parts using the given separator.
removeprefix(prefix, /)
Return a str with the given prefix string removed if present.
removesuffix(suffix, /)
Return a str with the given suffix string removed if present.
replace(old, new[, count])
Return a copy with all occurrences of substring old replaced by new.
rfind(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rindex(sub[, start[, end]])
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end].
rjust(width[, fillchar])
Return a right-justified string of length width.
rpartition(sep, /)
Partition the string into three parts using the given separator.
rsplit([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
rstrip([chars])
Return a copy of the string with trailing whitespace removed.
split([sep, maxsplit])
Return a list of the substrings in the string, using sep as the separator string.
splitlines([keepends])
Return a list of the lines in the string, breaking at line boundaries.
startswith(prefix[, start[, end]])
Return True if S starts with the specified prefix, False otherwise.
strip([chars])
Return a copy of the string with leading and trailing whitespace removed.
swapcase()
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()
Return a version of the string where each word is titlecased.
translate(table, /)
Replace each character in the string using the given translation table.
upper()
Return a copy of the string converted to uppercase.
zfill(width, /)
Pad a numeric string with zeros on the left, to fill a field of the given width.
Attributes
STORAGE
VIEW
capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower
case.
casefold()¶
Return a version of the string suitable for caseless comparisons.
center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
count(sub[, start[, end]]) → int¶
Return the number of non-overlapping occurrences of substring sub in
string S[start:end]. Optional arguments start and end are
interpreted as in slice notation.
encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
encoding – The encoding in which to encode the string.
errors – The error handling scheme to use for encoding errors.
The default is ‘strict’ meaning that encoding errors raise a
UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and
‘xmlcharrefreplace’ as well as any other name registered with
codecs.register_error that can handle UnicodeEncodeErrors.
endswith(suffix[, start[, end]]) → bool¶
Return True if S ends with the specified suffix, False otherwise.
With optional start, test S beginning at that position.
With optional end, stop comparing S at that position.
suffix can also be a tuple of strings to try.
expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
find(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Return -1 on failure.
format(*args, **kwargs) → str¶
Return a formatted version of S, using substitutions from args and kwargs.
The substitutions are identified by braces (‘{’ and ‘}’).
format_map(mapping) → str¶
Return a formatted version of S, using substitutions from mapping.
The substitutions are identified by braces (‘{’ and ‘}’).
get_content(page: dict) → str[source]¶
index(sub[, start[, end]]) → int¶
Return the lowest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and
there is at least one character in the string.
isalpha()¶
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there
is at least one character in the string.
isascii()¶
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F.
Empty string is ASCII too.
isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and
there is at least one character in the string.
isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there
is at least one character in the string.
isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier,
such as “def” or “class”.
islower()¶
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and
there is at least one cased character in the string.
isnumeric()¶
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at
least one character in the string.
isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in
repr() or if it is empty.
isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there
is at least one character in the string.
istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only
follow uncased characters and lowercase characters only cased ones.
isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and
there is at least one cased character in the string.
join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string.
The result is returned as a new string.
Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'
ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
lower()¶
Return a copy of the string converted to lowercase.
lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
static maketrans()¶
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode
ordinals (integers) or characters to Unicode ordinals, strings or None.
Character keys will be then converted to ordinals.
If there are two arguments, they must be strings of equal length, and
in the resulting dictionary, each character in x will be mapped to the
character at the same position in y. If there is a third argument, it
must be a string, whose characters will be mapped to None in the result.
partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found,
returns a 3-tuple containing the part before the separator, the separator
itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string
and two empty strings.
removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):].
Otherwise, return a copy of the original string.
removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty,
return string[:-len(suffix)]. Otherwise, return a copy of the original
string.
replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
count – Maximum number of occurrences to replace.
-1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are
replaced.
rfind(sub[, start[, end]]) → int¶
Return the highest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Return -1 on failure.
rindex(sub[, start[, end]]) → int¶
Return the highest index in S where substring sub is found,
such that sub is contained within S[start:end]. Optional
arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If
the separator is found, returns a 3-tuple containing the part before the
separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings
and the original string.
rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
sep – The separator used to split the string.
When set to None (the default value), will split on any whitespace
character (including \n \r \t \f and spaces) and will discard
empty strings from the result.
maxsplit – Maximum number of splits (starting from the left).
-1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
sep – The separator used to split the string.
When set to None (the default value), will split on any whitespace
character (including \n \r \t \f and spaces) and will discard
empty strings from the result.
maxsplit – Maximum number of splits (starting from the left).
-1 (the default value) means no limit.
Note, str.split() is mainly useful for data that has been intentionally
delimited. With natural text that includes punctuation, consider using
the regular expression module.
splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and
true.
startswith(prefix[, start[, end]]) → bool¶
Return True if S starts with the specified prefix, False otherwise.
With optional start, test S beginning at that position.
With optional end, stop comparing S at that position.
prefix can also be a tuple of strings to try.
strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
title()¶
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining
cased characters have lower case.
translate(table, /)¶
Replace each character in the string using the given translation table.
table – Translation table, which must be a mapping of Unicode ordinals to
Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a
dictionary or list. If this operation raises LookupError, the character is
left untouched. Characters mapped to None are deleted.
upper()¶
Return a copy of the string converted to uppercase.
zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
STORAGE = 'body.storage'¶
VIEW = 'body.view'¶
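The two attribute values correspond to keys of the Confluence REST page payload. A stdlib sketch of how a str-valued enum like this can resolve content from a page dict follows; the payload shape and the get_content logic shown here are assumptions for illustration, not the library's implementation:

```python
from enum import Enum

class ContentFormat(str, Enum):
    """str-valued enum: each member doubles as the REST body key it names."""
    STORAGE = "body.storage"
    VIEW = "body.view"

    def get_content(self, page: dict) -> str:
        # Assumed payload shape: "body.storage" -> page["body"]["storage"]["value"]
        body, representation = self.value.split(".")
        return page[body][representation]["value"]
```

Because the enum subclasses str, a member compares equal to its value, so it can be passed anywhere a plain "body.storage" / "body.view" string is expected.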
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.confluence.ContentFormat.html
langchain.document_loaders.college_confidential.CollegeConfidentialLoader¶
class langchain.document_loaders.college_confidential.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loader that loads College Confidential webpages.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpage.
load_and_split([text_splitter])
Load documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html
langchain.document_loaders.parsers.generic.MimeTypeBasedParser¶
class langchain.document_loaders.parsers.generic.MimeTypeBasedParser(handlers: Mapping[str, BaseBlobParser], *, fallback_parser: Optional[BaseBlobParser] = None)[source]¶
Bases: BaseBlobParser
A parser that uses mime-types to determine how to parse a blob.
This parser is useful for simple pipelines where the mime-type is sufficient
to determine how to parse a blob.
To use, configure handlers based on mime-types and pass them to the initializer.
Example
from langchain.document_loaders.parsers.generic import MimeTypeBasedParser
parser = MimeTypeBasedParser(
    handlers={"application/pdf": ...},
    fallback_parser=...,
)
Define a parser that uses mime-types to determine how to parse a blob.
Parameters
handlers – A mapping from mime-types to functions that take a blob, parse it
and return a document.
fallback_parser – A fallback parser to use if the mime-type is not
found in the handlers. If provided, this parser will be
used to parse blobs with all mime-types not found in
the handlers.
If not provided, a ValueError will be raised if the
mime-type is not found in the handlers.
Methods
__init__(handlers, *[, fallback_parser])
Define a parser that uses mime-types to determine how to parse a blob.
lazy_parse(blob)
Load documents from a blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Load documents from a blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
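The dispatch described above can be sketched with plain callables in place of BaseBlobParser instances; MimeDispatcher is a toy name for illustration, not a langchain class:

```python
from typing import Callable, Mapping, Optional

Handler = Callable[[bytes], str]

class MimeDispatcher:
    """Toy version of mime-type dispatch: handlers keyed by mime-type,
    with an optional fallback for everything else."""

    def __init__(self, handlers: Mapping[str, Handler],
                 fallback: Optional[Handler] = None):
        self.handlers = dict(handlers)
        self.fallback = fallback

    def parse(self, mimetype: str, data: bytes) -> str:
        handler = self.handlers.get(mimetype, self.fallback)
        if handler is None:
            # Mirrors the documented behavior: no handler and no fallback
            # means a ValueError.
            raise ValueError(f"no handler for mime-type {mimetype!r}")
        return handler(data)
```

The design keeps per-format logic out of the pipeline: adding a format means adding one entry to the mapping, not editing the dispatcher.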
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.generic.MimeTypeBasedParser.html
langchain.document_loaders.helpers.detect_file_encodings¶
langchain.document_loaders.helpers.detect_file_encodings(file_path: str, timeout: int = 5) → List[FileEncoding][source]¶
Try to detect the file encoding.
Returns a list of FileEncoding tuples with the detected encodings ordered
by confidence.
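For illustration, the confidence-ordered return shape can be mimicked with a stdlib-only stand-in that simply tries candidate codecs in order. The real helper uses a statistical detector under a timeout; the FileEncoding fields and candidate list below are assumptions:

```python
from typing import List, NamedTuple

class FileEncoding(NamedTuple):
    """Assumed field names, for illustration only."""
    encoding: str
    confidence: float

def detect_encodings(data: bytes,
                     candidates=("utf-8", "utf-16", "latin-1")) -> List[FileEncoding]:
    """Return candidate codecs that can decode `data`, best guess first."""
    found: List[FileEncoding] = []
    for enc in candidates:
        try:
            data.decode(enc)
        except UnicodeError:
            continue  # this codec cannot represent the bytes; skip it
        found.append(FileEncoding(enc, 1.0 / (len(found) + 1)))
    return found
```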
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.detect_file_encodings.html
langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader¶
class langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that uses PDFMiner to load PDF files as HTML content.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
property source: str¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerPDFasHTMLLoader.html
langchain.document_loaders.notion.NotionDirectoryLoader¶
class langchain.document_loaders.notion.NotionDirectoryLoader(path: str)[source]¶
Bases: BaseLoader
Loader that loads a Notion directory dump.
Initialize with path.
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notion.NotionDirectoryLoader.html
langchain.document_loaders.telegram.TelegramChatApiLoader¶
class langchain.document_loaders.telegram.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]¶
Bases: BaseLoader
Loader that loads a Telegram chat JSON directory dump.
Initialize with API parameters.
Methods
__init__([chat_entity, api_id, api_hash, ...])
Initialize with API parameters.
fetch_data_from_telegram()
Fetch data from Telegram API and save it as a JSON file.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
async fetch_data_from_telegram() → None[source]¶
Fetch data from Telegram API and save it as a JSON file.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.TelegramChatApiLoader.html
langchain.document_loaders.facebook_chat.concatenate_rows¶
langchain.document_loaders.facebook_chat.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used.
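A rough stdlib sketch of what such a formatter does, assuming the Facebook chat export's row shape (sender_name, timestamp_ms, content); the exact output format here is illustrative, not necessarily the library's:

```python
from datetime import datetime

def concatenate_rows(row: dict) -> str:
    """Render one Facebook chat export row as a readable line.

    Assumes the export row shape: sender_name, timestamp_ms, content.
    """
    sender = row["sender_name"]
    text = row["content"]
    # Export timestamps are milliseconds since the epoch.
    date = datetime.fromtimestamp(row["timestamp_ms"] / 1000)
    return f"{sender} on {date:%Y-%m-%d %H:%M:%S}: {text}\n\n"
```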
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.concatenate_rows.html
langchain.document_loaders.s3_directory.S3DirectoryLoader¶
class langchain.document_loaders.s3_directory.S3DirectoryLoader(bucket: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loader that loads documents from an AWS S3 directory.
Initialize with bucket and prefix.
Methods
__init__(bucket[, prefix])
Initialize with bucket and prefix.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html
langchain.document_loaders.imsdb.IMSDbLoader¶
class langchain.document_loaders.imsdb.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loader that loads IMSDb webpages.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpage.
load_and_split([text_splitter])
Load documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html
langchain.document_loaders.ifixit.IFixitLoader¶
class langchain.document_loaders.ifixit.IFixitLoader(web_path: str)[source]¶
Bases: BaseLoader
Load iFixit repair guides, device wikis and answers.
iFixit is the largest open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&A’s
and wikis from devices on iFixit using their open APIs and web scraping.
Initialize with web path.
Methods
__init__(web_path)
Initialize with web path.
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
load_device([url_override, include_guides])
load_guide([url_override])
load_questions_and_answers([url_override])
load_suggestions([query, doc_type])
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
load_device(url_override: Optional[str] = None, include_guides: bool = True) → List[Document][source]¶
load_guide(url_override: Optional[str] = None) → List[Document][source]¶
load_questions_and_answers(url_override: Optional[str] = None) → List[Document][source]¶
static load_suggestions(query: str = '', doc_type: str = 'all') → List[Document][source]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html
langchain.document_loaders.url.UnstructuredURLLoader¶
class langchain.document_loaders.url.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Bases: BaseLoader
Loader that uses unstructured to load HTML files.
Initialize with file path.
Methods
__init__(urls[, continue_on_failure, mode, ...])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
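With continue_on_failure=True, one unreachable URL does not abort the whole batch. A minimal standalone sketch of that per-URL error-handling pattern (the fetch callable and load_urls name are hypothetical, not the library's internals):

```python
from typing import Callable, List

def load_urls(urls: List[str], fetch: Callable[[str], str],
              continue_on_failure: bool = True) -> List[str]:
    """Fetch every URL; with continue_on_failure, log and skip bad ones."""
    results: List[str] = []
    for url in urls:
        try:
            results.append(fetch(url))
        except Exception as exc:
            if not continue_on_failure:
                raise
            # Log the failure and move on to the next URL.
            print(f"Error fetching {url}, skipping: {exc}")
    return results
```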
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html
langchain.document_loaders.duckdb_loader.DuckDBLoader¶
class langchain.document_loaders.duckdb_loader.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Bases: BaseLoader
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Methods
__init__(query[, database, read_only, ...])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
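The column split described above can be sketched without DuckDB itself. Assuming each result row arrives as a dict, a hypothetical row_to_document helper shows the default (all columns into page_content, none into metadata) and the explicit split; the "key: value" line format is an assumption for illustration:

```python
from typing import Dict, List, Optional, Tuple

def row_to_document(row: Dict[str, str],
                    page_content_columns: Optional[List[str]] = None,
                    metadata_columns: Optional[List[str]] = None
                    ) -> Tuple[str, Dict[str, str]]:
    """Split a result row into page_content text and a metadata dict."""
    # Default: every column goes into page_content, none into metadata.
    content_cols = page_content_columns or list(row)
    meta_cols = metadata_columns or []
    page_content = "\n".join(f"{k}: {row[k]}" for k in content_cols)
    metadata = {k: row[k] for k in meta_cols}
    return page_content, metadata
```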
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html
langchain.document_loaders.airtable.AirtableLoader¶
class langchain.document_loaders.airtable.AirtableLoader(api_token: str, table_id: str, base_id: str)[source]¶
Bases: BaseLoader
Loader for Airtable tables.
Initialize with API token and the IDs for table and base
Methods
__init__(api_token, table_id, base_id)
Initialize with API token and the IDs for table and base
lazy_load()
Lazy load records from table.
load()
Load Table.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load records from table.
load() → List[Document][source]¶
Load Table.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airtable.AirtableLoader.html
langchain.document_loaders.notiondb.NotionDBLoader¶
class langchain.document_loaders.notiondb.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]¶
Bases: BaseLoader
Notion DB Loader.
Reads content from pages within a Notion database.
:param integration_token: Notion integration token.
:type integration_token: str
:param database_id: Notion database id.
:type database_id: str
:param request_timeout_sec: Timeout for Notion requests in seconds.
:type request_timeout_sec: int
Initialize with parameters.
Methods
__init__(integration_token, database_id[, ...])
Initialize with parameters.
lazy_load()
A lazy loader for document content.
load()
Load documents from the Notion database.
load_and_split([text_splitter])
Load documents and split into chunks.
load_page(page_summary)
Read a page.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents from the Notion database.
:returns: List of documents.
:rtype: List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
load_page(page_summary: Dict[str, Any]) → Document[source]¶
Read a page.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notiondb.NotionDBLoader.html
langchain.document_loaders.unstructured.satisfies_min_unstructured_version¶
langchain.document_loaders.unstructured.satisfies_min_unstructured_version(min_version: str) → bool[source]¶
Checks to see if the installed unstructured version exceeds the minimum version
for the feature in question.
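The real helper inspects the installed unstructured package; the check itself reduces to a numeric, part-by-part version comparison. A rough standalone sketch under that assumption (function name hypothetical):

```python
from typing import Tuple

def _as_tuple(version: str) -> Tuple[int, ...]:
    """Parse a dotted version string like '0.6.2' into (0, 6, 2)."""
    return tuple(int(part) for part in version.split("."))

def satisfies_min_version(installed: str, min_version: str) -> bool:
    """True if the installed version meets or exceeds the minimum."""
    return _as_tuple(installed) >= _as_tuple(min_version)
```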
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.satisfies_min_unstructured_version.html
langchain.document_loaders.blockchain.BlockchainType¶
class langchain.document_loaders.blockchain.BlockchainType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: Enum
Enumerator of the supported blockchains.
Attributes
ETH_MAINNET
ETH_GOERLI
POLYGON_MAINNET
POLYGON_MUMBAI
ETH_GOERLI = 'eth-goerli'¶
ETH_MAINNET = 'eth-mainnet'¶
POLYGON_MAINNET = 'polygon-mainnet'¶
POLYGON_MUMBAI = 'polygon-mumbai'¶
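The member values mirror the network identifier strings shown above, so a member can be recovered from its string form. A standalone replica of the enum, for illustration only:

```python
from enum import Enum

class BlockchainType(Enum):
    """Replica of the supported-blockchains enumerator above."""
    ETH_MAINNET = "eth-mainnet"
    ETH_GOERLI = "eth-goerli"
    POLYGON_MAINNET = "polygon-mainnet"
    POLYGON_MUMBAI = "polygon-mumbai"

# Enum members can be looked up from their API string value:
network = BlockchainType("polygon-mumbai")
```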
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainType.html
langchain.document_loaders.diffbot.DiffbotLoader¶
class langchain.document_loaders.diffbot.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]¶
Bases: BaseLoader
Loader that loads Diffbot file json.
Initialize with API token, ids, and key.
Methods
__init__(api_token, urls[, continue_on_failure])
Initialize with API token, ids, and key.
lazy_load()
A lazy loader for document content.
load()
Extract text from Diffbot on all the URLs and return Document instances
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Extract text from Diffbot on all the URLs and return Document instances
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.diffbot.DiffbotLoader.html
langchain.document_loaders.rtf.UnstructuredRTFLoader¶
class langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load rtf files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html
langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader¶
class langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader(conf: Any, bucket: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loading logic for loading documents from Tencent Cloud COS.
Initialize with COS config, bucket and prefix.
:param conf(CosConfig): COS config.
:param bucket(str): COS bucket.
:param prefix(str): prefix.
Methods
__init__(conf, bucket[, prefix])
Initialize with COS config, bucket and prefix.
lazy_load()
Load documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader.html
langchain.document_loaders.directory.DirectoryLoader¶
class langchain.document_loaders.directory.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: ~typing.Union[~typing.Type[~langchain.document_loaders.unstructured.UnstructuredFileLoader], ~typing.Type[~langchain.document_loaders.text.TextLoader], ~typing.Type[~langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: ~typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)[source]¶
Bases: BaseLoader
Loading logic for loading documents from a directory.
Initialize with path to directory and how to glob over it.
Methods
__init__(path[, glob, silent_errors, ...])
Initialize with path to directory and how to glob over it.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
load_file(item, path, docs, pbar)
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
load_file(item: Path, path: Path, docs: List[Document], pbar: Optional[Any]) → None[source]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.directory.DirectoryLoader.html
langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader¶
class langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False)[source]¶
Bases: BlobLoader
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
    print(blob)
Initialize with path to directory and how to glob over it.
Parameters
path – Path to directory to load from
glob – Glob pattern relative to the specified path
by default set to pick up all non-hidden files
suffixes – Provide to keep only files with these suffixes
Useful when wanting to keep files with different suffixes
Suffixes must include the dot, e.g. ".txt"
show_progress – If true, will show a progress bar as the files are loaded.
This forces an iteration through all matching files
to count them prior to loading them.
Examples:
.. code-block:: python

    # Recursively load all text files in a directory.
    loader = FileSystemBlobLoader("/path/to/directory", glob="**/*.txt")
    # Recursively load all non-hidden files in a directory.
    loader = FileSystemBlobLoader("/path/to/directory", glob="**/[!.]*")
    # Load all files in a directory without recursion.
    loader = FileSystemBlobLoader("/path/to/directory", glob="*")
Methods
__init__(path, *[, glob, suffixes, ...])
Initialize with path to directory and how to glob over it.
count_matching_files()
Count files that match the pattern without loading them.
yield_blobs()
Yield blobs that match the requested pattern.
count_matching_files() → int[source]¶
Count files that match the pattern without loading them.
yield_blobs() → Iterable[Blob][source]¶
Yield blobs that match the requested pattern.
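count_matching_files can be approximated with pathlib alone. A sketch assuming the default glob '**/[!.]*' (which skips dotfiles) and the optional suffix filter described above; the standalone function name is illustrative:

```python
from pathlib import Path
from typing import Optional, Sequence

def count_matching_files(path: str, glob: str = "**/[!.]*",
                         suffixes: Optional[Sequence[str]] = None) -> int:
    """Count files matching the glob (and suffixes) without reading them."""
    return sum(
        1
        for p in Path(path).glob(glob)
        # Directories also match the glob, so keep regular files only,
        # then apply the optional suffix filter (suffixes include the dot).
        if p.is_file() and (suffixes is None or p.suffix in suffixes)
    )
```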
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.file_system.FileSystemBlobLoader.html
langchain.document_loaders.srt.SRTLoader¶
class langchain.document_loaders.srt.SRTLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader for .srt (subtitle) files.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load using pysrt file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load using pysrt file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.srt.SRTLoader.html
langchain.document_loaders.github.BaseGitHubLoader¶
class langchain.document_loaders.github.BaseGitHubLoader(*, repo: str, access_token: str)[source]¶
Bases: BaseLoader, BaseModel, ABC
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param repo: str [Required]¶
Name of repository
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
abstract load() → List[Document]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
validator validate_environment » all fields[source]¶
Validate that access token exists in environment.
property headers: Dict[str, str]¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html
langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload[source]¶
Bases: EmbaasDocumentExtractionParameters
Payload for the Embaas document extraction API.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
bytes
The base64 encoded bytes of the document to extract text from.
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
bytes: str¶
The base64 encoded bytes of the document to extract text from.
chunk_overlap: int¶
chunk_size: int¶
chunk_splitter: str¶
file_extension: str¶
file_name: str¶
instruction: str¶
mime_type: str¶
model: str¶
separators: List[str]¶
should_chunk: bool¶
should_embed: bool¶
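The bytes field expects the base64-encoded bytes of the document. A minimal helper (hypothetical name) showing that encoding step:

```python
import base64

def encode_document(raw: bytes) -> str:
    """Produce the base64 string the payload's `bytes` field expects."""
    return base64.b64encode(raw).decode("utf-8")
```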
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html
langchain.document_loaders.acreom.AcreomLoader¶
class langchain.document_loaders.acreom.AcreomLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Bases: BaseLoader
Initialize with path.
Methods
__init__(path[, encoding, collect_metadata])
Initialize with path.
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
FRONT_MATTER_REGEX
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)¶
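The FRONT_MATTER_REGEX above can be used directly to split a note into front matter and body. A small illustrative helper (split_front_matter is a hypothetical name, not part of the loader):

```python
import re
from typing import Optional, Tuple

# The same pattern the loader uses to find YAML front matter.
FRONT_MATTER_REGEX = re.compile(r"^---\n(.*?)\n---\n", re.MULTILINE | re.DOTALL)

def split_front_matter(text: str) -> Tuple[Optional[str], str]:
    """Return (front_matter, body); front_matter is None if absent."""
    match = FRONT_MATTER_REGEX.search(text)
    if match is None:
        return None, text
    return match.group(1), text[match.end():]
```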
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.acreom.AcreomLoader.html
langchain.document_loaders.blockchain.BlockchainDocumentLoader¶
class langchain.document_loaders.blockchain.BlockchainDocumentLoader(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]¶
Bases: BaseLoader
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, OpenSea, etc.)
Methods
__init__(contract_address[, blockchainType, ...])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
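The pagination note above (100 NFTs per request, so 10k tokens means 100 requests) is simple ceiling division. A hypothetical helper for that arithmetic:

```python
import math

PAGE_SIZE = 100  # NFTs returned per Alchemy request, per the note above

def requests_needed(total_tokens: int, page_size: int = PAGE_SIZE) -> int:
    """How many paginated API calls get_all_tokens would issue."""
    return math.ceil(total_tokens / page_size)
```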
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html
langchain.document_loaders.gcs_directory.GCSDirectoryLoader¶
class langchain.document_loaders.gcs_directory.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loading logic for loading documents from GCS.
Initialize with bucket and key name.
Methods
__init__(project_name, bucket[, prefix])
Initialize with bucket and key name.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_directory.GCSDirectoryLoader.html
langchain.document_loaders.notebook.remove_newlines¶
langchain.document_loaders.notebook.remove_newlines(x: Any) → Any[source]¶
Recursively remove newlines, no matter the data structure they are stored in.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.remove_newlines.html
langchain.document_loaders.googledrive.GoogleDriveLoader¶
class langchain.document_loaders.googledrive.GoogleDriveLoader(*, service_account_key: Path = PosixPath('/home/docs/.credentials/keys.json'), credentials_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: Path = PosixPath('/home/docs/.credentials/token.json'), folder_id: Optional[str] = None, document_ids: Optional[List[str]] = None, file_ids: Optional[List[str]] = None, recursive: bool = False, file_types: Optional[Sequence[str]] = None, load_trashed_files: bool = False, file_loader_cls: Any = None, file_loader_kwargs: Dict[str, Any] = {})[source]¶
Bases: BaseLoader, BaseModel
Loader that loads Google Docs from Google Drive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')¶
param document_ids: Optional[List[str]] = None¶
param file_ids: Optional[List[str]] = None¶
param file_loader_cls: Any = None¶
param file_loader_kwargs: Dict[str, Any] = {}¶
param file_types: Optional[Sequence[str]] = None¶
param folder_id: Optional[str] = None¶
param load_trashed_files: bool = False¶
param recursive: bool = False¶
param service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')¶
param token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')¶
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
validator validate_credentials_path » credentials_path[source]¶
Validate that credentials_path exists.
validator validate_inputs » all fields[source]¶
Validate that either folder_id or document_ids is set, but not both.
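The validate_inputs rule (either folder_id or document_ids, but not both) can be sketched as a plain function; the standalone name and error messages below are illustrative, not the library's:

```python
from typing import List, Optional

def validate_inputs(folder_id: Optional[str],
                    document_ids: Optional[List[str]]) -> None:
    """Enforce that exactly one of folder_id / document_ids is set."""
    if folder_id is not None and document_ids is not None:
        raise ValueError("Cannot specify both folder_id and document_ids")
    if folder_id is None and document_ids is None:
        raise ValueError("Must specify either folder_id or document_ids")
```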
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html
langchain.document_loaders.fauna.FaunaLoader¶
class langchain.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
Bases: BaseLoader
FaunaDB Loader.
query¶
The FQL query string to execute.
Type
str
page_content_field¶
The field that contains the content of each page.
Type
str
secret¶
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields¶
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
Methods
__init__(query, page_content_field, secret)
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.fauna.FaunaLoader.html
langchain.document_loaders.pdf.PyPDFium2Loader¶
class langchain.document_loaders.pdf.PyPDFium2Loader(file_path: str)[source]¶
Bases: BasePDFLoader
Loads a PDF with pypdfium2 and chunks at character level.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
property source: str¶
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFium2Loader.html
langchain.document_loaders.helpers.FileEncoding¶
class langchain.document_loaders.helpers.FileEncoding(encoding, confidence, language)[source]¶
Bases: NamedTuple
Create new instance of FileEncoding(encoding, confidence, language)
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
Attributes
confidence
Alias for field number 1
encoding
Alias for field number 0
language
Alias for field number 2
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
confidence: float¶
Alias for field number 1
encoding: Optional[str]¶
Alias for field number 0
language: Optional[str]¶
Alias for field number 2
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.FileEncoding.html
langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader¶
class langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(urls: List[str], save_dir: str)[source]¶
Bases: BlobLoader
Load YouTube URLs as audio file(s).
Methods
__init__(urls, save_dir)
yield_blobs()
Yield audio blobs for each url.
yield_blobs() → Iterable[Blob][source]¶
Yield audio blobs for each url.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader.html
langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader¶
class langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader(url: str, exclude_dirs: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that loads all child links from a given url.
Initialize with URL to crawl and any sub-directories to exclude.
Methods
__init__(url[, exclude_dirs])
Initialize with URL to crawl and any sub-directories to exclude.
get_child_links_recursive(url[, visited])
Recursively get all child links starting with the path of the input URL.
lazy_load()
A lazy loader for document content.
load()
Load web pages.
load_and_split([text_splitter])
Load documents and split into chunks.
get_child_links_recursive(url: str, visited: Optional[Set[str]] = None) → Set[str][source]¶
Recursively get all child links starting with the path of the input URL.
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load web pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
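get_child_links_recursive restricts the crawl to links under the starting URL's path. A sketch over an in-memory link graph instead of live HTTP (the names and the startswith-based child test are illustrative assumptions):

```python
from typing import Dict, List, Optional, Set

def get_child_links_recursive(url: str, links: Dict[str, List[str]],
                              visited: Optional[Set[str]] = None) -> Set[str]:
    """Walk a link graph, keeping only links that extend the URL's path."""
    visited = set() if visited is None else visited
    visited.add(url)
    for child in links.get(url, []):
        # Only follow true child pages, and never revisit a URL.
        if child.startswith(url) and child not in visited:
            get_child_links_recursive(child, links, visited)
    return visited
```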
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.recursive_url_loader.RecursiveUrlLoader.html
langchain.document_loaders.parsers.html.bs4.BS4HTMLParser¶
class langchain.document_loaders.parsers.html.bs4.BS4HTMLParser(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any)[source]¶
Bases: BaseBlobParser
Parser that uses beautiful soup to parse HTML files.
Initialize a bs4 based HTML parser.
Methods
__init__(*[, features, get_text_separator])
Initialize a bs4 based HTML parser.
lazy_parse(blob)
Load HTML document into document objects.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Load HTML document into document objects.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
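As the note above says, parse is simply the eager form of lazy_parse. The relationship, sketched on a toy parser rather than the real BaseBlobParser and Blob inputs:

```python
from typing import Iterator, List

class SketchParser:
    """Toy illustration of the lazy/eager split described above."""

    def lazy_parse(self, text: str) -> Iterator[str]:
        # Yield one "document" per line, on demand.
        for line in text.splitlines():
            yield line.strip()

    def parse(self, text: str) -> List[str]:
        # Eager convenience wrapper: materialize the lazy iterator.
        return list(self.lazy_parse(text))
```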
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.html.bs4.BS4HTMLParser.html
langchain.document_loaders.modern_treasury.ModernTreasuryLoader¶
class langchain.document_loaders.modern_treasury.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that fetches data from Modern Treasury.
Methods
__init__(resource[, organization_id, api_key])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.modern_treasury.ModernTreasuryLoader.html
langchain.document_loaders.wikipedia.WikipediaLoader¶
class langchain.document_loaders.wikipedia.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False, doc_content_chars_max: Optional[int] = 4000)[source]¶
Bases: BaseLoader
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
Initializes a new instance of the WikipediaLoader class.
Parameters
query (str) – The query string to search on Wikipedia.
lang (str, optional) – The language code for the Wikipedia language edition.
Defaults to “en”.
load_max_docs (int, optional) – The maximum number of documents to load.
Defaults to 100.
load_all_available_meta (bool, optional) – Indicates whether to load all
available metadata for each document. Defaults to False.
doc_content_chars_max (int, optional) – The maximum number of characters
for the document content. Defaults to 4000.
Methods
__init__(query[, lang, load_max_docs, ...])
Initializes a new instance of the WikipediaLoader class.
lazy_load()
A lazy loader for document content.
load()
Loads the query result from Wikipedia into a list of Documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Loads the query result from Wikipedia into a list of Documents.
Returns
A list of Document objects representing the loadedWikipedia pages.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
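As a hedged illustration of the two capping parameters above (this is a stand-in, not the loader's actual implementation), load_max_docs bounds how many pages survive and doc_content_chars_max truncates each page's text, with the documented hard cap of 300 applied on top:

```python
# Illustrative stand-in for WikipediaLoader's limits; the real loader
# fetches pages from Wikipedia before applying these caps.
from typing import List

HARD_LIMIT = 300  # hard cap on downloaded Documents noted in the docs

def clip_pages(pages: List[str], load_max_docs: int = 100,
               doc_content_chars_max: int = 4000) -> List[str]:
    """Keep at most min(load_max_docs, HARD_LIMIT) pages, truncating each."""
    n = min(load_max_docs, HARD_LIMIT)
    return [p[:doc_content_chars_max] for p in pages[:n]]

pages = ["x" * 10_000, "short page", "another page"]
clipped = clip_pages(pages, load_max_docs=2, doc_content_chars_max=4000)
# clipped has 2 entries; the first is truncated to 4000 characters
```

With the defaults (load_max_docs=100, doc_content_chars_max=4000) most queries return far fewer pages than either cap allows.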
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.wikipedia.WikipediaLoader.html
|
langchain.document_loaders.parsers.pdf.PyPDFium2Parser¶
class langchain.document_loaders.parsers.pdf.PyPDFium2Parser[source]¶
Bases: BaseBlobParser
Parse PDFs with PyPDFium2.
Initialize the parser.
Methods
__init__()
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html
|
langchain.document_loaders.parsers.registry.get_parser¶
langchain.document_loaders.parsers.registry.get_parser(parser_name: str) → BaseBlobParser[source]¶
Get a parser by parser name.
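get_parser resolves a registered name to a parser instance. A minimal registry of that shape (the names and classes below are hypothetical illustrations, not the library's actual registry contents) could look like:

```python
# Hypothetical name-to-factory registry mirroring the get_parser pattern.
from typing import Callable, Dict

class TextParser:
    """Trivial parser used only to illustrate the registry lookup."""
    def parse(self, data: str) -> str:
        return data

_REGISTRY: Dict[str, Callable[[], object]] = {
    "text": TextParser,
}

def get_parser(parser_name: str):
    """Get a parser by parser name, raising on unknown names."""
    try:
        return _REGISTRY[parser_name]()
    except KeyError:
        raise ValueError(f"Unknown parser: {parser_name!r}")

parser = get_parser("text")
```

Centralizing construction behind a name keeps callers decoupled from concrete parser classes.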
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.registry.get_parser.html
|
langchain.document_loaders.unstructured.UnstructuredFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredBaseLoader
Loader that uses unstructured to load files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
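The mode argument ('single' by default; 'elements' is also supported) controls whether the partitioned elements are merged into one Document or kept separate. A stdlib-only sketch of that distinction (Document here is a hypothetical stand-in, not langchain's class):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Document:  # stand-in for langchain's Document
    page_content: str

def to_documents(elements: List[str], mode: str = "single") -> List[Document]:
    """Mirror the single-vs-elements distinction for partitioned output."""
    if mode == "elements":
        return [Document(e) for e in elements]   # one Document per element
    return [Document("\n\n".join(elements))]     # everything merged into one

docs = to_documents(["Title", "First paragraph."], mode="elements")
```

'elements' mode preserves per-element granularity (useful for filtering by element type), while 'single' yields one Document per file.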
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileLoader.html
|
langchain.document_loaders.parsers.pdf.PyMuPDFParser¶
class langchain.document_loaders.parsers.pdf.PyMuPDFParser(text_kwargs: Optional[Mapping[str, Any]] = None)[source]¶
Bases: BaseBlobParser
Parse PDFs with PyMuPDF.
Initialize the parser.
Parameters
text_kwargs – Keyword arguments to pass to fitz.Page.get_text().
Methods
__init__([text_kwargs])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyMuPDFParser.html
|
langchain.document_loaders.email.OutlookMessageLoader¶
class langchain.document_loaders.email.OutlookMessageLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader that loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.OutlookMessageLoader.html
|
langchain.document_loaders.roam.RoamLoader¶
class langchain.document_loaders.roam.RoamLoader(path: str)[source]¶
Bases: BaseLoader
Loader that loads Roam files from disk.
Initialize with path.
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.roam.RoamLoader.html
|
langchain.document_loaders.open_city_data.OpenCityDataLoader¶
class langchain.document_loaders.open_city_data.OpenCityDataLoader(city_id: str, dataset_id: str, limit: int)[source]¶
Bases: BaseLoader
Loader that loads Open city data.
Initialize with dataset_id.
Methods
__init__(city_id, dataset_id, limit)
Initialize with dataset_id.
lazy_load()
Lazy load records.
load()
Load records.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load records.
load() → List[Document][source]¶
Load records.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.open_city_data.OpenCityDataLoader.html
|
langchain.document_loaders.facebook_chat.FacebookChatLoader¶
class langchain.document_loaders.facebook_chat.FacebookChatLoader(path: str)[source]¶
Bases: BaseLoader
Loader that loads Facebook messages json directory dump.
Initialize with path.
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
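A Facebook chat export is a JSON file whose "messages" list holds sender_name / timestamp_ms / content entries. The sketch below (an assumption about the dump layout, not the loader's code) flattens such a dump into readable lines:

```python
import json

def format_chat(raw: str) -> str:
    """Flatten a messages.json-style dump into 'sender: text' lines."""
    data = json.loads(raw)
    lines = []
    for m in data.get("messages", []):
        if "content" in m:  # some entries (photos, reactions) lack text
            lines.append(f'{m["sender_name"]}: {m["content"]}')
    return "\n".join(lines)

dump = '{"messages": [{"sender_name": "Ana", "timestamp_ms": 1, "content": "hi"}]}'
text = format_chat(dump)
```

Skipping entries without a "content" key keeps media-only messages from producing empty lines.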
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.FacebookChatLoader.html
|
langchain.document_loaders.base.BaseLoader¶
class langchain.document_loaders.base.BaseLoader[source]¶
Bases: ABC
Interface for loading documents.
Implementations should implement the lazy-loading method using generators
to avoid loading all documents into memory at once.
The load method will remain as is for backwards compatibility, but its
implementation should be just list(self.lazy_load()).
Methods
__init__()
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
abstract load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load documents and split into chunks.
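The contract described above (a generator-based lazy_load, with load implemented as just list(self.lazy_load())) can be sketched with stdlib stand-ins; the Document class here is a hypothetical placeholder, not langchain's:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Iterator, List

@dataclass
class Document:  # stand-in for langchain's Document
    page_content: str

class BaseLoader(ABC):
    def lazy_load(self) -> Iterator[Document]:
        """Subclasses should yield Documents one at a time."""
        raise NotImplementedError

    @abstractmethod
    def load(self) -> List[Document]:
        ...

class LinesLoader(BaseLoader):
    """Yields one Document per line without materializing everything."""
    def __init__(self, text: str):
        self.text = text

    def lazy_load(self) -> Iterator[Document]:
        for line in self.text.splitlines():
            yield Document(line)

    def load(self) -> List[Document]:
        return list(self.lazy_load())  # the recommended load() body

docs = LinesLoader("a\nb").load()
```

Implementing the logic once in lazy_load keeps memory bounded for large sources while load stays available for backwards compatibility.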
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseLoader.html
|
langchain.document_loaders.whatsapp_chat.concatenate_rows¶
langchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) → str[source]¶
Combine message information in a readable format ready to be used.
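A plausible reimplementation of this helper follows; the exact string layout used by the library is not shown in this reference, so the format below is an assumption:

```python
def concatenate_rows(date: str, sender: str, text: str) -> str:
    # Assumed layout; langchain's actual format string may differ.
    return f"{sender} on {date}: {text}\n\n"

row = concatenate_rows("1/1/23", "Ana", "hello")
```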
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html
|
langchain.document_loaders.readthedocs.ReadTheDocsLoader¶
class langchain.document_loaders.readthedocs.ReadTheDocsLoader(path: Union[str, Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]¶
Bases: BaseLoader
Loader that loads ReadTheDocs documentation directory dump.
Initialize ReadTheDocsLoader
The loader loops over all files under path and extracts the actual content of
the files by retrieving main html tags. Default main html tags include
<main id="main-content">, <div role="main">, and <article role="main">. You
can also define your own html tags by passing custom_html_tag, e.g.
("div", "class=main"). The loader iterates over html tags in order: the
custom html tag (if provided) first, then the default html tags. As soon as
one of the tags yields non-empty content, the loop stops and the content is
retrieved from that tag.
Parameters
path – The location of pulled readthedocs folder.
encoding – The encoding with which to open the documents.
errors – Specifies how encoding and decoding errors are to be handled;
this cannot be used in binary mode.
custom_html_tag – Optional custom html tag to retrieve the content from
files.
Methods
__init__(path[, encoding, errors, ...])
Initialize ReadTheDocsLoader
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
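The tag-priority rule described above (custom tag tried first, then the defaults, stopping at the first non-empty match) can be sketched without any HTML library; the dict-based extraction here is a hypothetical stand-in for real tag lookup:

```python
from typing import Dict, List, Optional

def first_nonempty(content_by_tag: Dict[str, str],
                   candidates: List[str]) -> Optional[str]:
    """Return content from the first candidate tag that is non-empty."""
    for tag in candidates:
        text = content_by_tag.get(tag, "").strip()
        if text:
            return text  # stop at the first tag with real content
    return None

page = {"div[class=main]": "", "main[id=main-content]": "Body text"}
candidates = ["div[class=main]",        # custom tag, tried first
              "main[id=main-content]",  # default tags follow
              "div[role=main]",
              "article[role=main]"]
body = first_nonempty(page, candidates)
```

Because the custom tag is tried first, an empty match there falls through to the defaults rather than returning an empty document.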
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html
|
langchain.document_loaders.unstructured.validate_unstructured_version¶
langchain.document_loaders.unstructured.validate_unstructured_version(min_unstructured_version: str) → None[source]¶
Raises an error if the unstructured version does not exceed the
specified minimum.
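Dotted version strings compare incorrectly as plain strings ("0.10" sorts before "0.9" lexicographically), so checks like this one typically compare integer tuples. A sketch of that idea, not the library's actual code:

```python
def version_tuple(v: str) -> tuple:
    """Turn '0.10.0' into (0, 10, 0) so comparisons are numeric."""
    return tuple(int(part) for part in v.split("."))

def validate_min_version(installed: str, minimum: str) -> None:
    """Raise if the installed version is below the required minimum."""
    if version_tuple(installed) < version_tuple(minimum):
        raise ValueError(
            f"unstructured>={minimum} required, found {installed}"
        )

validate_min_version("0.10.0", "0.9.2")  # ok: 0.10.0 >= 0.9.2
```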
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.validate_unstructured_version.html
|
langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileIOLoader
Loader that uses the unstructured web API to load file IO objects.
Initialize with file path.
Methods
__init__(file[, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html
|
langchain.document_loaders.pdf.BasePDFLoader¶
class langchain.document_loaders.pdf.BasePDFLoader(file_path: str)[source]¶
Bases: BaseLoader, ABC
Base loader class for PDF files.
Defaults to checking for a local file, but if the file is a web path, it will
download it to a temporary file, use that, and then clean up the temporary
file after completion.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
abstract load() → List[Document]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
property source: str¶
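The local-vs-web decision described above usually comes down to inspecting the URL scheme before deciding whether to download into a temporary file; a stdlib sketch under that assumption (not BasePDFLoader's actual code):

```python
from urllib.parse import urlparse

def is_web_path(file_path: str) -> bool:
    """True if the path should be downloaded rather than opened locally."""
    parsed = urlparse(file_path)
    return bool(parsed.scheme) and bool(parsed.netloc)

# A web path would be fetched into a tempfile.NamedTemporaryFile and
# removed after loading; a local path is used as-is.
local = is_web_path("/tmp/report.pdf")                   # False
remote = is_web_path("https://example.com/report.pdf")   # True
```

Requiring both a scheme and a netloc avoids misclassifying Windows-style paths like "C:/docs/report.pdf", whose drive letter parses as a scheme with no netloc.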
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html