| id | text | source |
|---|---|---|
41b144249056-0
|
langchain.document_loaders.dataframe.DataFrameLoader¶
class langchain.document_loaders.dataframe.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Bases: BaseLoader
Load Pandas DataFrames.
Initialize with dataframe object.
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document][source]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
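A minimal usage sketch (the column names are illustrative): each row becomes one Document, with page_content taken from page_content_column and the remaining columns carried as metadata.
import pandas as pd
from langchain.document_loaders import DataFrameLoader
df = pd.DataFrame({"text": ["first row", "second row"], "author": ["a", "b"]})
loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()  # one Document per row; other columns land in metadata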
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dataframe.DataFrameLoader.html
|
960455d99848-0
|
langchain.document_loaders.html.UnstructuredHTMLLoader¶
class langchain.document_loaders.html.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load HTML files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
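A minimal usage sketch, assuming the unstructured package is installed and the file name is a placeholder; with mode="elements" the loader returns one Document per detected element instead of a single combined Document.
from langchain.document_loaders import UnstructuredHTMLLoader
loader = UnstructuredHTMLLoader("example.html", mode="single")
docs = loader.load()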
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html.UnstructuredHTMLLoader.html
|
f7364462e6f3-0
|
langchain.document_loaders.reddit.RedditPostsLoader¶
class langchain.document_loaders.reddit.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Bases: BaseLoader
Reddit posts loader.
Read posts on a subreddit.
First you need to go to https://www.reddit.com/prefs/apps/ and create your application.
Methods
__init__(client_id, client_secret, ...[, ...])
lazy_load()
A lazy loader for document content.
load()
Load Reddit posts.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load Reddit posts.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
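A usage sketch with placeholder credentials; the mode values shown ("subreddit" or "username") are an assumption about how search_queries are interpreted.
from langchain.document_loaders import RedditPostsLoader
loader = RedditPostsLoader(
    client_id="YOUR_CLIENT_ID",          # from https://www.reddit.com/prefs/apps/
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="extractor by u/yourname",
    search_queries=["langchain"],
    mode="subreddit",                    # assumption: "subreddit" or "username"
    categories=["new"],
    number_posts=10,
)
docs = loader.load()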
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html
|
548af43dbf80-0
|
langchain.document_loaders.youtube.YoutubeLoader¶
class langchain.document_loaders.youtube.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]¶
Bases: BaseLoader
Loader that loads YouTube transcripts.
Initialize with YouTube video ID.
Methods
__init__(video_id[, add_video_info, ...])
Initialize with YouTube video ID.
extract_video_id(youtube_url)
Extract the video ID from common YouTube URLs.
from_youtube_url(youtube_url, **kwargs)
Given a YouTube URL, load the video.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
static extract_video_id(youtube_url: str) → str[source]¶
Extract the video ID from common YouTube URLs.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) → YoutubeLoader[source]¶
Given a YouTube URL, load the video.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
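A minimal usage sketch via the from_youtube_url convenience constructor; the URL is a placeholder and the youtube-transcript-api package must be installed.
from langchain.document_loaders import YoutubeLoader
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=VIDEO_ID",  # replace with a real video URL
    add_video_info=False,
)
docs = loader.load()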
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.YoutubeLoader.html
|
f7c95b00aa55-0
|
langchain.document_loaders.youtube.GoogleApiYoutubeLoader¶
class langchain.document_loaders.youtube.GoogleApiYoutubeLoader(google_api_client: GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]¶
Bases: BaseLoader
Loader that loads all videos from a channel.
To use, you should have the googleapiclient and youtube_transcript_api
Python packages installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally, you have to provide either a channel name or a list of video IDs.
See “https://developers.google.com/docs/api/quickstart/python”
Example
from pathlib import Path
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
    service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
    google_api_client=google_api_client,
    channel_name="CodeAesthetic"
)
loader.load()
Methods
__init__(google_api_client[, channel_name, ...])
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
validate_channel_or_videoIds_is_set(values)
Validate that either channel_name or video_ids is set, but not both.
Attributes
add_video_info
captions_language
channel_name
continue_on_failure
video_ids
google_api_client
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiYoutubeLoader.html
|
f7c95b00aa55-1
|
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]¶
Validate that either channel_name or video_ids is set, but not both.
add_video_info: bool = True¶
captions_language: str = 'en'¶
channel_name: Optional[str] = None¶
continue_on_failure: bool = False¶
google_api_client: langchain.document_loaders.youtube.GoogleApiClient¶
video_ids: Optional[List[str]] = None¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.GoogleApiYoutubeLoader.html
|
de1d3bd7acbb-0
|
langchain.document_loaders.mastodon.MastodonTootsLoader¶
class langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Bases: BaseLoader
Mastodon toots loader.
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account.
exclude_replies – Whether to exclude reply toots from the load.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Methods
__init__(mastodon_accounts[, number_toots, ...])
Instantiate Mastodon toots loader.
lazy_load()
A lazy loader for document content.
load()
Load toots into documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load toots into documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
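A minimal usage sketch with a placeholder account, assuming the Mastodon.py package is installed.
from langchain.document_loaders import MastodonTootsLoader
loader = MastodonTootsLoader(
    mastodon_accounts=["@user@mastodon.social"],  # placeholder account
    number_toots=50,
    exclude_replies=True,
)
docs = loader.load()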
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html
|
6155679c9ddc-0
|
langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader¶
class langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Bases: BaseLoader
Load PySpark DataFrames.
Initialize with a Spark DataFrame object.
Methods
__init__([spark_session, df, ...])
Initialize with a Spark DataFrame object.
get_num_rows()
Gets the number of "feasible" rows for the DataFrame
lazy_load()
A lazy loader for document content.
load()
Load from the dataframe.
load_and_split([text_splitter])
Load documents and split into chunks.
get_num_rows() → Tuple[int, int][source]¶
Gets the number of “feasible” rows for the DataFrame
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load from the dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
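A minimal usage sketch with a toy DataFrame.
from pyspark.sql import SparkSession
from langchain.document_loaders import PySparkDataFrameLoader
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("first row",), ("second row",)], ["text"])
loader = PySparkDataFrameLoader(spark_session=spark, df=df, page_content_column="text")
docs = loader.load()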
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html
|
65fc14bd1f9f-0
|
langchain.document_loaders.word_document.UnstructuredWordDocumentLoader¶
class langchain.document_loaders.word_document.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load word documents.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.UnstructuredWordDocumentLoader.html
|
c78e219c3d90-0
|
langchain.document_loaders.sitemap.SitemapLoader¶
class langchain.document_loaders.sitemap.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]¶
Bases: WebBaseLoader
Loader that fetches a sitemap and loads those URLs.
Initialize with webpage path and optional filter URLs.
Parameters
web_path – URL of the sitemap. Can also be a local path.
filter_urls – list of strings or regexes that will be applied to filter the
URLs that are parsed and loaded
parsing_function – Function to parse bs4.Soup output
blocksize – number of sitemap locations per block
blocknum – the number of the block that should be loaded (zero-indexed)
meta_function – Function to parse bs4.Soup output for metadata.
When setting this function, remember to also copy metadata[“loc”]
to metadata[“source”] if you are using that field.
is_local – whether the sitemap is a local file
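A minimal usage sketch with placeholder URLs, illustrating the parameters above.
from langchain.document_loaders.sitemap import SitemapLoader
loader = SitemapLoader(
    "https://example.com/sitemap.xml",            # placeholder sitemap URL
    filter_urls=["https://example.com/blog/.*"],  # regexes applied to discovered URLs
)
docs = loader.load()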
Methods
__init__(web_path[, filter_urls, ...])
Initialize with webpage path and optional filter URLs.
aload()
Load text from the urls in web_path asynchronously into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load sitemap.
load_and_split([text_splitter])
Load documents and split into chunks.
parse_sitemap(soup)
Parse sitemap xml and load into a list of dicts.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
c78e219c3d90-1
|
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if the HTTP status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path asynchronously into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load sitemap.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
parse_sitemap(soup: Any) → List[dict][source]¶
Parse sitemap xml and load into a list of dicts.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if the HTTP status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
c78e219c3d90-2
|
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
6a4ad28a5a67-0
|
langchain.document_loaders.unstructured.UnstructuredAPIFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses the unstructured web API to load files.
Initialize with file path.
Methods
__init__([file_path, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileLoader.html
|
f2917aecf763-0
|
langchain.document_loaders.markdown.UnstructuredMarkdownLoader¶
class langchain.document_loaders.markdown.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load markdown files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.markdown.UnstructuredMarkdownLoader.html
|
a7d79785ca1a-0
|
langchain.document_loaders.weather.WeatherDataLoader¶
class langchain.document_loaders.weather.WeatherDataLoader(client: OpenWeatherMapAPIWrapper, places: Sequence[str])[source]¶
Bases: BaseLoader
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMap’s free
API. Check out ‘https://openweathermap.org/appid’ for more on how to generate a free
OpenWeatherMap API key.
Initialize with parameters.
Methods
__init__(client, places)
Initialize with parameters.
from_params(places, *[, openweathermap_api_key])
lazy_load()
Lazily load weather data for the given locations.
load()
Load weather data for the given locations.
load_and_split([text_splitter])
Load documents and split into chunks.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) → WeatherDataLoader[source]¶
lazy_load() → Iterator[Document][source]¶
Lazily load weather data for the given locations.
load() → List[Document][source]¶
Load weather data for the given locations.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
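A minimal usage sketch using the from_params constructor; the API key is a placeholder, and the underlying OpenWeatherMap wrapper requires the pyowm package.
from langchain.document_loaders import WeatherDataLoader
loader = WeatherDataLoader.from_params(
    places=["Pune", "London"],
    openweathermap_api_key="YOUR_API_KEY",
)
docs = loader.load()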
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.weather.WeatherDataLoader.html
|
30f6aab4afee-0
|
langchain.document_loaders.twitter.TwitterTweetLoader¶
class langchain.document_loaders.twitter.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]¶
Bases: BaseLoader
Twitter tweets loader.
Reads tweets of a user’s Twitter handle.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api
to get your token, and create a v2 version of the app.
Methods
__init__(auth_handler, twitter_users[, ...])
from_bearer_token(oauth2_bearer_token, ...)
Create a TwitterTweetLoader from OAuth2 bearer token.
from_secrets(access_token, ...[, number_tweets])
Create a TwitterTweetLoader from access tokens and secrets.
lazy_load()
A lazy loader for document content.
load()
Load tweets.
load_and_split([text_splitter])
Load documents and split into chunks.
classmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → TwitterTweetLoader[source]¶
Create a TwitterTweetLoader from OAuth2 bearer token.
classmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → TwitterTweetLoader[source]¶
Create a TwitterTweetLoader from access tokens and secrets.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load tweets.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
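A minimal usage sketch with a placeholder bearer token and handle; the loader relies on the tweepy package.
from langchain.document_loaders import TwitterTweetLoader
loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR_BEARER_TOKEN",
    twitter_users=["some_handle"],  # placeholder handle
    number_tweets=50,
)
docs = loader.load()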
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html
|
2df5376fa875-0
|
langchain.document_loaders.parsers.pdf.PyPDFParser¶
class langchain.document_loaders.parsers.pdf.PyPDFParser(password: Optional[Union[str, bytes]] = None)[source]¶
Bases: BaseBlobParser
Loads a PDF with pypdf and chunks at character level.
Methods
__init__([password])
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
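A minimal usage sketch; the Blob import path is an assumption (the blob_loaders module) and the file path is a placeholder.
from langchain.document_loaders.blob_loaders import Blob  # assumed import path
from langchain.document_loaders.parsers.pdf import PyPDFParser
parser = PyPDFParser()
blob = Blob.from_path("example.pdf")  # placeholder path
for doc in parser.lazy_parse(blob):   # lazy_parse is preferred in production
    print(doc.metadata)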
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFParser.html
|
2ba0b334614a-0
|
langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter¶
class langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter(code: str)[source]¶
Bases: CodeSegmenter
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶
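A minimal sketch of the segmenter interface, assuming the esprima package (which this segmenter relies on) is installed.
from langchain.document_loaders.parsers.language.javascript import JavaScriptSegmenter
code = "function greet(name) { return 'hi ' + name; }"
segmenter = JavaScriptSegmenter(code)
if segmenter.is_valid():
    print(segmenter.extract_functions_classes())  # source of each top-level function/class
    print(segmenter.simplify_code())              # same code with those bodies stubbed out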
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter.html
|
0dbae2694c45-0
|
langchain.document_loaders.conllu.CoNLLULoader¶
class langchain.document_loaders.conllu.CoNLLULoader(file_path: str)[source]¶
Bases: BaseLoader
Load CoNLL-U files.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load from file path.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.conllu.CoNLLULoader.html
|
b1b705992751-0
|
langchain.document_loaders.notebook.NotebookLoader¶
class langchain.document_loaders.notebook.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Bases: BaseLoader
Loader that loads .ipynb notebook files.
Initialize with path.
Methods
__init__(path[, include_outputs, ...])
Initialize with path.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
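A minimal usage sketch with a placeholder notebook path.
from langchain.document_loaders import NotebookLoader
loader = NotebookLoader(
    "example.ipynb",       # placeholder path
    include_outputs=True,
    max_output_length=20,
    remove_newline=True,
)
docs = loader.load()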
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html
|
bab16791de86-0
|
langchain.document_loaders.psychic.PsychicLoader¶
class langchain.document_loaders.psychic.PsychicLoader(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that loads documents from Psychic.dev.
Initialize with API key, connector id, and account id.
Methods
__init__(api_key, account_id[, connector_id])
Initialize with API key, connector id, and account id.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.psychic.PsychicLoader.html
|
c9b4d4426411-0
|
langchain.document_loaders.max_compute.MaxComputeLoader¶
class langchain.document_loaders.max_compute.MaxComputeLoader(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]¶
Bases: BaseLoader
Loads a query result from an Alibaba Cloud MaxCompute table into documents.
Initialize Alibaba Cloud MaxCompute document loader.
Parameters
query – SQL query to execute.
api_wrapper – MaxCompute API wrapper.
page_content_columns – The columns to write into the page_content of the
Document. If unspecified, all columns will be written to page_content.
metadata_columns – The columns to write into the metadata of the Document.
If unspecified, all columns not added to page_content will be written.
Methods
__init__(query, api_wrapper, *[, ...])
Initialize Alibaba Cloud MaxCompute document loader.
from_params(query, endpoint, project, *[, ...])
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
classmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) → MaxComputeLoader[source]¶
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query – SQL query to execute.
endpoint – MaxCompute endpoint.
project – A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id – MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html
|
c9b4d4426411-1
|
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key – MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
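A minimal usage sketch of from_params with placeholder values; the table name is hypothetical.
from langchain.document_loaders import MaxComputeLoader
loader = MaxComputeLoader.from_params(
    query="SELECT id, content FROM my_table",  # hypothetical table
    endpoint="<your-endpoint>",
    project="<your-project>",
    access_id="<access-id>",           # or set MAX_COMPUTE_ACCESS_ID
    secret_access_key="<secret-key>",  # or set MAX_COMPUTE_SECRET_ACCESS_KEY
)
docs = loader.load()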
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.max_compute.MaxComputeLoader.html
|
518ba22ae42a-0
|
langchain.document_loaders.parsers.txt.TextParser¶
class langchain.document_loaders.parsers.txt.TextParser[source]¶
Bases: BaseBlobParser
Parser for text blobs.
Methods
__init__()
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.txt.TextParser.html
|
e065e5528830-0
|
langchain.document_loaders.s3_file.S3FileLoader¶
class langchain.document_loaders.s3_file.S3FileLoader(bucket: str, key: str)[source]¶
Bases: BaseLoader
Loading logic for loading documents from S3.
Initialize with bucket and key name.
Methods
__init__(bucket, key)
Initialize with bucket and key name.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
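A minimal usage sketch with placeholder bucket and key names; boto3 and configured AWS credentials are required.
from langchain.document_loaders import S3FileLoader
loader = S3FileLoader(bucket="my-bucket", key="docs/report.pdf")  # placeholders
docs = loader.load()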
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_file.S3FileLoader.html
|
b123c18c6987-0
|
langchain.document_loaders.airbyte_json.AirbyteJSONLoader¶
class langchain.document_loaders.airbyte_json.AirbyteJSONLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader that loads local Airbyte JSON files.
Initialize with file path. This should start with ‘/tmp/airbyte_local/’.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte_json.AirbyteJSONLoader.html
|
a4c2530e2b27-0
|
langchain.document_loaders.pdf.OnlinePDFLoader¶
class langchain.document_loaders.pdf.OnlinePDFLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that loads online PDFs.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.OnlinePDFLoader.html
|
3fd38f735c18-0
|
langchain.document_loaders.pdf.MathpixPDFLoader¶
class langchain.document_loaders.pdf.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]¶
Bases: BasePDFLoader
Initialize with file path.
Methods
__init__(file_path[, processed_file_format, ...])
Initialize with file path.
clean_pdf(contents)
get_processed_pdf(pdf_id)
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
send_pdf()
wait_for_processing(pdf_id)
Attributes
data
headers
source
url
clean_pdf(contents: str) → str[source]¶
get_processed_pdf(pdf_id: str) → str[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
send_pdf() → str[source]¶
wait_for_processing(pdf_id: str) → None[source]¶
property data: dict¶
property headers: dict¶
property source: str¶
property url: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.MathpixPDFLoader.html
|
e5543afe41bf-0
|
langchain.document_loaders.unstructured.UnstructuredBaseLoader¶
class langchain.document_loaders.unstructured.UnstructuredBaseLoader(mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: BaseLoader, ABC
Loader that uses unstructured to load files.
Initialize with file path.
Methods
__init__([mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredBaseLoader.html
|
3020f93ad120-0
|
langchain.document_loaders.unstructured.UnstructuredFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredBaseLoader
Loader that uses unstructured to load file IO objects.
Initialize with file path.
Methods
__init__(file[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html
|
0b3bb3f363d8-0
|
langchain.document_loaders.odt.UnstructuredODTLoader¶
class langchain.document_loaders.odt.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load OpenOffice ODT files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.odt.UnstructuredODTLoader.html
|
0a56df0d4639-0
|
langchain.document_loaders.obsidian.ObsidianLoader¶
class langchain.document_loaders.obsidian.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Bases: BaseLoader
Loader that loads Obsidian files from disk.
Initialize with path.
Methods
__init__(path[, encoding, collect_metadata])
Initialize with path.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
FRONT_MATTER_REGEX
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obsidian.ObsidianLoader.html
|
bd556f256222-0
|
langchain.document_loaders.gutenberg.GutenbergLoader¶
class langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader that uses urllib to load .txt web files.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html
|
2f2df8488214-0
|
langchain.document_loaders.gcs_file.GCSFileLoader¶
class langchain.document_loaders.gcs_file.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]¶
Bases: BaseLoader
Loading logic for loading documents from GCS.
Initialize with bucket and key name.
Methods
__init__(project_name, bucket, blob)
Initialize with bucket and key name.
lazy_load()
A lazy loader for document content.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gcs_file.GCSFileLoader.html
|
d7dfbf55ac3c-0
|
langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader¶
class langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load PowerPoint files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.powerpoint.UnstructuredPowerPointLoader.html
|
0063810a880b-0
|
langchain.document_loaders.web_base.WebBaseLoader¶
class langchain.document_loaders.web_base.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: BaseLoader
Loader that uses urllib and Beautiful Soup to load webpages.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path asynchronously into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load text from the url(s) in web_path.
load_and_split([text_splitter])
Load documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if the HTTP status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
web_paths
aload() → List[Document][source]¶
Load text from the urls in web_path asynchronously into Documents.
async fetch_all(urls: List[str]) → Any[source]¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document][source]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load text from the url(s) in web_path.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html
|
0063810a880b-1
|
Load text from the url(s) in web_path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
scrape(parser: Optional[str] = None) → Any[source]¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any][source]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if the HTTP status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
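A minimal usage sketch loading two placeholder URLs; aload() fetches all URLs concurrently, throttled by requests_per_second.
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader(["https://example.com", "https://example.org"])
docs = loader.load()       # sequential fetch
# docs = loader.aload()    # concurrent fetch with rate limiting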
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html
|
83858bc1b7bd-0
|
langchain.document_loaders.telegram.text_to_docs¶
langchain.document_loaders.telegram.text_to_docs(text: Union[str, List[str]]) → List[Document][source]¶
Converts a string or list of strings to a list of Documents with metadata.
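A minimal usage sketch.
from langchain.document_loaders.telegram import text_to_docs
docs = text_to_docs(["first message", "second message"])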
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.text_to_docs.html
|
b6f22623f5f5-0
|
langchain.document_loaders.figma.FigmaFileLoader¶
class langchain.document_loaders.figma.FigmaFileLoader(access_token: str, ids: str, key: str)[source]¶
Bases: BaseLoader
Loader that loads Figma file JSON.
Initialize with access token, ids, and key.
Methods
__init__(access_token, ids, key)
Initialize with access token, ids, and key.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.figma.FigmaFileLoader.html
|
5d9cced20e32-0
|
langchain.document_loaders.onedrive_file.OneDriveFileLoader¶
class langchain.document_loaders.onedrive_file.OneDriveFileLoader(*, file: File)[source]¶
Bases: BaseLoader, BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param file: File [Required]¶
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
model Config[source]¶
Bases: object
arbitrary_types_allowed = True¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.onedrive_file.OneDriveFileLoader.html
|
b389f787ba92-0
|
langchain.document_loaders.pdf.PyPDFLoader¶
class langchain.document_loaders.pdf.PyPDFLoader(file_path: str, password: Optional[Union[str, bytes]] = None)[source]¶
Bases: BasePDFLoader
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadata.
Initialize with file path.
Methods
__init__(file_path[, password])
Initialize with file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
property source: str¶
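A minimal usage sketch with a placeholder path; RecursiveCharacterTextSplitter is just one splitter choice for load_and_split.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
loader = PyPDFLoader("example.pdf")  # placeholder path
pages = loader.load()                # one Document per page, page number in metadata
chunks = loader.load_and_split(RecursiveCharacterTextSplitter(chunk_size=1000))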
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFLoader.html
|
80a495e3f554-0
|
langchain.document_loaders.unstructured.get_elements_from_api¶
langchain.document_loaders.unstructured.get_elements_from_api(file_path: Optional[Union[str, List[str]]] = None, file: Optional[Union[IO, Sequence[IO]]] = None, api_url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any) → List[source]¶
Retrieves a list of elements from the Unstructured API.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.get_elements_from_api.html
|
62adcaa6b53a-0
|
langchain.document_loaders.gitbook.GitbookLoader¶
class langchain.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]¶
Bases: WebBaseLoader
Load GitBook data.
Load from either a single page, or load all (relative) paths in the navbar.
Initialize with web page and whether to load all paths.
Parameters
web_page – The web page to load or the starting point from where
relative paths are discovered.
load_all_paths – If set to True, all relative paths in the navbar
are loaded instead of only web_page.
base_url – If load_all_paths is True, the relative paths are
appended to this base url. Defaults to web_page if not set.
Methods
__init__(web_page[, load_all_paths, ...])
Initialize with web page and whether to load all paths.
aload()
Load text from the urls in web_path asynchronously into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Fetch text from one single GitBook page.
load_and_split([text_splitter])
Load documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if the HTTP status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html
|
62adcaa6b53a-1
|
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path asynchronously into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Fetch text from one single GitBook page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if the HTTP status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
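A minimal usage sketch; the docs.gitbook.com URL is illustrative.
from langchain.document_loaders import GitbookLoader
single_page = GitbookLoader("https://docs.gitbook.com").load()
all_pages_loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
all_pages = all_pages_loader.load()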
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html
|
46e61152b13b-0
|
langchain.document_loaders.python.PythonLoader¶
class langchain.document_loaders.python.PythonLoader(file_path: str)[source]¶
Bases: TextLoader
Load Python files, respecting any non-default encoding if specified.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load from file path.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.python.PythonLoader.html
|
31382f5ef3b0-0
|
langchain.document_loaders.parsers.language.python.PythonSegmenter¶
class langchain.document_loaders.parsers.language.python.PythonSegmenter(code: str)[source]¶
Bases: CodeSegmenter
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.python.PythonSegmenter.html
|
29e7cfac1d25-0
|
langchain.document_loaders.toml.TomlLoader¶
class langchain.document_loaders.toml.TomlLoader(source: Union[str, Path])[source]¶
Bases: BaseLoader
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
Initialize the TomlLoader with a source file or directory.
Methods
__init__(source)
Initialize the TomlLoader with a source file or directory.
lazy_load()
Lazily load the TOML documents from the source file or directory.
load()
Load and return all documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazily load the TOML documents from the source file or directory.
load() → List[Document][source]¶
Load and return all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
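A minimal usage sketch; the source may be a single .toml file or a directory of them.
from langchain.document_loaders import TomlLoader
loader = TomlLoader("pyproject.toml")  # placeholder file path
docs = loader.load()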
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.toml.TomlLoader.html
|
73132c3a13be-0
|
langchain.document_loaders.email.UnstructuredEmailLoader¶
class langchain.document_loaders.email.UnstructuredEmailLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load email files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html
|
1ad69d299eb4-0
|
langchain.document_loaders.bilibili.BiliBiliLoader¶
class langchain.document_loaders.bilibili.BiliBiliLoader(video_urls: List[str])[source]¶
Bases: BaseLoader
Loader that loads BiliBili transcripts.
Initialize with BiliBili video URLs.
Methods
__init__(video_urls)
Initialize with BiliBili video URLs.
lazy_load()
A lazy loader for document content.
load()
Load from BiliBili video URLs.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load from BiliBili video URLs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bilibili.BiliBiliLoader.html
|
eb7b847ef1fe-0
|
langchain.document_loaders.json_loader.JSONLoader¶
class langchain.document_loaders.json_loader.JSONLoader(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True)[source]¶
Bases: BaseLoader
Loads a JSON file, using a provided jq schema to extract the text into
documents.
Example
[{“text”: …}, {“text”: …}, {“text”: …}] -> schema = .[].text
{“key”: [{“text”: …}, {“text”: …}, {“text”: …}]} -> schema = .key[].text
[“”, “”, “”] -> schema = .[]
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results in a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format. Defaults to True.
Methods
__init__(file_path, jq_schema[, ...])
Initialize the JSONLoader.
lazy_load()
A lazy loader for document content.
load()
Load and return documents from the JSON file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html
|
eb7b847ef1fe-1
|
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load and return documents from the JSON file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
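A minimal usage sketch; the file name and JSON field names are hypothetical, and the jq package must be installed.
from langchain.document_loaders import JSONLoader
def metadata_func(record: dict, metadata: dict) -> dict:
    metadata["author"] = record.get("author")  # hypothetical field
    return metadata
loader = JSONLoader(
    file_path="posts.json",  # placeholder path
    jq_schema=".[]",         # one document per top-level array element
    content_key="text",
    metadata_func=metadata_func,
)
docs = loader.load()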
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html
|
a45949880335-0
|
langchain.document_loaders.xml.UnstructuredXMLLoader¶
class langchain.document_loaders.xml.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load XML files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xml.UnstructuredXMLLoader.html
|
c0d6c2f7ef00-0
|
langchain.document_loaders.joplin.JoplinLoader¶
class langchain.document_loaders.joplin.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]¶
Bases: BaseLoader
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web Clipper” in the app settings).
To get the access token, you need to go to the Web Clipper options and
under “Advanced Options” you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
Methods
__init__([access_token, port, host])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
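A minimal usage sketch with a placeholder token; Joplin must be running with the Web Clipper service enabled, as described above.
from langchain.document_loaders import JoplinLoader
loader = JoplinLoader(access_token="<web-clipper-token>")  # placeholder token
docs = loader.load()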
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.joplin.JoplinLoader.html
|
47c6c50d4275-0
|
langchain.document_loaders.image_captions.ImageCaptionLoader¶
class langchain.document_loaders.image_captions.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Bases: BaseLoader
Loader that loads the captions of an image.
Initialize with a list of image paths.
Methods
__init__(path_images[, blip_processor, ...])
Initialize with a list of image paths.
lazy_load()
A lazy loader for document content.
load()
Load from a list of image files.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load from a list of image files.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
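A minimal usage sketch with placeholder image paths; the transformers package is required, and the BLIP model is downloaded on first use.
from langchain.document_loaders import ImageCaptionLoader
loader = ImageCaptionLoader(path_images=["photo1.jpg", "photo2.png"])  # placeholders
docs = loader.load()  # one caption Document per image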
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html
|
214f40d95a85-0
|
langchain.document_loaders.snowflake_loader.SnowflakeLoader¶
class langchain.document_loaders.snowflake_loader.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Bases: BaseLoader
Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Initialize Snowflake document loader.
Parameters
query – The query to run in Snowflake.
user – Snowflake user.
password – Snowflake password.
account – Snowflake account.
warehouse – Snowflake warehouse.
role – Snowflake role.
database – Snowflake database.
schema – Snowflake schema.
page_content_columns – Optional. Columns written to Document page_content.
metadata_columns – Optional. Columns written to Document metadata.
Methods
__init__(query, user, password, account, ...)
Initialize Snowflake document loader.
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
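A minimal usage sketch with placeholder credentials and a hypothetical table; the snowflake-connector-python package is required.
from langchain.document_loaders import SnowflakeLoader
loader = SnowflakeLoader(
    query="SELECT subject, body FROM emails LIMIT 10",  # hypothetical table
    user="<user>",
    password="<password>",
    account="<account>",
    warehouse="<warehouse>",
    role="<role>",
    database="<database>",
    schema="<schema>",
    page_content_columns=["body"],
    metadata_columns=["subject"],
)
docs = loader.load()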
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html
|
b89feecb5cd5-0
|
langchain.document_loaders.mhtml.MHTMLLoader¶
class langchain.document_loaders.mhtml.MHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Bases: BaseLoader
Loader that uses Beautiful Soup to parse MHTML files.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
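Example (sketch; the file path is a placeholder and beautifulsoup4 is assumed to be installed):
from langchain.document_loaders.mhtml import MHTMLLoader
loader = MHTMLLoader(file_path="saved_page.mhtml", get_text_separator=" ")
docs = loader.load()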
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html
|
e5ab26f506f9-0
|
langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader¶
class langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]¶
Bases: BaseLoader
Load documents from the Hugging Face Hub.
Initialize the HuggingFaceDatasetLoader.
Parameters
path – Path or name of the dataset.
page_content_column – Page content column name.
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in-memory.
save_infos – Save the dataset information (checksums/size/splits/…).
use_auth_token – Bearer token for remote files on the Datasets Hub.
num_proc – Number of processes.
Methods
__init__(path[, page_content_column, name, ...])
Initialize the HuggingFaceDatasetLoader.
lazy_load()
Load documents lazily.
load()
Load documents.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Load documents lazily.
load() → List[Document][source]¶
Load documents.
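Example (sketch; assumes the datasets package is installed and uses the public imdb dataset, whose rows have a "text" column, purely as an illustration):
from langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
docs = loader.load()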
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html
|
e5ab26f506f9-1
|
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html
|
2f066a1e1ed7-0
|
langchain.document_loaders.embaas.EmbaasBlobLoader¶
class langchain.document_loaders.embaas.EmbaasBlobLoader(*, embaas_api_key: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/', params: EmbaasDocumentExtractionParameters = {})[source]¶
Bases: BaseEmbaasLoader, BaseBlobParser
Wrapper around embaas’s document byte loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
    params={
        "should_embed": True,
        "model": "e5-large-v2",
        "chunk_size": 256,
        "chunk_splitter": "CharacterTextSplitter",
    }
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the embaas document extraction API.
param embaas_api_key: Optional[str] = None¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html
|
2f066a1e1ed7-1
|
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the embaas document extraction API.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasBlobLoader.html
|
3b76b8f34231-0
|
langchain.document_loaders.parsers.grobid.GrobidParser¶
class langchain.document_loaders.parsers.grobid.GrobidParser(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument')[source]¶
Bases: BaseBlobParser
Loader that uses Grobid to load article PDF files.
Methods
__init__(segment_sentences[, grobid_server])
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
process_xml(file_path, xml_data, ...)
Process the XML file returned by Grobid.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
process_xml(file_path: str, xml_data: str, segment_sentences: bool) → Iterator[Document][source]¶
Process the XML file returned by Grobid.
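Example (sketch; assumes a Grobid server is running at the default URL and that the directory path is a placeholder):
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.grobid import GrobidParser
loader = GenericLoader.from_filesystem(
    "/path/to/papers",
    glob="**/*.pdf",
    parser=GrobidParser(segment_sentences=False),
)
docs = loader.load()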
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.GrobidParser.html
|
ce557662964e-0
|
langchain.document_loaders.bibtex.BibtexLoader¶
class langchain.document_loaders.bibtex.BibtexLoader(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]¶
Bases: BaseLoader
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is listed in an entry's file field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
Initialize the BibtexLoader.
Parameters
file_path – Path to the bibtex file.
max_docs – Max number of associated documents to load. Use -1 for no limit.
Methods
__init__(file_path, *[, parser, max_docs, ...])
Initialize the BibtexLoader.
lazy_load()
Load bibtex file using bibtexparser and get the article texts plus the article metadata.
load()
Load bibtex file documents from the given bibtex file path.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
load() → List[Document][source]¶
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html
|
ce557662964e-1
|
Parameters
file_path – the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
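Example (sketch; the .bib path is a placeholder and the bibtexparser package is assumed to be installed):
from langchain.document_loaders.bibtex import BibtexLoader
loader = BibtexLoader(file_path="references.bib", max_docs=5)
docs = loader.load()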
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html
|
9f8112efacc3-0
|
langchain.document_loaders.image.UnstructuredImageLoader¶
class langchain.document_loaders.image.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load image files, such as PNGs and JPGs.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
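Example (sketch; the path is a placeholder and the unstructured package with its image dependencies is assumed):
from langchain.document_loaders.image import UnstructuredImageLoader
loader = UnstructuredImageLoader("figure.png", mode="elements")
docs = loader.load()  # mode="elements" keeps one Document per detected element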
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image.UnstructuredImageLoader.html
|
1b9c169dda82-0
|
langchain.document_loaders.parsers.grobid.ServerUnavailableException¶
class langchain.document_loaders.parsers.grobid.ServerUnavailableException[source]¶
Bases: Exception
add_note()¶
Exception.add_note(note) –
add a note to the exception
with_traceback()¶
Exception.with_traceback(tb) –
set self.__traceback__ to tb and return self.
args¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.ServerUnavailableException.html
|
3b1f48431dd3-0
|
langchain.document_loaders.rst.UnstructuredRSTLoader¶
class langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load RST files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html
|
b5688629636e-0
|
langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Bases: BaseLoader
Loader that fetches data from Spreedly API.
Methods
__init__(access_token, resource)
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html
|
7cc45dda5450-0
|
langchain.document_loaders.html_bs.BSHTMLLoader¶
class langchain.document_loaders.html_bs.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Bases: BaseLoader
Loader that uses beautiful soup to parse HTML files.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
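Example (sketch; the path is a placeholder and beautifulsoup4 is assumed to be installed):
from langchain.document_loaders.html_bs import BSHTMLLoader
loader = BSHTMLLoader(file_path="page.html", open_encoding="utf-8")
docs = loader.load()  # page text in page_content, title and source in metadata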
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html
|
7506a25da504-0
|
langchain.document_loaders.text.TextLoader¶
class langchain.document_loaders.text.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Bases: BaseLoader
Load text files.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding – Whether to try to autodetect the file encoding
if the specified encoding fails.
Initialize with file path.
Methods
__init__(file_path[, encoding, ...])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load from file path.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
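Example (sketch; the path is a placeholder):
from langchain.document_loaders.text import TextLoader
loader = TextLoader("notes.txt", encoding="utf-8", autodetect_encoding=True)
docs = loader.load()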
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html
|
309060c6ddac-0
|
langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Bases: ABC
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to re-use
a parser independent of how the blob was originally loaded.
Methods
__init__()
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
abstract lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document][source]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
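Example (an illustrative subclass, not part of the library; it treats the blob's bytes as text and yields a single Document):
from typing import Iterator
from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
from langchain.schema import Document

class PlainTextParser(BaseBlobParser):  # hypothetical parser
    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        # Decode the raw bytes and attach the blob's origin as metadata.
        yield Document(
            page_content=blob.as_string(),
            metadata={"source": blob.source},
        )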
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html
|
2a0ef7f16ee0-0
|
langchain.document_loaders.slack_directory.SlackDirectoryLoader¶
class langchain.document_loaders.slack_directory.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader for loading documents from a Slack directory dump.
Initialize the SlackDirectoryLoader.
Parameters
zip_path (str) – The path to the Slack directory dump zip file.
workspace_url (Optional[str]) – The Slack workspace URL.
Including the URL will turn
sources into links. Defaults to None.
Methods
__init__(zip_path[, workspace_url])
Initialize the SlackDirectoryLoader.
lazy_load()
A lazy loader for document content.
load()
Load and return documents from the Slack directory dump.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load and return documents from the Slack directory dump.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
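Example (sketch; the zip path and workspace URL are placeholders):
from langchain.document_loaders.slack_directory import SlackDirectoryLoader
loader = SlackDirectoryLoader(
    zip_path="slack_export.zip",
    workspace_url="https://myworkspace.slack.com",  # turns sources into links
)
docs = loader.load()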
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.slack_directory.SlackDirectoryLoader.html
|
735b07bd408f-0
|
langchain.document_loaders.iugu.IuguLoader¶
class langchain.document_loaders.iugu.IuguLoader(resource: str, api_token: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that fetches data from IUGU.
Methods
__init__(resource[, api_token])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.iugu.IuguLoader.html
|
b99eee13531d-0
|
langchain.document_loaders.excel.UnstructuredExcelLoader¶
class langchain.document_loaders.excel.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load Microsoft Excel files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.excel.UnstructuredExcelLoader.html
|
c1759145760e-0
|
langchain.document_loaders.pdf.PDFMinerLoader¶
class langchain.document_loaders.pdf.PDFMinerLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that uses PDFMiner to load PDF files.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
Lazily load documents.
load()
Eagerly load the content.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document][source]¶
Lazily load documents.
load() → List[Document][source]¶
Eagerly load the content.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PDFMinerLoader.html
|
44865a745ff2-0
|
langchain.document_loaders.parsers.pdf.PDFMinerParser¶
class langchain.document_loaders.parsers.pdf.PDFMinerParser[source]¶
Bases: BaseBlobParser
Parse PDFs with PDFMiner.
Methods
__init__()
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
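Example (sketch; the PDF path is a placeholder and pdfminer.six is assumed to be installed):
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PDFMinerParser
blob = Blob.from_path("paper.pdf")
docs = PDFMinerParser().parse(blob)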
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PDFMinerParser.html
|
f349fdf5d40f-0
|
langchain.document_loaders.pdf.PyMuPDFLoader¶
class langchain.document_loaders.pdf.PyMuPDFLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that uses PyMuPDF to load PDF files.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for document content.
load(**kwargs)
Load file.
load_and_split([text_splitter])
Load documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load(**kwargs: Optional[Any]) → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyMuPDFLoader.html
|
4f246b2ccb73-0
|
langchain.document_loaders.telegram.concatenate_rows¶
langchain.document_loaders.telegram.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.concatenate_rows.html
|
ee1f714aca62-0
|
langchain.document_loaders.pdf.PyPDFDirectoryLoader¶
class langchain.document_loaders.pdf.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]¶
Bases: BaseLoader
Loads a directory of PDF files with pypdf and chunks them at character level.
The loader also stores page numbers in metadata.
Methods
__init__(path[, glob, silent_errors, ...])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
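Example (sketch; the directory path is a placeholder and pypdf is assumed to be installed):
from langchain.document_loaders.pdf import PyPDFDirectoryLoader
loader = PyPDFDirectoryLoader("reports/", recursive=True)
docs = loader.load()  # page numbers end up in each document's metadata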
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFDirectoryLoader.html
|
5e34a21cb2cb-0
|
langchain.document_loaders.arxiv.ArxivLoader¶
class langchain.document_loaders.arxiv.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]¶
Bases: BaseLoader
Loads a query result from arxiv.org into a list of Documents.
Each document represents one search result.
The loader converts the original PDF format into text.
Methods
__init__(query[, load_max_docs, ...])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
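Example (sketch; assumes the arxiv and pymupdf packages are installed):
from langchain.document_loaders.arxiv import ArxivLoader
docs = ArxivLoader(query="quantum computing", load_max_docs=2).load()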
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arxiv.ArxivLoader.html
|
ebd88ce0db1c-0
|
langchain.document_loaders.generic.GenericLoader¶
class langchain.document_loaders.generic.GenericLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser)[source]¶
Bases: BaseLoader
A generic document loader.
A generic document loader that allows combining an arbitrary blob loader with
a blob parser.
Examples
from langchain.document_loaders import GenericLoader
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader

loader = GenericLoader.from_filesystem(
    path="path/to/directory",
    glob="**/[!.]*",
    suffixes=[".pdf"],
    show_progress=True,
)
docs = loader.lazy_load()
next(docs)
Example instantiations to change which files are loaded:
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="*")
Example instantiations to change which parser is used:
from langchain.document_loaders.parsers.pdf import PyPDFParser

# Recursively load all PDF files in a directory.
loader = GenericLoader.from_filesystem(
    "/path/to/dir",
    glob="**/*.pdf",
    parser=PyPDFParser(),
)
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser)
A generic document loader.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html
|
ebd88ce0db1c-1
|
from_filesystem(path, *[, glob, suffixes, ...])
Create a generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into chunks.
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default') → GenericLoader[source]¶
Create a generic document loader using a filesystem blob loader.
Parameters
path – The path to the directory to load documents from.
glob – The glob pattern to use to find documents.
suffixes – The suffixes to use to filter documents. If None, all files
matching the glob will be loaded.
show_progress – Whether to show a progress bar or not (requires tqdm).
Proxies to the file system loader.
parser – A blob parser which knows how to parse blobs into documents
Returns
A generic document loader.
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load all documents and split them into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html
|
ee9378716492-0
|
langchain.document_loaders.mediawikidump.MWDumpLoader¶
class langchain.document_loaders.mediawikidump.MWDumpLoader(file_path: str, encoding: Optional[str] = 'utf8')[source]¶
Bases: BaseLoader
Load MediaWiki dump from an XML file.
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
    file_path="myWiki.xml",
    encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) – XML local file path
encoding (str, optional) – Charset encoding, defaults to “utf8”
Initialize with file path.
Methods
__init__(file_path[, encoding])
Initialize with file path.
lazy_load()
A lazy loader for document content.
load()
Load from file path.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mediawikidump.MWDumpLoader.html
|
d56cef2cb5c5-0
|
langchain.document_loaders.stripe.StripeLoader¶
class langchain.document_loaders.stripe.StripeLoader(resource: str, access_token: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that fetches data from Stripe.
Methods
__init__(resource[, access_token])
lazy_load()
A lazy loader for document content.
load()
Load data into document objects.
load_and_split([text_splitter])
Load documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.stripe.StripeLoader.html
|
69fb39233d26-0
|
langchain.prompts.base.StringPromptTemplate¶
class langchain.prompts.base.StringPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None)[source]¶
Bases: BasePromptTemplate, ABC
A string prompt template should expose the format method, returning a formatted prompt.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[langchain.schema.BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
abstract format(**kwargs: Any) → str¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue[source]¶
Create Chat Messages.
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_variable_names » all fields¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
|
69fb39233d26-1
|
Validate variable names do not include restricted names.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
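Example (an illustrative subclass, not part of the library; only format needs to be implemented):
from langchain.prompts.base import StringPromptTemplate

class GreetingPromptTemplate(StringPromptTemplate):  # hypothetical template
    def format(self, **kwargs) -> str:
        return "Hello, {name}!".format(name=kwargs["name"])

prompt = GreetingPromptTemplate(input_variables=["name"])
prompt.format(name="Ada")  # -> "Hello, Ada!"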
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
|
df9445d5adab-0
|
langchain.prompts.loading.load_prompt_from_config¶
langchain.prompts.loading.load_prompt_from_config(config: dict) → BasePromptTemplate[source]¶
Load prompt from Config Dict.
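Example (sketch; the config dict mirrors the documented fields of PromptTemplate):
from langchain.prompts.loading import load_prompt_from_config
config = {
    "_type": "prompt",
    "input_variables": ["adjective"],
    "template": "Tell me a {adjective} joke.",
}
prompt = load_prompt_from_config(config)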
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.loading.load_prompt_from_config.html
|
e037cb7d0796-0
|
langchain.prompts.chat.BaseStringMessagePromptTemplate¶
class langchain.prompts.chat.BaseStringMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None)[source]¶
Bases: BaseMessagePromptTemplate, ABC
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶
abstract format(**kwargs: Any) → BaseMessage[source]¶
To a BaseMessage.
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
To messages.
classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT[source]¶
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT[source]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_variables: List[str]¶
Input variables for this prompt template.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseStringMessagePromptTemplate.html
|
e037cb7d0796-1
|
extra = 'ignore'¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseStringMessagePromptTemplate.html
|
4cb19ff568cd-0
|
langchain.prompts.prompt.PromptTemplate¶
class langchain.prompts.prompt.PromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, template: str, template_format: str = 'f-string', validate_template: bool = True)[source]¶
Bases: StringPromptTemplate
Schema to represent a prompt for an LLM.
Example
from langchain import PromptTemplate
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
param template: str [Required]¶
The prompt template.
param template_format: str = 'f-string'¶
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
param validate_template: bool = True¶
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue¶
Create Chat Messages.
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
|
4cb19ff568cd-1
|
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → PromptTemplate[source]¶
Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Parameters
examples – List of examples to use in the prompt.
suffix – String to go after the list of examples. Should generally
set up the user’s input.
input_variables – A list of variable names the final prompt template
will expect.
example_separator – The separator to use in between examples. Defaults
to two new line characters.
prefix – String that should go before any examples. Generally includes
instructions. Defaults to an empty string.
Returns
The final prompt generated.
classmethod from_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → PromptTemplate[source]¶
Load a prompt from a file.
Parameters
template_file – The path to the file containing the prompt template.
input_variables – A list of variable names the final prompt template
will expect.
Returns
The prompt loaded from the file.
classmethod from_template(template: str, **kwargs: Any) → PromptTemplate[source]¶
Load a prompt template from a template.
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
|
4cb19ff568cd-2
|
validator template_is_valid » all fields[source]¶
Check that template and input variables are consistent.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_variable_names » all fields¶
Validate variable names do not include restricted names.
property lc_attributes: Dict[str, Any]¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
|
ec9a1584873c-0
|
langchain.prompts.example_selector.base.BaseExampleSelector¶
class langchain.prompts.example_selector.base.BaseExampleSelector[source]¶
Bases: ABC
Interface for selecting examples to include in prompts.
Methods
__init__()
add_example(example)
Add a new example to the store.
select_examples(input_variables)
Select which examples to use based on the inputs.
abstract add_example(example: Dict[str, str]) → Any[source]¶
Add a new example to the store.
abstract select_examples(input_variables: Dict[str, str]) → List[dict][source]¶
Select which examples to use based on the inputs.
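Example (an illustrative implementation, not part of the library; it ignores the inputs and returns the first n stored examples):
from typing import Any, Dict, List
from langchain.prompts.example_selector.base import BaseExampleSelector

class FirstNExampleSelector(BaseExampleSelector):  # hypothetical selector
    def __init__(self, n: int = 2) -> None:
        self.n = n
        self.examples: List[Dict[str, str]] = []
    def add_example(self, example: Dict[str, str]) -> Any:
        self.examples.append(example)
    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        return self.examples[: self.n]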
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.base.BaseExampleSelector.html
|
8617e4e0de2f-0
|
langchain.prompts.chat.ChatPromptTemplate¶
class langchain.prompts.chat.ChatPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, messages: List[Union[BaseMessagePromptTemplate, BaseMessage]])[source]¶
Bases: BaseChatPromptTemplate, ABC
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required]¶
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format kwargs into a list of messages.
format_prompt(**kwargs: Any) → PromptValue¶
Create Chat Messages.
classmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]) → ChatPromptTemplate[source]¶
classmethod from_role_strings(string_messages: List[Tuple[str, str]]) → ChatPromptTemplate[source]¶
classmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) → ChatPromptTemplate[source]¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
|
8617e4e0de2f-1
|
classmethod from_template(template: str, **kwargs: Any) → ChatPromptTemplate[source]¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate[source]¶
Return a partial of the prompt template.
save(file_path: Union[Path, str]) → None[source]¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_input_variables » all fields[source]¶
validator validate_variable_names » all fields¶
Validate variable names do not include restricted names.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
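Example (a minimal sketch using the from_messages constructor documented above):
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
template = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("You are a {role}."),
    HumanMessagePromptTemplate.from_template("{question}"),
])
messages = template.format_messages(role="poet", question="Write a haiku.")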
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
|
657696fcf5f4-0
|
langchain.prompts.chat.AIMessagePromptTemplate¶
class langchain.prompts.chat.AIMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None)[source]¶
Bases: BaseStringMessagePromptTemplate
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶
format(**kwargs: Any) → BaseMessage[source]¶
To a BaseMessage.
format_messages(**kwargs: Any) → List[BaseMessage]¶
To messages.
classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_variables: List[str]¶
Input variables for this prompt template.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html
|
b08008f09981-0
|
langchain.prompts.example_selector.semantic_similarity.sorted_values¶
langchain.prompts.example_selector.semantic_similarity.sorted_values(values: Dict[str, str]) → List[Any][source]¶
Return a list of values in dict sorted by key.
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.sorted_values.html
|
52487d9fff54-0
|
langchain.prompts.base.check_valid_template¶
langchain.prompts.base.check_valid_template(template: str, template_format: str, input_variables: List[str]) → None[source]¶
Check that template string is valid.
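Example (sketch; the call passes silently for a consistent template and raises ValueError otherwise):
from langchain.prompts.base import check_valid_template
check_valid_template(
    template="Say {foo}", template_format="f-string", input_variables=["foo"]
)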
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.base.check_valid_template.html
|
24261e6f25d5-0
|
langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates¶
class langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, examples: Optional[List[dict]] = None, example_selector: Optional[BaseExampleSelector] = None, example_prompt: PromptTemplate, suffix: StringPromptTemplate, example_separator: str = '\n\n', prefix: Optional[StringPromptTemplate] = None, template_format: str = 'f-string', validate_template: bool = True)[source]¶
Bases: StringPromptTemplate
Prompt template that contains few shot examples.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_prompt: langchain.prompts.prompt.PromptTemplate [Required]¶
PromptTemplate used to format an individual example.
param example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None¶
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
param example_separator: str = '\n\n'¶
String separator used to join the prefix, the examples, and suffix.
param examples: Optional[List[dict]] = None¶
Examples to format into the prompt.
Either this or example_selector should be provided.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
|
24261e6f25d5-1
|
param prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None¶
A PromptTemplate to put before the examples.
param suffix: langchain.prompts.base.StringPromptTemplate [Required]¶
A PromptTemplate to put after the examples.
param template_format: str = 'f-string'¶
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
param validate_template: bool = True¶
Whether or not to try validating the template.
validator check_examples_and_selector » all fields[source]¶
Check that exactly one of examples and example_selector is provided.
dict(**kwargs: Any) → Dict[source]¶
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue¶
Create Chat Messages.
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
validator template_is_valid » all fields[source]¶
Check that prefix, suffix and input variables are consistent.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_variable_names » all fields¶
Validate variable names do not include restricted names.
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
|
24261e6f25d5-2
|
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
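Example (sketch built from the documented parameters; the templates and examples are illustrative):
from langchain.prompts.few_shot_with_templates import FewShotPromptWithTemplates
from langchain.prompts.prompt import PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"], template="{word} -> {antonym}"
)
prompt = FewShotPromptWithTemplates(
    examples=[{"word": "hot", "antonym": "cold"}],
    example_prompt=example_prompt,
    prefix=PromptTemplate(input_variables=[], template="Give the antonym."),
    suffix=PromptTemplate(input_variables=["input"], template="{input} ->"),
    input_variables=["input"],
)
prompt.format(input="big")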
|
https://api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
|