4c9e56a69292-28
file_loader_kwargs (Dict[str, Any]) – Return type None attribute credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json') attribute document_ids: Optional[List[str]] = None attribute file_ids: Optional[List[str]] = None attribute file_loader_cls: Any = None attribute file_loader_kwargs: Dict[str, Any] = {} attribute file_types: Optional[Sequence[str]] = None attribute folder_id: Optional[str] = None attribute load_trashed_files: bool = False attribute recursive: bool = False attribute service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json') attribute token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json') load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GutenbergLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses urllib to load .txt web files. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.HNLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Load Hacker News data from either main page results or the comments page. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – load()[source] Get important HN webpage information. Components are: title content
https://api.python.langchain.com/en/stable/modules/document_loaders.html
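A minimal usage sketch for the GutenbergLoader described above. The URL is a placeholder for any plain-text (.txt) file hosted on Project Gutenberg, not a verified link.

    from langchain.document_loaders import GutenbergLoader

    # file_path must be a URL pointing at a .txt web file (placeholder shown)
    loader = GutenbergLoader(file_path="https://www.gutenberg.org/files/0000/0000-0.txt")
    docs = loader.load()  # returns List[Document]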
4c9e56a69292-29
Get important HN webpage information. Components are: title, content, source url, time of post, author of the post, number of comments, rank of the post. Return type List[langchain.schema.Document] load_comments(soup_info)[source] Load comments from an HN post. Parameters soup_info (Any) – Return type List[langchain.schema.Document] load_results(soup)[source] Load items from an HN page. Parameters soup (Any) – Return type List[langchain.schema.Document] class langchain.document_loaders.HuggingFaceDatasetLoader(path, page_content_column='text', name=None, data_dir=None, data_files=None, cache_dir=None, keep_in_memory=None, save_infos=False, use_auth_token=None, num_proc=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from the Hugging Face Hub. Parameters path (str) – page_content_column (str) – name (Optional[str]) – data_dir (Optional[str]) – data_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) – cache_dir (Optional[str]) – keep_in_memory (Optional[bool]) – save_infos (bool) – use_auth_token (Optional[Union[bool, str]]) – num_proc (Optional[int]) – lazy_load()[source] Load documents lazily. Return type Iterator[langchain.schema.Document] load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.IFixitLoader(web_path)[source] Bases: langchain.document_loaders.base.BaseLoader
https://api.python.langchain.com/en/stable/modules/document_loaders.html
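A minimal sketch of the HuggingFaceDatasetLoader described above, assuming a public Hub dataset whose rows carry a "text" column; the dataset name is illustrative.

    from langchain.document_loaders import HuggingFaceDatasetLoader

    loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
    docs = loader.load()            # eager: List[Document]
    for doc in loader.lazy_load():  # lazy: Iterator[Document]
        print(doc.metadata)
        break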
4c9e56a69292-30
Bases: langchain.document_loaders.base.BaseLoader Load iFixit repair guides, device wikis and answers. iFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY. This loader will allow you to download the text of a repair guide, text of Q&A’s and wikis from devices on iFixit using their open APIs and web scraping. Parameters web_path (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] static load_suggestions(query='', doc_type='all')[source] Parameters query (str) – doc_type (str) – Return type List[langchain.schema.Document] load_questions_and_answers(url_override=None)[source] Parameters url_override (Optional[str]) – Return type List[langchain.schema.Document] load_device(url_override=None, include_guides=True)[source] Parameters url_override (Optional[str]) – include_guides (bool) – Return type List[langchain.schema.Document] load_guide(url_override=None)[source] Parameters url_override (Optional[str]) – Return type List[langchain.schema.Document] class langchain.document_loaders.IMSDbLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads IMSDb webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-31
header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – load()[source] Load webpage. Return type List[langchain.schema.Document] class langchain.document_loaders.ImageCaptionLoader(path_images, blip_processor='Salesforce/blip-image-captioning-base', blip_model='Salesforce/blip-image-captioning-base')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads the captions of an image Parameters path_images (Union[str, List[str]]) – blip_processor (str) – blip_model (str) – load()[source] Load from a list of image files Return type List[langchain.schema.Document] class langchain.document_loaders.IuguLoader(resource, api_token=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from IUGU. Parameters resource (str) – api_token (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.JSONLoader(file_path, jq_schema, content_key=None, metadata_func=None, text_content=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a JSON file and references a jq schema provided to load the text into documents. Example [{β€œtext”: …}, {β€œtext”: …}, {β€œtext”: …}] -> schema = .[].text {β€œkey”: [{β€œtext”: …}, {β€œtext”: …}, {β€œtext”: …}]} -> schema = .key[].text [β€œβ€, β€œβ€, β€œβ€] -> schema = .[] Parameters
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-32
[β€œβ€, β€œβ€, β€œβ€] -> schema = .[] Parameters file_path (Union[str, pathlib.Path]) – jq_schema (str) – content_key (Optional[str]) – metadata_func (Optional[Callable[[Dict, Dict], Dict]]) – text_content (bool) – load()[source] Load and return documents from the JSON file. Return type List[langchain.schema.Document] class langchain.document_loaders.JoplinLoader(access_token=None, port=41184, host='localhost')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches notes from Joplin. In order to use this loader, you need to have Joplin running with the Web Clipper enabled (look for β€œWeb Clipper” in the app settings). To get the access token, you need to go to the Web Clipper options and under β€œAdvanced Options” you will find the access token. You can find more information about the Web Clipper service here: https://joplinapp.org/clipper/ Parameters access_token (Optional[str]) – port (int) – host (str) – Return type None lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.LarkSuiteDocLoader(domain, access_token, document_id)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads LarkSuite (FeiShu) document. Parameters domain (str) – access_token (str) – document_id (str) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
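Minimal sketches for the JSONLoader and JoplinLoader described above. The file name and access token are placeholders chosen to match the schema examples and setup notes in the docs.

    from langchain.document_loaders import JSONLoader, JoplinLoader

    # JSONLoader: assumes data.json is shaped like [{"text": ...}, ...]
    json_loader = JSONLoader(file_path="data.json", jq_schema=".[].text")
    json_docs = json_loader.load()

    # JoplinLoader: the token comes from Joplin's Web Clipper "Advanced Options"
    joplin_loader = JoplinLoader(access_token="<joplin-access-token>", port=41184, host="localhost")
    joplin_docs = joplin_loader.load()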
4c9e56a69292-33
access_token (str) – document_id (str) – lazy_load()[source] Lazy load LarkSuite (FeiShu) document. Return type Iterator[langchain.schema.Document] load()[source] Load LarkSuite (FeiShu) document. Return type List[langchain.schema.Document] class langchain.document_loaders.MWDumpLoader(file_path, encoding='utf8')[source] Bases: langchain.document_loaders.base.BaseLoader Load MediaWiki dump from XML file. Example:
    from langchain.document_loaders import MWDumpLoader
    loader = MWDumpLoader(file_path="myWiki.xml", encoding="utf8")
    docs = loader.load()
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    texts = text_splitter.split_documents(docs)
Parameters file_path (str) – XML local file path encoding (str, optional) – Charset encoding, defaults to β€œutf8” load()[source] Load from file path. Return type List[langchain.schema.Document] class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts, number_toots=100, exclude_replies=False, access_token=None, api_base_url='https://mastodon.social')[source] Bases: langchain.document_loaders.base.BaseLoader Mastodon toots loader. Parameters mastodon_accounts (Sequence[str]) – number_toots (Optional[int]) – exclude_replies (bool) – access_token (Optional[str]) – api_base_url (str) – load()[source]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-34
api_base_url (str) – load()[source] Load toots into documents. Return type List[langchain.schema.Document] class langchain.document_loaders.MathpixPDFLoader(file_path, processed_file_format='mmd', max_wait_time_seconds=500, should_clean_pdf=False, **kwargs)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Parameters file_path (str) – processed_file_format (str) – max_wait_time_seconds (int) – should_clean_pdf (bool) – kwargs (Any) – Return type None property headers: dict property url: str property data: dict send_pdf()[source] Return type str wait_for_processing(pdf_id)[source] Parameters pdf_id (str) – Return type None get_processed_pdf(pdf_id)[source] Parameters pdf_id (str) – Return type str clean_pdf(contents)[source] Parameters contents (str) – Return type str load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.MaxComputeLoader(query, api_wrapper, *, page_content_columns=None, metadata_columns=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from Alibaba Cloud MaxCompute table into documents. Parameters query (str) – api_wrapper (MaxComputeAPIWrapper) – page_content_columns (Optional[Sequence[str]]) – metadata_columns (Optional[Sequence[str]]) – classmethod from_params(query, endpoint, project, *, access_id=None, secret_access_key=None, **kwargs)[source]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-35
Convenience constructor that builds the MaxCompute API wrapper from given parameters. Parameters query (str) – SQL query to execute. endpoint (str) – MaxCompute endpoint. project (str) – A project is a basic organizational unit of MaxCompute, which is similar to a database. access_id (Optional[str]) – MaxCompute access ID. Should be passed in directly or set as the environment variable MAX_COMPUTE_ACCESS_ID. secret_access_key (Optional[str]) – MaxCompute secret access key. Should be passed in directly or set as the environment variable MAX_COMPUTE_SECRET_ACCESS_KEY. kwargs (Any) – Return type langchain.document_loaders.max_compute.MaxComputeLoader lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.MergedDataLoader(loaders)[source] Bases: langchain.document_loaders.base.BaseLoader Merge documents from a list of loaders Parameters loaders (List) – lazy_load()[source] Lazy load docs from each individual loader. Return type Iterator[langchain.schema.Document] load()[source] Load docs. Return type List[langchain.schema.Document] class langchain.document_loaders.MHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses beautiful soup to parse HTML files. Parameters file_path (str) – open_encoding (Optional[str]) – bs_kwargs (Optional[dict]) – get_text_separator (str) – Return type None
https://api.python.langchain.com/en/stable/modules/document_loaders.html
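A sketch of the MaxComputeLoader.from_params constructor described above; the endpoint, project, and query are placeholders, and credentials are assumed to come from the MAX_COMPUTE_ACCESS_ID / MAX_COMPUTE_SECRET_ACCESS_KEY environment variables mentioned in the parameter docs.

    from langchain.document_loaders import MaxComputeLoader

    loader = MaxComputeLoader.from_params(
        query="SELECT * FROM my_table LIMIT 10",  # hypothetical table
        endpoint="<maxcompute-endpoint>",
        project="<project-name>",
    )
    docs = loader.load()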
4c9e56a69292-36
get_text_separator (str) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.ModernTreasuryLoader(resource, organization_id=None, api_key=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Modern Treasury. Parameters resource (str) – organization_id (Optional[str]) – api_key (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.NotebookLoader(path, include_outputs=False, max_output_length=10, remove_newline=False, traceback=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads .ipynb notebook files. Parameters path (str) – include_outputs (bool) – max_output_length (int) – remove_newline (bool) – traceback (bool) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.NotionDBLoader(integration_token, database_id, request_timeout_sec=10)[source] Bases: langchain.document_loaders.base.BaseLoader Notion DB Loader. Reads content from pages within a Notion Database. :param integration_token: Notion integration token. :type integration_token: str :param database_id: Notion database id. :type database_id: str :param request_timeout_sec: Timeout for Notion requests in seconds. :type request_timeout_sec: int Parameters integration_token (str) – database_id (str) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
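A minimal sketch for the NotionDBLoader described above; the integration token and database id are placeholders obtained from a Notion integration.

    from langchain.document_loaders import NotionDBLoader

    loader = NotionDBLoader(
        integration_token="<notion-integration-token>",
        database_id="<notion-database-id>",
        request_timeout_sec=30,
    )
    docs = loader.load()  # one Document per page in the database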
4c9e56a69292-37
Parameters integration_token (str) – database_id (str) – request_timeout_sec (Optional[int]) – Return type None load()[source] Load documents from the Notion database. :returns: List of documents. :rtype: List[Document] Return type List[langchain.schema.Document] load_page(page_summary)[source] Read a page. Parameters page_summary (Dict[str, Any]) – Return type langchain.schema.Document class langchain.document_loaders.NotionDirectoryLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Notion directory dump. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.ObsidianLoader(path, encoding='UTF-8', collect_metadata=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Obsidian files from disk. Parameters path (str) – encoding (str) – collect_metadata (bool) – FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL) load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.OneDriveFileLoader(*, file)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Parameters file (File) – Return type None attribute file: File [Required] load()[source] Load Documents Return type List[langchain.schema.Document]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-38
Load Documents Return type List[langchain.schema.Document] class langchain.document_loaders.OneDriveLoader(*, settings=None, drive_id, folder_path=None, object_ids=None, auth_with_token=False)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Parameters settings (langchain.document_loaders.onedrive._OneDriveSettings) – drive_id (str) – folder_path (Optional[str]) – object_ids (Optional[List[str]]) – auth_with_token (bool) – Return type None attribute auth_with_token: bool = False attribute drive_id: str [Required] attribute folder_path: Optional[str] = None attribute object_ids: Optional[List[str]] = None attribute settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional] load()[source] Loads all supported document files from the specified OneDrive drive and returns a list of Document objects. Returns A list of Document objects representing the loaded documents. Return type List[Document] Raises ValueError – If the specified drive ID does not correspond to a drive in the OneDrive storage. class langchain.document_loaders.OnlinePDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that loads online PDFs. Parameters file_path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.OutlookMessageLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Outlook Message files using extract_msg. https://github.com/TeamMsgExtractor/msg-extractor
https://api.python.langchain.com/en/stable/modules/document_loaders.html
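A sketch of the OneDriveLoader described above; the drive id and folder path are placeholders, and it assumes the O365 application credentials expected by the loader's settings are already configured (not shown here).

    from langchain.document_loaders import OneDriveLoader

    loader = OneDriveLoader(drive_id="<drive-id>", folder_path="Documents", auth_with_token=True)
    docs = loader.load()  # raises ValueError if the drive id does not exist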
4c9e56a69292-39
https://github.com/TeamMsgExtractor/msg-extractor Parameters file_path (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.OpenCityDataLoader(city_id, dataset_id, limit)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Open city data. Parameters city_id (str) – dataset_id (str) – limit (int) – lazy_load()[source] Lazy load records. Return type Iterator[langchain.schema.Document] load()[source] Load records. Return type List[langchain.schema.Document] class langchain.document_loaders.PDFMinerLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PDFMiner to load PDF files. Parameters file_path (str) – Return type None load()[source] Eagerly load the content. Return type List[langchain.schema.Document] lazy_load()[source] Lazily load documents. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PDFMiner to load PDF files as HTML content. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.PDFPlumberLoader(file_path, text_kwargs=None)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-40
Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses pdfplumber to load PDF files. Parameters file_path (str) – text_kwargs (Optional[Mapping[str, Any]]) – Return type None load()[source] Load file. Return type List[langchain.schema.Document] langchain.document_loaders.PagedPDFSplitter alias of langchain.document_loaders.pdf.PyPDFLoader class langchain.document_loaders.PlaywrightURLLoader(urls, continue_on_failure=True, headless=True, remove_selectors=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses Playwright to load a page and unstructured to load the html. This is useful for loading pages that require javascript to render. Parameters urls (List[str]) – continue_on_failure (bool) – headless (bool) – remove_selectors (Optional[List[str]]) – urls List of URLs to load. Type List[str] continue_on_failure If True, continue loading other URLs on failure. Type bool headless If True, the browser will run in headless mode. Type bool load()[source] Load the specified URLs using Playwright and create Document instances. Returns A list of Document instances with loaded content. Return type List[Document] class langchain.document_loaders.PsychicLoader(api_key, account_id, connector_id=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads documents from Psychic.dev. Parameters api_key (str) – account_id (str) – connector_id (Optional[str]) – load()[source] Load documents.
https://api.python.langchain.com/en/stable/modules/document_loaders.html
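A minimal sketch for the PlaywrightURLLoader described above; the URL and the CSS selectors passed to remove_selectors are illustrative.

    from langchain.document_loaders import PlaywrightURLLoader

    loader = PlaywrightURLLoader(
        urls=["https://example.com"],
        continue_on_failure=True,
        headless=True,
        remove_selectors=["header", "footer"],  # strip page chrome before text extraction
    )
    docs = loader.load()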
4c9e56a69292-41
connector_id (Optional[str]) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.PyMuPDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PyMuPDF to load PDF files. Parameters file_path (str) – Return type None load(**kwargs)[source] Load file. Parameters kwargs (Optional[Any]) – Return type List[langchain.schema.Document] class langchain.document_loaders.PyPDFDirectoryLoader(path, glob='**/[!.]*.pdf', silent_errors=False, load_hidden=False, recursive=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a directory with PDF files with pypdf and chunks at character level. Loader also stores page numbers in metadatas. Parameters path (str) – glob (str) – silent_errors (bool) – load_hidden (bool) – recursive (bool) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.PyPDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loads a PDF with pypdf and chunks at character level. Loader also stores page numbers in metadatas. Parameters file_path (str) – Return type None load()[source] Load given path as pages. Return type List[langchain.schema.Document] lazy_load()[source] Lazy load given path as pages. Return type Iterator[langchain.schema.Document]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
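A minimal sketch for the PyPDFLoader described above; the file path is a placeholder.

    from langchain.document_loaders import PyPDFLoader

    loader = PyPDFLoader("example.pdf")  # placeholder path
    pages = loader.load()                # one Document per page, page number stored in metadata
    for page in loader.lazy_load():      # or iterate lazily
        break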
4c9e56a69292-42
Lazy load given path as pages. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.PyPDFium2Loader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loads a PDF with pypdfium2 and chunks at character level. Parameters file_path (str) – load()[source] Load given path as pages. Return type List[langchain.schema.Document] lazy_load()[source] Lazy load given path as pages. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.PySparkDataFrameLoader(spark_session=None, df=None, page_content_column='text', fraction_of_memory=0.1)[source] Bases: langchain.document_loaders.base.BaseLoader Load PySpark DataFrames Parameters spark_session (Optional[SparkSession]) – df (Optional[Any]) – page_content_column (str) – fraction_of_memory (float) – get_num_rows()[source] Gets the amount of β€œfeasible” rows for the DataFrame Return type Tuple[int, int] lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load from the dataframe. Return type List[langchain.schema.Document] class langchain.document_loaders.PythonLoader(file_path)[source] Bases: langchain.document_loaders.text.TextLoader Load Python files, respecting any non-default encoding if specified. Parameters file_path (str) – class langchain.document_loaders.ReadTheDocsLoader(path, encoding=None, errors=None, custom_html_tag=None, **kwargs)[source]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-43
Bases: langchain.document_loaders.base.BaseLoader Loader that loads ReadTheDocs documentation directory dump. Parameters path (Union[str, pathlib.Path]) – encoding (Optional[str]) – errors (Optional[str]) – custom_html_tag (Optional[Tuple[str, dict]]) – kwargs (Optional[Any]) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.RecursiveUrlLoader(url, exclude_dirs=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads all child links from a given url. Parameters url (str) – exclude_dirs (Optional[str]) – Return type None get_child_links_recursive(url, visited=None)[source] Recursively get all child links starting with the path of the input URL. Parameters url (str) – visited (Optional[Set[str]]) – Return type Set[str] lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load web pages. Return type List[langchain.schema.Document] class langchain.document_loaders.RedditPostsLoader(client_id, client_secret, user_agent, search_queries, mode, categories=['new'], number_posts=10)[source] Bases: langchain.document_loaders.base.BaseLoader Reddit posts loader. Read posts on a subreddit. First you need to go to https://www.reddit.com/prefs/apps/ and create your application Parameters client_id (str) – client_secret (str) – user_agent (str) – search_queries (Sequence[str]) – mode (str) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-44
search_queries (Sequence[str]) – mode (str) – categories (Sequence[str]) – number_posts (Optional[int]) – load()[source] Load reddits. Return type List[langchain.schema.Document] class langchain.document_loaders.RoamLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Roam files from disk. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.S3DirectoryLoader(bucket, prefix='')[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from s3. Parameters bucket (str) – prefix (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.S3FileLoader(bucket, key)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from s3. Parameters bucket (str) – key (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.SRTLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for .srt (subtitle) files. Parameters file_path (str) – load()[source] Load using pysrt file. Return type List[langchain.schema.Document] class langchain.document_loaders.SeleniumURLLoader(urls, continue_on_failure=True, browser='chrome', binary_location=None, executable_path=None, headless=True, arguments=[])[source]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-45
Bases: langchain.document_loaders.base.BaseLoader Loader that uses Selenium to load a page and unstructured to load the html. This is useful for loading pages that require javascript to render. Parameters urls (List[str]) – continue_on_failure (bool) – browser (Literal['chrome', 'firefox']) – binary_location (Optional[str]) – executable_path (Optional[str]) – headless (bool) – arguments (List[str]) – urls List of URLs to load. Type List[str] continue_on_failure If True, continue loading other URLs on failure. Type bool browser The browser to use, either β€˜chrome’ or β€˜firefox’. Type str binary_location The location of the browser binary. Type Optional[str] executable_path The path to the browser executable. Type Optional[str] headless If True, the browser will run in headless mode. Type bool arguments [List[str]] List of arguments to pass to the browser. load()[source] Load the specified URLs using Selenium and create Document instances. Returns A list of Document instances with loaded content. Return type List[Document] class langchain.document_loaders.SitemapLoader(web_path, filter_urls=None, parsing_function=None, blocksize=None, blocknum=0, meta_function=None, is_local=False)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that fetches a sitemap and loads those URLs. Parameters web_path (str) – filter_urls (Optional[List[str]]) – parsing_function (Optional[Callable]) – blocksize (Optional[int]) – blocknum (int) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-46
blocksize (Optional[int]) – blocknum (int) – meta_function (Optional[Callable]) – is_local (bool) – parse_sitemap(soup)[source] Parse sitemap xml and load into a list of dicts. Parameters soup (Any) – Return type List[dict] load()[source] Load sitemap. Return type List[langchain.schema.Document] class langchain.document_loaders.SlackDirectoryLoader(zip_path, workspace_url=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for loading documents from a Slack directory dump. Parameters zip_path (str) – workspace_url (Optional[str]) – load()[source] Load and return documents from the Slack directory dump. Return type List[langchain.schema.Document] class langchain.document_loaders.SnowflakeLoader(query, user, password, account, warehouse, role, database, schema, parameters=None, page_content_columns=None, metadata_columns=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from Snowflake into a list of documents. Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata. Parameters query (str) – user (str) – password (str) – account (str) – warehouse (str) – role (str) – database (str) – schema (str) – parameters (Optional[Dict[str, Any]]) – page_content_columns (Optional[List[str]]) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
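A minimal sketch for the SitemapLoader described above; the sitemap URL is a placeholder.

    from langchain.document_loaders import SitemapLoader

    loader = SitemapLoader(web_path="https://example.com/sitemap.xml")
    docs = loader.load()  # fetches the sitemap, then loads every listed URL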
4c9e56a69292-47
page_content_columns (Optional[List[str]]) – metadata_columns (Optional[List[str]]) – lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.SpreedlyLoader(access_token, resource)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Spreedly API. Parameters access_token (str) – resource (str) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.StripeLoader(resource, access_token=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Stripe. Parameters resource (str) – access_token (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.TencentCOSDirectoryLoader(conf, bucket, prefix='')[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from Tencent Cloud COS. Parameters conf (Any) – bucket (str) – prefix (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] lazy_load()[source] Load documents. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.TencentCOSFileLoader(conf, bucket, key)[source]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-48
Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from Tencent Cloud COS. Parameters conf (Any) – bucket (str) – key (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] lazy_load()[source] Load documents. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.TelegramChatApiLoader(chat_entity=None, api_id=None, api_hash=None, username=None, file_path='telegram_data.json')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Telegram chat json directory dump. Parameters chat_entity (Optional[EntityLike]) – api_id (Optional[int]) – api_hash (Optional[str]) – username (Optional[str]) – file_path (str) – async fetch_data_from_telegram()[source] Fetch data from Telegram API and save it as a JSON file. Return type None load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.TelegramChatFileLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Telegram chat json directory dump. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] langchain.document_loaders.TelegramChatLoader alias of langchain.document_loaders.telegram.TelegramChatFileLoader class langchain.document_loaders.TextLoader(file_path, encoding=None, autodetect_encoding=False)[source] Bases: langchain.document_loaders.base.BaseLoader Load text files. Parameters
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-49
Bases: langchain.document_loaders.base.BaseLoader Load text files. Parameters file_path (str) – Path to the file to load. encoding (Optional[str]) – File encoding to use. If None, the file will be loaded with the default system encoding. autodetect_encoding (bool) – Whether to try to autodetect the file encoding if the specified encoding fails. load()[source] Load from file path. Return type List[langchain.schema.Document] class langchain.document_loaders.ToMarkdownLoader(url, api_key)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads HTML to markdown using 2markdown. Parameters url (str) – api_key (str) – lazy_load()[source] Lazily load the file. Return type Iterator[langchain.schema.Document] load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.TomlLoader(source)[source] Bases: langchain.document_loaders.base.BaseLoader A TOML document loader that inherits from the BaseLoader class. This class can be initialized with either a single source file or a source directory containing TOML files. Parameters source (Union[str, pathlib.Path]) – load()[source] Load and return all documents. Return type List[langchain.schema.Document] lazy_load()[source] Lazily load the TOML documents from the source file or directory. Return type Iterator[langchain.schema.Document]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
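A minimal sketch for the TextLoader described above; the file path is a placeholder.

    from langchain.document_loaders import TextLoader

    loader = TextLoader("notes.txt", autodetect_encoding=True)  # placeholder path
    docs = loader.load()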
4c9e56a69292-50
Return type Iterator[langchain.schema.Document] class langchain.document_loaders.TrelloLoader(client, board_name, *, include_card_name=True, include_comments=True, include_checklist=True, card_filter='all', extra_metadata=('due_date', 'labels', 'list', 'closed'))[source] Bases: langchain.document_loaders.base.BaseLoader Trello loader. Reads all cards from a Trello board. Parameters client (TrelloClient) – board_name (str) – include_card_name (bool) – include_comments (bool) – include_checklist (bool) – card_filter (Literal['closed', 'open', 'all']) – extra_metadata (Tuple[str, ...]) – classmethod from_credentials(board_name, *, api_key=None, token=None, **kwargs)[source] Convenience constructor that builds TrelloClient init param for you. Parameters board_name (str) – The name of the Trello board. api_key (Optional[str]) – Trello API key. Can also be specified as environment variable TRELLO_API_KEY. token (Optional[str]) – Trello token. Can also be specified as environment variable TRELLO_TOKEN. include_card_name – Whether to include the name of the card in the document. include_comments – Whether to include the comments on the card in the document. include_checklist – Whether to include the checklist on the card in the document. card_filter – Filter on card status. Valid values are β€œclosed”, β€œopen”, β€œall”. extra_metadata – List of additional metadata fields to include as document metadata. Valid values are β€œdue_date”, β€œlabels”, β€œlist”, β€œclosed”. kwargs (Any) – Return type langchain.document_loaders.trello.TrelloLoader
https://api.python.langchain.com/en/stable/modules/document_loaders.html
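A sketch of the TrelloLoader.from_credentials constructor described above; the board name is illustrative and the key/token may instead come from the TRELLO_API_KEY / TRELLO_TOKEN environment variables noted in the parameter docs.

    from langchain.document_loaders import TrelloLoader

    loader = TrelloLoader.from_credentials(
        "Team Board",                 # illustrative board name
        api_key="<trello-api-key>",
        token="<trello-token>",
        card_filter="open",           # forwarded to the TrelloLoader constructor via **kwargs
    )
    docs = loader.load()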
4c9e56a69292-51
Return type langchain.document_loaders.trello.TrelloLoader load()[source] Loads all cards from the specified Trello board. You can filter the cards, metadata and text included by using the optional parameters. Returns: A list of documents, one for each card in the board. Return type List[langchain.schema.Document] class langchain.document_loaders.TwitterTweetLoader(auth_handler, twitter_users, number_tweets=100)[source] Bases: langchain.document_loaders.base.BaseLoader Twitter tweets loader. Read tweets of a user's Twitter handle. First you need to go to https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api to get your token, and create a v2 version of the app. Parameters auth_handler (Union[OAuthHandler, OAuth2BearerHandler]) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – load()[source] Load tweets. Return type List[langchain.schema.Document] classmethod from_bearer_token(oauth2_bearer_token, twitter_users, number_tweets=100)[source] Create a TwitterTweetLoader from OAuth2 bearer token. Parameters oauth2_bearer_token (str) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – Return type langchain.document_loaders.twitter.TwitterTweetLoader classmethod from_secrets(access_token, access_token_secret, consumer_key, consumer_secret, twitter_users, number_tweets=100)[source] Create a TwitterTweetLoader from access tokens and secrets. Parameters access_token (str) – access_token_secret (str) – consumer_key (str) – consumer_secret (str) – twitter_users (Sequence[str]) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
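A minimal sketch of the TwitterTweetLoader.from_bearer_token constructor described above; the bearer token and handle are placeholders.

    from langchain.document_loaders import TwitterTweetLoader

    loader = TwitterTweetLoader.from_bearer_token(
        oauth2_bearer_token="<bearer-token>",
        twitter_users=["hwchase17"],   # illustrative handle
        number_tweets=50,
    )
    docs = loader.load()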
4c9e56a69292-52
consumer_secret (str) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – Return type langchain.document_loaders.twitter.TwitterTweetLoader class langchain.document_loaders.UnstructuredAPIFileIOLoader(file, mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileIOLoader Loader that uses the unstructured web API to load file IO objects. Parameters file (Union[IO, Sequence[IO]]) – mode (str) – url (str) – api_key (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredAPIFileLoader(file_path='', mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses the unstructured web API to load files. Parameters file_path (Union[str, List[str]]) – mode (str) – url (str) – api_key (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredCSVLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load CSV files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredEPubLoader(file_path, mode='single', **unstructured_kwargs)[source]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-53
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load epub files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredEmailLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load email files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredExcelLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load Microsoft Excel files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredFileIOLoader(file, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader Loader that uses unstructured to load file IO objects. Parameters file (Union[IO, Sequence[IO]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredFileLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader Loader that uses unstructured to load files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-54
mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredHTMLLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load HTML files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredImageLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load image files, such as PNGs and JPGs. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredMarkdownLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load markdown files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredODTLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load open office ODT files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredOrgModeLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-55
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load Org-Mode files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredPDFLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load PDF files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredPowerPointLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load powerpoint files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredRSTLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load RST files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredRTFLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load rtf files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-56
mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredURLLoader(urls, continue_on_failure=True, mode='single', show_progress_bar=False, **unstructured_kwargs)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses unstructured to load HTML files. Parameters urls (List[str]) – continue_on_failure (bool) – mode (str) – show_progress_bar (bool) – unstructured_kwargs (Any) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load word documents. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredXMLLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load XML files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.WeatherDataLoader(client, places)[source] Bases: langchain.document_loaders.base.BaseLoader Weather Reader. Reads the forecast & current weather of any location using OpenWeatherMap’s free API. Check out β€˜https://openweathermap.org/appid’ for more on how to generate a free OpenWeatherMap API key. Parameters client (OpenWeatherMapAPIWrapper) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
4c9e56a69292-57
OpenWeatherMap API key. Parameters client (OpenWeatherMapAPIWrapper) – places (Sequence[str]) – Return type None classmethod from_params(places, *, openweathermap_api_key=None)[source] Parameters places (Sequence[str]) – openweathermap_api_key (Optional[str]) – Return type langchain.document_loaders.weather.WeatherDataLoader lazy_load()[source] Lazily load weather data for the given locations. Return type Iterator[langchain.schema.Document] load()[source] Load weather data for the given locations. Return type List[langchain.schema.Document] class langchain.document_loaders.WebBaseLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses urllib and beautiful soup to load webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – requests_per_second: int = 2 Max number of concurrent requests to make. default_parser: str = 'html.parser' Default parser to use for BeautifulSoup. requests_kwargs: Dict[str, Any] = {} kwargs for requests raise_for_status: bool = False Raise an exception if http status code denotes an error. bs_get_text_kwargs: Dict[str, Any] = {} kwargs for beautifulsoup4 get_text web_paths: List[str] property web_path: str async fetch_all(urls)[source] Fetch all urls concurrently with rate limiting. Parameters urls (List[str]) – Return type Any
https://api.python.langchain.com/en/stable/modules/document_loaders.html
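A minimal sketch of the WeatherDataLoader.from_params constructor described above; the place names and API key are placeholders.

    from langchain.document_loaders import WeatherDataLoader

    loader = WeatherDataLoader.from_params(
        places=["London", "Tokyo"],
        openweathermap_api_key="<openweathermap-api-key>",
    )
    docs = loader.load()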
4c9e56a69292-58
Parameters urls (List[str]) – Return type Any scrape_all(urls, parser=None)[source] Fetch all urls, then return soups for all results. Parameters urls (List[str]) – parser (Optional[str]) – Return type List[Any] scrape(parser=None)[source] Scrape data from webpage and return it in BeautifulSoup format. Parameters parser (Optional[str]) – Return type Any lazy_load()[source] Lazy load text from the url(s) in web_path. Return type Iterator[langchain.schema.Document] load()[source] Load text from the url(s) in web_path. Return type List[langchain.schema.Document] aload()[source] Load text from the urls in web_path async into Documents. Return type List[langchain.schema.Document] class langchain.document_loaders.WhatsAppChatLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads WhatsApp messages text file. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.WikipediaLoader(query, lang='en', load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from www.wikipedia.org into a list of Documents. The hard limit on the number of downloaded Documents is 300 for now. Each wiki page represents one Document. Parameters query (str) – lang (str) – load_max_docs (Optional[int]) – load_all_available_meta (Optional[bool]) –
https://api.python.langchain.com/en/stable/modules/document_loaders.html
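A minimal sketch for the WebBaseLoader described above, showing the throttling attribute documented for the class; the URLs are placeholders.

    from langchain.document_loaders import WebBaseLoader

    loader = WebBaseLoader(["https://example.com", "https://example.org"])
    loader.requests_per_second = 1   # throttle concurrent requests (default is 2)
    docs = loader.load()             # loader.aload() is the async variant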
4c9e56a69292-59
load_all_available_meta (Optional[bool]) – doc_content_chars_max (Optional[int]) – load()[source] Loads the query result from Wikipedia into a list of Documents. Returns A list of Document objects representing the loaded Wikipedia pages. Return type List[Document] class langchain.document_loaders.YoutubeAudioLoader(urls, save_dir)[source] Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader Load YouTube urls as audio file(s). Parameters urls (List[str]) – save_dir (str) – yield_blobs()[source] Yield audio blobs for each url. Return type Iterable[langchain.document_loaders.blob_loaders.schema.Blob] class langchain.document_loaders.YoutubeLoader(video_id, add_video_info=False, language='en', translation='en', continue_on_failure=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Youtube transcripts. Parameters video_id (str) – add_video_info (bool) – language (Union[str, Sequence[str]]) – translation (str) – continue_on_failure (bool) – static extract_video_id(youtube_url)[source] Extract video id from common YT urls. Parameters youtube_url (str) – Return type str classmethod from_youtube_url(youtube_url, **kwargs)[source] Given youtube URL, load video. Parameters youtube_url (str) – kwargs (Any) – Return type langchain.document_loaders.youtube.YoutubeLoader load()[source] Load documents. Return type List[langchain.schema.Document]
https://api.python.langchain.com/en/stable/modules/document_loaders.html
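A minimal sketch of the YoutubeLoader.from_youtube_url constructor described above; the video URL is a placeholder, and the keyword arguments are the documented constructor parameters passed through **kwargs.

    from langchain.document_loaders import YoutubeLoader

    loader = YoutubeLoader.from_youtube_url(
        "https://www.youtube.com/watch?v=<video-id>",  # placeholder URL
        add_video_info=True,
        language="en",
    )
    docs = loader.load()  # transcript returned as Document(s)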
7fccba7e9300-0
Document Transformers Transform documents langchain.document_transformers.get_stateful_documents(documents)[source] Convert a list of documents to a list of documents with state. Parameters documents (Sequence[langchain.schema.Document]) – The documents to convert. Returns A list of documents with state. Return type Sequence[langchain.document_transformers._DocumentWithState] class langchain.document_transformers.EmbeddingsRedundantFilter(*, embeddings, similarity_fn=<function cosine_similarity>, similarity_threshold=0.95)[source] Bases: langchain.schema.BaseDocumentTransformer, pydantic.main.BaseModel Filter that drops redundant documents by comparing their embeddings. Parameters embeddings (langchain.embeddings.base.Embeddings) – similarity_fn (Callable) – similarity_threshold (float) – Return type None attribute embeddings: langchain.embeddings.base.Embeddings [Required] Embeddings to use for embedding document contents. attribute similarity_fn: Callable = <function cosine_similarity> Similarity function for comparing documents. Function expected to take as input two matrices (List[List[float]]) and return a matrix of scores where higher values indicate greater similarity. attribute similarity_threshold: float = 0.95 Threshold for determining when two documents are similar enough to be considered redundant. async atransform_documents(documents, **kwargs)[source] Asynchronously transform a list of documents. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document] transform_documents(documents, **kwargs)[source] Filter down documents. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document]
https://api.python.langchain.com/en/stable/modules/document_transformers.html
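A sketch of the EmbeddingsRedundantFilter described above, assuming OpenAIEmbeddings is available and an OpenAI API key is configured in the environment; docs stands for a previously loaded sequence of Documents.

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.document_transformers import EmbeddingsRedundantFilter

    redundant_filter = EmbeddingsRedundantFilter(
        embeddings=OpenAIEmbeddings(),
        similarity_threshold=0.95,   # documents above this similarity are treated as redundant
    )
    unique_docs = redundant_filter.transform_documents(docs)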
7fccba7e9300-1
kwargs (Any) – Return type Sequence[langchain.schema.Document] Text Splitters Functionality for splitting text. class langchain.text_splitter.TextSplitter(chunk_size=4000, chunk_overlap=200, length_function=<built-in function len>, keep_separator=False, add_start_index=False)[source] Bases: langchain.schema.BaseDocumentTransformer, abc.ABC Interface for splitting text into chunks. Parameters chunk_size (int) – chunk_overlap (int) – length_function (Callable[[str], int]) – keep_separator (bool) – add_start_index (bool) – Return type None abstract split_text(text)[source] Split text into multiple components. Parameters text (str) – Return type List[str] create_documents(texts, metadatas=None)[source] Create documents from a list of texts. Parameters texts (List[str]) – metadatas (Optional[List[dict]]) – Return type List[langchain.schema.Document] split_documents(documents)[source] Split documents. Parameters documents (Iterable[langchain.schema.Document]) – Return type List[langchain.schema.Document] classmethod from_huggingface_tokenizer(tokenizer, **kwargs)[source] Text splitter that uses HuggingFace tokenizer to count length. Parameters tokenizer (Any) – kwargs (Any) – Return type langchain.text_splitter.TextSplitter classmethod from_tiktoken_encoder(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source] Text splitter that uses tiktoken encoder to count length. Parameters encoding_name (str) – model_name (Optional[str]) –
https://api.python.langchain.com/en/stable/modules/document_transformers.html
7fccba7e9300-2
Parameters encoding_name (str) – model_name (Optional[str]) – allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) – disallowed_special (Union[Literal['all'], typing.Collection[str]]) – kwargs (Any) – Return type langchain.text_splitter.TS transform_documents(documents, **kwargs)[source] Transform sequence of documents by splitting them. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document] async atransform_documents(documents, **kwargs)[source] Asynchronously transform a sequence of documents by splitting them. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document] class langchain.text_splitter.CharacterTextSplitter(separator='\n\n', **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at characters. Parameters separator (str) – kwargs (Any) – Return type None split_text(text)[source] Split incoming text and return chunks. Parameters text (str) – Return type List[str] class langchain.text_splitter.LineType[source] Bases: TypedDict Line type as typed dict. metadata: Dict[str, str] content: str class langchain.text_splitter.HeaderType[source] Bases: TypedDict Header type as typed dict. level: int name: str data: str class langchain.text_splitter.MarkdownHeaderTextSplitter(headers_to_split_on, return_each_line=False)[source] Bases: object
https://api.python.langchain.com/en/stable/modules/document_transformers.html
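A minimal sketch for the CharacterTextSplitter described above; long_text stands for any string to be chunked.

    from langchain.text_splitter import CharacterTextSplitter

    splitter = CharacterTextSplitter(separator="\n\n", chunk_size=1000, chunk_overlap=200)
    chunks = splitter.split_text(long_text)        # List[str]
    docs = splitter.create_documents([long_text])  # List[Document]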
7fccba7e9300-3
Bases: object Implementation of splitting markdown files based on specified headers. Parameters headers_to_split_on (List[Tuple[str, str]]) – return_each_line (bool) – aggregate_lines_to_chunks(lines)[source] Combine lines with common metadata into chunks. Parameters lines (List[langchain.text_splitter.LineType]) – Line of text / associated header metadata Return type List[langchain.schema.Document] split_text(text)[source] Split markdown file. Parameters text (str) – Markdown file Return type List[langchain.schema.Document] class langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source] Bases: object Parameters chunk_overlap (int) – tokens_per_chunk (int) – decode (Callable[[list[int]], str]) – encode (Callable[[str], List[int]]) – Return type None chunk_overlap: int tokens_per_chunk: int decode: Callable[[list[int]], str] encode: Callable[[str], List[int]] langchain.text_splitter.split_text_on_tokens(*, text, tokenizer)[source] Split incoming text and return chunks. Parameters text (str) – tokenizer (langchain.text_splitter.Tokenizer) – Return type List[str] class langchain.text_splitter.TokenTextSplitter(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at tokens. Parameters
https://api.python.langchain.com/en/stable/modules/document_transformers.html
7fccba7e9300-4
Implementation of splitting text that looks at tokens. Parameters encoding_name (str) – model_name (Optional[str]) – allowed_special (Union[Literal['all'], AbstractSet[str]]) – disallowed_special (Union[Literal['all'], Collection[str]]) – kwargs (Any) – Return type None split_text(text)[source] Split text into multiple components. Parameters text (str) – Return type List[str] class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap=50, model_name='sentence-transformers/all-mpnet-base-v2', tokens_per_chunk=None, **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at tokens. Parameters chunk_overlap (int) – model_name (str) – tokens_per_chunk (Optional[int]) – kwargs (Any) – Return type None split_text(text)[source] Split text into multiple components. Parameters text (str) – Return type List[str] count_tokens(*, text)[source] Parameters text (str) – Return type int class langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source] Bases: str, enum.Enum CPP = 'cpp' GO = 'go' JAVA = 'java' JS = 'js' PHP = 'php' PROTO = 'proto' PYTHON = 'python' RST = 'rst' RUBY = 'ruby' RUST = 'rust'
https://api.python.langchain.com/en/stable/modules/document_transformers.html
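A hedged TokenTextSplitter example; it assumes the optional tiktoken package is installed, and the input string and sizes are arbitrary.

from langchain.text_splitter import TokenTextSplitter

# Requires `pip install tiktoken`; "gpt2" is the default encoding.
splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=256, chunk_overlap=32)

chunks = splitter.split_text("some long text " * 200)  # hypothetical input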
7fccba7e9300-5
RUBY = 'ruby' RUST = 'rust' SCALA = 'scala' SWIFT = 'swift' MARKDOWN = 'markdown' LATEX = 'latex' HTML = 'html' SOL = 'sol' class langchain.text_splitter.RecursiveCharacterTextSplitter(separators=None, keep_separator=True, **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at characters. Recursively tries to split by different characters to find one that works. Parameters separators (Optional[List[str]]) – keep_separator (bool) – kwargs (Any) – Return type None split_text(text)[source] Split text into multiple components. Parameters text (str) – Return type List[str] classmethod from_language(language, **kwargs)[source] Parameters language (langchain.text_splitter.Language) – kwargs (Any) – Return type langchain.text_splitter.RecursiveCharacterTextSplitter static get_separators_for_language(language)[source] Parameters language (langchain.text_splitter.Language) – Return type List[str] class langchain.text_splitter.NLTKTextSplitter(separator='\n\n', **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at sentences using NLTK. Parameters separator (str) – kwargs (Any) – Return type None split_text(text)[source] Split incoming text and return chunks. Parameters text (str) – Return type List[str]
https://api.python.langchain.com/en/stable/modules/document_transformers.html
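A sketch of language-aware splitting with RecursiveCharacterTextSplitter.from_language; the code snippet and chunk size are invented.

from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

python_code = '''
def hello():
    print("hello")

class Greeter:
    def greet(self):
        hello()
'''  # hypothetical source file

splitter = RecursiveCharacterTextSplitter.from_language(
    Language.PYTHON, chunk_size=60, chunk_overlap=0
)
docs = splitter.create_documents([python_code])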
7fccba7e9300-6
Parameters text (str) – Return type List[str] class langchain.text_splitter.SpacyTextSplitter(separator='\n\n', pipeline='en_core_web_sm', **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at sentences using Spacy. Parameters separator (str) – pipeline (str) – kwargs (Any) – Return type None split_text(text)[source] Split incoming text and return chunks. Parameters text (str) – Return type List[str] class langchain.text_splitter.PythonCodeTextSplitter(**kwargs)[source] Bases: langchain.text_splitter.RecursiveCharacterTextSplitter Attempts to split the text along Python syntax. Parameters kwargs (Any) – Return type None class langchain.text_splitter.MarkdownTextSplitter(**kwargs)[source] Bases: langchain.text_splitter.RecursiveCharacterTextSplitter Attempts to split the text along Markdown-formatted headings. Parameters kwargs (Any) – Return type None class langchain.text_splitter.LatexTextSplitter(**kwargs)[source] Bases: langchain.text_splitter.RecursiveCharacterTextSplitter Attempts to split the text along Latex-formatted layout elements. Parameters kwargs (Any) – Return type None
https://api.python.langchain.com/en/stable/modules/document_transformers.html
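The convenience subclasses above only pre-select separators; a brief sketch (the sample inputs are hypothetical, and SpacyTextSplitter additionally needs the spacy package plus the en_core_web_sm pipeline, so it is omitted here):

from langchain.text_splitter import MarkdownTextSplitter, PythonCodeTextSplitter

md_splitter = MarkdownTextSplitter(chunk_size=200, chunk_overlap=0)
md_chunks = md_splitter.split_text("# Title\n\nBody text ...")        # hypothetical markdown

py_splitter = PythonCodeTextSplitter(chunk_size=200, chunk_overlap=0)
py_chunks = py_splitter.split_text("def f():\n    return 1\n")        # hypothetical code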
e6d866776567-0
All modules for which code is available langchain.agents.agent langchain.agents.agent_toolkits.azure_cognitive_services.toolkit langchain.agents.agent_toolkits.csv.base langchain.agents.agent_toolkits.file_management.toolkit langchain.agents.agent_toolkits.gmail.toolkit langchain.agents.agent_toolkits.jira.toolkit langchain.agents.agent_toolkits.json.base langchain.agents.agent_toolkits.json.toolkit langchain.agents.agent_toolkits.nla.toolkit langchain.agents.agent_toolkits.openapi.base langchain.agents.agent_toolkits.openapi.toolkit langchain.agents.agent_toolkits.pandas.base langchain.agents.agent_toolkits.playwright.toolkit langchain.agents.agent_toolkits.powerbi.base langchain.agents.agent_toolkits.powerbi.chat_base langchain.agents.agent_toolkits.powerbi.toolkit langchain.agents.agent_toolkits.python.base langchain.agents.agent_toolkits.spark.base langchain.agents.agent_toolkits.spark_sql.base langchain.agents.agent_toolkits.spark_sql.toolkit langchain.agents.agent_toolkits.sql.base langchain.agents.agent_toolkits.sql.toolkit langchain.agents.agent_toolkits.vectorstore.base langchain.agents.agent_toolkits.vectorstore.toolkit langchain.agents.agent_toolkits.zapier.toolkit langchain.agents.agent_types langchain.agents.conversational.base langchain.agents.conversational_chat.base langchain.agents.initialize langchain.agents.load_tools langchain.agents.loading langchain.agents.mrkl.base langchain.agents.openai_functions_agent.base langchain.agents.react.base langchain.agents.self_ask_with_search.base langchain.agents.structured_chat.base langchain.callbacks.aim_callback langchain.callbacks.argilla_callback langchain.callbacks.arize_callback
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-1
langchain.callbacks.argilla_callback langchain.callbacks.arize_callback langchain.callbacks.clearml_callback langchain.callbacks.comet_ml_callback langchain.callbacks.file langchain.callbacks.human langchain.callbacks.infino_callback langchain.callbacks.manager langchain.callbacks.mlflow_callback langchain.callbacks.openai_info langchain.callbacks.stdout langchain.callbacks.streaming_aiter langchain.callbacks.streaming_stdout langchain.callbacks.streaming_stdout_final_only langchain.callbacks.streamlit langchain.callbacks.streamlit.streamlit_callback_handler langchain.callbacks.wandb_callback langchain.callbacks.whylabs_callback langchain.chains.api.base langchain.chains.api.openapi.chain langchain.chains.combine_documents.base langchain.chains.combine_documents.map_reduce langchain.chains.combine_documents.map_rerank langchain.chains.combine_documents.refine langchain.chains.combine_documents.stuff langchain.chains.constitutional_ai.base langchain.chains.conversation.base langchain.chains.conversational_retrieval.base langchain.chains.flare.base langchain.chains.graph_qa.base langchain.chains.graph_qa.cypher langchain.chains.graph_qa.kuzu langchain.chains.graph_qa.nebulagraph langchain.chains.hyde.base langchain.chains.llm langchain.chains.llm_bash.base langchain.chains.llm_checker.base langchain.chains.llm_math.base langchain.chains.llm_requests langchain.chains.llm_summarization_checker.base langchain.chains.loading langchain.chains.mapreduce langchain.chains.moderation langchain.chains.natbot.base langchain.chains.openai_functions.citation_fuzzy_match langchain.chains.openai_functions.extraction langchain.chains.openai_functions.qa_with_structure
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-2
langchain.chains.openai_functions.qa_with_structure langchain.chains.openai_functions.tagging langchain.chains.pal.base langchain.chains.qa_generation.base langchain.chains.qa_with_sources.base langchain.chains.qa_with_sources.retrieval langchain.chains.qa_with_sources.vector_db langchain.chains.retrieval_qa.base langchain.chains.router.base langchain.chains.router.llm_router langchain.chains.router.multi_prompt langchain.chains.router.multi_retrieval_qa langchain.chains.sequential langchain.chains.sql_database.base langchain.chains.transform langchain.chat_models.anthropic langchain.chat_models.azure_openai langchain.chat_models.fake langchain.chat_models.google_palm langchain.chat_models.openai langchain.chat_models.promptlayer_openai langchain.chat_models.vertexai langchain.document_loaders.acreom langchain.document_loaders.airbyte_json langchain.document_loaders.airtable langchain.document_loaders.apify_dataset langchain.document_loaders.arxiv langchain.document_loaders.azlyrics langchain.document_loaders.azure_blob_storage_container langchain.document_loaders.azure_blob_storage_file langchain.document_loaders.bibtex langchain.document_loaders.bigquery langchain.document_loaders.bilibili langchain.document_loaders.blackboard langchain.document_loaders.blob_loaders.file_system langchain.document_loaders.blob_loaders.schema langchain.document_loaders.blob_loaders.youtube_audio langchain.document_loaders.blockchain langchain.document_loaders.chatgpt langchain.document_loaders.college_confidential langchain.document_loaders.confluence langchain.document_loaders.conllu langchain.document_loaders.csv_loader langchain.document_loaders.dataframe langchain.document_loaders.diffbot
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-3
langchain.document_loaders.dataframe langchain.document_loaders.diffbot langchain.document_loaders.directory langchain.document_loaders.discord langchain.document_loaders.docugami langchain.document_loaders.duckdb_loader langchain.document_loaders.email langchain.document_loaders.embaas langchain.document_loaders.epub langchain.document_loaders.evernote langchain.document_loaders.excel langchain.document_loaders.facebook_chat langchain.document_loaders.fauna langchain.document_loaders.figma langchain.document_loaders.gcs_directory langchain.document_loaders.gcs_file langchain.document_loaders.git langchain.document_loaders.gitbook langchain.document_loaders.github langchain.document_loaders.googledrive langchain.document_loaders.gutenberg langchain.document_loaders.hn langchain.document_loaders.html langchain.document_loaders.html_bs langchain.document_loaders.hugging_face_dataset langchain.document_loaders.ifixit langchain.document_loaders.image langchain.document_loaders.image_captions langchain.document_loaders.imsdb langchain.document_loaders.iugu langchain.document_loaders.joplin langchain.document_loaders.json_loader langchain.document_loaders.larksuite langchain.document_loaders.markdown langchain.document_loaders.mastodon langchain.document_loaders.max_compute langchain.document_loaders.mediawikidump langchain.document_loaders.merge langchain.document_loaders.mhtml langchain.document_loaders.modern_treasury langchain.document_loaders.notebook langchain.document_loaders.notion langchain.document_loaders.notiondb langchain.document_loaders.obsidian langchain.document_loaders.odt langchain.document_loaders.onedrive langchain.document_loaders.onedrive_file
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-4
langchain.document_loaders.onedrive langchain.document_loaders.onedrive_file langchain.document_loaders.open_city_data langchain.document_loaders.org_mode langchain.document_loaders.pdf langchain.document_loaders.powerpoint langchain.document_loaders.psychic langchain.document_loaders.pyspark_dataframe langchain.document_loaders.python langchain.document_loaders.readthedocs langchain.document_loaders.recursive_url_loader langchain.document_loaders.reddit langchain.document_loaders.roam langchain.document_loaders.rst langchain.document_loaders.rtf langchain.document_loaders.s3_directory langchain.document_loaders.s3_file langchain.document_loaders.sitemap langchain.document_loaders.slack_directory langchain.document_loaders.snowflake_loader langchain.document_loaders.spreedly langchain.document_loaders.srt langchain.document_loaders.stripe langchain.document_loaders.telegram langchain.document_loaders.tencent_cos_directory langchain.document_loaders.tencent_cos_file langchain.document_loaders.text langchain.document_loaders.tomarkdown langchain.document_loaders.toml langchain.document_loaders.trello langchain.document_loaders.twitter langchain.document_loaders.unstructured langchain.document_loaders.url langchain.document_loaders.url_playwright langchain.document_loaders.url_selenium langchain.document_loaders.weather langchain.document_loaders.web_base langchain.document_loaders.whatsapp_chat langchain.document_loaders.wikipedia langchain.document_loaders.word_document langchain.document_loaders.xml langchain.document_loaders.youtube langchain.document_transformers langchain.embeddings.aleph_alpha langchain.embeddings.bedrock langchain.embeddings.cohere langchain.embeddings.dashscope langchain.embeddings.deepinfra langchain.embeddings.elasticsearch
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-5
langchain.embeddings.deepinfra langchain.embeddings.elasticsearch langchain.embeddings.embaas langchain.embeddings.fake langchain.embeddings.huggingface langchain.embeddings.huggingface_hub langchain.embeddings.llamacpp langchain.embeddings.minimax langchain.embeddings.modelscope_hub langchain.embeddings.mosaicml langchain.embeddings.openai langchain.embeddings.sagemaker_endpoint langchain.embeddings.self_hosted langchain.embeddings.self_hosted_hugging_face langchain.embeddings.tensorflow_hub langchain.experimental.autonomous_agents.autogpt.agent langchain.experimental.autonomous_agents.baby_agi.baby_agi langchain.experimental.generative_agents.generative_agent langchain.experimental.generative_agents.memory langchain.llms.ai21 langchain.llms.aleph_alpha langchain.llms.amazon_api_gateway langchain.llms.anthropic langchain.llms.anyscale langchain.llms.aviary langchain.llms.azureml_endpoint langchain.llms.bananadev langchain.llms.baseten langchain.llms.beam langchain.llms.bedrock langchain.llms.cerebriumai langchain.llms.clarifai langchain.llms.cohere langchain.llms.ctransformers langchain.llms.databricks langchain.llms.deepinfra langchain.llms.fake langchain.llms.forefrontai langchain.llms.google_palm langchain.llms.gooseai langchain.llms.gpt4all langchain.llms.huggingface_endpoint langchain.llms.huggingface_hub langchain.llms.huggingface_pipeline langchain.llms.huggingface_text_gen_inference langchain.llms.human langchain.llms.llamacpp langchain.llms.manifest
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-6
langchain.llms.llamacpp langchain.llms.manifest langchain.llms.modal langchain.llms.mosaicml langchain.llms.nlpcloud langchain.llms.octoai_endpoint langchain.llms.openai langchain.llms.openllm langchain.llms.openlm langchain.llms.petals langchain.llms.pipelineai langchain.llms.predictionguard langchain.llms.promptlayer_openai langchain.llms.replicate langchain.llms.rwkv langchain.llms.sagemaker_endpoint langchain.llms.self_hosted langchain.llms.self_hosted_hugging_face langchain.llms.stochasticai langchain.llms.textgen langchain.llms.vertexai langchain.llms.writer langchain.memory.buffer langchain.memory.buffer_window langchain.memory.chat_message_histories.cassandra langchain.memory.chat_message_histories.cosmos_db langchain.memory.chat_message_histories.dynamodb langchain.memory.chat_message_histories.file langchain.memory.chat_message_histories.in_memory langchain.memory.chat_message_histories.momento langchain.memory.chat_message_histories.mongodb langchain.memory.chat_message_histories.postgres langchain.memory.chat_message_histories.redis langchain.memory.chat_message_histories.sql langchain.memory.chat_message_histories.zep langchain.memory.combined langchain.memory.entity langchain.memory.kg langchain.memory.motorhead_memory langchain.memory.readonly langchain.memory.simple langchain.memory.summary langchain.memory.summary_buffer langchain.memory.token_buffer langchain.memory.vectorstore langchain.output_parsers.boolean langchain.output_parsers.combining langchain.output_parsers.datetime langchain.output_parsers.enum langchain.output_parsers.fix langchain.output_parsers.list
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-7
langchain.output_parsers.fix langchain.output_parsers.list langchain.output_parsers.pydantic langchain.output_parsers.rail_parser langchain.output_parsers.regex langchain.output_parsers.regex_dict langchain.output_parsers.retry langchain.output_parsers.structured langchain.prompts.base langchain.prompts.chat langchain.prompts.example_selector.length_based langchain.prompts.example_selector.ngram_overlap langchain.prompts.example_selector.semantic_similarity langchain.prompts.few_shot langchain.prompts.few_shot_with_templates langchain.prompts.loading langchain.prompts.pipeline langchain.prompts.prompt langchain.requests langchain.retrievers.arxiv langchain.retrievers.azure_cognitive_search langchain.retrievers.chatgpt_plugin_retriever langchain.retrievers.contextual_compression langchain.retrievers.databerry langchain.retrievers.docarray langchain.retrievers.document_compressors.base langchain.retrievers.document_compressors.chain_extract langchain.retrievers.document_compressors.chain_filter langchain.retrievers.document_compressors.cohere_rerank langchain.retrievers.document_compressors.embeddings_filter langchain.retrievers.elastic_search_bm25 langchain.retrievers.kendra langchain.retrievers.knn langchain.retrievers.llama_index langchain.retrievers.merger_retriever langchain.retrievers.metal langchain.retrievers.milvus langchain.retrievers.multi_query langchain.retrievers.pinecone_hybrid_search langchain.retrievers.pupmed langchain.retrievers.remote_retriever langchain.retrievers.self_query.base langchain.retrievers.svm langchain.retrievers.tfidf langchain.retrievers.time_weighted_retriever
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-8
langchain.retrievers.tfidf langchain.retrievers.time_weighted_retriever langchain.retrievers.vespa_retriever langchain.retrievers.weaviate_hybrid_search langchain.retrievers.wikipedia langchain.retrievers.zep langchain.retrievers.zilliz langchain.schema langchain.text_splitter langchain.tools.arxiv.tool langchain.tools.azure_cognitive_services.form_recognizer langchain.tools.azure_cognitive_services.image_analysis langchain.tools.azure_cognitive_services.speech2text langchain.tools.azure_cognitive_services.text2speech langchain.tools.base langchain.tools.bing_search.tool langchain.tools.brave_search.tool langchain.tools.convert_to_openai langchain.tools.ddg_search.tool langchain.tools.file_management.copy langchain.tools.file_management.delete langchain.tools.file_management.file_search langchain.tools.file_management.list_dir langchain.tools.file_management.move langchain.tools.file_management.read langchain.tools.file_management.write langchain.tools.gmail.create_draft langchain.tools.gmail.get_message langchain.tools.gmail.get_thread langchain.tools.gmail.search langchain.tools.gmail.send_message langchain.tools.google_places.tool langchain.tools.google_search.tool langchain.tools.google_serper.tool langchain.tools.graphql.tool langchain.tools.human.tool langchain.tools.ifttt langchain.tools.interaction.tool langchain.tools.jira.tool langchain.tools.json.tool langchain.tools.metaphor_search.tool langchain.tools.openapi.utils.api_models langchain.tools.openweathermap.tool langchain.tools.playwright.click langchain.tools.playwright.current_page langchain.tools.playwright.extract_hyperlinks langchain.tools.playwright.extract_text langchain.tools.playwright.get_elements langchain.tools.playwright.navigate langchain.tools.playwright.navigate_back langchain.tools.plugin
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-9
langchain.tools.playwright.navigate langchain.tools.playwright.navigate_back langchain.tools.plugin langchain.tools.powerbi.tool langchain.tools.pubmed.tool langchain.tools.python.tool langchain.tools.requests.tool langchain.tools.scenexplain.tool langchain.tools.searx_search.tool langchain.tools.shell.tool langchain.tools.sleep.tool langchain.tools.spark_sql.tool langchain.tools.sql_database.tool langchain.tools.steamship_image_generation.tool langchain.tools.vectorstore.tool langchain.tools.wikipedia.tool langchain.tools.wolfram_alpha.tool langchain.tools.youtube.search langchain.tools.zapier.tool langchain.utilities.apify langchain.utilities.arxiv langchain.utilities.awslambda langchain.utilities.bash langchain.utilities.bibtex langchain.utilities.bing_search langchain.utilities.brave_search langchain.utilities.duckduckgo_search langchain.utilities.google_places_api langchain.utilities.google_search langchain.utilities.google_serper langchain.utilities.graphql langchain.utilities.jira langchain.utilities.max_compute langchain.utilities.metaphor_search langchain.utilities.openapi langchain.utilities.openweathermap langchain.utilities.powerbi langchain.utilities.pupmed langchain.utilities.python langchain.utilities.scenexplain langchain.utilities.searx_search langchain.utilities.serpapi langchain.utilities.spark_sql langchain.utilities.twilio langchain.utilities.wikipedia langchain.utilities.wolfram_alpha langchain.utilities.zapier langchain.vectorstores.alibabacloud_opensearch langchain.vectorstores.analyticdb langchain.vectorstores.annoy langchain.vectorstores.atlas langchain.vectorstores.awadb langchain.vectorstores.azuresearch langchain.vectorstores.base langchain.vectorstores.cassandra langchain.vectorstores.chroma langchain.vectorstores.clarifai
https://api.python.langchain.com/en/stable/_modules/index.html
e6d866776567-10
langchain.vectorstores.chroma langchain.vectorstores.clarifai langchain.vectorstores.clickhouse langchain.vectorstores.deeplake langchain.vectorstores.docarray.hnsw langchain.vectorstores.docarray.in_memory langchain.vectorstores.elastic_vector_search langchain.vectorstores.faiss langchain.vectorstores.hologres langchain.vectorstores.lancedb langchain.vectorstores.matching_engine langchain.vectorstores.milvus langchain.vectorstores.mongodb_atlas langchain.vectorstores.myscale langchain.vectorstores.opensearch_vector_search langchain.vectorstores.pinecone langchain.vectorstores.qdrant langchain.vectorstores.redis langchain.vectorstores.rocksetdb langchain.vectorstores.singlestoredb langchain.vectorstores.sklearn langchain.vectorstores.starrocks langchain.vectorstores.supabase langchain.vectorstores.tair langchain.vectorstores.tigris langchain.vectorstores.typesense langchain.vectorstores.vectara langchain.vectorstores.weaviate langchain.vectorstores.zilliz pydantic.config pydantic.main
https://api.python.langchain.com/en/stable/_modules/index.html
b51f8900a521-0
Source code for langchain.text_splitter """Functionality for splitting text.""" from __future__ import annotations import copy import logging import re from abc import ABC, abstractmethod from dataclasses import dataclass from enum import Enum from typing import ( AbstractSet, Any, Callable, Collection, Dict, Iterable, List, Literal, Optional, Sequence, Tuple, Type, TypedDict, TypeVar, Union, cast, ) from langchain.docstore.document import Document from langchain.schema import BaseDocumentTransformer logger = logging.getLogger(__name__) TS = TypeVar("TS", bound="TextSplitter") def _split_text_with_regex( text: str, separator: str, keep_separator: bool ) -> List[str]: # Now that we have the separator, split the text if separator: if keep_separator: # The parentheses in the pattern keep the delimiters in the result. _splits = re.split(f"({separator})", text) splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)] if len(_splits) % 2 == 0: splits += _splits[-1:] splits = [_splits[0]] + splits else: splits = text.split(separator) else: splits = list(text) return [s for s in splits if s != ""] [docs]class TextSplitter(BaseDocumentTransformer, ABC): """Interface for splitting text into chunks.""" def __init__( self,
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
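To make the keep_separator behaviour concrete, a small trace of the private _split_text_with_regex helper shown above (an internal detail, so treat this as a sketch rather than a public contract):

from langchain.text_splitter import _split_text_with_regex  # private helper

# keep_separator=True re-attaches each separator to the chunk that follows it:
#   ['a', '\n\nb', '\n\nc']
print(_split_text_with_regex("a\n\nb\n\nc", "\n\n", keep_separator=True))

# keep_separator=False behaves like str.split and drops the separator:
#   ['a', 'b', 'c']
print(_split_text_with_regex("a\n\nb\n\nc", "\n\n", keep_separator=False))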
b51f8900a521-1
"""Interface for splitting text into chunks.""" def __init__( self, chunk_size: int = 4000, chunk_overlap: int = 200, length_function: Callable[[str], int] = len, keep_separator: bool = False, add_start_index: bool = False, ) -> None: """Create a new TextSplitter. Args: chunk_size: Maximum size of chunks to return chunk_overlap: Overlap in characters between chunks length_function: Function that measures the length of given chunks keep_separator: Whether or not to keep the separator in the chunks add_start_index: If `True`, includes chunk's start index in metadata """ if chunk_overlap > chunk_size: raise ValueError( f"Got a larger chunk overlap ({chunk_overlap}) than chunk size " f"({chunk_size}), should be smaller." ) self._chunk_size = chunk_size self._chunk_overlap = chunk_overlap self._length_function = length_function self._keep_separator = keep_separator self._add_start_index = add_start_index [docs] @abstractmethod def split_text(self, text: str) -> List[str]: """Split text into multiple components.""" [docs] def create_documents( self, texts: List[str], metadatas: Optional[List[dict]] = None ) -> List[Document]: """Create documents from a list of texts.""" _metadatas = metadatas or [{}] * len(texts) documents = [] for i, text in enumerate(texts): index = -1 for chunk in self.split_text(text): metadata = copy.deepcopy(_metadatas[i])
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
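A hedged example of create_documents with per-text metadata and add_start_index; the corpus and metadata values are invented.

from langchain.text_splitter import CharacterTextSplitter

texts = ["alpha\n\nbeta\n\ngamma"]               # hypothetical corpus
metadatas = [{"source": "notes.txt"}]            # hypothetical metadata

splitter = CharacterTextSplitter(
    separator="\n\n", chunk_size=10, chunk_overlap=0, add_start_index=True
)

# Each chunk Document gets a deep copy of its text's metadata plus, because
# add_start_index=True, the character offset of the chunk in the original text.
docs = splitter.create_documents(texts, metadatas=metadatas)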
b51f8900a521-2
metadata = copy.deepcopy(_metadatas[i]) if self._add_start_index: index = text.find(chunk, index + 1) metadata["start_index"] = index new_doc = Document(page_content=chunk, metadata=metadata) documents.append(new_doc) return documents [docs] def split_documents(self, documents: Iterable[Document]) -> List[Document]: """Split documents.""" texts, metadatas = [], [] for doc in documents: texts.append(doc.page_content) metadatas.append(doc.metadata) return self.create_documents(texts, metadatas=metadatas) def _join_docs(self, docs: List[str], separator: str) -> Optional[str]: text = separator.join(docs) text = text.strip() if text == "": return None else: return text def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]: # We now want to combine these smaller pieces into medium size # chunks to send to the LLM. separator_len = self._length_function(separator) docs = [] current_doc: List[str] = [] total = 0 for d in splits: _len = self._length_function(d) if ( total + _len + (separator_len if len(current_doc) > 0 else 0) > self._chunk_size ): if total > self._chunk_size: logger.warning( f"Created a chunk of size {total}, " f"which is longer than the specified {self._chunk_size}" ) if len(current_doc) > 0:
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
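split_documents is a thin wrapper over create_documents, as sketched below with invented Document contents.

from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter

docs_in = [
    Document(page_content="one\n\ntwo\n\nthree", metadata={"source": "a.txt"}),
    Document(page_content="four\n\nfive", metadata={"source": "b.txt"}),
]  # hypothetical documents

splitter = CharacterTextSplitter(separator="\n\n", chunk_size=10, chunk_overlap=0)

# page_content/metadata pairs are pulled out and routed through create_documents,
# so every output chunk keeps the metadata of the document it came from.
docs_out = splitter.split_documents(docs_in)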
b51f8900a521-3
) if len(current_doc) > 0: doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) # Keep on popping if: # - we have a larger chunk than in the chunk overlap # - or if we still have any chunks and the length is long while total > self._chunk_overlap or ( total + _len + (separator_len if len(current_doc) > 0 else 0) > self._chunk_size and total > 0 ): total -= self._length_function(current_doc[0]) + ( separator_len if len(current_doc) > 1 else 0 ) current_doc = current_doc[1:] current_doc.append(d) total += _len + (separator_len if len(current_doc) > 1 else 0) doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) return docs [docs] @classmethod def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter: """Text splitter that uses HuggingFace tokenizer to count length.""" try: from transformers import PreTrainedTokenizerBase if not isinstance(tokenizer, PreTrainedTokenizerBase): raise ValueError( "Tokenizer received was not an instance of PreTrainedTokenizerBase" ) def _huggingface_tokenizer_length(text: str) -> int: return len(tokenizer.encode(text)) except ImportError: raise ValueError( "Could not import transformers python package. " "Please install it with `pip install transformers`." )
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
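A hedged example of from_huggingface_tokenizer; it assumes the transformers package is installed and that the gpt2 tokenizer can be downloaded, and the chunk sizes are arbitrary.

from transformers import AutoTokenizer
from langchain.text_splitter import CharacterTextSplitter

# Requires `pip install transformers`; downloads the tokenizer on first use.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Chunk length is now measured in gpt2 tokens rather than characters.
splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=10
)
chunks = splitter.split_text("some long text " * 200)  # hypothetical input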
b51f8900a521-4
"Please install it with `pip install transformers`." ) return cls(length_function=_huggingface_tokenizer_length, **kwargs) [docs] @classmethod def from_tiktoken_encoder( cls: Type[TS], encoding_name: str = "gpt2", model_name: Optional[str] = None, allowed_special: Union[Literal["all"], AbstractSet[str]] = set(), disallowed_special: Union[Literal["all"], Collection[str]] = "all", **kwargs: Any, ) -> TS: """Text splitter that uses tiktoken encoder to count length.""" try: import tiktoken except ImportError: raise ImportError( "Could not import tiktoken python package. " "This is needed in order to calculate max_tokens_for_prompt. " "Please install it with `pip install tiktoken`." ) if model_name is not None: enc = tiktoken.encoding_for_model(model_name) else: enc = tiktoken.get_encoding(encoding_name) def _tiktoken_encoder(text: str) -> int: return len( enc.encode( text, allowed_special=allowed_special, disallowed_special=disallowed_special, ) ) if issubclass(cls, TokenTextSplitter): extra_kwargs = { "encoding_name": encoding_name, "model_name": model_name, "allowed_special": allowed_special, "disallowed_special": disallowed_special, } kwargs = {**kwargs, **extra_kwargs} return cls(length_function=_tiktoken_encoder, **kwargs) [docs] def transform_documents(
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
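A hedged example of from_tiktoken_encoder on a non-token splitter; it assumes tiktoken is installed, and the model name and sizes are illustrative.

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Requires `pip install tiktoken`. Length is measured in tokens of the chosen
# encoding while the splitting itself still happens on characters.
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    model_name="gpt-3.5-turbo", chunk_size=500, chunk_overlap=50
)
chunks = splitter.split_text("some long text " * 200)  # hypothetical input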
b51f8900a521-5
[docs] def transform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Transform sequence of documents by splitting them.""" return self.split_documents(list(documents)) [docs] async def atransform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Asynchronously transform a sequence of documents by splitting them.""" raise NotImplementedError [docs]class CharacterTextSplitter(TextSplitter): """Implementation of splitting text that looks at characters.""" def __init__(self, separator: str = "\n\n", **kwargs: Any) -> None: """Create a new TextSplitter.""" super().__init__(**kwargs) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" # First we naively split the large input into a bunch of smaller ones. splits = _split_text_with_regex(text, self._separator, self._keep_separator) _separator = "" if self._keep_separator else self._separator return self._merge_splits(splits, _separator) [docs]class LineType(TypedDict): """Line type as typed dict.""" metadata: Dict[str, str] content: str [docs]class HeaderType(TypedDict): """Header type as typed dict.""" level: int name: str data: str [docs]class MarkdownHeaderTextSplitter: """Implementation of splitting markdown files based on specified headers.""" def __init__( self, headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False ):
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-6
): """Create a new MarkdownHeaderTextSplitter. Args: headers_to_split_on: Headers we want to track return_each_line: Return each line w/ associated headers """ # Output line-by-line or aggregated into chunks w/ common headers self.return_each_line = return_each_line # Given the headers we want to split on, # (e.g., "#, ##, etc") order by length self.headers_to_split_on = sorted( headers_to_split_on, key=lambda split: len(split[0]), reverse=True ) [docs] def aggregate_lines_to_chunks(self, lines: List[LineType]) -> List[Document]: """Combine lines with common metadata into chunks Args: lines: Line of text / associated header metadata """ aggregated_chunks: List[LineType] = [] for line in lines: if ( aggregated_chunks and aggregated_chunks[-1]["metadata"] == line["metadata"] ): # If the last line in the aggregated list # has the same metadata as the current line, # append the current content to the last lines's content aggregated_chunks[-1]["content"] += " \n" + line["content"] else: # Otherwise, append the current line to the aggregated list aggregated_chunks.append(line) return [ Document(page_content=chunk["content"], metadata=chunk["metadata"]) for chunk in aggregated_chunks ] [docs] def split_text(self, text: str) -> List[Document]: """Split markdown file Args: text: Markdown file""" # Split the input text by newline character ("\n"). lines = text.split("\n") # Final output
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-7
lines = text.split("\n") # Final output lines_with_metadata: List[LineType] = [] # Content and metadata of the chunk currently being processed current_content: List[str] = [] current_metadata: Dict[str, str] = {} # Keep track of the nested header structure # header_stack: List[Dict[str, Union[int, str]]] = [] header_stack: List[HeaderType] = [] initial_metadata: Dict[str, str] = {} for line in lines: stripped_line = line.strip() # Check each line against each of the header types (e.g., #, ##) for sep, name in self.headers_to_split_on: # Check if line starts with a header that we intend to split on if stripped_line.startswith(sep) and ( # Header with no text OR header is followed by space # Both are valid conditions that sep is being used a header len(stripped_line) == len(sep) or stripped_line[len(sep)] == " " ): # Ensure we are tracking the header as metadata if name is not None: # Get the current header level current_header_level = sep.count("#") # Pop out headers of lower or same level from the stack while ( header_stack and header_stack[-1]["level"] >= current_header_level ): # We have encountered a new header # at the same or higher level popped_header = header_stack.pop() # Clear the metadata for the # popped header in initial_metadata if popped_header["name"] in initial_metadata: initial_metadata.pop(popped_header["name"]) # Push the current header to the stack
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-8
# Push the current header to the stack header: HeaderType = { "level": current_header_level, "name": name, "data": stripped_line[len(sep) :].strip(), } header_stack.append(header) # Update initial_metadata with the current header initial_metadata[name] = header["data"] # Add the previous line to the lines_with_metadata # only if current_content is not empty if current_content: lines_with_metadata.append( { "content": "\n".join(current_content), "metadata": current_metadata.copy(), } ) current_content.clear() break else: if stripped_line: current_content.append(stripped_line) elif current_content: lines_with_metadata.append( { "content": "\n".join(current_content), "metadata": current_metadata.copy(), } ) current_content.clear() current_metadata = initial_metadata.copy() if current_content: lines_with_metadata.append( {"content": "\n".join(current_content), "metadata": current_metadata} ) # lines_with_metadata has each line with associated header metadata # aggregate these into chunks based on common metadata if not self.return_each_line: return self.aggregate_lines_to_chunks(lines_with_metadata) else: return [ Document(page_content=chunk["content"], metadata=chunk["metadata"]) for chunk in lines_with_metadata ] # should be in newer Python versions (3.10+) # @dataclass(frozen=True, kw_only=True, slots=True) [docs]@dataclass(frozen=True) class Tokenizer: chunk_overlap: int
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
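To make the header-stack logic above concrete, a small worked example (the markdown and header labels are invented):

from langchain.text_splitter import MarkdownHeaderTextSplitter

md = "# Title\n\nIntro text\n\n## Section\n\nBody text"  # hypothetical input
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=[("#", "H1"), ("##", "H2")])

# With the default return_each_line=False, lines are aggregated by metadata:
#   Document(page_content="Intro text", metadata={"H1": "Title"})
#   Document(page_content="Body text", metadata={"H1": "Title", "H2": "Section"})
docs = splitter.split_text(md)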
b51f8900a521-9
class Tokenizer: chunk_overlap: int tokens_per_chunk: int decode: Callable[[list[int]], str] encode: Callable[[str], List[int]] [docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]: """Split incoming text and return chunks.""" splits: List[str] = [] input_ids = tokenizer.encode(text) start_idx = 0 cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids)) chunk_ids = input_ids[start_idx:cur_idx] while start_idx < len(input_ids): splits.append(tokenizer.decode(chunk_ids)) start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids)) chunk_ids = input_ids[start_idx:cur_idx] return splits [docs]class TokenTextSplitter(TextSplitter): """Implementation of splitting text that looks at tokens.""" def __init__( self, encoding_name: str = "gpt2", model_name: Optional[str] = None, allowed_special: Union[Literal["all"], AbstractSet[str]] = set(), disallowed_special: Union[Literal["all"], Collection[str]] = "all", **kwargs: Any, ) -> None: """Create a new TextSplitter.""" super().__init__(**kwargs) try: import tiktoken except ImportError: raise ImportError( "Could not import tiktoken python package. " "This is needed in order to use TokenTextSplitter. " "Please install it with `pip install tiktoken`." ) if model_name is not None:
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
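A tiny self-contained sketch of split_text_on_tokens using a toy character-level Tokenizer; real usage would plug in a tiktoken or HuggingFace encoder instead.

from langchain.text_splitter import Tokenizer, split_text_on_tokens

# Toy "tokenizer": one token per character, encoded as its code point.
tok = Tokenizer(
    chunk_overlap=2,
    tokens_per_chunk=10,
    decode=lambda ids: "".join(chr(i) for i in ids),
    encode=lambda text: [ord(c) for c in text],
)

# Windows of 10 tokens advance by 10 - 2 = 8, so consecutive chunks share
# a 2-character overlap: ['abcdefghij', 'ijklmno']
print(split_text_on_tokens(text="abcdefghijklmno", tokenizer=tok))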
b51f8900a521-10
) if model_name is not None: enc = tiktoken.encoding_for_model(model_name) else: enc = tiktoken.get_encoding(encoding_name) self._tokenizer = enc self._allowed_special = allowed_special self._disallowed_special = disallowed_special [docs] def split_text(self, text: str) -> List[str]: def _encode(_text: str) -> List[int]: return self._tokenizer.encode( _text, allowed_special=self._allowed_special, disallowed_special=self._disallowed_special, ) tokenizer = Tokenizer( chunk_overlap=self._chunk_overlap, tokens_per_chunk=self._chunk_size, decode=self._tokenizer.decode, encode=_encode, ) return split_text_on_tokens(text=text, tokenizer=tokenizer) [docs]class SentenceTransformersTokenTextSplitter(TextSplitter): """Implementation of splitting text that looks at tokens.""" def __init__( self, chunk_overlap: int = 50, model_name: str = "sentence-transformers/all-mpnet-base-v2", tokens_per_chunk: Optional[int] = None, **kwargs: Any, ) -> None: """Create a new TextSplitter.""" super().__init__(**kwargs, chunk_overlap=chunk_overlap) try: from sentence_transformers import SentenceTransformer except ImportError: raise ImportError( "Could not import sentence_transformers python package. " "This is needed in order to use SentenceTransformersTokenTextSplitter. " "Please install it with `pip install sentence-transformers`." ) self.model_name = model_name
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-11
) self.model_name = model_name self._model = SentenceTransformer(self.model_name) self.tokenizer = self._model.tokenizer self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk) def _initialize_chunk_configuration( self, *, tokens_per_chunk: Optional[int] ) -> None: self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length) if tokens_per_chunk is None: self.tokens_per_chunk = self.maximum_tokens_per_chunk else: self.tokens_per_chunk = tokens_per_chunk if self.tokens_per_chunk > self.maximum_tokens_per_chunk: raise ValueError( f"The token limit of the models '{self.model_name}'" f" is: {self.maximum_tokens_per_chunk}." f" Argument tokens_per_chunk={self.tokens_per_chunk}" f" > maximum token limit." ) [docs] def split_text(self, text: str) -> List[str]: def encode_strip_start_and_stop_token_ids(text: str) -> List[int]: return self._encode(text)[1:-1] tokenizer = Tokenizer( chunk_overlap=self._chunk_overlap, tokens_per_chunk=self.tokens_per_chunk, decode=self.tokenizer.decode, encode=encode_strip_start_and_stop_token_ids, ) return split_text_on_tokens(text=text, tokenizer=tokenizer) [docs] def count_tokens(self, *, text: str) -> int: return len(self._encode(text)) _max_length_equal_32_bit_integer = 2**32 def _encode(self, text: str) -> List[int]: token_ids_with_start_and_end_token_ids = self.tokenizer.encode( text,
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
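A hedged sketch of SentenceTransformersTokenTextSplitter; it assumes sentence-transformers is installed and that the default all-mpnet-base-v2 model can be downloaded.

from langchain.text_splitter import SentenceTransformersTokenTextSplitter

# Requires `pip install sentence-transformers`; tokens_per_chunk defaults to,
# and is capped at, the model's max_seq_length.
splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=50)

text = "some long text " * 200        # hypothetical input
print(splitter.count_tokens(text=text))
chunks = splitter.split_text(text)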
b51f8900a521-12
token_ids_with_start_and_end_token_ids = self.tokenizer.encode( text, max_length=self._max_length_equal_32_bit_integer, truncation="do_not_truncate", ) return token_ids_with_start_and_end_token_ids [docs]class Language(str, Enum): CPP = "cpp" GO = "go" JAVA = "java" JS = "js" PHP = "php" PROTO = "proto" PYTHON = "python" RST = "rst" RUBY = "ruby" RUST = "rust" SCALA = "scala" SWIFT = "swift" MARKDOWN = "markdown" LATEX = "latex" HTML = "html" SOL = "sol" [docs]class RecursiveCharacterTextSplitter(TextSplitter): """Implementation of splitting text that looks at characters. Recursively tries to split by different characters to find one that works. """ def __init__( self, separators: Optional[List[str]] = None, keep_separator: bool = True, **kwargs: Any, ) -> None: """Create a new TextSplitter.""" super().__init__(keep_separator=keep_separator, **kwargs) self._separators = separators or ["\n\n", "\n", " ", ""] def _split_text(self, text: str, separators: List[str]) -> List[str]: """Split incoming text and return chunks.""" final_chunks = [] # Get appropriate separator to use separator = separators[-1] new_separators = [] for i, _s in enumerate(separators):
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-13
for i, _s in enumerate(separators): if _s == "": separator = _s break if re.search(_s, text): separator = _s new_separators = separators[i + 1 :] break splits = _split_text_with_regex(text, separator, self._keep_separator) # Now go merging things, recursively splitting longer texts. _good_splits = [] _separator = "" if self._keep_separator else separator for s in splits: if self._length_function(s) < self._chunk_size: _good_splits.append(s) else: if _good_splits: merged_text = self._merge_splits(_good_splits, _separator) final_chunks.extend(merged_text) _good_splits = [] if not new_separators: final_chunks.append(s) else: other_info = self._split_text(s, new_separators) final_chunks.extend(other_info) if _good_splits: merged_text = self._merge_splits(_good_splits, _separator) final_chunks.extend(merged_text) return final_chunks [docs] def split_text(self, text: str) -> List[str]: return self._split_text(text, self._separators) [docs] @classmethod def from_language( cls, language: Language, **kwargs: Any ) -> RecursiveCharacterTextSplitter: separators = cls.get_separators_for_language(language) return cls(separators=separators, **kwargs) [docs] @staticmethod def get_separators_for_language(language: Language) -> List[str]: if language == Language.CPP: return [
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
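The per-language separator lists above can be inspected or extended; a sketch (the extra decorator separator is a made-up customization):

from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

# For Language.PYTHON this returns
# ["\nclass ", "\ndef ", "\n\tdef ", "\n\n", "\n", " ", ""]
seps = RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)

# The list can be tweaked and passed back in explicitly.
splitter = RecursiveCharacterTextSplitter(
    separators=["\n@"] + seps, chunk_size=400, chunk_overlap=0
)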
b51f8900a521-14
if language == Language.CPP: return [ # Split along class definitions "\nclass ", # Split along function definitions "\nvoid ", "\nint ", "\nfloat ", "\ndouble ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.GO: return [ # Split along function definitions "\nfunc ", "\nvar ", "\nconst ", "\ntype ", # Split along control flow statements "\nif ", "\nfor ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.JAVA: return [ # Split along class definitions "\nclass ", # Split along method definitions "\npublic ", "\nprotected ", "\nprivate ", "\nstatic ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.JS: return [ # Split along function definitions "\nfunction ", "\nconst ", "\nlet ",
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-15
"\nfunction ", "\nconst ", "\nlet ", "\nvar ", "\nclass ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nswitch ", "\ncase ", "\ndefault ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.PHP: return [ # Split along function definitions "\nfunction ", # Split along class definitions "\nclass ", # Split along control flow statements "\nif ", "\nforeach ", "\nwhile ", "\ndo ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.PROTO: return [ # Split along message definitions "\nmessage ", # Split along service definitions "\nservice ", # Split along enum definitions "\nenum ", # Split along option definitions "\noption ", # Split along import statements "\nimport ", # Split along syntax declarations "\nsyntax ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.PYTHON: return [ # First, try to split along class definitions "\nclass ", "\ndef ", "\n\tdef ", # Now split by the normal type of lines "\n\n",
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-16
# Now split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.RST: return [ # Split along section titles "\n=+\n", "\n-+\n", "\n\*+\n", # Split along directive markers "\n\n.. *\n\n", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.RUBY: return [ # Split along method definitions "\ndef ", "\nclass ", # Split along control flow statements "\nif ", "\nunless ", "\nwhile ", "\nfor ", "\ndo ", "\nbegin ", "\nrescue ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.RUST: return [ # Split along function definitions "\nfn ", "\nconst ", "\nlet ", # Split along control flow statements "\nif ", "\nwhile ", "\nfor ", "\nloop ", "\nmatch ", "\nconst ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.SCALA: return [ # Split along class definitions "\nclass ", "\nobject ", # Split along method definitions "\ndef ",
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-17
"\nobject ", # Split along method definitions "\ndef ", "\nval ", "\nvar ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nmatch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.SWIFT: return [ # Split along function definitions "\nfunc ", # Split along class definitions "\nclass ", "\nstruct ", "\nenum ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\ndo ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.MARKDOWN: return [ # First, try to split along Markdown headings (starting with level 2) "\n#{1,6} ", # Note the alternative syntax for headings (below) is not handled here # Heading level 2 # --------------- # End of code block "```\n", # Horizontal lines "\n\*\*\*+\n", "\n---+\n", "\n___+\n", # Note that this splitter doesn't handle horizontal lines defined # by *three or more* of ***, ---, or ___, but this is not handled "\n\n", "\n", " ", "", ]
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-18
"\n", " ", "", ] elif language == Language.LATEX: return [ # First, try to split along Latex sections "\n\\\chapter{", "\n\\\section{", "\n\\\subsection{", "\n\\\subsubsection{", # Now split by environments "\n\\\begin{enumerate}", "\n\\\begin{itemize}", "\n\\\begin{description}", "\n\\\begin{list}", "\n\\\begin{quote}", "\n\\\begin{quotation}", "\n\\\begin{verse}", "\n\\\begin{verbatim}", # Now split by math environments "\n\\\begin{align}", "$$", "$", # Now split by the normal type of lines " ", "", ] elif language == Language.HTML: return [ # First, try to split along HTML tags "<body", "<div", "<p", "<br", "<li", "<h1", "<h2", "<h3", "<h4", "<h5", "<h6", "<span", "<table", "<tr", "<td", "<th", "<ul", "<ol", "<header", "<footer", "<nav", # Head "<head", "<style", "<script", "<meta", "<title", "", ] elif language == Language.SOL: return [ # Split along compiler informations definitions "\npragma ",
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-19
return [ # Split along compiler information definitions "\npragma ", "\nusing ", # Split along contract definitions "\ncontract ", "\ninterface ", "\nlibrary ", # Split along method definitions "\nconstructor ", "\ntype ", "\nfunction ", "\nevent ", "\nmodifier ", "\nerror ", "\nstruct ", "\nenum ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\ndo while ", "\nassembly ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] else: raise ValueError( f"Language {language} is not supported! " f"Please choose from {list(Language)}" ) [docs]class NLTKTextSplitter(TextSplitter): """Implementation of splitting text that looks at sentences using NLTK.""" def __init__(self, separator: str = "\n\n", **kwargs: Any) -> None: """Initialize the NLTK splitter.""" super().__init__(**kwargs) try: from nltk.tokenize import sent_tokenize self._tokenizer = sent_tokenize except ImportError: raise ImportError( "NLTK is not installed, please install it with `pip install nltk`." ) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" # First we naively split the large input into a bunch of smaller ones. splits = self._tokenizer(text)
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
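A hedged NLTKTextSplitter example; it assumes nltk is installed and downloads the punkt sentence model, and the input text is invented.

import nltk

from langchain.text_splitter import NLTKTextSplitter

# sent_tokenize needs the punkt model; fetch it once if it is not present.
nltk.download("punkt")

splitter = NLTKTextSplitter(chunk_size=200, chunk_overlap=0)
chunks = splitter.split_text("First sentence. Second sentence. Third sentence.")  # hypothetical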
b51f8900a521-20
splits = self._tokenizer(text) return self._merge_splits(splits, self._separator) [docs]class SpacyTextSplitter(TextSplitter): """Implementation of splitting text that looks at sentences using Spacy.""" def __init__( self, separator: str = "\n\n", pipeline: str = "en_core_web_sm", **kwargs: Any ) -> None: """Initialize the spacy text splitter.""" super().__init__(**kwargs) try: import spacy except ImportError: raise ImportError( "Spacy is not installed, please install it with `pip install spacy`." ) self._tokenizer = spacy.load(pipeline) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" splits = (str(s) for s in self._tokenizer(text).sents) return self._merge_splits(splits, self._separator) # For backwards compatibility [docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Python syntax.""" def __init__(self, **kwargs: Any) -> None: """Initialize a PythonCodeTextSplitter.""" separators = self.get_separators_for_language(Language.PYTHON) super().__init__(separators=separators, **kwargs) [docs]class MarkdownTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Markdown-formatted headings.""" def __init__(self, **kwargs: Any) -> None: """Initialize a MarkdownTextSplitter.""" separators = self.get_separators_for_language(Language.MARKDOWN)
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
b51f8900a521-21
separators = self.get_separators_for_language(Language.MARKDOWN) super().__init__(separators=separators, **kwargs) [docs]class LatexTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Latex-formatted layout elements.""" def __init__(self, **kwargs: Any) -> None: """Initialize a LatexTextSplitter.""" separators = self.get_separators_for_language(Language.LATEX) super().__init__(separators=separators, **kwargs)
https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html
4f6982c1328d-0
Source code for langchain.requests """Lightweight wrapper around requests library, with async support.""" from contextlib import asynccontextmanager from typing import Any, AsyncGenerator, Dict, Optional import aiohttp import requests from pydantic import BaseModel, Extra class Requests(BaseModel): """Wrapper around requests to handle auth and async. The main purpose of this wrapper is to handle authentication (by saving headers) and enable easy async methods on the same base object. """ headers: Optional[Dict[str, str]] = None aiosession: Optional[aiohttp.ClientSession] = None class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True def get(self, url: str, **kwargs: Any) -> requests.Response: """GET the URL and return the text.""" return requests.get(url, headers=self.headers, **kwargs) def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response: """POST to the URL and return the text.""" return requests.post(url, json=data, headers=self.headers, **kwargs) def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response: """PATCH the URL and return the text.""" return requests.patch(url, json=data, headers=self.headers, **kwargs) def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response: """PUT the URL and return the text.""" return requests.put(url, json=data, headers=self.headers, **kwargs) def delete(self, url: str, **kwargs: Any) -> requests.Response:
https://api.python.langchain.com/en/stable/_modules/langchain/requests.html
4f6982c1328d-1
def delete(self, url: str, **kwargs: Any) -> requests.Response: """DELETE the URL and return the text.""" return requests.delete(url, headers=self.headers, **kwargs) @asynccontextmanager async def _arequest( self, method: str, url: str, **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """Make an async request.""" if not self.aiosession: async with aiohttp.ClientSession() as session: async with session.request( method, url, headers=self.headers, **kwargs ) as response: yield response else: async with self.aiosession.request( method, url, headers=self.headers, **kwargs ) as response: yield response @asynccontextmanager async def aget( self, url: str, **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """GET the URL and return the text asynchronously.""" async with self._arequest("GET", url, **kwargs) as response: yield response @asynccontextmanager async def apost( self, url: str, data: Dict[str, Any], **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """POST to the URL and return the text asynchronously.""" async with self._arequest("POST", url, json=data, **kwargs) as response: yield response @asynccontextmanager async def apatch( self, url: str, data: Dict[str, Any], **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """PATCH the URL and return the text asynchronously."""
https://api.python.langchain.com/en/stable/_modules/langchain/requests.html
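A hedged sketch of the async context-manager style above; the URL and header are placeholders.

import asyncio

from langchain.requests import Requests

async def fetch() -> str:
    requests = Requests(headers={"Authorization": "Bearer <token>"})  # hypothetical header
    # aget yields an aiohttp.ClientResponse inside an async context manager
    async with requests.aget("https://example.com/api") as response:  # placeholder URL
        return await response.text()

body = asyncio.run(fetch())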
4f6982c1328d-2
"""PATCH the URL and return the text asynchronously.""" async with self._arequest("PATCH", url, **kwargs) as response: yield response @asynccontextmanager async def aput( self, url: str, data: Dict[str, Any], **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """PUT the URL and return the text asynchronously.""" async with self._arequest("PUT", url, **kwargs) as response: yield response @asynccontextmanager async def adelete( self, url: str, **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """DELETE the URL and return the text asynchronously.""" async with self._arequest("DELETE", url, **kwargs) as response: yield response [docs]class TextRequestsWrapper(BaseModel): """Lightweight wrapper around requests library. The main purpose of this wrapper is to always return a text output. """ headers: Optional[Dict[str, str]] = None aiosession: Optional[aiohttp.ClientSession] = None class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def requests(self) -> Requests: return Requests(headers=self.headers, aiosession=self.aiosession) [docs] def get(self, url: str, **kwargs: Any) -> str: """GET the URL and return the text.""" return self.requests.get(url, **kwargs).text [docs] def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
https://api.python.langchain.com/en/stable/_modules/langchain/requests.html
4f6982c1328d-3
"""POST to the URL and return the text.""" return self.requests.post(url, data, **kwargs).text [docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PATCH the URL and return the text.""" return self.requests.patch(url, data, **kwargs).text [docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PUT the URL and return the text.""" return self.requests.put(url, data, **kwargs).text [docs] def delete(self, url: str, **kwargs: Any) -> str: """DELETE the URL and return the text.""" return self.requests.delete(url, **kwargs).text [docs] async def aget(self, url: str, **kwargs: Any) -> str: """GET the URL and return the text asynchronously.""" async with self.requests.aget(url, **kwargs) as response: return await response.text() [docs] async def apost(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """POST to the URL and return the text asynchronously.""" async with self.requests.apost(url, **kwargs) as response: return await response.text() [docs] async def apatch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PATCH the URL and return the text asynchronously.""" async with self.requests.apatch(url, **kwargs) as response: return await response.text() [docs] async def aput(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
https://api.python.langchain.com/en/stable/_modules/langchain/requests.html
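A hedged TextRequestsWrapper sketch; the URL, header, and payload are placeholders.

from langchain.requests import TextRequestsWrapper

wrapper = TextRequestsWrapper(headers={"Accept": "application/json"})  # hypothetical header

# The synchronous helpers always return the response body as text.
body = wrapper.get("https://example.com/api")                  # placeholder URL
created = wrapper.post("https://example.com/api", {"k": "v"})  # placeholder payload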
4f6982c1328d-4
"""PUT the URL and return the text asynchronously.""" async with self.requests.aput(url, **kwargs) as response: return await response.text() [docs] async def adelete(self, url: str, **kwargs: Any) -> str: """DELETE the URL and return the text asynchronously.""" async with self.requests.adelete(url, **kwargs) as response: return await response.text() # For backwards compatibility RequestsWrapper = TextRequestsWrapper
https://api.python.langchain.com/en/stable/_modules/langchain/requests.html
cf9fa59b406b-0
Source code for langchain.schema """Common schema objects.""" from __future__ import annotations from abc import ABC, abstractmethod from dataclasses import dataclass from typing import ( Any, Dict, Generic, List, NamedTuple, Optional, Sequence, TypeVar, Union, ) from uuid import UUID from pydantic import BaseModel, Field, root_validator from langchain.load.serializable import Serializable RUN_KEY = "__run" [docs]def get_buffer_string( messages: List[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI" ) -> str: """Get buffer string of messages.""" string_messages = [] for m in messages: if isinstance(m, HumanMessage): role = human_prefix elif isinstance(m, AIMessage): role = ai_prefix elif isinstance(m, SystemMessage): role = "System" elif isinstance(m, FunctionMessage): role = "Function" elif isinstance(m, ChatMessage): role = m.role else: raise ValueError(f"Got unsupported message type: {m}") message = f"{role}: {m.content}" if isinstance(m, AIMessage) and "function_call" in m.additional_kwargs: message += f"{m.additional_kwargs['function_call']}" string_messages.append(message) return "\n".join(string_messages) [docs]@dataclass class AgentAction: """Agent's action to take.""" tool: str tool_input: Union[str, dict] log: str [docs]class AgentFinish(NamedTuple): """Agent's return value.""" return_values: dict
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
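A minimal sketch of how get_buffer_string renders a conversation (expected output shown in comments):

.. code-block:: python

    from langchain.schema import (
        AIMessage,
        HumanMessage,
        SystemMessage,
        get_buffer_string,
    )

    messages = [
        SystemMessage(content="You are a terse assistant."),
        HumanMessage(content="What is 2 + 2?"),
        AIMessage(content="4"),
    ]
    print(get_buffer_string(messages))
    # System: You are a terse assistant.
    # Human: What is 2 + 2?
    # AI: 4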
cf9fa59b406b-1
"""Agent's return value.""" return_values: dict log: str [docs]class Generation(Serializable): """Output of a single generation.""" text: str """Generated text output.""" generation_info: Optional[Dict[str, Any]] = None """Raw generation info response from the provider""" """May include things like reason for finishing (e.g. in OpenAI)""" # TODO: add log probs @property def lc_serializable(self) -> bool: """This class is LangChain serializable.""" return True [docs]class BaseMessage(Serializable): """Message object.""" content: str additional_kwargs: dict = Field(default_factory=dict) @property @abstractmethod def type(self) -> str: """Type of the message, used for serialization.""" @property def lc_serializable(self) -> bool: """This class is LangChain serializable.""" return True [docs]class HumanMessage(BaseMessage): """Type of message that is spoken by the human.""" example: bool = False @property def type(self) -> str: """Type of the message, used for serialization.""" return "human" [docs]class AIMessage(BaseMessage): """Type of message that is spoken by the AI.""" example: bool = False @property def type(self) -> str: """Type of the message, used for serialization.""" return "ai" [docs]class SystemMessage(BaseMessage): """Type of message that is a system message.""" @property def type(self) -> str: """Type of the message, used for serialization.""" return "system"
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
cf9fa59b406b-2
"""Type of the message, used for serialization.""" return "system" [docs]class FunctionMessage(BaseMessage): name: str @property def type(self) -> str: """Type of the message, used for serialization.""" return "function" [docs]class ChatMessage(BaseMessage): """Type of message with arbitrary speaker.""" role: str @property def type(self) -> str: """Type of the message, used for serialization.""" return "chat" def _message_to_dict(message: BaseMessage) -> dict: return {"type": message.type, "data": message.dict()} [docs]def messages_to_dict(messages: List[BaseMessage]) -> List[dict]: """Convert messages to dict. Args: messages: List of messages to convert. Returns: List of dicts. """ return [_message_to_dict(m) for m in messages] def _message_from_dict(message: dict) -> BaseMessage: _type = message["type"] if _type == "human": return HumanMessage(**message["data"]) elif _type == "ai": return AIMessage(**message["data"]) elif _type == "system": return SystemMessage(**message["data"]) elif _type == "chat": return ChatMessage(**message["data"]) else: raise ValueError(f"Got unexpected type: {_type}") [docs]def messages_from_dict(messages: List[dict]) -> List[BaseMessage]: """Convert messages from dict. Args: messages: List of messages (dicts) to convert. Returns: List of messages (BaseMessages). """
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
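A small round-trip sketch using the helpers above; the exact layout of the "data" payload comes from pydantic's .dict() and may vary between versions:

.. code-block:: python

    from langchain.schema import ChatMessage, messages_from_dict, messages_to_dict

    original = [ChatMessage(role="reviewer", content="Looks good to me.")]

    as_dicts = messages_to_dict(original)
    # [{'type': 'chat', 'data': {'content': 'Looks good to me.', ...}}]

    restored = messages_from_dict(as_dicts)
    assert restored == original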
cf9fa59b406b-3
Returns: List of messages (BaseMessages). """ return [_message_from_dict(m) for m in messages] [docs]class ChatGeneration(Generation): """Output of a single generation.""" text = "" message: BaseMessage @root_validator def set_text(cls, values: Dict[str, Any]) -> Dict[str, Any]: values["text"] = values["message"].content return values [docs]class RunInfo(BaseModel): """Class that contains all relevant metadata for a Run.""" run_id: UUID [docs]class ChatResult(BaseModel): """Class that contains all relevant information for a Chat Result.""" generations: List[ChatGeneration] """List of the things generated.""" llm_output: Optional[dict] = None """For arbitrary LLM provider specific output.""" [docs]class LLMResult(BaseModel): """Class that contains all relevant information for an LLM Result.""" generations: List[List[Generation]] """List of the things generated. This is List[List[]] because each input could have multiple generations.""" llm_output: Optional[dict] = None """For arbitrary LLM provider specific output.""" run: Optional[List[RunInfo]] = None """Run metadata.""" [docs] def flatten(self) -> List[LLMResult]: """Flatten generations into a single list.""" llm_results = [] for i, gen_list in enumerate(self.generations): # Avoid double counting tokens in OpenAICallback if i == 0: llm_results.append( LLMResult( generations=[gen_list], llm_output=self.llm_output, ) )
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
cf9fa59b406b-4
llm_output=self.llm_output, ) ) else: if self.llm_output is not None: llm_output = self.llm_output.copy() llm_output["token_usage"] = dict() else: llm_output = None llm_results.append( LLMResult( generations=[gen_list], llm_output=llm_output, ) ) return llm_results def __eq__(self, other: object) -> bool: if not isinstance(other, LLMResult): return NotImplemented return ( self.generations == other.generations and self.llm_output == other.llm_output ) [docs]class PromptValue(Serializable, ABC): [docs] @abstractmethod def to_string(self) -> str: """Return prompt as string.""" [docs] @abstractmethod def to_messages(self) -> List[BaseMessage]: """Return prompt as messages.""" [docs]class BaseMemory(Serializable, ABC): """Base interface for memory in chains.""" class Config: """Configuration for this pydantic object.""" arbitrary_types_allowed = True @property @abstractmethod def memory_variables(self) -> List[str]: """Input keys this memory class will load dynamically.""" [docs] @abstractmethod def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]: """Return key-value pairs given the text input to the chain. If None, return all memories """ [docs] @abstractmethod def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
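A sketch of flatten() on a two-prompt result; the llm_output contents are made up. Token usage is kept only on the first flattened result so callbacks do not double count it:

.. code-block:: python

    from langchain.schema import Generation, LLMResult

    result = LLMResult(
        generations=[[Generation(text="red")], [Generation(text="blue")]],
        llm_output={"model_name": "test-model", "token_usage": {"total_tokens": 7}},
    )

    flat = result.flatten()
    assert len(flat) == 2
    assert flat[0].llm_output["token_usage"] == {"total_tokens": 7}
    assert flat[1].llm_output["token_usage"] == {}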
cf9fa59b406b-5
"""Save the context of this model run to memory.""" [docs] @abstractmethod def clear(self) -> None: """Clear memory contents.""" [docs]class BaseChatMessageHistory(ABC): """Base interface for chat message history See `ChatMessageHistory` for default implementation. """ """ Example: .. code-block:: python class FileChatMessageHistory(BaseChatMessageHistory): storage_path: str session_id: str @property def messages(self): with open(os.path.join(storage_path, session_id), 'r:utf-8') as f: messages = json.loads(f.read()) return messages_from_dict(messages) def add_message(self, message: BaseMessage) -> None: messages = self.messages.append(_message_to_dict(message)) with open(os.path.join(storage_path, session_id), 'w') as f: json.dump(f, messages) def clear(self): with open(os.path.join(storage_path, session_id), 'w') as f: f.write("[]") """ messages: List[BaseMessage] [docs] def add_user_message(self, message: str) -> None: """Add a user message to the store""" self.add_message(HumanMessage(content=message)) [docs] def add_ai_message(self, message: str) -> None: """Add an AI message to the store""" self.add_message(AIMessage(content=message)) [docs] def add_message(self, message: BaseMessage) -> None: """Add a self-created message to the store""" raise NotImplementedError [docs] @abstractmethod def clear(self) -> None:
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
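A runnable variant of the file-backed history sketched in the docstring above, assuming the whole history is stored as a JSON list in a single file (the file path and storage layout are illustrative):

.. code-block:: python

    import json
    from pathlib import Path
    from typing import List

    from langchain.schema import (
        BaseChatMessageHistory,
        BaseMessage,
        messages_from_dict,
        messages_to_dict,
    )

    class FileChatMessageHistory(BaseChatMessageHistory):
        """Chat history persisted to a JSON file."""

        def __init__(self, file_path: str) -> None:
            self.file_path = Path(file_path)
            if not self.file_path.exists():
                self.file_path.write_text("[]", encoding="utf-8")

        @property
        def messages(self) -> List[BaseMessage]:
            raw = json.loads(self.file_path.read_text(encoding="utf-8"))
            return messages_from_dict(raw)

        def add_message(self, message: BaseMessage) -> None:
            # Re-serialize the full history with the new message appended.
            raw = messages_to_dict(self.messages + [message])
            self.file_path.write_text(json.dumps(raw), encoding="utf-8")

        def clear(self) -> None:
            self.file_path.write_text("[]", encoding="utf-8")

    history = FileChatMessageHistory("chat_history.json")
    history.add_user_message("hi!")
    history.add_ai_message("Hello, how can I help?")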
cf9fa59b406b-6
raise NotImplementedError [docs] @abstractmethod def clear(self) -> None: """Remove all messages from the store""" [docs]class Document(Serializable): """Interface for interacting with a document.""" page_content: str metadata: dict = Field(default_factory=dict) [docs]class BaseRetriever(ABC): """Base interface for retrievers.""" [docs] @abstractmethod def get_relevant_documents(self, query: str) -> List[Document]: """Get documents relevant for a query. Args: query: string to find relevant documents for Returns: List of relevant documents """ [docs] @abstractmethod async def aget_relevant_documents(self, query: str) -> List[Document]: """Get documents relevant for a query. Args: query: string to find relevant documents for Returns: List of relevant documents """ # For backwards compatibility Memory = BaseMemory T = TypeVar("T") [docs]class BaseLLMOutputParser(Serializable, ABC, Generic[T]): [docs] @abstractmethod def parse_result(self, result: List[Generation]) -> T: """Parse LLM Result.""" [docs]class BaseOutputParser(BaseLLMOutputParser, ABC, Generic[T]): """Class to parse the output of an LLM call. Output parsers help structure language model responses. """ [docs] def parse_result(self, result: List[Generation]) -> T: return self.parse(result[0].text) [docs] @abstractmethod def parse(self, text: str) -> T: """Parse the output of an LLM call.
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
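A toy implementation of the retriever interface above; the keyword-matching logic is purely illustrative:

.. code-block:: python

    from typing import List

    from langchain.schema import BaseRetriever, Document

    class KeywordRetriever(BaseRetriever):
        """Return documents whose content contains the query string."""

        def __init__(self, docs: List[Document]) -> None:
            self.docs = docs

        def get_relevant_documents(self, query: str) -> List[Document]:
            return [d for d in self.docs if query.lower() in d.page_content.lower()]

        async def aget_relevant_documents(self, query: str) -> List[Document]:
            return self.get_relevant_documents(query)

    retriever = KeywordRetriever([Document(page_content="LangChain schema objects")])
    retriever.get_relevant_documents("schema")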
cf9fa59b406b-7
"""Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Args: text: output of language model Returns: structured output """ [docs] def parse_with_prompt(self, completion: str, prompt: PromptValue) -> Any: """Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Args: completion: output of language model prompt: prompt value Returns: structured output """ return self.parse(completion) [docs] def get_format_instructions(self) -> str: """Instructions on how the LLM output should be formatted.""" raise NotImplementedError @property def _type(self) -> str: """Return the type key.""" raise NotImplementedError( f"_type property is not implemented in class {self.__class__.__name__}." " This is required for serialization." ) [docs] def dict(self, **kwargs: Any) -> Dict: """Return dictionary representation of output parser.""" output_parser_dict = super().dict() output_parser_dict["_type"] = self._type return output_parser_dict [docs]class NoOpOutputParser(BaseOutputParser[str]): """Output parser that just returns the text as is.""" @property def lc_serializable(self) -> bool: return True @property def _type(self) -> str: return "default"
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
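A minimal parser built on the interface above; the comma-separated format is just an example:

.. code-block:: python

    from typing import List

    from langchain.schema import BaseOutputParser

    class CommaSeparatedListOutputParser(BaseOutputParser[List[str]]):
        """Parse a completion such as 'red, green, blue' into a list."""

        def parse(self, text: str) -> List[str]:
            return [part.strip() for part in text.split(",") if part.strip()]

        def get_format_instructions(self) -> str:
            return "Answer with a comma separated list of values."

        @property
        def _type(self) -> str:
            return "comma_separated_list"

    parser = CommaSeparatedListOutputParser()
    assert parser.parse("red, green , blue") == ["red", "green", "blue"]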
cf9fa59b406b-8
@property def _type(self) -> str: return "default" [docs] def parse(self, text: str) -> str: return text [docs]class OutputParserException(ValueError): """Exception that output parsers should raise to signify a parsing error. This exists to differentiate parsing errors from other code or execution errors that also may arise inside the output parser. OutputParserExceptions will be available to catch and handle in ways to fix the parsing error, while other errors will be raised. """ def __init__( self, error: Any, observation: str | None = None, llm_output: str | None = None, send_to_llm: bool = False, ): super(OutputParserException, self).__init__(error) if send_to_llm: if observation is None or llm_output is None: raise ValueError( "Arguments 'observation' & 'llm_output'" " are required if 'send_to_llm' is True" ) self.observation = observation self.llm_output = llm_output self.send_to_llm = send_to_llm [docs]class BaseDocumentTransformer(ABC): """Base interface for transforming documents.""" [docs] @abstractmethod def transform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Transform a list of documents.""" [docs] @abstractmethod async def atransform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Asynchronously transform a list of documents."""
https://api.python.langchain.com/en/stable/_modules/langchain/schema.html
53bae54580e6-0
Source code for langchain.document_transformers """Transform documents""" from typing import Any, Callable, List, Sequence import numpy as np from pydantic import BaseModel, Field from langchain.embeddings.base import Embeddings from langchain.math_utils import cosine_similarity from langchain.schema import BaseDocumentTransformer, Document class _DocumentWithState(Document): """Wrapper for a document that includes arbitrary state.""" state: dict = Field(default_factory=dict) """State associated with the document.""" def to_document(self) -> Document: """Convert the DocumentWithState to a Document.""" return Document(page_content=self.page_content, metadata=self.metadata) @classmethod def from_document(cls, doc: Document) -> "_DocumentWithState": """Create a DocumentWithState from a Document.""" if isinstance(doc, cls): return doc return cls(page_content=doc.page_content, metadata=doc.metadata) [docs]def get_stateful_documents( documents: Sequence[Document], ) -> Sequence[_DocumentWithState]: """Convert a list of documents to a list of documents with state. Args: documents: The documents to convert. Returns: A list of documents with state. """ return [_DocumentWithState.from_document(doc) for doc in documents] def _filter_similar_embeddings( embedded_documents: List[List[float]], similarity_fn: Callable, threshold: float ) -> List[int]: """Filter redundant documents based on the similarity of their embeddings.""" similarity = np.tril(similarity_fn(embedded_documents, embedded_documents), k=-1) redundant = np.where(similarity > threshold) redundant_stacked = np.column_stack(redundant)
https://api.python.langchain.com/en/stable/_modules/langchain/document_transformers.html
53bae54580e6-1
redundant_stacked = np.column_stack(redundant) redundant_sorted = np.argsort(similarity[redundant])[::-1] included_idxs = set(range(len(embedded_documents))) for first_idx, second_idx in redundant_stacked[redundant_sorted]: if first_idx in included_idxs and second_idx in included_idxs: # Default to dropping the second document of any highly similar pair. included_idxs.remove(second_idx) return list(sorted(included_idxs)) def _get_embeddings_from_stateful_docs( embeddings: Embeddings, documents: Sequence[_DocumentWithState] ) -> List[List[float]]: if len(documents) and "embedded_doc" in documents[0].state: embedded_documents = [doc.state["embedded_doc"] for doc in documents] else: embedded_documents = embeddings.embed_documents( [d.page_content for d in documents] ) for doc, embedding in zip(documents, embedded_documents): doc.state["embedded_doc"] = embedding return embedded_documents [docs]class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel): """Filter that drops redundant documents by comparing their embeddings.""" embeddings: Embeddings """Embeddings to use for embedding document contents.""" similarity_fn: Callable = cosine_similarity """Similarity function for comparing documents. Function expected to take as input two matrices (List[List[float]]) and return a matrix of scores where higher values indicate greater similarity.""" similarity_threshold: float = 0.95 """Threshold for determining when two documents are similar enough to be considered redundant.""" class Config: """Configuration for this pydantic object.""" arbitrary_types_allowed = True [docs] def transform_documents(
https://api.python.langchain.com/en/stable/_modules/langchain/document_transformers.html
53bae54580e6-2
arbitrary_types_allowed = True [docs] def transform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Filter down documents.""" stateful_documents = get_stateful_documents(documents) embedded_documents = _get_embeddings_from_stateful_docs( self.embeddings, stateful_documents ) included_idxs = _filter_similar_embeddings( embedded_documents, self.similarity_fn, self.similarity_threshold ) return [stateful_documents[i] for i in sorted(included_idxs)] [docs] async def atransform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: raise NotImplementedError
https://api.python.langchain.com/en/stable/_modules/langchain/document_transformers.html
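A usage sketch for the filter above; it assumes an OpenAI API key is configured for OpenAIEmbeddings, and the documents are made up:

.. code-block:: python

    from langchain.document_transformers import EmbeddingsRedundantFilter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.schema import Document

    docs = [
        Document(page_content="LangChain helps you build LLM applications."),
        Document(page_content="LangChain helps you build applications with LLMs."),
        Document(page_content="The weather in Paris is mild in spring."),
    ]

    redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())
    unique_docs = redundant_filter.transform_documents(docs)
    # Pairs whose embedding similarity exceeds 0.95 are treated as redundant,
    # and one document of each such pair is dropped.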
6efe031c7c9b-0
Source code for langchain.agents.loading """Functionality for loading agents.""" import json import logging from pathlib import Path from typing import Any, List, Optional, Union import yaml from langchain.agents.agent import BaseMultiActionAgent, BaseSingleActionAgent from langchain.agents.tools import Tool from langchain.agents.types import AGENT_TO_CLASS from langchain.base_language import BaseLanguageModel from langchain.chains.loading import load_chain, load_chain_from_config from langchain.utilities.loading import try_load_from_hub logger = logging.getLogger(__file__) URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/agents/" def _load_agent_from_tools( config: dict, llm: BaseLanguageModel, tools: List[Tool], **kwargs: Any ) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]: config_type = config.pop("_type") if config_type not in AGENT_TO_CLASS: raise ValueError(f"Loading {config_type} agent not supported") agent_cls = AGENT_TO_CLASS[config_type] combined_config = {**config, **kwargs} return agent_cls.from_llm_and_tools(llm, tools, **combined_config) def load_agent_from_config( config: dict, llm: Optional[BaseLanguageModel] = None, tools: Optional[List[Tool]] = None, **kwargs: Any, ) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]: """Load agent from Config Dict.""" if "_type" not in config: raise ValueError("Must specify an agent Type in config") load_from_tools = config.pop("load_from_llm_and_tools", False) if load_from_tools: if llm is None:
https://api.python.langchain.com/en/stable/_modules/langchain/agents/loading.html
6efe031c7c9b-1
if load_from_tools: if llm is None: raise ValueError( "If `load_from_llm_and_tools` is set to True, " "then LLM must be provided" ) if tools is None: raise ValueError( "If `load_from_llm_and_tools` is set to True, " "then tools must be provided" ) return _load_agent_from_tools(config, llm, tools, **kwargs) config_type = config.pop("_type") if config_type not in AGENT_TO_CLASS: raise ValueError(f"Loading {config_type} agent not supported") agent_cls = AGENT_TO_CLASS[config_type] if "llm_chain" in config: config["llm_chain"] = load_chain_from_config(config.pop("llm_chain")) elif "llm_chain_path" in config: config["llm_chain"] = load_chain(config.pop("llm_chain_path")) else: raise ValueError("One of `llm_chain` and `llm_chain_path` should be specified.") if "output_parser" in config: logger.warning( "Currently loading output parsers on agent is not supported, " "will just use the default one." ) del config["output_parser"] combined_config = {**config, **kwargs} return agent_cls(**combined_config) # type: ignore [docs]def load_agent( path: Union[str, Path], **kwargs: Any ) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]: """Unified method for loading a agent from LangChainHub or local fs.""" if hub_result := try_load_from_hub(
https://api.python.langchain.com/en/stable/_modules/langchain/agents/loading.html
6efe031c7c9b-2
if hub_result := try_load_from_hub( path, _load_agent_from_file, "agents", {"json", "yaml"} ): return hub_result else: return _load_agent_from_file(path, **kwargs) def _load_agent_from_file( file: Union[str, Path], **kwargs: Any ) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]: """Load agent from file.""" # Convert file to Path object. if isinstance(file, str): file_path = Path(file) else: file_path = file # Load from either json or yaml. if file_path.suffix == ".json": with open(file_path) as f: config = json.load(f) elif file_path.suffix == ".yaml": with open(file_path, "r") as f: config = yaml.safe_load(f) else: raise ValueError("File type must be json or yaml") # Load the agent from the config now. return load_agent_from_config(config, **kwargs)
https://api.python.langchain.com/en/stable/_modules/langchain/agents/loading.html
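A sketch of both load paths; the local file name and the hub path are illustrative and not guaranteed to exist:

.. code-block:: python

    from langchain.agents import load_agent

    # From the local filesystem (.json or .yaml, inferred from the suffix).
    agent = load_agent("my_agent.yaml")

    # From LangChainHub, via a hub-style path handled by try_load_from_hub.
    agent = load_agent("lc://agents/zero-shot-react-description/agent.json")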
1886b3951db2-0
Source code for langchain.agents.agent_types from enum import Enum [docs]class AgentType(str, Enum): """Enumerator with the Agent types.""" ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description" REACT_DOCSTORE = "react-docstore" SELF_ASK_WITH_SEARCH = "self-ask-with-search" CONVERSATIONAL_REACT_DESCRIPTION = "conversational-react-description" CHAT_ZERO_SHOT_REACT_DESCRIPTION = "chat-zero-shot-react-description" CHAT_CONVERSATIONAL_REACT_DESCRIPTION = "chat-conversational-react-description" STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = ( "structured-chat-zero-shot-react-description" ) OPENAI_FUNCTIONS = "openai-functions" OPENAI_MULTI_FUNCTIONS = "openai-multi-functions"
https://api.python.langchain.com/en/stable/_modules/langchain/agents/agent_types.html
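Because AgentType subclasses str, its members compare equal to their serialized values, for example:

.. code-block:: python

    from langchain.agents.agent_types import AgentType

    agent_type = AgentType.ZERO_SHOT_REACT_DESCRIPTION
    assert agent_type == "zero-shot-react-description"
    assert agent_type.value == "zero-shot-react-description"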
3626c117023e-0
Source code for langchain.agents.initialize """Load agent.""" from typing import Any, Optional, Sequence from langchain.agents.agent import AgentExecutor from langchain.agents.agent_types import AgentType from langchain.agents.loading import AGENT_TO_CLASS, load_agent from langchain.base_language import BaseLanguageModel from langchain.callbacks.base import BaseCallbackManager from langchain.tools.base import BaseTool [docs]def initialize_agent( tools: Sequence[BaseTool], llm: BaseLanguageModel, agent: Optional[AgentType] = None, callback_manager: Optional[BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, *, tags: Optional[Sequence[str]] = None, **kwargs: Any, ) -> AgentExecutor: """Load an agent executor given tools and LLM. Args: tools: List of tools this agent has access to. llm: Language model to use as the agent. agent: Agent type to use. If None and agent_path is also None, will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION. callback_manager: CallbackManager to use. Global callback manager is used if not provided. Defaults to None. agent_path: Path to serialized agent to use. agent_kwargs: Additional key word arguments to pass to the underlying agent tags: Tags to apply to the traced runs. **kwargs: Additional key word arguments passed to the agent executor Returns: An agent executor """ tags_ = list(tags) if tags else [] if agent is None and agent_path is None: agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION
https://api.python.langchain.com/en/stable/_modules/langchain/agents/initialize.html
3626c117023e-1
agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION if agent is not None and agent_path is not None: raise ValueError( "Both `agent` and `agent_path` are specified, " "but at most only one should be." ) if agent is not None: if agent not in AGENT_TO_CLASS: raise ValueError( f"Got unknown agent type: {agent}. " f"Valid types are: {AGENT_TO_CLASS.keys()}." ) tags_.append(agent.value if isinstance(agent, AgentType) else agent) agent_cls = AGENT_TO_CLASS[agent] agent_kwargs = agent_kwargs or {} agent_obj = agent_cls.from_llm_and_tools( llm, tools, callback_manager=callback_manager, **agent_kwargs ) elif agent_path is not None: agent_obj = load_agent( agent_path, llm=llm, tools=tools, callback_manager=callback_manager ) try: # TODO: Add tags from the serialized object directly. tags_.append(agent_obj._agent_type) except NotImplementedError: pass else: raise ValueError( "Somehow both `agent` and `agent_path` are None, " "this should never happen." ) return AgentExecutor.from_agent_and_tools( agent=agent_obj, tools=tools, callback_manager=callback_manager, tags=tags_, **kwargs, )
https://api.python.langchain.com/en/stable/_modules/langchain/agents/initialize.html
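An end-to-end sketch of initialize_agent, assuming an OpenAI API key is configured; the question is arbitrary:

.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)

    agent_executor = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        tags=["demo"],
        verbose=True,
    )
    agent_executor.run("What is 7 raised to the 0.5 power?")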
9e3e8c0b73e3-0
Source code for langchain.agents.agent """Chain that takes in an input and produces an action and action input.""" from __future__ import annotations import asyncio import json import logging import time from abc import abstractmethod from pathlib import Path from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union import yaml from pydantic import BaseModel, root_validator from langchain.agents.agent_types import AgentType from langchain.agents.tools import InvalidTool from langchain.base_language import BaseLanguageModel from langchain.callbacks.base import BaseCallbackManager from langchain.callbacks.manager import ( AsyncCallbackManagerForChainRun, AsyncCallbackManagerForToolRun, CallbackManagerForChainRun, CallbackManagerForToolRun, Callbacks, ) from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.input import get_color_mapping from langchain.prompts.base import BasePromptTemplate from langchain.prompts.few_shot import FewShotPromptTemplate from langchain.prompts.prompt import PromptTemplate from langchain.schema import ( AgentAction, AgentFinish, BaseMessage, BaseOutputParser, OutputParserException, ) from langchain.tools.base import BaseTool from langchain.utilities.asyncio import asyncio_timeout logger = logging.getLogger(__name__) [docs]class BaseSingleActionAgent(BaseModel): """Base Agent class.""" @property def return_values(self) -> List[str]: """Return values of the agent.""" return ["output"] [docs] def get_allowed_tools(self) -> Optional[List[str]]: return None [docs] @abstractmethod def plan( self,
https://api.python.langchain.com/en/stable/_modules/langchain/agents/agent.html
9e3e8c0b73e3-1
return None [docs] @abstractmethod def plan( self, intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Callbacks = None, **kwargs: Any, ) -> Union[AgentAction, AgentFinish]: """Given input, decided what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with observations callbacks: Callbacks to run. **kwargs: User inputs. Returns: Action specifying what tool to use. """ [docs] @abstractmethod async def aplan( self, intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Callbacks = None, **kwargs: Any, ) -> Union[AgentAction, AgentFinish]: """Given input, decided what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with observations callbacks: Callbacks to run. **kwargs: User inputs. Returns: Action specifying what tool to use. """ @property @abstractmethod def input_keys(self) -> List[str]: """Return the input keys. :meta private: """ [docs] def return_stopped_response( self, early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any, ) -> AgentFinish: """Return response when agent has been stopped due to max iterations.""" if early_stopping_method == "force": # `force` just returns a constant string return AgentFinish(
https://api.python.langchain.com/en/stable/_modules/langchain/agents/agent.html
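A toy single-action agent built on the interface above; the "echo" tool name and the one-step logic are purely illustrative:

.. code-block:: python

    from typing import Any, List, Tuple, Union

    from langchain.agents.agent import BaseSingleActionAgent
    from langchain.callbacks.manager import Callbacks
    from langchain.schema import AgentAction, AgentFinish

    class EchoAgent(BaseSingleActionAgent):
        """Call a hypothetical 'echo' tool once, then finish."""

        @property
        def input_keys(self) -> List[str]:
            return ["input"]

        def plan(
            self,
            intermediate_steps: List[Tuple[AgentAction, str]],
            callbacks: Callbacks = None,
            **kwargs: Any,
        ) -> Union[AgentAction, AgentFinish]:
            if intermediate_steps:
                # Finish with the observation from the previous tool call.
                observation = intermediate_steps[-1][1]
                return AgentFinish({"output": observation}, log="done")
            return AgentAction(tool="echo", tool_input=kwargs["input"], log="calling echo")

        async def aplan(
            self,
            intermediate_steps: List[Tuple[AgentAction, str]],
            callbacks: Callbacks = None,
            **kwargs: Any,
        ) -> Union[AgentAction, AgentFinish]:
            return self.plan(intermediate_steps, callbacks=callbacks, **kwargs)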
9e3e8c0b73e3-2
# `force` just returns a constant string return AgentFinish( {"output": "Agent stopped due to iteration limit or time limit."}, "" ) else: raise ValueError( f"Got unsupported early_stopping_method `{early_stopping_method}`" ) [docs] @classmethod def from_llm_and_tools( cls, llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any, ) -> BaseSingleActionAgent: raise NotImplementedError @property def _agent_type(self) -> str: """Return Identifier of agent type.""" raise NotImplementedError [docs] def dict(self, **kwargs: Any) -> Dict: """Return dictionary representation of agent.""" _dict = super().dict() _type = self._agent_type if isinstance(_type, AgentType): _dict["_type"] = str(_type.value) else: _dict["_type"] = _type return _dict [docs] def save(self, file_path: Union[Path, str]) -> None: """Save the agent. Args: file_path: Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") """ # Convert file to Path object. if isinstance(file_path, str): save_path = Path(file_path) else: save_path = file_path directory_path = save_path.parent directory_path.mkdir(parents=True, exist_ok=True) # Fetch dictionary to save
https://api.python.langchain.com/en/stable/_modules/langchain/agents/agent.html
9e3e8c0b73e3-3
directory_path.mkdir(parents=True, exist_ok=True) # Fetch dictionary to save agent_dict = self.dict() if save_path.suffix == ".json": with open(file_path, "w") as f: json.dump(agent_dict, f, indent=4) elif save_path.suffix == ".yaml": with open(file_path, "w") as f: yaml.dump(agent_dict, f, default_flow_style=False) else: raise ValueError(f"{save_path} must be json or yaml") [docs] def tool_run_logging_kwargs(self) -> Dict: return {} [docs]class BaseMultiActionAgent(BaseModel): """Base Agent class.""" @property def return_values(self) -> List[str]: """Return values of the agent.""" return ["output"] [docs] def get_allowed_tools(self) -> Optional[List[str]]: return None [docs] @abstractmethod def plan( self, intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Callbacks = None, **kwargs: Any, ) -> Union[List[AgentAction], AgentFinish]: """Given input, decided what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with observations callbacks: Callbacks to run. **kwargs: User inputs. Returns: Actions specifying what tool to use. """ [docs] @abstractmethod async def aplan( self, intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Callbacks = None, **kwargs: Any, ) -> Union[List[AgentAction], AgentFinish]:
https://api.python.langchain.com/en/stable/_modules/langchain/agents/agent.html
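Continuing the earlier initialize_agent sketch, the underlying agent can be serialized to either format based on the file suffix; the paths are illustrative:

.. code-block:: python

    # agent_executor comes from the initialize_agent sketch above.
    agent_executor.agent.save("saved/agent.yaml")
    agent_executor.agent.save("saved/agent.json")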
9e3e8c0b73e3-4
**kwargs: Any, ) -> Union[List[AgentAction], AgentFinish]: """Given input, decided what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with observations callbacks: Callbacks to run. **kwargs: User inputs. Returns: Actions specifying what tool to use. """ @property @abstractmethod def input_keys(self) -> List[str]: """Return the input keys. :meta private: """ [docs] def return_stopped_response( self, early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any, ) -> AgentFinish: """Return response when agent has been stopped due to max iterations.""" if early_stopping_method == "force": # `force` just returns a constant string return AgentFinish({"output": "Agent stopped due to max iterations."}, "") else: raise ValueError( f"Got unsupported early_stopping_method `{early_stopping_method}`" ) @property def _agent_type(self) -> str: """Return Identifier of agent type.""" raise NotImplementedError [docs] def dict(self, **kwargs: Any) -> Dict: """Return dictionary representation of agent.""" _dict = super().dict() _dict["_type"] = str(self._agent_type) return _dict [docs] def save(self, file_path: Union[Path, str]) -> None: """Save the agent. Args: file_path: Path to file to save the agent to. Example: .. code-block:: python
https://api.python.langchain.com/en/stable/_modules/langchain/agents/agent.html