| index (int64) | repo_id (596 classes) | file_path (31-168 chars) | content (1-6.2M chars) |
|---|---|---|---|
0 | lc_public_repos/langgraph/docs/docs/troubleshooting | lc_public_repos/langgraph/docs/docs/troubleshooting/errors/INVALID_CONCURRENT_GRAPH_UPDATE.md | # INVALID_CONCURRENT_GRAPH_UPDATE
A LangGraph [`StateGraph`](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.state.StateGraph) received concurrent updates from multiple nodes to a state property that does not support them.
One way this can occur is if you are using a [fanout](http... |
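The guide's usual remedy is to attach a reducer to the conflicting state key via `Annotated`. As a rough, framework-free sketch of that idea (plain Python, no LangGraph imports; all names here are illustrative, not the real API):

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

# A state schema: "results" carries a reducer (operator.add) via Annotated,
# so concurrent writes to it are merged; "value" does not, so concurrent
# writes to it are an error.
class State(TypedDict):
    results: Annotated[list, operator.add]
    value: int

def merge_updates(state: dict, updates: list[dict]) -> dict:
    """Apply several node updates 'concurrently', mimicking how a reducer
    resolves conflicts. Raises on concurrent writes to a plain key."""
    hints = get_type_hints(State, include_extras=True)
    out = dict(state)
    seen: dict = {}
    for upd in updates:
        for key, val in upd.items():
            metadata = getattr(hints[key], "__metadata__", ())
            if metadata:                      # key has a reducer: merge
                out[key] = metadata[0](out[key], val)
            elif key in seen:                 # two plain writes: conflict
                raise ValueError(f"INVALID_CONCURRENT_GRAPH_UPDATE on '{key}'")
            else:
                seen[key] = val
                out[key] = val
    return out

merged = merge_updates({"results": [], "value": 0},
                       [{"results": ["a"]}, {"results": ["b"]}])
# merged["results"] == ["a", "b"]
```

Two fanned-out nodes can both append to `results` because the reducer defines how to combine their updates; without one, the second write to `value` has no defined winner and is rejected.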
0 | lc_public_repos/langgraph/docs/docs/troubleshooting | lc_public_repos/langgraph/docs/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT.md | # GRAPH_RECURSION_LIMIT
Your LangGraph [`StateGraph`](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.state.StateGraph) reached the maximum number of steps before hitting a stop condition.
This is often due to an infinite loop caused by code like the example below:
```python
class State(Typ... |
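The mechanism behind this error can be sketched without the framework: a step budget that aborts execution when routing never reaches the end node. Plain Python; the names `END` and `GraphRecursionError` mirror the LangGraph docs but this is an illustrative sketch, not the real API.

```python
# A framework-free sketch of a recursion limit stopping a graph whose
# routing never reaches END.
END = "__end__"

class GraphRecursionError(RuntimeError):
    pass

def run(nodes, router, state, entry, recursion_limit=25):
    """Execute nodes until the router returns END or the step budget runs out."""
    current = entry
    for _ in range(recursion_limit):
        state = nodes[current](state)
        current = router(state)
        if current == END:
            return state
    raise GraphRecursionError(f"Recursion limit of {recursion_limit} reached")

nodes = {"step": lambda s: {**s, "n": s["n"] + 1}}
# This router never returns END, so running with it raises GraphRecursionError.
always_loop = lambda s: "step"
```

A router that eventually returns `END` terminates normally; one that always routes back to the same node exhausts the budget and raises.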
0 | lc_public_repos/langgraph/docs/docs/troubleshooting | lc_public_repos/langgraph/docs/docs/troubleshooting/errors/index.md | # Error reference
This page contains guides for resolving common errors you may encounter while building with LangChain.
When these errors are thrown in code, they carry an `lc_error_code` property corresponding to one of the codes below.
- [GRAPH_RECURSION_LIMIT](./GRAPH_RECURSION_LIMIT.md)
- [INVALID_CO... |
0 | lc_public_repos/langgraph/docs/docs/troubleshooting | lc_public_repos/langgraph/docs/docs/troubleshooting/errors/INVALID_GRAPH_NODE_RETURN_VALUE.md | # INVALID_GRAPH_NODE_RETURN_VALUE
A LangGraph [`StateGraph`](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.state.StateGraph)
received a non-dict return type from a node. Here's an example:
```python
class State(TypedDict):
some_key: str
def bad_node(state: State):
# Should return... |
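The check behind this error can be sketched in plain Python: every node must return a dict of state updates, and anything else is rejected. The `InvalidUpdateError` name and the `lc_error_code` attribute here only imitate the real framework's error for illustration.

```python
# Sketch of the validation behind INVALID_GRAPH_NODE_RETURN_VALUE:
# a node must return a dict of state updates.
class InvalidUpdateError(ValueError):
    lc_error_code = "INVALID_GRAPH_NODE_RETURN_VALUE"

def apply_node(node, state: dict) -> dict:
    update = node(state)
    if not isinstance(update, dict):
        raise InvalidUpdateError(
            f"Expected dict, got {update!r} from node {node.__name__!r}"
        )
    return {**state, **update}

def good_node(state):
    return {"some_key": "ok"}        # dict of updates: accepted

def bad_node(state):
    return "not a dict"              # rejected with InvalidUpdateError
```

Returning the bare string instead of `{"some_key": "..."}` is exactly the mistake the docs page's `bad_node` example illustrates.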
0 | lc_public_repos/langgraph/docs | lc_public_repos/langgraph/docs/overrides/main.html | {% extends "base.html" %}
{% block extrahead %}
<style>
@import url("https://fonts.googleapis.com/css2?family=Public+Sans&display=swap");
:root {
--md-primary-fg-color: #333333;
--md-accent-fg-color: #1E88E5;
--md-default-bg-color: #FFFFFF;
--md-default-fg-color: #333333;
--md-t... |
0 | lc_public_repos/langgraph/docs/overrides | lc_public_repos/langgraph/docs/overrides/partials/comments.html | {% if not page.meta.hide_comments %}
<h2 id="__comments">{{ lang.t("meta.comments") }}</h2>
<script src="https://giscus.app/client.js"
data-repo="langchain-ai/langgraph"
data-repo-id="R_kgDOKFU0lQ"
data-category="Discussions"
data-category-id="DIC_kwDOKFU0lc4CfZgA"
data-mappi... |
0 | lc_public_repos/langgraph/docs/overrides | lc_public_repos/langgraph/docs/overrides/partials/logo.html | {% if config.theme.logo_light_mode %}
<img src="{{ config.theme.logo_light_mode | url }}" alt="logo" class="logo-light" />
<img src="{{ config.theme.logo_dark_mode | url }}" alt="logo" class="logo-dark" />
{% endif %} |
0 | lc_public_repos/langgraph/docs | lc_public_repos/langgraph/docs/_scripts/notebook_convert.py | import os
import re
from pathlib import Path
import nbformat
from nbconvert.exporters import MarkdownExporter
from nbconvert.preprocessors import Preprocessor
from generate_api_reference_links import ImportPreprocessor
class EscapePreprocessor(Preprocessor):
def preprocess_cell(self, cell, resources, cell_index... |
0 | lc_public_repos/langgraph/docs | lc_public_repos/langgraph/docs/_scripts/generate_api_reference_links.py | import importlib
import inspect
import logging
import os
import re
from typing import List, Literal, Optional
from typing_extensions import TypedDict
import nbformat
from nbconvert.preprocessors import Preprocessor
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Base URL for all class ... |
0 | lc_public_repos/langgraph/docs | lc_public_repos/langgraph/docs/_scripts/notebook_hooks.py | import logging
from typing import Any, Dict
from mkdocs.structure.pages import Page
from mkdocs.structure.files import Files, File
from notebook_convert import convert_notebook
logger = logging.getLogger(__name__)
logging.basicConfig()
logger.setLevel(logging.INFO)
class NotebookFile(File):
def is_documentation... |
0 | lc_public_repos/langgraph/docs | lc_public_repos/langgraph/docs/_scripts/prepare_notebooks_for_ci.py | """Preprocess notebooks for CI. Currently adds VCR cassettes and optionally removes pip install cells."""
import logging
import os
import json
import click
import nbformat
logger = logging.getLogger(__name__)
NOTEBOOK_DIRS = ("docs/docs/how-tos", "docs/docs/tutorials")
DOCS_PATH = os.path.dirname(os.path.dirname(os.pa... |
0 | lc_public_repos/langgraph/docs | lc_public_repos/langgraph/docs/_scripts/download_tiktoken.py | import tiktoken
# This will trigger the download and caching of the necessary files
for encoding in ("gpt2", "gpt-3.5"):
tiktoken.encoding_for_model(encoding) |
0 | lc_public_repos/langgraph/docs | lc_public_repos/langgraph/docs/_scripts/execute_notebooks.sh | #!/bin/bash
# Read the list of notebooks to skip from the JSON file
SKIP_NOTEBOOKS=$(python -c "import json; print('\n'.join(json.load(open('docs/notebooks_no_execution.json'))))")
# Function to execute a single notebook
execute_notebook() {
file="$1"
echo "Starting execution of $file"
start_time=$(date +... |
0 | lc_public_repos/langgraph/docs/_scripts/notebook_convert_templates | lc_public_repos/langgraph/docs/_scripts/notebook_convert_templates/mdoutput/index.md.j2 | {% extends 'markdown/index.md.j2' %}
{%- block traceback_line -%}
```output
{{ line.rstrip() | strip_ansi }}
```
{%- endblock traceback_line -%}
{%- block stream -%}
```output
{{ output.text.rstrip() }}
```
{%- endblock stream -%}
{%- block data_text scoped -%}
```output
{{ output.data['text/plain'].rstrip() }}
```
... |
0 | lc_public_repos/langgraph/docs/_scripts/notebook_convert_templates | lc_public_repos/langgraph/docs/_scripts/notebook_convert_templates/mdoutput/conf.json | {
"mimetypes": {
"text/markdown": true
}
} |
0 | lc_public_repos/langgraph | lc_public_repos/langgraph/examples/README.md | # LangGraph examples
This directory should NOT be used for documentation. All new documentation must be added to the `docs/docs/` directory. |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/chatbot-simulation-evaluation/simulation_utils.py | import functools
from typing import Annotated, Any, Callable, Dict, List, Optional, Union
from langchain_community.adapters.openai import convert_message_to_dict
from langchain_core.messages import AIMessage, AnyMessage, BaseMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlacehold... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_self_rag.ipynb | import getpass
import os
def _set_env(key: str):
if key not in os.environ:
os.environ[key] = getpass.getpass(f"{key}:")
_set_env("OPENAI_API_KEY")
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vec... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_adaptive_rag.ipynb | import getpass
import os
def _set_env(var: str):
if not os.environ.get(var):
os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("OPENAI_API_KEY")
_set_env("COHERE_API_KEY")
_set_env("TAVILY_API_KEY")
### Build Index
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_co... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_agentic_rag.ipynb | import getpass
import os
def _set_env(key: str):
if key not in os.environ:
os.environ[key] = getpass.getpass(f"{key}:")
_set_env("OPENAI_API_KEY")
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddin... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_crag_local.ipynb | import getpass
import os
def _set_env(key: str):
if key not in os.environ:
os.environ[key] = getpass.getpass(f"{key}:")
_set_env("OPENAI_API_KEY")
_set_env("TAVILY_API_KEY")
local_llm = "llama3"
model_tested = "llama3-8b"
metadata = f"CRAG, {model_tested}"
from langchain.text_splitter import RecursiveChar... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_adaptive_rag_local.ipynb | %capture --no-stderr
%pip install -U langchain-nomic langchain_community tiktoken langchainhub chromadb langchain langgraph tavily-python nomic[local]
import getpass
import os
def _set_env(var: str):
if not os.environ.get(var):
os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("TAVILY_API_KEY")
_set... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_self_rag_pinecone_movies.ipynb | %pip install -qU langchain-pinecone langchain-openai langchainhub langgraphimport os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"
import os
os.environ["LANGCHAIN_PROJECT"] = "pinecone-devconnect"
from ... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_self_rag_local.ipynb | %capture --no-stderr
%pip install -U langchain-nomic langchain_community tiktoken langchainhub chromadb langchain langgraph nomic[local]
import getpass
import os
def _set_env(key: str):
if key not in os.environ:
os.environ[key] = getpass.getpass(f"{key}:")
_set_env("NOMIC_API_KEY")
# Ollama model name
loc... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_adaptive_rag_cohere.ipynb | ### LLMs
import os
os.environ["COHERE_API_KEY"] = "<your-api-key>"
# ### Tracing (optional)
# os.environ['LANGCHAIN_TRACING_V2'] = 'true'
# os.environ['LANGCHAIN_ENDPOINT'] = 'https://api.smith.langchain.com'
# os.environ['LANGCHAIN_API_KEY'] = '<your-api-key>'
### Build Index
from langchain.text_splitter import Recursi... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/rag/langgraph_crag.ipynb | import getpass
import os
def _set_env(key: str):
if key not in os.environ:
os.environ[key] = getpass.getpass(f"{key}:")
_set_env("OPENAI_API_KEY")
_set_env("TAVILY_API_KEY")from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
f... |
0 | lc_public_repos/langgraph/examples | lc_public_repos/langgraph/examples/code_assistant/langgraph_code_assistant_mistral.ipynb | import os
os.environ["TOKENIZERS_PARALLELISM"] = "true"
mistral_api_key = os.getenv("MISTRAL_API_KEY")  # Ensure this is set
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"
os.environ["LANGCHAIN_PROJECT"]... |
0 | lc_public_repos | lc_public_repos/text-split-explorer/requirements.txt | tiktoken==0.4.0
langchain==0.0.222
streamlit==1.25.0
|
0 | lc_public_repos | lc_public_repos/text-split-explorer/README.md | # Text Split Explorer

Many of the most important LLM applications involve connecting LLMs to external sources of data.
A prerequisite to doing this is to ingest data into a format that LLMs can easily work with.
Most of the time, that means ingesting data into a vectorstore.
A prerequisite to... |
0 | lc_public_repos | lc_public_repos/text-split-explorer/splitter.py | import streamlit as st
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter, Language
import code_snippets as code_snippets
import tiktoken
# Streamlit UI
st.title("Text Splitter Playground")
st.info("""Split a text into chunks using a **Text Splitter**. Parameters include:
- `ch... |
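The parameters the playground exposes (`chunk_size`, `chunk_overlap`) can be sketched without LangChain at all. A minimal fixed-size splitter, assuming a character-count length function (the real splitters also respect separators, which this sketch ignores):

```python
def split_text(text: str, chunk_size: int, chunk_overlap: int,
               length_function=len) -> list[str]:
    """Greedy fixed-size splitter: each chunk holds at most chunk_size
    units, and consecutive chunks share chunk_overlap units."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < length_function(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - chunk_overlap
    return chunks

# "abcdefghij" with size 4, overlap 1: windows stepping by 3 characters.
split_text("abcdefghij", 4, 1)  # -> ["abcd", "defg", "ghij", "j"]
```

Swapping `length_function` for a token counter (as the playground allows) changes how chunk sizes are measured; this character-based sketch only slices by characters.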
0 | lc_public_repos | lc_public_repos/text-split-explorer/code_snippets.py | CHARACTER = """```python
from langchain.text_splitter import CharacterTextSplitter
{length_function}
splitter = CharacterTextSplitter(
separator = "\\n\\n", # Split character (default \\n\\n)
chunk_size={chunk_size},
chunk_overlap={chunk_overlap},
length_function=length_function,
)
text = "foo bar"
s... |
0 | lc_public_repos | lc_public_repos/langchain-aiplugin/LICENSE | MIT License
Copyright (c) 2023 langchain-ai
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, dist... |
0 | lc_public_repos | lc_public_repos/langchain-aiplugin/poetry.lock | # This file is automatically @generated by Poetry and should not be changed by hand.
[[package]]
name = "aiohttp"
version = "3.8.4"
description = "Async http client/server framework (asyncio)"
category = "main"
optional = false
python-versions = ">=3.6"
files = [
{file = "aiohttp-3.8.4-cp310-cp310-macosx_10_9_univ... |
0 | lc_public_repos | lc_public_repos/langchain-aiplugin/README.md | # LangChain as an AIPlugin
## Introduction
[LangChain](https://python.langchain.com/en/latest/index.html) can flexibly integrate with the ChatGPT AI plugin ecosystem.
LangChain chains and agents can themselves be deployed as a plugin that can communicate with other agents or with ChatGPT itself.
For more informati... |
0 | lc_public_repos | lc_public_repos/langchain-aiplugin/pyproject.toml | [tool.poetry]
name = "langchain-plugin"
version = "0.1.0"
description = "An example ChatGPT Plugin that exposes a LangChain chain, agent, or retriever"
authors = ["LangChain Core"]
readme = "README.md"
packages = [{include = "app"}]
[tool.poetry.scripts]
app = "app.main:start"
[tool.poetry.dependencies]
python = "^3.... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/template/chain.py | from langchain.chains.base import Chain
def load_chain() -> Chain:
"""Load your chain here."""
|
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/template/README.md | # Template
This is a template folder for you to start afresh in.
Step 1: Fill out `get_chain` in `chain.py`.
Step 2: Fill out all the constants in `constants.py`.
|
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/template/constants.py | # flake8: noqa
# The description of the chain you are exposing. This will be used by ChatGPT to decide when to call it.
ENDPOINT_DESCRIPTION = ""
# The name of your endpoint that you are exposing.
ENDPOINT_NAME = ""
# The input key for the chain. The user input will get mapped to this key.
INPUT_NAME = ""
# The output ... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/app/api.py | """Define the API schema."""
from pydantic import BaseModel
class ConversationRequest(BaseModel):
"""Request message to the LangChain."""
# Message is passed directly to the LangChain
# deployed in the plugin.
message: str
class ConversationResponse(BaseModel):
"""Deployed LangChain response.""... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/app/main.py | """"Example LangChain Plugin."""
import json
import logging
import os
from typing import Optional, cast
import importlib
from importlib.machinery import SourceFileLoader
from pathlib import Path
import uvicorn
import yaml
from app.api import ConversationRequest, ConversationResponse
from fastapi import Body, Depends, ... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/agent/chain.py | from langchain.agents import AgentExecutor, initialize_agent, load_tools
from langchain.llms import OpenAI
def get_chain() -> AgentExecutor:
"""Load the agent executor chain."""
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm)
return initialize_agent(tools, llm, "zero-shot-react-descr... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/agent/README.md | # Agent
This example shows how to expose an agent as a ChatGPTPlugin.
Step 1: Make any modifications to `chain.py` as you see fit (changing prompts, etc.)
Step 2: Make any changes to `constants.py` as you see fit (this is where you control the descriptions used, etc)
|
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/agent/constants.py | # flake8: noqa
ENDPOINT_DESCRIPTION = "Solve math word problems"
ENDPOINT_NAME = "math-problems"
INPUT_NAME = "input"
OUTPUT_KEY = "output"
NAME_FOR_MODEL = "MathWordProblems"
NAME_FOR_HUMAN = "Math Problems Solver"
DESCRIPTION_FOR_MODEL = "This plugin provides access to a LangChain Agent hooked up to a calculator, so ... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/retrieval_qa/chain.py | from pathlib import Path
from langchain.llms import OpenAI
import pickle
from langchain.chains import RetrievalQA
DIR_PATH = Path(__file__).parent
def get_chain():
with open(DIR_PATH / "vectorstore.pkl", "rb") as f:
vectorstore = pickle.load(f)
return RetrievalQA.from_chain_type(
llm=OpenAI(t... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/retrieval_qa/ingest.py | """Load html from files, clean up, split, ingest into Weaviate."""
import pickle
from langchain.document_loaders import SitemapLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores.faiss import FAISS
def get_text(conten... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/retrieval_qa/requirements.txt | langchain
faiss-cpu
lxml
|
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/retrieval_qa/README.md | # RetrievalQA
This example shows how to expose a RetrievalQA chain as a ChatGPTPlugin.
Step 1: Ingest documents. To run the example, run `python ingest.py`
Step 2: Make any modifications to `chain.py` as you see fit (changing prompts, etc.)
Step 3: Make any changes to `constants.py` as you see fit (this is where yo... |
0 | lc_public_repos/langchain-aiplugin | lc_public_repos/langchain-aiplugin/retrieval_qa/constants.py | # flake8: noqa
ENDPOINT_DESCRIPTION = "Ask questions about LangChain documentation!"
ENDPOINT_NAME = "ask-langchain"
INPUT_NAME = "query"
OUTPUT_KEY = "result"
NAME_FOR_MODEL = "langchainQABot"
NAME_FOR_HUMAN = "LangChain QA Bot"
DESCRIPTION_FOR_MODEL = "This plugin provides access to a LangChain QA Bot to answer quest... |
0 | lc_public_repos | lc_public_repos/langchain/Makefile | .PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff
## help: Show this help info.
help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/... |
0 | lc_public_repos | lc_public_repos/langchain/LICENSE | MIT License
Copyright (c) LangChain, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distri... |
0 | lc_public_repos | lc_public_repos/langchain/yarn.lock | # THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1
|
0 | lc_public_repos | lc_public_repos/langchain/CITATION.cff | cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Chase"
given-names: "Harrison"
title: "LangChain"
date-released: 2022-10-17
url: "https://github.com/langchain-ai/langchain"
|
0 | lc_public_repos | lc_public_repos/langchain/poetry.lock | # This file is automatically @generated by Poetry 1.8.4 and should not be changed by hand.
[[package]]
name = "aiofiles"
version = "24.1.0"
description = "File support for asyncio."
optional = false
python-versions = ">=3.8"
files = [
{file = "aiofiles-24.1.0-py3-none-any.whl", hash = "sha256:b4ec55f4195e3eb5d7abd... |
0 | lc_public_repos | lc_public_repos/langchain/README.md | # 🦜️🔗 LangChain
⚡ Build context-aware reasoning applications ⚡
[](https://github.com/langchain-ai/langchain/releases)
[](https:/... |
0 | lc_public_repos | lc_public_repos/langchain/pyproject.toml | [tool.poetry]
name = "langchain-monorepo"
version = "0.0.1"
description = "LangChain mono-repo"
authors = []
license = "MIT"
readme = "README.md"
repository = "https://www.github.com/langchain-ai/langchain"
[tool.poetry.dependencies]
python = ">=3.9,<4.0"
[tool.poetry.group.lint.dependencies]
ruff = "^0.5.0"
[tool.... |
0 | lc_public_repos | lc_public_repos/langchain/SECURITY.md | # Security Policy
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external... |
0 | lc_public_repos | lc_public_repos/langchain/.readthedocs.yaml | # Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
formats:
- pdf
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
commands:
- mkdir -p $READTHEDOCS_OUTPUT
- c... |
0 | lc_public_repos | lc_public_repos/langchain/poetry.toml | [virtualenvs]
in-project = true
|
0 | lc_public_repos | lc_public_repos/langchain/MIGRATE.md | # Migrating
Please see the following guides for migrating LangChain code:
* Migrate to [LangChain v0.3](https://python.langchain.com/docs/versions/v0_3/)
* Migrate to [LangChain v0.2](https://python.langchain.com/docs/versions/v0_2/)
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions... |
0 | lc_public_repos/langchain | lc_public_repos/langchain/libs/packages.yml | # this file is used to define the packages that are used in the project
# it is EXPERIMENTAL and may be removed in the future
packages:
- name: langchain-core
repo: langchain-ai/langchain
path: libs/core
- name: langchain-text-splitters
repo: langchain-ai/langchain
path: libs/text-splitters
- nam... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/standard-tests/Makefile | .PHONY: all format lint test tests integration_tests docker_tests help extended_tests
# Default target executed when no arguments are given to make.
all: help
# Define a variable for the test file path.
TEST_FILE ?= tests/unit_tests/
INTEGRATION_TEST_FILE ?= tests/integration_tests/
integration_test integration_test... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/standard-tests/poetry.lock | # This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
[[package]]
name = "annotated-types"
version = "0.7.0"
description = "Reusable constraint types to use with typing.Annotated"
optional = false
python-versions = ">=3.8"
files = [
{file = "annotated_types-0.7.0-py3-none-any.w... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/standard-tests/README.md | # langchain-tests
This is a testing library for LangChain integrations. It contains the base classes for
a standard set of tests.
## Installation
We encourage pinning to a specific version in order to avoid breaking
your CI when we publish new tests. We recommend upgrading to the latest version
periodic... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/standard-tests/pyproject.toml | [build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "langchain-tests"
version = "0.3.5"
description = "Standard tests for LangChain implementations"
authors = ["Erick Friis <erick@langchain.dev>"]
readme = "README.md"
repository = "https://github.com/langchain-ai/la... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/integration_tests/test_compile.py | import pytest
@pytest.mark.compile
def test_placeholder() -> None:
"""Used for compiling integration tests without running any real tests."""
pass
|
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/test_in_memory_base_store.py | """Tests for the InMemoryStore class."""
from typing import Tuple
import pytest
from langchain_core.stores import InMemoryStore
from langchain_tests.integration_tests.base_store import (
BaseStoreAsyncTests,
BaseStoreSyncTests,
)
class TestInMemoryStore(BaseStoreSyncTests):
@pytest.fixture
def thre... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/test_in_memory_cache.py | import pytest
from langchain_core.caches import InMemoryCache
from langchain_tests.integration_tests.cache import (
AsyncCacheTestSuite,
SyncCacheTestSuite,
)
class TestInMemoryCache(SyncCacheTestSuite):
@pytest.fixture
def cache(self) -> InMemoryCache:
return InMemoryCache()
class TestInMe... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/test_embeddings.py | from typing import Type
from langchain_core.embeddings import DeterministicFakeEmbedding, Embeddings
from langchain_tests.integration_tests import EmbeddingsIntegrationTests
from langchain_tests.unit_tests import EmbeddingsUnitTests
class TestFakeEmbeddingsUnit(EmbeddingsUnitTests):
@property
def embeddings... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/test_decorated_tool.py | from langchain_core.tools import BaseTool, tool
from langchain_tests.integration_tests import ToolsIntegrationTests
from langchain_tests.unit_tests import ToolsUnitTests
@tool
def parrot_multiply_tool(a: int, b: int) -> int:
"""Multiply two numbers like a parrot. Parrots always add eighty for their matey."""
... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/test_custom_chat_model.py | """
Test the standard tests on the custom chat model in the docs
"""
from typing import Type
from langchain_tests.integration_tests import ChatModelIntegrationTests
from langchain_tests.unit_tests import ChatModelUnitTests
from .custom_chat_model import ChatParrotLink
class TestChatParrotLinkUnit(ChatModelUnitTest... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/test_basic_retriever.py | from typing import Any, Type
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langchain_tests.integration_tests import RetrieversIntegrationTests
class ParrotRetriever(BaseRetriever):
parrot... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/test_in_memory_vectorstore.py | import pytest
from langchain_core.vectorstores import (
InMemoryVectorStore,
VectorStore,
)
from langchain_tests.integration_tests.vectorstores import (
AsyncReadWriteTestSuite,
ReadWriteTestSuite,
)
class TestInMemoryVectorStore(ReadWriteTestSuite):
@pytest.fixture
def vectorstore(self) -> V... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/test_basic_tool.py | from typing import Type
from langchain_core.tools import BaseTool
from langchain_tests.integration_tests import ToolsIntegrationTests
from langchain_tests.unit_tests import ToolsUnitTests
class ParrotMultiplyTool(BaseTool): # type: ignore
name: str = "ParrotMultiplyTool"
description: str = (
"Multi... |
0 | lc_public_repos/langchain/libs/standard-tests/tests | lc_public_repos/langchain/libs/standard-tests/tests/unit_tests/custom_chat_model.py | from typing import Any, Dict, Iterator, List, Optional
from langchain_core.callbacks import (
CallbackManagerForLLMRun,
)
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
BaseMessage,
)
from langchain_core.messages.ai import Usage... |
0 | lc_public_repos/langchain/libs/standard-tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/base.py | from abc import ABC
from typing import Type
class BaseStandardTests(ABC):
def test_no_overrides_DO_NOT_OVERRIDE(self) -> None:
"""
Test that no standard tests are overridden.
"""
# find path to standard test implementations
comparison_class = None
def explore_bases... |
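The reflection trick behind a check like `test_no_overrides_DO_NOT_OVERRIDE` can be sketched in a few lines: walk a base class's `test_*` attributes and flag any whose implementation on the subclass differs. Plain Python; the real suite compares against the standard-test base classes, which this sketch only imitates.

```python
# Detect which standard test methods a subclass has overridden, by
# comparing function objects on the class and on the base.
def overridden_tests(cls, base) -> list[str]:
    names = []
    for name in dir(base):
        if name.startswith("test_"):
            if getattr(cls, name) is not getattr(base, name):
                names.append(name)
    return sorted(names)

class Base:
    def test_a(self): pass
    def test_b(self): pass

class Impl(Base):
    def test_b(self): pass   # override: the check should flag this
```

Because `Impl.test_a` resolves to the very function object defined on `Base`, identity comparison is enough; only `test_b`, redefined on `Impl`, is reported.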
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/cache.py | from abc import abstractmethod
import pytest
from langchain_core.caches import BaseCache
from langchain_core.outputs import Generation
from langchain_tests.base import BaseStandardTests
class SyncCacheTestSuite(BaseStandardTests):
"""Test suite for checking the BaseCache API of a caching layer for LLMs.
Th... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/retrievers.py | from abc import abstractmethod
from typing import Type
import pytest
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langchain_tests.base import BaseStandardTests
class RetrieversIntegrationTests(BaseStandardTests):
@property
@abstractmethod
def ret... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/chat_models.py | import base64
import json
from typing import List, Optional, cast
import httpx
import pytest
from langchain_core.language_models import BaseChatModel, GenericFakeChatModel
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
BaseMessage,
BaseMessageChunk,
HumanMessage,
SystemMessage... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/tools.py | from langchain_core.messages import ToolCall
from langchain_core.tools import BaseTool
from langchain_tests.unit_tests.tools import ToolsTests
class ToolsIntegrationTests(ToolsTests):
def test_invoke_matches_output_schema(self, tool: BaseTool) -> None:
"""
If invoked with a ToolCall, the tool sho... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/vectorstores.py | """Test suite to test vectostores."""
from abc import abstractmethod
import pytest
from langchain_core.documents import Document
from langchain_core.embeddings.fake import DeterministicFakeEmbedding, Embeddings
from langchain_core.vectorstores import VectorStore
from langchain_tests.base import BaseStandardTests
# ... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/embeddings.py | from typing import List
from langchain_core.embeddings import Embeddings
from langchain_tests.unit_tests.embeddings import EmbeddingsTests
class EmbeddingsIntegrationTests(EmbeddingsTests):
"""Base class for embeddings integration tests.
Test subclasses must implement the ``embeddings_class`` property to s... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/indexer.py | """Test suite to check index implementations."""
import inspect
import uuid
from abc import ABC, abstractmethod
from typing import AsyncGenerator, Generator
import pytest
from langchain_core.documents import Document
from langchain_core.indexing.base import DocumentIndex
class DocumentIndexerTestSuite(ABC):
"""... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/__init__.py | # ruff: noqa: E402
import pytest
# Rewrite assert statements for test suite so that implementations can
# see the full error message from failed asserts.
# https://docs.pytest.org/en/7.1.x/how-to/writing_plugins.html#assertion-rewriting
modules = [
"base_store",
"cache",
"chat_models",
"vectorstores",
... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/integration_tests/base_store.py | from abc import abstractmethod
from typing import AsyncGenerator, Generator, Generic, Tuple, TypeVar
import pytest
from langchain_core.stores import BaseStore
from langchain_tests.base import BaseStandardTests
V = TypeVar("V")
class BaseStoreSyncTests(BaseStandardTests, Generic[V]):
"""Test suite for checking ... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/unit_tests/chat_models.py | """
:autodoc-options: autoproperty
"""
import os
from abc import abstractmethod
from typing import Any, Dict, List, Literal, Optional, Tuple, Type
from unittest import mock
import pytest
from langchain_core.language_models import BaseChatModel
from langchain_core.load import dumpd, load
from langchain_core.runnables ... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/unit_tests/tools.py | import os
from abc import abstractmethod
from typing import Tuple, Type, Union
from unittest import mock
import pytest
from langchain_core.tools import BaseTool
from pydantic import SecretStr
from langchain_tests.base import BaseStandardTests
class ToolsTests(BaseStandardTests):
@property
@abstractmethod
... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/unit_tests/embeddings.py | import os
from abc import abstractmethod
from typing import Tuple, Type
from unittest import mock
import pytest
from langchain_core.embeddings import Embeddings
from pydantic import SecretStr
from langchain_tests.base import BaseStandardTests
class EmbeddingsTests(BaseStandardTests):
@property
@abstractmeth... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/unit_tests/__init__.py | # ruff: noqa: E402
import pytest
# Rewrite assert statements for test suite so that implementations can
# see the full error message from failed asserts.
# https://docs.pytest.org/en/7.1.x/how-to/writing_plugins.html#assertion-rewriting
modules = [
"chat_models",
"embeddings",
"tools",
]
for module in mod... |
0 | lc_public_repos/langchain/libs/standard-tests/langchain_tests | lc_public_repos/langchain/libs/standard-tests/langchain_tests/utils/pydantic.py | """Utilities for working with pydantic models."""
def get_pydantic_major_version() -> int:
"""Get the major version of Pydantic."""
try:
import pydantic
return int(pydantic.__version__.split(".")[0])
except ImportError:
return 0
PYDANTIC_MAJOR_VERSION = get_pydantic_major_versio... |
0 | lc_public_repos/langchain/libs/standard-tests | lc_public_repos/langchain/libs/standard-tests/scripts/lint_imports.sh | #!/bin/bash
set -eu
# Initialize a variable to keep track of errors
errors=0
# make sure not importing from langchain or langchain_experimental
git --no-pager grep '^from langchain\.' . && errors=$((errors+1))
git --no-pager grep '^from langchain_experimental\.' . && errors=$((errors+1))
# Decide on an exit status ... |
0 | lc_public_repos/langchain/libs/standard-tests | lc_public_repos/langchain/libs/standard-tests/scripts/check_imports.py | import random
import string
import sys
import traceback
from importlib.machinery import SourceFileLoader
if __name__ == "__main__":
files = sys.argv[1:]
has_failure = False
for file in files:
try:
module_name = "".join(
random.choice(string.ascii_letters) for _ in range(... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/community/extended_testing_deps.txt | aiosqlite>=0.19.0,<0.20
aleph-alpha-client>=2.15.0,<3
anthropic>=0.3.11,<0.4
arxiv>=1.4,<2
assemblyai>=0.17.0,<0.18
atlassian-python-api>=3.36.0,<4
azure-ai-documentintelligence>=1.0.0b1,<2
azure-identity>=1.15.0,<2
azure-search-documents==11.4.0
beautifulsoup4>=4,<5
bibtexparser>=1.4.0,<2
cassio>=0.1.6,<0.2
chardet>=5... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/community/Makefile | .PHONY: all format lint test tests test_watch integration_tests docker_tests help extended_tests
# Default target executed when no arguments are given to make.
all: help
# Define a variable for the test file path.
TEST_FILE ?= tests/unit_tests/
integration_tests: TEST_FILE = tests/integration_tests/
# Run unit tests... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/community/poetry.lock | # This file is automatically @generated by Poetry 1.8.4 and should not be changed by hand.
[[package]]
name = "aiohappyeyeballs"
version = "2.4.3"
description = "Happy Eyeballs for asyncio"
optional = false
python-versions = ">=3.8"
files = [
{file = "aiohappyeyeballs-2.4.3-py3-none-any.whl", hash = "sha256:8a7a83... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/community/README.md | # 🦜️🧑🤝🧑 LangChain Community
[](https://pepy.tech/project/langchain_community)
[](https://opensource.org/licenses/MIT)
## Quick Install
```bash
pip install langchain-communit... |
0 | lc_public_repos/langchain/libs | lc_public_repos/langchain/libs/community/pyproject.toml | [build-system]
requires = [ "poetry-core>=1.0.0",]
build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "langchain-community"
version = "0.3.9"
description = "Community contributed LangChain integrations."
authors = []
license = "MIT"
readme = "README.md"
repository = "https://github.com/langchain-ai/langcha... |
0 | lc_public_repos/langchain/libs/community | lc_public_repos/langchain/libs/community/langchain_community/cache.py | """
.. warning::
Beta Feature!
**Cache** provides an optional caching layer for LLMs.
Cache is useful for two reasons:
- It can save you money by reducing the number of API calls you make to the LLM
provider if you're often requesting the same completion multiple times.
- It can speed up your application by redu... |
0 | lc_public_repos/langchain/libs/community | lc_public_repos/langchain/libs/community/langchain_community/__init__.py | """Main entrypoint into package."""
from importlib import metadata
try:
__version__ = metadata.version(__package__)
except metadata.PackageNotFoundError:
# Case where package metadata is not available.
__version__ = ""
del metadata # optional, avoids polluting the results of dir(__package__)
|
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/query_constructors/opensearch.py | from typing import Dict, Tuple, Union
from langchain_core.structured_query import (
Comparator,
Comparison,
Operation,
Operator,
StructuredQuery,
Visitor,
)
class OpenSearchTranslator(Visitor):
"""Translate `OpenSearch` internal query domain-specific
language elements to valid filters... |
0 | lc_public_repos/langchain/libs/community/langchain_community | lc_public_repos/langchain/libs/community/langchain_community/query_constructors/redis.py | from __future__ import annotations
from typing import Any, Tuple
from langchain_core.structured_query import (
Comparator,
Comparison,
Operation,
Operator,
StructuredQuery,
Visitor,
)
from langchain_community.vectorstores.redis import Redis
from langchain_community.vectorstores.redis.filters ... |