0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/taskade.mdx | ---
hide_table_of_contents: true
---
# Taskade
[Taskade](https://www.taskade.com) is the ultimate tool for AI-driven writing, project management, and task automation. Designed to be your second brain, Taskade simplifies project execution and enhances team collaboration from start to finish.
## Overview
With [Taskade](https://www.taskade.com), you can build, train, and deploy your own team of AI agents to automate tasks and streamline workflows. Taskade features a seamless blend of ideation, collaboration, and execution tools—from structured lists to modern tables and mind maps, all customizable to fit your unique workflow and adapt to your needs.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/taskade.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
You can find your Taskade project ID by opening the project in your browser and extracting it from the URL:
```
https://www.taskade.com/d/<YOUR PROJECT ID HERE>
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/langsmith.ipynb | import { Client as LangSmithClient } from 'langsmith';
import { faker } from "@faker-js/faker";
const lsClient = new LangSmithClient();
const datasetName = "LangSmith Few Shot Datasets Notebook";
const exampleInputs = Array.from({ length: 10 }, (_, i) => ({
input: faker.lorem.paragraph(),
}));
const exampleOutputs = Array.from({ length: 10 }, (_, i) => ({
output: faker.lorem.sentence(),
}));
const exampleMetadata = Array.from({ length: 10 }, (_, i) => ({
companyCatchPhrase: faker.company.catchPhrase(),
}));
await lsClient.deleteDataset({
datasetName,
})
const dataset = await lsClient.createDataset(datasetName);
const examples = await lsClient.createExamples({
inputs: exampleInputs,
outputs: exampleOutputs,
metadata: exampleMetadata,
datasetId: dataset.id,
});

import { LangSmithLoader } from "@langchain/core/document_loaders/langsmith";
const loader = new LangSmithLoader({
datasetName: "LangSmith Few Shot Datasets Notebook",
// Instead of a datasetName, you can alternatively provide a datasetId
// datasetId: dataset.id,
contentKey: "input",
limit: 5,
// formatContent: (content) => content,
// ... other options
});

const docs = await loader.load();

docs[0];

console.log(docs[0].metadata);

console.log(docs[0].metadata.inputs);

console.log(docs[0].metadata.outputs);

console.log(Object.keys(docs[0].metadata));
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/sitemap.mdx | # Sitemap Loader
This notebook goes over how to use the [`SitemapLoader`](https://api.js.langchain.com/classes/_langchain_community.document_loaders_web_sitemap.SitemapLoader.html) class to load sitemaps into `Document`s.
## Setup
First, we need to install the required packages:
```bash npm2yarn
npm install @langchain/community @langchain/core
```
The URL passed in must contain the `.xml` path to the sitemap; otherwise, a default `/sitemap.xml` path will be appended to the URL.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/sitemap.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
Or, if you want to only load the sitemap and not the contents of each page from the sitemap, you can use the `parseSitemap` method:
import ParseSitemapExample from "@examples/document_loaders/parse_sitemap.ts";
<CodeBlock language="typescript">{ParseSitemapExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/youtube.mdx | ---
hide_table_of_contents: true
---
# YouTube transcripts
This covers how to load YouTube transcripts into LangChain documents.
## Setup
You'll need to install the [youtube-transcript](https://www.npmjs.com/package/youtube-transcript) package
and [youtubei.js](https://www.npmjs.com/package/youtubei.js) to extract metadata:
```bash npm2yarn
npm install @langchain/community @langchain/core youtube-transcript youtubei.js
```
## Usage
You need to specify a link to the video in the `url`. You can also specify the `language` in [ISO 639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format and the `addVideoInfo` flag.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/youtube.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/apify_dataset.mdx | ---
hide_table_of_contents: true
sidebar_class_name: node-only
---
# Apify Dataset
This guide shows how to use [Apify](https://apify.com) with LangChain to load documents from an Apify Dataset.
## Overview
[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,
which provides an [ecosystem](https://apify.com/store) of more than two thousand
ready-made apps called _Actors_ for various web scraping, crawling, and data extraction use cases.
This guide shows how to load documents
from an [Apify Dataset](https://docs.apify.com/platform/storage/dataset) — a scalable append-only
storage built for storing structured web scraping results,
such as a list of products or Google SERPs, which you can then export to various
formats like JSON, CSV, or Excel.
Datasets are typically used to save results of different Actors.
For example, the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor
deeply crawls websites such as documentation, knowledge bases, help centers, or blogs,
and then stores the text content of webpages into a dataset,
from which you can feed the documents into a vector database and use it for information retrieval.
Another example is the [RAG Web Browser](https://apify.com/apify/rag-web-browser) Actor,
which queries Google Search, scrapes the top N pages from the results, and returns the cleaned
content in Markdown format for further processing by a large language model.
## Setup
You'll first need to install the official Apify client:
```bash npm2yarn
npm install apify-client
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install hnswlib-node @langchain/openai @langchain/community @langchain/core
```
You'll also need to sign up and retrieve your [Apify API token](https://console.apify.com/settings/integrations).
## Usage
### From a New Dataset (Crawl a Website and Store the data in Apify Dataset)
If you don't already have an existing dataset on the Apify platform, you'll need to initialize the document loader by calling an Actor and waiting for the results.
In the example below, we use the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor to crawl
LangChain documentation, store the results in Apify Dataset, and then load the dataset using the `ApifyDatasetLoader`.
For this demonstration, we'll use a fast Cheerio crawler type and limit the number of crawled pages to 10.
**Note:** Running the Website Content Crawler may take some time, depending on the size of the website. For large sites, it can take several hours or even days!
Here's an example:
import CodeBlock from "@theme/CodeBlock";
import NewExample from "@examples/document_loaders/apify_dataset_new.ts";
<CodeBlock language="typescript">{NewExample}</CodeBlock>
### From an Existing Dataset
If you've already run an Actor and have an existing dataset on the Apify platform, you can initialize the document loader directly using the constructor:
import ExistingExample from "@examples/document_loaders/apify_dataset_existing.ts";
<CodeBlock language="typescript">{ExistingExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/airtable.mdx | ---
hide_table_of_contents: true
---
import loadExample from "@examples/document_loaders/airtable_load";
import CodeBlock from "@theme/CodeBlock";
# AirtableLoader
The `AirtableLoader` class provides functionality to load documents from Airtable tables. It supports two main methods:
1. `load()`: Retrieves all records at once, ideal for small to moderate datasets.
2. `loadLazy()`: Fetches records one by one, which is more memory-efficient for large datasets.
## Prerequisites
Ensure that your Airtable API token is available as an environment variable:
```typescript
process.env.AIRTABLE_API_TOKEN = "YOUR_AIRTABLE_API_TOKEN";
```
## Usage
<CodeBlock language="typescript">{loadExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/web_puppeteer.ipynb | import { PuppeteerWebBaseLoader } from "@langchain/community/document_loaders/web/puppeteer"
const loader = new PuppeteerWebBaseLoader("https://langchain.com", {
// required params = ...
// optional params = ...
});

const docs = await loader.load();

docs[0];

console.log(docs[0].metadata);

import { PuppeteerWebBaseLoader } from "@langchain/community/document_loaders/web/puppeteer";
const loaderForScreenshot = new PuppeteerWebBaseLoader("https://langchain.com", {
launchOptions: {
headless: true,
},
gotoOptions: {
waitUntil: "domcontentloaded",
},
});
const screenshot = await loaderForScreenshot.screenshot();
console.log(screenshot.pageContent.slice(0, 100));
console.log(screenshot.metadata); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/pdf.ipynb | import fs from "fs/promises";
import { WebPDFLoader } from "@langchain/community/document_loaders/web/pdf"
const nike10kPDFPath = "../../../../data/nke-10k-2023.pdf";
// Read the file as a buffer
const buffer = await fs.readFile(nike10kPDFPath);
// Create a Blob from the buffer
const nike10kPDFBlob = new Blob([buffer], { type: 'application/pdf' });
const loader = new WebPDFLoader(nike10kPDFBlob, {
// required params = ...
// optional params = ...
});

const docs = await loader.load();

docs[0];

console.log(docs[0].metadata);

import { WebPDFLoader } from "@langchain/community/document_loaders/web/pdf";
const blob = new Blob(); // e.g. from a file input
const customBuildLoader = new WebPDFLoader(blob, {
// you may need to add `.then(m => m.default)` to the end of the import
// @lc-ts-ignore
pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});

import { WebPDFLoader } from "@langchain/community/document_loaders/web/pdf";
// new Blob(); e.g. from a file input
const eliminatingExtraSpacesLoader = new WebPDFLoader(new Blob(), {
parsedItemSeparator: "",
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/firecrawl.ipynb | import "@mendable/firecrawl-js";
import { FireCrawlLoader } from "@langchain/community/document_loaders/web/firecrawl"
const loader = new FireCrawlLoader({
url: "https://firecrawl.dev", // The URL to scrape
apiKey: "...", // Optional, defaults to `FIRECRAWL_API_KEY` in your env.
mode: "scrape", // The mode to run the crawler in. Can be "scrape" for single urls or "crawl" for all accessible subpages
params: {
// optional parameters based on Firecrawl API docs
// For API documentation, visit https://docs.firecrawl.dev
},
});

const docs = await loader.load();

docs[0];

console.log(docs[0].metadata);
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/github.mdx | ---
sidebar_class_name: node-only
hide_table_of_contents: true
---
# GitHub
This example goes over how to load data from a GitHub repository.
You can set the `GITHUB_ACCESS_TOKEN` environment variable to a GitHub access token to increase the rate limit and access private repositories.
## Setup
The GitHub loader requires the [ignore npm package](https://www.npmjs.com/package/ignore) as a peer dependency. Install it like this:
```bash npm2yarn
npm install @langchain/community @langchain/core ignore
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/github.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
The loader will ignore binary files like images.
### Using .gitignore Syntax
To ignore specific files, you can pass in an `ignorePaths` array into the constructor:
import IgnoreExample from "@examples/document_loaders/github_ignore_paths.ts";
<CodeBlock language="typescript">{IgnoreExample}</CodeBlock>
### Using a Different GitHub Instance
You may want to target a different GitHub instance than `github.com`, e.g. if you have a GitHub Enterprise instance for your company.
For this you need two additional parameters:
- `baseUrl` - the base URL of your GitHub instance, so the githubUrl matches `<baseUrl>/<owner>/<repo>/...`
- `apiUrl` - the URL of the API endpoint of your GitHub instance
import CustomInstanceExample from "@examples/document_loaders/github_custom_instance.ts";
<CodeBlock language="typescript">{CustomInstanceExample}</CodeBlock>
### Dealing with Submodules
In case your repository has submodules, you have to decide if the loader should follow them or not. You can control this with the boolean `processSubmodules` parameter. By default, submodules are not processed.
Note that processing submodules works only in conjunction with setting the `recursive` parameter to true.
import SubmodulesExample from "@examples/document_loaders/github_submodules.ts";
<CodeBlock language="typescript">{SubmodulesExample}</CodeBlock>
Note that the loader will not follow submodules that are located on a different GitHub instance than the one of the current repository.
### Stream large repository
For situations where processing large repositories in a memory-efficient manner is required, you can use the `loadAsStream` method to asynchronously stream documents from the entire GitHub repository.
import StreamExample from "@examples/document_loaders/github_stream.ts";
<CodeBlock language="typescript">{StreamExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/imsdb.mdx | ---
hide_table_of_contents: true
---
# IMSDB
This example goes over how to load data from the Internet Movie Script Database (IMSDb) website, using Cheerio. One document will be created for each page.
## Setup
```bash npm2yarn
npm install @langchain/community @langchain/core cheerio
```
## Usage
```typescript
import { IMSDBLoader } from "@langchain/community/document_loaders/web/imsdb";
const loader = new IMSDBLoader("https://imsdb.com/scripts/BlacKkKlansman.html");
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/spider.mdx | ---
hide_table_of_contents: true
---
# Spider
[Spider](https://spider.cloud/?ref=langchainjs) is the [fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md#benchmark-results) crawler. It converts any website into pure HTML, markdown, metadata or text while enabling you to crawl with custom actions using AI.
## Overview
Spider allows you to use high-performance proxies to prevent detection, cache AI actions, set up webhooks for crawling status, schedule crawls, and more.
This guide shows how to crawl/scrape a website using [Spider](https://spider.cloud/) and load the LLM-ready documents with `SpiderLoader` in LangChain.
## Setup
Get your own Spider API key on [spider.cloud](https://spider.cloud/).
## Usage
Here's an example of how to use the `SpiderLoader`:
Spider offers two scraping modes, `scrape` and `crawl`. `scrape` only gets the content of the URL provided, while `crawl` gets the content of the URL provided and crawls deeper, following subpages.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/spider.ts";
```bash npm2yarn
npm install @langchain/community @langchain/core @spider-cloud/spider-client
```
<CodeBlock language="typescript">{Example}</CodeBlock>
### Additional Parameters
See the [Spider documentation](https://spider.cloud/docs/api) for all the available `params`.
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/confluence.mdx | ---
sidebar_class_name: node-only
---
# Confluence
:::tip Compatibility
Only available on Node.js.
:::
This covers how to load document objects from pages in a Confluence space.
## Credentials
- You'll need to set up an access token and provide it along with your Confluence username in order to authenticate the request
- You'll also need the `space key` for the space containing the pages to load as documents. This can be found in the url when navigating to your space e.g. `https://example.atlassian.net/wiki/spaces/{SPACE_KEY}`
- And you'll need to install `html-to-text` to parse the pages into plain text
```bash npm2yarn
npm install @langchain/community @langchain/core html-to-text
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/confluence.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain.mdx | ---
hide_table_of_contents: true
---
# Blockchain Data
This example shows how to load blockchain data, including NFT metadata and transactions for a contract address, via the sort.xyz SQL API.
You will need a free Sort API key; visit [sort.xyz](https://sort.xyz) to obtain one.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/sort_xyz_blockchain.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core @langchain/openai
```
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/multi_file.mdx | ---
sidebar_position: 2
hide_table_of_contents: true
---
# Multiple individual files
This example goes over how to load data from multiple file paths. The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.
Example files:
```text
src/document_loaders/example_data/example/
├── example.txt
└── example.csv
src/document_loaders/example_data/example2/
├── example.json
└── example.jsonl
```
Example code:
```typescript
import { MultiFileLoader } from "langchain/document_loaders/fs/multi_file";
import {
JSONLoader,
JSONLinesLoader,
} from "langchain/document_loaders/fs/json";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CSVLoader } from "langchain/document_loaders/fs/csv";
const loader = new MultiFileLoader(
[
"src/document_loaders/example_data/example/example.txt",
"src/document_loaders/example_data/example/example.csv",
"src/document_loaders/example_data/example2/example.json",
"src/document_loaders/example_data/example2/example.jsonl",
],
{
".json": (path) => new JSONLoader(path, "/texts"),
".jsonl": (path) => new JSONLinesLoader(path, "/html"),
".txt": (path) => new TextLoader(path),
".csv": (path) => new CSVLoader(path, "text"),
}
);
const docs = await loader.load();
console.log({ docs });
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/directory.ipynb | import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import {
JSONLoader,
JSONLinesLoader,
} from "langchain/document_loaders/fs/json";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CSVLoader } from "@langchain/community/document_loaders/fs/csv";
const loader = new DirectoryLoader(
"../../../../../../examples/src/document_loaders/example_data",
{
".json": (path) => new JSONLoader(path, "/texts"),
".jsonl": (path) => new JSONLinesLoader(path, "/html"),
".txt": (path) => new TextLoader(path),
".csv": (path) => new CSVLoader(path, "text"),
}
);

const docs = await loader.load();
// disable console.warn calls
console.warn = () => {}
docs[0];

console.log(docs[0].metadata);
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/jsonlines.mdx | ---
hide_table_of_contents: true
---
# JSONLines files
This example goes over how to load data from JSONLines or JSONL files. The second argument is a JSONPointer to the property to extract from each JSON object in the file. One document will be created for each JSON object in the file.
Example JSONLines file:
```json
{"html": "This is a sentence."}
{"html": "This is another sentence."}
```
Example code:
```typescript
import { JSONLinesLoader } from "langchain/document_loaders/fs/json";
const loader = new JSONLinesLoader(
"src/document_loaders/example_data/example.jsonl",
"/html"
);
const docs = await loader.load();
/*
[
Document {
"metadata": {
"blobType": "application/jsonl+json",
"line": 1,
"source": "blob",
},
"pageContent": "This is a sentence.",
},
Document {
"metadata": {
"blobType": "application/jsonl+json",
"line": 2,
"source": "blob",
},
"pageContent": "This is another sentence.",
},
]
*/
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/openai_whisper_audio.mdx | ---
hide_table_of_contents: true
---
# Open AI Whisper Audio
:::tip Compatibility
Only available on Node.js.
:::
This covers how to load document objects from an audio file using the [Open AI Whisper](https://platform.openai.com/docs/guides/speech-to-text) API.
## Setup
To run this loader, you will need to create an account on OpenAI and obtain an API key from the https://platform.openai.com/account page.
## Usage
Once the API key is configured, you can use the loader to create transcriptions and then convert them into Documents.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/openai_whisper_audio.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/json.mdx | # JSON files
The JSON loader uses [JSON pointers](https://github.com/janl/node-jsonpointer) to target the keys in your JSON files that you want to extract.
### No JSON pointer example
The simplest way to use it is to specify no JSON pointer.
The loader will load all strings it finds in the JSON object.
Example JSON file:
```json
{
"texts": ["This is a sentence.", "This is another sentence."]
}
```
Example code:
```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";
const loader = new JSONLoader("src/document_loaders/example_data/example.json");
const docs = await loader.load();
/*
[
Document {
"metadata": {
"blobType": "application/json",
"line": 1,
"source": "blob",
},
"pageContent": "This is a sentence.",
},
Document {
"metadata": {
"blobType": "application/json",
"line": 2,
"source": "blob",
},
"pageContent": "This is another sentence.",
},
]
*/
```
### Using JSON pointer example
For a more advanced scenario, you can choose which keys in your JSON object to extract strings from.
In this example, we only want to extract information from the "from" and "surname" entries.
```json
{
"1": {
"body": "BD 2023 SUMMER",
"from": "LinkedIn Job",
"labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"]
},
"2": {
"body": "Intern, Treasury and other roles are available",
"from": "LinkedIn Job2",
"labels": ["IMPORTANT"],
"other": {
"name": "plop",
"surname": "bob"
}
}
}
```
Example code:
```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";
const loader = new JSONLoader(
"src/document_loaders/example_data/example.json",
["/from", "/surname"]
);
const docs = await loader.load();
/*
[
Document {
"metadata": {
"blobType": "application/json",
"line": 1,
"source": "blob",
},
"pageContent": "BD 2023 SUMMER",
},
Document {
"metadata": {
"blobType": "application/json",
"line": 2,
"source": "blob",
},
"pageContent": "LinkedIn Job",
},
...
]
*/
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/pptx.mdx | # PPTX files
This example goes over how to load data from PPTX files. By default, one document will be created for all pages in the PPTX file.
## Setup
```bash npm2yarn
npm install officeparser
```
## Usage, one document per page
```typescript
import { PPTXLoader } from "@langchain/community/document_loaders/fs/pptx";
const loader = new PPTXLoader("src/document_loaders/example_data/example.pptx");
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/subtitles.mdx | ---
hide_table_of_contents: true
---
# Subtitles
This example goes over how to load data from subtitle files. One document will be created for each subtitles file.
## Setup
```bash npm2yarn
npm install srt-parser-2
```
## Usage
```typescript
import { SRTLoader } from "@langchain/community/document_loaders/fs/srt";
const loader = new SRTLoader(
"src/document_loaders/example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt"
);
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/index.mdx | ---
label: "File Loaders"
hide_table_of_contents: true
sidebar_class_name: node-only-category
---
# File Loaders
:::tip Compatibility
Only available on Node.js.
:::
These loaders are used to load files given a filesystem path or a Blob object.
:::info
If you'd like to write your own document loader, see [this how-to](/docs/how_to/document_loader_custom/). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
## All document loaders
<IndexTable />
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/csv.ipynb | import { CSVLoader } from "@langchain/community/document_loaders/fs/csv"
const exampleCsvPath = "../../../../../../langchain/src/document_loaders/tests/example_data/example_separator.csv";
const loader = new CSVLoader(exampleCsvPath);

const docs = await loader.load();

docs[0];

console.log(docs[0].metadata);

import { CSVLoader } from "@langchain/community/document_loaders/fs/csv";
const singleColumnLoader = new CSVLoader(
exampleCsvPath,
{
column: "html",
separator:"|"
}
);
const singleColumnDocs = await singleColumnLoader.load();
console.log(singleColumnDocs[0]); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/epub.mdx | ---
hide_table_of_contents: true
---
# EPUB files
This example goes over how to load data from EPUB files. By default, one document will be created for each chapter in the EPUB file. You can change this behavior by setting the `splitChapters` option to `false`.
# Setup
```bash npm2yarn
npm install @langchain/community @langchain/core epub2 html-to-text
```
# Usage, one document per chapter
```typescript
import { EPubLoader } from "@langchain/community/document_loaders/fs/epub";
const loader = new EPubLoader("src/document_loaders/example_data/example.epub");
const docs = await loader.load();
```
# Usage, one document per file
```typescript
import { EPubLoader } from "@langchain/community/document_loaders/fs/epub";
const loader = new EPubLoader(
"src/document_loaders/example_data/example.epub",
{
splitChapters: false,
}
);
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/chatgpt.mdx | # ChatGPT files
This example goes over how to load `conversations.json` from your ChatGPT data export folder. You can get your data export by email by going to: ChatGPT -> (Profile) -> Settings -> Export data -> Confirm export -> Check email.
## Usage, extracting all logs
Example code:
```typescript
import { ChatGPTLoader } from "@langchain/community/document_loaders/fs/chatgpt";
const loader = new ChatGPTLoader("./example_data/example_conversations.json");
const docs = await loader.load();
console.log(docs);
```
## Usage, extracting a single log
Example code:
```typescript
import { ChatGPTLoader } from "@langchain/community/document_loaders/fs/chatgpt";
const loader = new ChatGPTLoader(
"./example_data/example_conversations.json",
1
);
const docs = await loader.load();
console.log(docs);
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/pdf.ipynb | import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf"
const nike10kPdfPath = "../../../../data/nke-10k-2023.pdf"
const loader = new PDFLoader(nike10kPdfPath);

const docs = await loader.load();

docs[0];

console.log(docs[0].metadata);

import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
const singleDocPerFileLoader = new PDFLoader(nike10kPdfPath, {
splitPages: false,
});
const singleDoc = await singleDocPerFileLoader.load();
console.log(singleDoc[0].pageContent.slice(0, 100));

import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
const customBuildLoader = new PDFLoader(nike10kPdfPath, {
// you may need to add `.then(m => m.default)` to the end of the import
// @lc-ts-ignore
pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});

import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
const noExtraSpacesLoader = new PDFLoader(nike10kPdfPath, {
parsedItemSeparator: "",
});
const noExtraSpacesDocs = await noExtraSpacesLoader.load();
console.log(noExtraSpacesDocs[0].pageContent.slice(100, 250));

import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
const exampleDataPath = "../../../../../../examples/src/document_loaders/example_data/";
/* Load all PDFs within the specified directory */
const directoryLoader = new DirectoryLoader(
exampleDataPath,
{
".pdf": (path: string) => new PDFLoader(path),
}
);
const directoryDocs = await directoryLoader.load();
console.log(directoryDocs[0]);
/* Additional steps : Split text into chunks with any TextSplitter. You can then use it as context or save it to memory afterwards. */
const textSplitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const splitDocs = await textSplitter.splitDocuments(directoryDocs);
console.log(splitDocs[0]);
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/unstructured.ipynb | import { UnstructuredLoader } from "@langchain/community/document_loaders/fs/unstructured"
const loader = new UnstructuredLoader("../../../../../../examples/src/document_loaders/example_data/notion.md")
const docs = await loader.load()
docs[0]
console.log(docs[0].metadata)
import { UnstructuredDirectoryLoader } from "@langchain/community/document_loaders/fs/unstructured";
const directoryLoader = new UnstructuredDirectoryLoader(
"../../../../../../examples/src/document_loaders/example_data/",
{}
);
const directoryDocs = await directoryLoader.load();
console.log("directoryDocs.length: ", directoryDocs.length);
console.log(directoryDocs[0])
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/docx.mdx | ---
hide_table_of_contents: true
---
# Docx files
This example goes over how to load data from docx files.
## Setup
```bash npm2yarn
npm install @langchain/community @langchain/core mammoth
```
## Usage
```typescript
import { DocxLoader } from "@langchain/community/document_loaders/fs/docx";
const loader = new DocxLoader(
"src/document_loaders/tests/example_data/attention.docx"
);
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/text.ipynb | import { TextLoader } from "langchain/document_loaders/fs/text"
const loader = new TextLoader("../../../../../../examples/src/document_loaders/example_data/example.txt")
const docs = await loader.load()
docs[0]
console.log(docs[0].metadata) |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/file_loaders/notion_markdown.mdx | ---
hide_table_of_contents: true
---
# Notion markdown export
This example goes over how to load data from your Notion pages exported from the Notion dashboard.
First, export your Notion pages as **Markdown & CSV** as per the official explanation [here](https://www.notion.so/help/export-your-content). Make sure to select `Include subpages` and `Create folders for subpages`.
Then, unzip the downloaded file and move the unzipped folder into your repository. It should contain the markdown files of your pages.
Once the folder is in your repository, simply run the example below:
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/notion_markdown.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
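For reference, an unzipped export typically looks something like the tree below (the page names and hash suffixes here are hypothetical; Notion appends its own identifiers):
```
Export-d3adbeef/
├── My Page abc123.md
├── My Page abc123/
│   ├── Subpage One def456.md
│   └── Subpage Two ghi789.md
└── Some Database jkl012.csv
```
Point the loader at the root of this folder; subpages are picked up from the nested directories.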
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/minimax.mdx | # Minimax
The `MinimaxEmbeddings` class uses the Minimax API to generate embeddings for a given text.
## Setup
To use the Minimax model, you'll need a [Minimax account](https://api.minimax.chat), an [API key](https://api.minimax.chat/user-center/basic-information/interface-key), and a [Group ID](https://api.minimax.chat/user-center/basic-information).
## Usage
```typescript
import { MinimaxEmbeddings } from "langchain/embeddings/minimax";
export const run = async () => {
/* Embed queries */
const embeddings = new MinimaxEmbeddings();
const res = await embeddings.embedQuery("Hello world");
console.log(res);
/* Embed documents */
const documentRes = await embeddings.embedDocuments([
"Hello world",
"Bye bye",
]);
console.log({ documentRes });
};
```
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/alibaba_tongyi.mdx | ---
sidebar_class_name: node-only
---
# Alibaba Tongyi
The `AlibabaTongyiEmbeddings` class uses the Alibaba Tongyi API to generate embeddings for a given text.
## Setup
You'll need to sign up for an Alibaba API key and set it as an environment variable named `ALIBABA_API_KEY`.
Then, you'll need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import AlibabaTongyiExample from "@examples/embeddings/alibaba_tongyi.ts";
<CodeBlock language="typescript">{AlibabaTongyiExample}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/cohere.ipynb | import { CohereEmbeddings } from "@langchain/cohere";
const embeddings = new CohereEmbeddings({
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.COHERE_API_KEY
batchSize: 48, // Default value if omitted is 48. Max value is 96
model: "embed-english-v3.0",
});
import { CohereEmbeddings } from "@langchain/cohere";
import { CohereClient } from "cohere-ai";
const client = new CohereClient({
token: "<your-api-key>",
environment: "<your-cohere-deployment-url>", //optional
// other params
});
const embeddingsWithCustomClient = new CohereEmbeddings({
client,
// other params...
});
// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;
const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/voyageai.mdx | # Voyage AI
The `VoyageEmbeddings` class uses the Voyage AI REST API to generate embeddings for a given text.
The `inputType` parameter allows you to specify the type of input text for better embedding results. You can set it to `query`, `document`, or leave it undefined.
- `query`: Use this for search or retrieval queries. Voyage AI will prepend a prompt to optimize the embeddings for query use cases.
- `document`: Use this for documents or content that you want to be retrievable. Voyage AI will prepend a prompt to optimize the embeddings for document use cases.
- undefined (default): The input text will be encoded directly, without any additional prompt.
```typescript
import { VoyageEmbeddings } from "@langchain/community/embeddings/voyage";
const embeddings = new VoyageEmbeddings({
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.VOYAGEAI_API_KEY
  inputType: "document", // Optional: specify input type as 'query' or 'document', or omit it entirely
});
```
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/jina.mdx | ---
sidebar_label: Jina
---
# Jina Embeddings
The `JinaEmbeddings` class utilizes the Jina API to generate embeddings for given text inputs. This guide will walk you through the setup and usage of the `JinaEmbeddings` class, helping you integrate it into your project seamlessly.
## Installation
Install the `@langchain/community` package as shown below:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm i @langchain/community @langchain/core
```
## Initialization
With this integration, you can use the Jina embeddings model to get embeddings for your text data. Here is the [link](https://jina.ai/embeddings) to the embeddings models.
First, you need to sign up on the Jina website and get the API token from [here](https://jina.ai/embeddings). You can copy model names from the dropdown in the API playground.
To use the `JinaEmbeddings` class, you need an API token from Jina. You can pass this token directly to the constructor or set it as an environment variable (`JINA_API_KEY`).
### Basic Usage
Here’s how to create an instance of `JinaEmbeddings`:
```typescript
import { JinaEmbeddings } from "@langchain/community/embeddings/jina";
const embeddings = new JinaEmbeddings({
apiToken: "YOUR_API_TOKEN",
model: "jina-embeddings-v2-base-en", // Optional, defaults to "jina-embeddings-v2-base-en"
});
```
If the `apiToken` is not provided, it will be read from the `JINA_API_KEY` environment variable.
## Generating Embeddings
### Embedding a Single Query
To generate embeddings for a single text query, use the `embedQuery` method:
```typescript
const embedding = await embeddings.embedQuery(
"What would be a good company name for a company that makes colorful socks?"
);
console.log(embedding);
```
### Embedding Multiple Documents
To generate embeddings for multiple documents, use the `embedDocuments` method.
```typescript
const documents = [
"Document 1 text...",
"Document 2 text...",
"Document 3 text...",
];
const embeddingsArray = await embeddings.embedDocuments(documents);
console.log(embeddingsArray);
```
## Error Handling
If the API token is not provided and cannot be found in the environment variables, an error will be thrown:
```typescript
try {
const embeddings = new JinaEmbeddings();
} catch (error) {
console.error("Jina API token not found");
}
```
## Example
Here’s a complete example of how to set up and use the `JinaEmbeddings` class:
```typescript
import { JinaEmbeddings } from "@langchain/community/embeddings/jina";
const embeddings = new JinaEmbeddings({
apiToken: "YOUR_API_TOKEN",
model: "jina-embeddings-v2-base-en",
});
async function runExample() {
const queryEmbedding = await embeddings.embedQuery("Example query text.");
console.log("Query Embedding:", queryEmbedding);
const documents = ["Text 1", "Text 2", "Text 3"];
const documentEmbeddings = await embeddings.embedDocuments(documents);
console.log("Document Embeddings:", documentEmbeddings);
}
runExample();
```
## Feedback and Support
For feedback or questions, please contact [support@jina.ai](mailto:support@jina.ai).
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/ollama.ipynb | import { OllamaEmbeddings } from "@langchain/ollama";
const embeddings = new OllamaEmbeddings({
model: "mxbai-embed-large", // Default value
baseUrl: "http://localhost:11434", // Default value
});// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;
const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100));
import { OllamaEmbeddings } from "@langchain/ollama";
const embeddingsCustomParams = new OllamaEmbeddings({
requestOptions: {
useMmap: true, // use_mmap 1
numThread: 6, // num_thread 6
numGpu: 1, // num_gpu 1
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/google_generativeai.ipynb | import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
import { TaskType } from "@google/generative-ai";
const embeddings = new GoogleGenerativeAIEmbeddings({
model: "text-embedding-004", // 768 dimensions
taskType: TaskType.RETRIEVAL_DOCUMENT,
title: "Document title",
});// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;
const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/nomic.mdx | ---
sidebar_label: Nomic
---
# Nomic
The `NomicEmbeddings` class uses the Nomic AI API to generate embeddings for a given text.
## Setup
In order to use the Nomic API you'll need an API key.
You can sign up for a Nomic account and create an API key [here](https://atlas.nomic.ai/).
You'll first need to install the [`@langchain/nomic`](https://www.npmjs.com/package/@langchain/nomic) package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/nomic @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import NomicExample from "@examples/models/embeddings/nomic.ts";
<CodeBlock language="typescript">{NomicExample}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/zhipuai.mdx | ---
sidebar_class_name: node-only
---
# ZhipuAI
The `ZhipuAIEmbeddings` class uses the ZhipuAI API to generate embeddings for a given text.
## Setup
You'll need to sign up for a ZhipuAI API key and set it as an environment variable named `ZHIPUAI_API_KEY`. You can create one at https://open.bigmodel.cn.
Then, you'll need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core jsonwebtoken
```
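The `jsonwebtoken` dependency is needed because ZhipuAI authenticates requests with a short-lived token derived from your API key, which takes the form `<key id>.<secret>`. The sketch below illustrates the general idea only; the key value and claim names are assumptions for illustration, not the integration's actual payload:

```typescript
// Illustrative sketch only: ZhipuAI API keys look like "<key id>.<secret>",
// and the integration signs a short-lived JWT from the two parts.
const apiKey = "my-key-id.my-secret"; // hypothetical key
const [keyId, secret] = apiKey.split(".");

// Hypothetical claim names; the integration builds its own payload and
// signs it with `secret` via jsonwebtoken.
const payload = {
  api_key: keyId,
  timestamp: Date.now(),
  exp: Date.now() + 60_000, // token valid for one minute
};

console.log(keyId, secret, payload.exp > payload.timestamp);
// → my-key-id my-secret true
```

The `ZhipuAIEmbeddings` class handles this signing for you; you only need to supply the raw API key.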
## Usage
import CodeBlock from "@theme/CodeBlock";
import ZhipuAIExample from "@examples/embeddings/zhipuai.ts";
<CodeBlock language="typescript">{ZhipuAIExample}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/google_vertex_ai.ipynb | import { VertexAIEmbeddings } from "@langchain/google-vertexai";
// Uncomment the following line if you're running in a web environment:
// import { VertexAIEmbeddings } from "@langchain/google-vertexai-web"
const embeddings = new VertexAIEmbeddings({
model: "text-embedding-004",
// ...
});// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;
const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/fireworks.ipynb | import { FireworksEmbeddings } from "@langchain/community/embeddings/fireworks";
const embeddings = new FireworksEmbeddings({
modelName: "nomic-ai/nomic-embed-text-v1.5",
});// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;
const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/gradient_ai.mdx | ---
sidebar_class_name: node-only
---
# Gradient AI
The `GradientEmbeddings` class uses the Gradient AI API to generate embeddings for a given text.
## Setup
You'll need to install the official Gradient Node SDK as a peer dependency:
```bash npm2yarn
npm i @langchain/community @langchain/core @gradientai/nodejs-sdk
```
You will need to set the following environment variables to use the Gradient AI API.
```
export GRADIENT_ACCESS_TOKEN=<YOUR_ACCESS_TOKEN>
export GRADIENT_WORKSPACE_ID=<YOUR_WORKSPACE_ID>
```
Alternatively, these can be passed when instantiating the class as `gradientAccessKey` and `workspaceId` respectively.
For example:
```typescript
const model = new GradientEmbeddings({
  gradientAccessKey: "My secret Access Token",
  workspaceId: "My secret workspace id",
});
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import GradientEmbeddingsExample from "@examples/embeddings/gradient_ai.ts";
<CodeBlock language="typescript">{GradientEmbeddingsExample}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/tensorflow.mdx | # TensorFlow
This Embeddings integration runs the embeddings entirely in your browser or Node.js environment, using [TensorFlow.js](https://www.tensorflow.org/js). This means that your data isn't sent to any third party, and you don't need to sign up for any API keys. However, it does require more memory and processing power than the other integrations.
```bash npm2yarn
npm install @langchain/community @langchain/core @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow-models/universal-sentence-encoder@1.3.3 @tensorflow/tfjs-backend-cpu
```
```typescript
import "@tensorflow/tfjs-backend-cpu";
import { TensorFlowEmbeddings } from "@langchain/community/embeddings/tensorflow";
const embeddings = new TensorFlowEmbeddings();
```
This example uses the CPU backend, which works in any JS environment. However, you can use any of the backends supported by TensorFlow.js, including GPU and WebAssembly, which will be a lot faster. For Node.js you can use the `@tensorflow/tfjs-node` package, and for the browser you can use the `@tensorflow/tfjs-backend-webgl` package. See the [TensorFlow.js documentation](https://www.tensorflow.org/js/guide/platform_environment) for more information.
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/index.mdx | ---
sidebar_position: 0
sidebar_class_name: hidden
---
# Embeddings
[Embedding models](/docs/concepts/embedding_models) create a vector representation of a piece of text.
This page documents integrations with various model providers that allow you to use embeddings in LangChain.
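As a minimal, provider-agnostic illustration of what these integrations produce: an embedding is just an array of numbers, and retrieval typically ranks texts by cosine similarity between the query vector and each document vector. The sketch below is not tied to any integration on this page:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // → 1
console.log(cosineSimilarity([1, 0], [0, 3])); // → 0
```

In practice you rarely compute this by hand; vector stores such as `MemoryVectorStore` do the ranking for you.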
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
<IndexTable />
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/azure_openai.ipynb | import { AzureOpenAIEmbeddings } from "@langchain/openai";
const embeddings = new AzureOpenAIEmbeddings({
azureOpenAIApiKey: "<your_key>", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: "<your_instance_name>", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiEmbeddingsDeploymentName: "<your_embeddings_deployment_name>", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
azureOpenAIApiVersion: "<api_version>", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
maxRetries: 1,
});// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;
const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100));
import {
DefaultAzureCredential,
getBearerTokenProvider,
} from "@azure/identity";
import { AzureOpenAIEmbeddings } from "@langchain/openai";
const credentials = new DefaultAzureCredential();
const azureADTokenProvider = getBearerTokenProvider(
credentials,
"https://cognitiveservices.azure.com/.default"
);
const modelWithManagedIdentity = new AzureOpenAIEmbeddings({
azureADTokenProvider,
azureOpenAIApiInstanceName: "<your_instance_name>",
azureOpenAIApiEmbeddingsDeploymentName: "<your_embeddings_deployment_name>",
azureOpenAIApiVersion: "<api_version>",
});
import { AzureOpenAIEmbeddings } from "@langchain/openai";
const embeddingsDifferentDomain = new AzureOpenAIEmbeddings({
azureOpenAIApiKey: "<your_key>", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiEmbeddingsDeploymentName: "<your_embedding_deployment_name>", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
azureOpenAIApiVersion: "<api_version>", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
azureOpenAIBasePath:
"https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
});
import { AzureOpenAIEmbeddings } from "@langchain/openai";
const embeddingsWithCustomHeaders = new AzureOpenAIEmbeddings({
azureOpenAIApiKey: "<your_key>",
azureOpenAIApiInstanceName: "<your_instance_name>",
azureOpenAIApiEmbeddingsDeploymentName: "<your_embeddings_deployment_name>",
azureOpenAIApiVersion: "<api_version>",
configuration: {
defaultHeaders: {
"x-custom-header": `SOME_VALUE`,
},
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/transformers.mdx | import CodeBlock from "@theme/CodeBlock";
# HuggingFace Transformers
The `TransformerEmbeddings` class uses the [Transformers.js](https://huggingface.co/docs/transformers.js/index) package to generate embeddings for a given text.
It runs locally and even works directly in the browser, allowing you to create web apps with built-in embeddings.
## Setup
You'll need to install the [@xenova/transformers](https://www.npmjs.com/package/@xenova/transformers) package as a peer dependency:
```bash npm2yarn
npm install @xenova/transformers
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
## Example
Note that if you're running in a browser context, you'll likely want to put all inference-related code in a web worker to avoid
blocking the main thread.
See [this guide](https://huggingface.co/docs/transformers.js/tutorials/next) and the other resources in the Transformers.js docs for an idea of how to
set up your project.
import HFTransformersExample from "@examples/models/embeddings/hf_transformers.ts";
<CodeBlock language="typescript">{HFTransformersExample}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/openai.ipynb | import { OpenAIEmbeddings } from "@langchain/openai";
const embeddings = new OpenAIEmbeddings({
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
batchSize: 512, // Default value if omitted is 512. Max is 2048
model: "text-embedding-3-large",
});// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;
const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100));
import { OpenAIEmbeddings } from "@langchain/openai";
const embeddingsDefaultDimensions = new OpenAIEmbeddings({
model: "text-embedding-3-large",
});
const vectorsDefaultDimensions = await embeddingsDefaultDimensions.embedDocuments(["some text"]);
console.log(vectorsDefaultDimensions[0].length);
import { OpenAIEmbeddings } from "@langchain/openai";
const embeddings1024 = new OpenAIEmbeddings({
model: "text-embedding-3-large",
dimensions: 1024,
});
const vectors1024 = await embeddings1024.embedDocuments(["some text"]);
console.log(vectors1024[0].length);
import { OpenAIEmbeddings } from "@langchain/openai";
const model = new OpenAIEmbeddings({
configuration: {
baseURL: "https://your_custom_url.com",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/pinecone.ipynb | import { PineconeEmbeddings } from "@langchain/pinecone";
const embeddings = new PineconeEmbeddings({
model: "multilingual-e5-large",
});// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;
const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/premai.mdx | ---
sidebar_label: Prem AI
---
# Prem AI
The `PremEmbeddings` class uses the Prem AI API to generate embeddings for a given text.
## Setup
In order to use the Prem API you'll need an API key. You can sign up for a Prem account and create an API key [here](https://app.premai.io/accounts/signup/).
You'll first need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import PremExample from "@examples/embeddings/premai.ts";
<CodeBlock language="typescript">{PremExample}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/mistralai.ipynb | import { MistralAIEmbeddings } from "@langchain/mistralai";
const embeddings = new MistralAIEmbeddings({
model: "mistral-embed", // Default value
});

// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;

const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));

const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100));

const beforeRequestHook = (req: Request): Request | void | Promise<Request | void> => {
// Code to run before a request is processed by Mistral
};
const requestErrorHook = (err: unknown, req: Request): void | Promise<void> => {
// Code to run when an error occurs as Mistral is processing a request
};
const responseHook = (res: Response, req: Request): void | Promise<void> => {
// Code to run before Mistral sends a successful response
};

import { ChatMistralAI } from "@langchain/mistralai"
const modelWithHooks = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0,
maxRetries: 2,
beforeRequestHooks: [ beforeRequestHook ],
requestErrorHooks: [ requestErrorHook ],
responseHooks: [ responseHook ],
// other params...
});

import { ChatMistralAI } from "@langchain/mistralai"
const model = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0,
maxRetries: 2,
// other params...
});
model.beforeRequestHooks = [ ...model.beforeRequestHooks, beforeRequestHook ];
model.requestErrorHooks = [ ...model.requestErrorHooks, requestErrorHook ];
model.responseHooks = [ ...model.responseHooks, responseHook ];
model.addAllHooksToHttpClient();

model.removeHookFromHttpClient(beforeRequestHook);
model.removeAllHooksFromHttpClient(); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/hugging_face_inference.mdx | # HuggingFace Inference
This Embeddings integration uses the HuggingFace Inference API to generate embeddings for a given text. By default it uses the `sentence-transformers/distilbert-base-nli-mean-tokens` model; you can pass a different model name to the constructor to use a different model.
## Setup
You'll first need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package and the required peer dep:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core @huggingface/inference@2
```
## Usage
```typescript
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";
const embeddings = new HuggingFaceInferenceEmbeddings({
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
});
```
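For example, here is a sketch of overriding the default model and embedding a query. The model name below is purely an illustration — any feature-extraction model hosted on the HuggingFace Inference API should work:

```typescript
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";

// The model name here is an example; swap in any embedding model
// hosted on the HuggingFace Inference API.
const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: "YOUR-API-KEY",
  model: "BAAI/bge-base-en-v1.5",
});

const vector = await embeddings.embedQuery("Hello, world!");
console.log(vector.length); // dimensionality depends on the chosen model
```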
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/llama_cpp.mdx | ---
sidebar_class_name: node-only
---
# Llama CPP
:::tip Compatibility
Only available on Node.js.
:::
This module is based on the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) Node.js bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp), allowing you to work with a locally running LLM. You can use a much smaller quantized model capable of running on a laptop, which is ideal for testing and sketching out ideas without running up a bill!
## Setup
You'll need to install major version `3` of the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) module to communicate with your local model.
```bash npm2yarn
npm install -S node-llama-cpp@3
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
You will also need a local Llama 2 model (or a model supported by [node-llama-cpp](https://github.com/withcatai/node-llama-cpp)). You will need to pass the path to this model to the LlamaCpp module as a part of the parameters (see example).
Out-of-the-box, `node-llama-cpp` is tuned for running on macOS with support for the Metal GPU of Apple M-series processors. If you need to turn this off, or need support for the CUDA architecture, refer to the documentation at [node-llama-cpp](https://withcatai.github.io/node-llama-cpp/).
For advice on getting and preparing `llama2` see the documentation for the LLM version of this module.
A note to LangChain.js contributors: if you want to run the tests associated with this module you will need to put the path to your local model in the environment variable `LLAMA_PATH`.
## Usage
### Basic use
We need to provide a path to our local Llama 2 model; note that the `embeddings` property is always set to `true` in this module.
import CodeBlock from "@theme/CodeBlock";
import BasicExample from "@examples/embeddings/llama_cpp_basic.ts";
<CodeBlock language="typescript">{BasicExample}</CodeBlock>
### Document embedding
import DocsExample from "@examples/embeddings/llama_cpp_docs.ts";
<CodeBlock language="typescript">{DocsExample}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/cloudflare_ai.ipynb | // @ts-nocheck
import type {
VectorizeIndex,
Fetcher,
Request,
} from "@cloudflare/workers-types";
import {
CloudflareVectorizeStore,
CloudflareWorkersAIEmbeddings,
} from "@langchain/cloudflare";
export interface Env {
VECTORIZE_INDEX: VectorizeIndex;
AI: Fetcher;
}
export default {
async fetch(request: Request, env: Env) {
const { pathname } = new URL(request.url);
const embeddings = new CloudflareWorkersAIEmbeddings({
binding: env.AI,
model: "@cf/baai/bge-small-en-v1.5",
});
const store = new CloudflareVectorizeStore(embeddings, {
index: env.VECTORIZE_INDEX,
});
if (pathname === "/") {
const results = await store.similaritySearch("hello", 5);
return Response.json(results);
} else if (pathname === "/load") {
// Upsertion by id is supported
await store.addDocuments(
[
{
pageContent: "hello",
metadata: {},
},
{
pageContent: "world",
metadata: {},
},
{
pageContent: "hi",
metadata: {},
},
],
{ ids: ["id1", "id2", "id3"] }
);
return Response.json({ success: true });
} else if (pathname === "/clear") {
await store.delete({ ids: ["id1", "id2", "id3"] });
return Response.json({ success: true });
}
return Response.json({ error: "Not Found" }, { status: 404 });
},
}; |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/tencent_hunyuan.mdx | ---
sidebar_label: Tencent Hunyuan
---
# TencentHunyuan
The `TencentHunyuanEmbeddings` class uses the Tencent Hunyuan API to generate embeddings for a given text.
## Setup
1. Sign up for a Tencent Cloud account [here](https://cloud.tencent.com/register).
2. Create SecretID & SecretKey [here](https://console.cloud.tencent.com/cam/capi).
3. Set SecretID and SecretKey as environment variables named `TENCENT_SECRET_ID` and `TENCENT_SECRET_KEY`, respectively.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
If you are using LangChain.js in a browser environment, you'll also need to install the following dependencies:
```bash npm2yarn
npm install crypto-js
```
And then make sure that you import from the `web` entrypoint as shown below.
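For example, in a browser environment the import would look like this (the exact entrypoint path is an assumption for illustration — check the `@langchain/community` package exports if it differs):

```typescript
// Browser builds should use the `web` entrypoint, which relies on
// crypto-js instead of Node's built-in crypto module.
import { TencentHunyuanEmbeddings } from "@langchain/community/embeddings/tencent_hunyuan/web";

const embeddings = new TencentHunyuanEmbeddings();
```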
## Usage
Here's an example:
import CodeBlock from "@theme/CodeBlock";
import TencentHunyuan from "@examples/models/embeddings/tencent_hunyuan.ts";
<CodeBlock language="typescript">{TencentHunyuan}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/ibm.ipynb | import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";
const embeddings = new WatsonxEmbeddings({
version: "YYYY-MM-DD",
serviceUrl: process.env.API_URL,
projectId: "<PROJECT_ID>",
spaceId: "<SPACE_ID>",
model: "<MODEL_ID>",
});

// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;

const singleVector = await embeddings.embedQuery(text);
singleVector.slice(0, 10);
const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 10));
console.log(vectors[1].slice(0, 10));
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/bedrock.ipynb | import { BedrockEmbeddings } from "@langchain/aws";
const embeddings = new BedrockEmbeddings({
region: process.env.BEDROCK_AWS_REGION!,
credentials: {
accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
},
model: "amazon.titan-embed-text-v1",
});

// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;

const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));

const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100));

import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
import { BedrockEmbeddings } from "@langchain/aws";
const getCredentials = () => {
// do something to get credentials
}
// @lc-ts-ignore
const client = new BedrockRuntimeClient({
region: "us-east-1",
credentials: getCredentials(),
});
const embeddingsWithCustomClient = new BedrockEmbeddings({
client,
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/deepinfra.mdx | ---
sidebar_label: DeepInfra
---
# DeepInfra Embeddings
The `DeepInfraEmbeddings` class utilizes the DeepInfra API to generate embeddings for given text inputs. This guide will walk you through the setup and usage of the `DeepInfraEmbeddings` class, helping you integrate it into your project seamlessly.
## Installation
Install the `@langchain/community` package as shown below:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm i @langchain/community @langchain/core
```
## Initialization
With this integration, you can use the DeepInfra embeddings model to get embeddings for your text data. Here is the [link](https://deepinfra.com/models/embeddings) to the embeddings models.
First, you need to sign up on the DeepInfra website and get the API token from [here](https://deepinfra.com/dash/api_keys). You can copy names from the model cards and start using them in your code.
To use the `DeepInfraEmbeddings` class, you need an API token from DeepInfra. You can pass this token directly to the constructor or set it as an environment variable (`DEEPINFRA_API_TOKEN`).
### Basic Usage
Here’s how to create an instance of `DeepInfraEmbeddings`:
```typescript
import { DeepInfraEmbeddings } from "@langchain/community/embeddings/deepinfra";
const embeddings = new DeepInfraEmbeddings({
apiToken: "YOUR_API_TOKEN",
modelName: "sentence-transformers/clip-ViT-B-32", // Optional, defaults to "sentence-transformers/clip-ViT-B-32"
batchSize: 1024, // Optional, defaults to 1024
});
```
If the `apiToken` is not provided, it will be read from the `DEEPINFRA_API_TOKEN` environment variable.
## Generating Embeddings
### Embedding a Single Query
To generate embeddings for a single text query, use the `embedQuery` method:
```typescript
const embedding = await embeddings.embedQuery(
"What would be a good company name for a company that makes colorful socks?"
);
console.log(embedding);
```
### Embedding Multiple Documents
To generate embeddings for multiple documents, use the `embedDocuments` method. This method will handle batching automatically based on the `batchSize` parameter:
```typescript
const documents = [
"Document 1 text...",
"Document 2 text...",
"Document 3 text...",
];
const embeddingsArray = await embeddings.embedDocuments(documents);
console.log(embeddingsArray);
```
## Customizing Requests
You can customize the base URL the SDK sends requests to by passing a `configuration` parameter:
```typescript
const customEmbeddings = new DeepInfraEmbeddings({
apiToken: "YOUR_API_TOKEN",
configuration: {
baseURL: "https://your_custom_url.com",
},
});
```
This allows you to route requests through a custom endpoint if needed.
## Error Handling
If the API token is not provided and cannot be found in the environment variables, an error will be thrown:
```typescript
try {
const embeddings = new DeepInfraEmbeddings();
} catch (error) {
console.error("DeepInfra API token not found");
}
```
## Example
Here’s a complete example of how to set up and use the `DeepInfraEmbeddings` class:
```typescript
import { DeepInfraEmbeddings } from "@langchain/community/embeddings/deepinfra";
const embeddings = new DeepInfraEmbeddings({
apiToken: "YOUR_API_TOKEN",
modelName: "sentence-transformers/clip-ViT-B-32",
batchSize: 512,
});
async function runExample() {
const queryEmbedding = await embeddings.embedQuery("Example query text.");
console.log("Query Embedding:", queryEmbedding);
const documents = ["Text 1", "Text 2", "Text 3"];
const documentEmbeddings = await embeddings.embedDocuments(documents);
console.log("Document Embeddings:", documentEmbeddings);
}
runExample();
```
## Feedback and Support
For feedback or questions, please contact [feedback@deepinfra.com](mailto:feedback@deepinfra.com).
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/mixedbread_ai.mdx | # Mixedbread AI
The `MixedbreadAIEmbeddings` class uses the [Mixedbread AI](https://mixedbread.ai/) API to generate text embeddings. This guide will walk you through setting up and using the `MixedbreadAIEmbeddings` class, helping you integrate it into your project effectively.
## Installation
To install the `@langchain/mixedbread-ai` package, use the following command:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/mixedbread-ai @langchain/core
```
## Initialization
First, sign up on the Mixedbread AI website and get your API key from [here](https://mixedbread.ai/). You can then use this key to initialize the `MixedbreadAIEmbeddings` class.
You can pass the API key directly to the constructor or set it as an environment variable (`MXBAI_API_KEY`).
### Basic Usage
Here’s how to create an instance of `MixedbreadAIEmbeddings`:
```typescript
import { MixedbreadAIEmbeddings } from "@langchain/mixedbread-ai";
const embeddings = new MixedbreadAIEmbeddings({
apiKey: "YOUR_API_KEY",
// Optionally specify model
// model: "mixedbread-ai/mxbai-embed-large-v1",
});
```
If the `apiKey` is not provided, it will be read from the `MXBAI_API_KEY` environment variable.
## Generating Embeddings
### Embedding a Single Query
To generate embeddings for a single text query, use the `embedQuery` method:
```typescript
const embedding = await embeddings.embedQuery(
"Represent this sentence for searching relevant passages: Is baking fun?"
);
console.log(embedding);
```
### Embedding Multiple Documents
To generate embeddings for multiple documents, use the `embedDocuments` method. This method handles batching automatically based on the `batchSize` parameter:
```typescript
const documents = ["Baking bread is fun", "I love baking"];
const embeddingsArray = await embeddings.embedDocuments(documents);
console.log(embeddingsArray);
```
## Customizing Requests
You can customize the SDK by passing additional parameters.
```typescript
const customEmbeddings = new MixedbreadAIEmbeddings({
apiKey: "YOUR_API_KEY",
baseUrl: "...",
maxRetries: 6,
});
```
## Error Handling
If the API key is not provided and cannot be found in the environment variables, an error will be thrown:
```typescript
try {
const embeddings = new MixedbreadAIEmbeddings();
} catch (error) {
console.error(error);
}
```
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/baidu_qianfan.mdx | # Baidu Qianfan
The `BaiduQianfanEmbeddings` class uses the Baidu Qianfan API to generate embeddings for a given text.
## Setup
An API key is required to use this embedding model. You can get one by registering at https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu.
Please set the acquired API key as an environment variable named `BAIDU_API_KEY`, and set your secret key as an environment variable named `BAIDU_SECRET_KEY`.
Then, you'll need to install the [`@langchain/baidu-qianfan`](https://www.npmjs.com/package/@langchain/baidu-qianfan) package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/baidu-qianfan @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import BaiduQianFanExample from "@examples/embeddings/baidu_qianfan.ts";
<CodeBlock language="typescript">{BaiduQianFanExample}</CodeBlock>
## Related
- Embedding model [conceptual guide](/docs/concepts/embedding_models)
- Embedding model [how-to guides](/docs/how_to/#embedding-models)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/text_embedding/togetherai.ipynb | import { TogetherAIEmbeddings } from "@langchain/community/embeddings/togetherai";
const embeddings = new TogetherAIEmbeddings({
model: "togethercomputer/m2-bert-80M-8k-retrieval", // Default value
});

// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";
const text = "LangChain is the framework for building context-aware reasoning applications";
const vectorstore = await MemoryVectorStore.fromDocuments(
[{ pageContent: text, metadata: {} }],
embeddings,
);
// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);
// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");
retrievedDocuments[0].pageContent;

const singleVector = await embeddings.embedQuery(text);
console.log(singleVector.slice(0, 100));

const text2 = "LangGraph is a library for building stateful, multi-actor applications with LLMs";
const vectors = await embeddings.embedDocuments([text, text2]);
console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/callbacks/upstash_ratelimit_callback.mdx | # Upstash Ratelimit Callback
In this guide, we will go over how to add rate limiting based on the number of requests or tokens using `UpstashRatelimitHandler`. This handler uses [Upstash's ratelimit library](https://github.com/upstash/ratelimit-js/), which utilizes [Upstash Redis](https://upstash.com/docs/redis/overall/getstarted).
Upstash Ratelimit works by sending an HTTP request to Upstash Redis every time the `limit` method is called. The user's remaining tokens/requests are checked and updated. Based on the remaining tokens, we can stop the execution of costly operations, like invoking an LLM or querying a vector store:
```tsx
const response = await ratelimit.limit();
if (response.success) {
execute_costly_operation();
}
```
`UpstashRatelimitHandler` allows you to incorporate this ratelimit logic into your chain in a few minutes.
## Setup
First, you will need to go to [the Upstash Console](https://console.upstash.com/login) and create a redis database ([see our docs](https://upstash.com/docs/redis/overall/getstarted)). After creating a database, you will need to set the environment variables:
```
UPSTASH_REDIS_REST_URL="****"
UPSTASH_REDIS_REST_TOKEN="****"
```
Next, you will need to install Upstash Ratelimit and `@langchain/community`:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @upstash/ratelimit @langchain/community @langchain/core
```
You are now ready to add rate limiting to your chain!
## Ratelimiting Per Request
Let's imagine that we want to allow our users to invoke our chain 10 times per minute. Achieving this is as simple as:
```tsx
const UPSTASH_REDIS_REST_URL = "****";
const UPSTASH_REDIS_REST_TOKEN = "****";
import {
UpstashRatelimitHandler,
UpstashRatelimitError,
} from "@langchain/community/callbacks/handlers/upstash_ratelimit";
import { RunnableLambda } from "@langchain/core/runnables";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
// create ratelimit
const ratelimit = new Ratelimit({
redis: new Redis({
url: UPSTASH_REDIS_REST_URL,
token: UPSTASH_REDIS_REST_TOKEN,
}),
// 10 requests per window, where window size is 60 seconds:
limiter: Ratelimit.fixedWindow(10, "60 s"),
});
// create handler
const user_id = "user_id"; // should be a method which gets the user id
const handler = new UpstashRatelimitHandler(user_id, {
requestRatelimit: ratelimit,
});
// create mock chain
const chain = new RunnableLambda({ func: (str: string): string => str });
try {
const response = await chain.invoke("hello world", {
callbacks: [handler],
});
console.log(response);
} catch (err) {
if (err instanceof UpstashRatelimitError) {
console.log("Handling ratelimit.");
}
}
```
Note that we pass the handler to the `invoke` method instead of passing the handler when defining the chain.
For rate limiting algorithms other than `FixedWindow`, see [upstash-ratelimit docs](https://upstash.com/docs/oss/sdks/ts/ratelimit/algorithms).
Before executing any steps in our pipeline, ratelimit will check whether the user has passed the request limit. If so, `UpstashRatelimitError` is raised.
## Ratelimiting Per Token
Another option is to rate limit chain invocations based on:
1. number of tokens in prompt
2. number of tokens in prompt and LLM completion
This only works if you have an LLM in your chain. Another requirement is that the LLM you are using should return the token usage in its `LLMOutput`. The format of the token usage dictionary returned depends on the LLM. To learn how to configure the handler depending on your LLM, see the end of the Configuration section below.
### How it works
The handler will get the remaining tokens before calling the LLM. If the number of remaining tokens is greater than 0, the LLM will be called. Otherwise `UpstashRatelimitError` will be raised.
After the LLM is called, the token usage information will be subtracted from the user's remaining tokens. No error is raised at this stage of the chain.
### Configuration
For the first configuration, simply initialize the handler like this:
```tsx
const user_id = "user_id"; // should be a method which gets the user id
const handler = new UpstashRatelimitHandler(user_id, {
requestRatelimit: ratelimit,
});
```
For the second configuration, here is how to initialize the handler:
```tsx
const user_id = "user_id"; // should be a method which gets the user id
const handler = new UpstashRatelimitHandler(user_id, {
tokenRatelimit: ratelimit,
});
```
You can also employ rate limiting based on requests and tokens at the same time, simply by passing both the `requestRatelimit` and `tokenRatelimit` parameters.
For token usage to work correctly, the LLM step in LangChain.js should return a token usage field in the following format:
```json
{
"tokenUsage": {
"totalTokens": 123,
"promptTokens": 456,
"otherFields": "..."
},
"otherFields": "..."
}
```
Not all LLMs in LangChain.js comply with this format however. If your LLM returns the same values with different keys, you can use the parameters `llmOutputTokenUsageField`, `llmOutputTotalTokenField` and `llmOutputPromptTokenField` by passing them to the handler:
```tsx
const handler = new UpstashRatelimitHandler(
user_id,
{
requestRatelimit: ratelimit,
llmOutputTokenUsageField: "usage",
llmOutputTotalTokenField: "total",
llmOutputPromptTokenField: "prompt"
}
)
```
Here is an example with a chain utilizing an LLM:
```tsx
const UPSTASH_REDIS_REST_URL = "****";
const UPSTASH_REDIS_REST_TOKEN = "****";
const OPENAI_API_KEY = "****";
import {
UpstashRatelimitHandler,
UpstashRatelimitError,
} from "@langchain/community/callbacks/handlers/upstash_ratelimit";
import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";
import { OpenAI } from "@langchain/openai";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
// create ratelimit
const ratelimit = new Ratelimit({
redis: new Redis({
url: UPSTASH_REDIS_REST_URL,
token: UPSTASH_REDIS_REST_TOKEN,
}),
// 500 tokens per window, where window size is 60 seconds:
limiter: Ratelimit.fixedWindow(500, "60 s"),
});
// create handler
const user_id = "user_id"; // should be a method which gets the user id
const handler = new UpstashRatelimitHandler(user_id, {
tokenRatelimit: ratelimit,
});
// create mock chain
const asStr = new RunnableLambda({ func: (str: string): string => str });
const model = new OpenAI({
apiKey: OPENAI_API_KEY,
});
const chain = RunnableSequence.from([asStr, model]);
// invoke chain with handler:
try {
const response = await chain.invoke("hello world", {
callbacks: [handler],
});
console.log(response);
} catch (err) {
if (err instanceof UpstashRatelimitError) {
console.log("Handling ratelimit.");
}
}
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/callbacks/datadog_tracer.mdx | ---
sidebar_class_name: beta
---
import CodeBlock from "@theme/CodeBlock";
# Datadog LLM Observability
:::warning
LLM Observability is in public beta, and its API is subject to change.
:::
With [Datadog LLM Observability](https://docs.datadoghq.com/llm_observability/), you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
This is an experimental community implementation, and it is not officially supported by Datadog. It is based on the [Datadog LLM Observability API](https://docs.datadoghq.com/llm_observability/api).
## Setup
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
## Usage
import UsageExample from "@examples/callbacks/datadog.ts";
<CodeBlock language="typescript">{UsageExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llm_caching/azure_cosmosdb_nosql.mdx | # Azure Cosmos DB NoSQL Semantic Cache
> The Semantic Cache feature is supported with Azure Cosmos DB for NoSQL integration, enabling users to retrieve cached responses based on semantic similarity between the user input and previously cached results. It leverages [AzureCosmosDBNoSQLVectorStore](/docs/integrations/vectorstores/azure_cosmosdb_nosql), which stores vector embeddings of cached prompts. These embeddings enable similarity-based searches, allowing the system to retrieve relevant cached results.
If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.
## Setup
You'll first need to install the [`@langchain/azure-cosmosdb`](https://www.npmjs.com/package/@langchain/azure-cosmosdb) package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/azure-cosmosdb @langchain/core
```
You'll also need to have an Azure Cosmos DB for NoSQL instance running. You can deploy a free tier instance from the Azure Portal at no cost, following [this guide](https://learn.microsoft.com/azure/cosmos-db/nosql/quickstart-portal).
Once you have your instance running, make sure you have the connection string. If you are using Managed Identity, you need to have the endpoint. You can find them in the Azure Portal, under the "Settings / Keys" section of your instance.
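As a rough sketch, wiring the semantic cache into a chat model could look like the following. The database and container names and constructor options here are assumptions for illustration — see the usage example below for the exact API:

```typescript
import { AzureCosmosDBNoSQLSemanticCache } from "@langchain/azure-cosmosdb";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";

// Hypothetical database/container names; the connection string is assumed to
// be read from the AZURE_COSMOSDB_NOSQL_CONNECTION_STRING environment variable.
const cache = new AzureCosmosDBNoSQLSemanticCache(new OpenAIEmbeddings(), {
  databaseName: "langchainDB",
  containerName: "semanticCacheContainer",
});

// Responses semantically similar to earlier prompts are served from the cache.
const model = new ChatOpenAI({ cache });
```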
import CodeBlock from "@theme/CodeBlock";
:::info
When using Azure Managed Identity and role-based access control, you must ensure that the database and container have been created beforehand. RBAC does not provide permissions to create databases and containers. You can get more information about the permission model in the [Azure Cosmos DB documentation](https://learn.microsoft.com/azure/cosmos-db/how-to-setup-rbac#permission-model).
:::
## Usage example
import Example from "@examples/caches/azure_cosmosdb_nosql/azure_cosmosdb_nosql.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Vector store [conceptual guide](/docs/concepts/#vectorstores)
- Vector store [how-to guides](/docs/how_to/#vectorstores)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llm_caching/index.mdx | ---
sidebar_class_name: hidden
hide_table_of_contents: true
---
# Model caches
[Caching LLM calls](/docs/how_to/chat_model_caching) can be useful for testing, cost savings, and speed.
Below are some integrations that allow you to cache results of individual LLM calls using different caches with different strategies.
import { IndexTable } from "@theme/FeatureTables";
<IndexTable />
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/astradb.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Astra DB Chat Memory
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for Astra DB.
## Setup
You need to install the Astra DB TS client:
```bash npm2yarn
npm install @datastax/astra-db-ts
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
## Configuration and Initialization
There are two ways to initialize your `AstraDBChatMessageHistory`.
If you already have an instance of the `AstraDB` client defined, you can connect to your collection and initialize an instance of the `ChatMessageHistory` using the constructor.
```typescript
const client = new AstraDB(
  process.env.ASTRA_DB_APPLICATION_TOKEN,
  process.env.ASTRA_DB_ENDPOINT,
  process.env.ASTRA_DB_NAMESPACE
);
const collection = await client.collection("YOUR_COLLECTION_NAME");
const chatHistory = new AstraDBChatMessageHistory({
collection,
sessionId: "YOUR_SESSION_ID",
});
```
If you don't already have an instance of an `AstraDB` client you can use the `initialize` method.
```typescript
const chatHistory = await AstraDBChatMessageHistory.initialize({
token: process.env.ASTRA_DB_APPLICATION_TOKEN ?? "token",
endpoint: process.env.ASTRA_DB_ENDPOINT ?? "endpoint",
namespace: process.env.ASTRA_DB_NAMESPACE,
collectionName: "YOUR_COLLECTION_NAME",
sessionId: "YOUR_SESSION_ID",
});
```
## Usage
:::tip Tip
Your collection must already exist
:::
import Example from "@examples/memory/astradb.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/firestore.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Firestore Chat Memory
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for Firestore.
## Setup
First, install the Firebase admin package in your project:
```bash npm2yarn
npm install firebase-admin
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
Visit the `Project Settings` page from your Firebase project and select the `Service accounts` tab.
Inside the `Service accounts` tab, click the `Generate new private key` button inside the `Firebase Admin SDK` section to download a JSON file containing your service account's credentials.
Using the downloaded JSON file, pass in the `projectId`, `privateKey`, and `clientEmail` to the `config` object of the `FirestoreChatMessageHistory` class, like shown below:
```typescript
import { FirestoreChatMessageHistory } from "@langchain/community/stores/message/firestore";
import admin from "firebase-admin";
const messageHistory = new FirestoreChatMessageHistory({
collections: ["chats"],
docs: ["user-id"],
sessionId: "user-id",
userId: "a@example.com",
config: {
projectId: "YOUR-PROJECT-ID",
credential: admin.credential.cert({
projectId: "YOUR-PROJECT-ID",
privateKey:
"-----BEGIN PRIVATE KEY-----\nCHANGE-ME\n-----END PRIVATE KEY-----\n",
clientEmail: "CHANGE-ME@CHANGE-ME-TOO.iam.gserviceaccount.com",
}),
},
});
```
Here, the `collections` field should match the names and ordering of the `collections` in your database.
The same goes for `docs`: it should match the names and ordering of the `docs` in your database.
## Usage
import Example from "@examples/memory/firestore.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
### Nested Collections
The `FirestoreChatMessageHistory` class supports nested collections, and dynamic collection/doc names.
The example below shows how to add and retrieve messages from a database with the following structure:
```
/chats/{chat-id}/bots/{bot-id}/messages/{message-id}
```
import NestedExample from "@examples/memory/firestore_nested.ts";
<CodeBlock language="typescript">{NestedExample}</CodeBlock>
## Firestore Rules
If your collection name is "chathistory," you can configure Firestore rules as follows.
```
match /chathistory/{sessionId} {
allow read: if request.auth.uid == resource.data.createdBy;
allow write: if request.auth.uid == request.resource.data.createdBy;
}
match /chathistory/{sessionId}/messages/{messageId} {
allow read: if request.auth.uid == resource.data.createdBy;
allow write: if request.auth.uid == request.resource.data.createdBy;
}
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/xata.mdx | # Xata Chat Memory
[Xata](https://xata.io) is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a
UI for managing your data.
With the `XataChatMessageHistory` class, you can use Xata databases for longer-term persistence of chat sessions.
Because Xata works via a REST API and has a pure TypeScript SDK, you can use this with [Vercel Edge](https://vercel.com/docs/concepts/functions/edge-functions/edge-runtime), [Cloudflare Workers](https://developers.cloudflare.com/workers/) and any other Serverless environment.
## Setup
### Install the Xata CLI
```bash
npm install @xata.io/cli -g
```
### Create a database to be used as a vector store
In the [Xata UI](https://app.xata.io) create a new database. You can name it whatever you want, but for this example we'll use `langchain`.
When executed for the first time, the Xata LangChain integration will create the table used for storing the chat messages. If a table with that name already exists, it will be left untouched.
### Initialize the project
In your project, run:
```bash
xata init
```
and then choose the database you created above. This will also generate a `xata.ts` or `xata.js` file that defines the client you can use to interact with the database. See the [Xata getting started docs](https://xata.io/docs/getting-started/installation) for more details on using the Xata JavaScript/TypeScript SDK.
## Usage
import CodeBlock from "@theme/CodeBlock";
Each chat history session stored in the Xata database must have a unique id.
In this example, the `getXataClient()` function is used to create a new Xata client based on the environment variables. However, we recommend using the code generated by the `xata init` command, in which case you only need to import the `getXataClient()` function from the generated `xata.ts` file.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
import Example from "@examples/memory/xata.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
### With pre-created table
If you don't want the code to always check if the table exists, you can create the table manually in the Xata UI and pass `createTable: false` to the constructor. The table must have the following columns:
- `sessionId` of type `String`
- `type` of type `String`
- `role` of type `String`
- `content` of type `Text`
- `name` of type `String`
- `additionalKwargs` of type `Text`
import Advanced from "@examples/memory/xata-advanced.ts";
<CodeBlock language="typescript">{Advanced}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/dynamodb.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# DynamoDB-Backed Chat Memory
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a DynamoDB instance.
## Setup
First, install the AWS DynamoDB client in your project:
```bash npm2yarn
npm install @aws-sdk/client-dynamodb
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
Next, sign into your AWS account and create a DynamoDB table. Name the table `langchain`, and name your partition key `id`. Make sure your partition key is a string. You can leave the sort key and other settings alone.
You'll also need to retrieve an AWS access key and secret key for a role or user that has access to the table and add them to your environment variables.
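As a minimal sketch of how these pieces fit together (the session id and region below are placeholders, and the credentials come from the environment variables mentioned above; option names may vary slightly by package version), the history can be constructed like this:

```typescript
import { DynamoDBChatMessageHistory } from "@langchain/community/stores/message/dynamodb";

// Table "langchain" with string partition key "id", as created above.
const history = new DynamoDBChatMessageHistory({
  tableName: "langchain",
  partitionKey: "id",
  sessionId: "my-session-id", // unique per conversation
  config: {
    region: "us-east-1", // placeholder: use your table's region
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
    },
  },
});
```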
## Usage
import Example from "@examples/memory/dynamodb-store.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/ipfs_datastore.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# IPFS Datastore Chat Memory
For a storage backend, you can use the IPFS Datastore Chat Memory to wrap an IPFS datastore, which lets you use any IPFS-compatible datastore.
## Setup
First, install the integration dependencies:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install cborg interface-datastore it-all @langchain/community @langchain/core
```
Now you can install and use an IPFS Datastore of your choice. Here are some options:
- [datastore-core](https://github.com/ipfs/js-stores/blob/main/packages/datastore-core) Datastore in-memory implementation.
- [datastore-fs](https://github.com/ipfs/js-stores/blob/main/packages/datastore-fs) Datastore implementation with file system backend.
- [datastore-idb](https://github.com/ipfs/js-stores/blob/main/packages/datastore-idb) Datastore implementation with IndexedDB backend.
- [datastore-level](https://github.com/ipfs/js-stores/blob/main/packages/datastore-level) Datastore implementation with level(up|down) backend.
- [datastore-s3](https://github.com/ipfs/js-stores/blob/main/packages/datastore-s3) Datastore implementation backed by s3.
## Usage
```typescript
// Replace FsDatastore with the IPFS Datastore of your choice.
import { FsDatastore } from "datastore-fs";
import { IPFSDatastoreChatMessageHistory } from "@langchain/community/stores/message/ipfs_datastore";
const datastore = new FsDatastore("path/to/store");
const sessionId = "my-session";
const history = new IPFSDatastoreChatMessageHistory({ datastore, sessionId });
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/azure_cosmosdb_nosql.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Azure Cosmos DB NoSQL Chat Message History
The `AzureCosmosDBNoSQLChatMessageHistory` class uses Cosmos DB to store chat message history. For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory`.
If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.
## Setup
You'll first need to install the [`@langchain/azure-cosmosdb`](https://www.npmjs.com/package/@langchain/azure-cosmosdb) package:
```bash npm2yarn
npm install @langchain/azure-cosmosdb @langchain/core
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
You'll also need to have an Azure Cosmos DB for NoSQL instance running. You can deploy a free version on Azure Portal without any cost, following [this guide](https://learn.microsoft.com/azure/cosmos-db/nosql/quickstart-portal).
Once you have your instance running, make sure you have the connection string. If you are using Managed Identity, you need to have the endpoint. You can find them in the Azure Portal, under the "Settings / Keys" section of your instance.
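As a rough sketch — assuming the connection string is exposed via the `AZURE_COSMOSDB_NOSQL_CONNECTION_STRING` environment variable and using illustrative ids; option names may differ slightly by package version — construction looks like this:

```typescript
import { AzureCosmosDBNoSQLChatMessageHistory } from "@langchain/azure-cosmosdb";

// Reads the connection string from the environment when no explicit
// client or endpoint is provided.
const history = new AzureCosmosDBNoSQLChatMessageHistory({
  sessionId: "my-session-id", // unique per conversation
  userId: "my-user-id", // illustrative: scope histories per user
});
```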
:::info
When using Azure Managed Identity and role-based access control, you must ensure that the database and container have been created beforehand. RBAC does not provide permissions to create databases and containers. You can get more information about the permission model in the [Azure Cosmos DB documentation](https://learn.microsoft.com/azure/cosmos-db/how-to-setup-rbac#permission-model).
:::
## Usage
import Example from "@examples/memory/azure_cosmosdb_nosql.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/convex.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Convex Chat Memory
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for [Convex](https://convex.dev/).
## Setup
### Create project
Get a working [Convex](https://docs.convex.dev/) project set up, for example by using:
```bash
npm create convex@latest
```
### Add database accessors
Add query and mutation helpers to `convex/langchain/db.ts`:
```ts title="convex/langchain/db.ts"
export * from "@langchain/community/utils/convex";
```
### Configure your schema
Set up your schema (for indexing):
```ts title="convex/schema.ts"
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";
export default defineSchema({
messages: defineTable({
sessionId: v.string(),
message: v.object({
type: v.string(),
data: v.object({
content: v.string(),
role: v.optional(v.string()),
name: v.optional(v.string()),
additional_kwargs: v.optional(v.any()),
}),
}),
}).index("bySessionId", ["sessionId"]),
});
```
## Usage
Each chat history session stored in Convex must have a unique session id.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
import Example from "@examples/memory/convex/convex.ts";
<CodeBlock language="typescript" title="convex/myActions.ts">
{Example}
</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/momento.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Momento-Backed Chat Memory
For distributed, serverless persistence across chat sessions, you can swap in a [Momento](https://gomomento.com/)-backed chat message history.
Because a Momento cache is instantly available and requires zero infrastructure maintenance, it's a great way to get started with chat history whether building locally or in production.
## Setup
You will need to install the [Momento Client Library](https://github.com/momentohq/client-sdk-javascript) in your project. Given Momento's compatibility with Node.js, browser, and edge environments, ensure you install the relevant package.
To install for **Node.js**:
```bash npm2yarn
npm install @gomomento/sdk
```
To install for **browser/edge workers**:
```bash npm2yarn
npm install @gomomento/sdk-web
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
You will also need an API key from [Momento](https://gomomento.com/). You can sign up for a free account [here](https://console.gomomento.com/).
## Usage
To distinguish one chat history session from another, we need a unique `sessionId`. You may also provide an optional `sessionTtl` to make sessions expire after a given number of seconds.
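As a minimal sketch (the cache name and session id below are placeholders, and the API key is read from an environment variable; exact option names may vary by SDK version), the two parameters look like this:

```typescript
import {
  CacheClient,
  Configurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { MomentoChatMessageHistory } from "@langchain/community/stores/message/momento";

const client = new CacheClient({
  configuration: Configurations.Laptop.v1(),
  credentialProvider: CredentialProvider.fromEnvironmentVariable({
    environmentVariableName: "MOMENTO_API_KEY",
  }),
  defaultTtlSeconds: 60 * 60 * 24,
});

const history = await MomentoChatMessageHistory.fromProps({
  client,
  cacheName: "langchain", // placeholder cache name
  sessionId: "my-session-id", // unique per conversation
  sessionTtl: 300, // optional: expire the session after 5 minutes
});
```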
import MomentoExample from "@examples/memory/momento.ts";
<CodeBlock language="typescript">{MomentoExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/motorhead_memory.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Motörhead Memory
[Motörhead](https://github.com/getmetal/motorhead) is a memory server implemented in Rust. It automatically handles incremental summarization in the background and allows for stateless applications.
## Setup
See instructions at [Motörhead](https://github.com/getmetal/motorhead) for running the server locally, or https://getmetal.io to get API keys for the hosted version.
## Usage
import Example from "@examples/memory/motorhead.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/cassandra.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Cassandra Chat Memory
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a Cassandra cluster.
## Setup
First, install the Cassandra Node.js driver:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install cassandra-driver @langchain/openai @langchain/community @langchain/core
```
Depending on your database providers, the specifics of how to connect to the database will vary. We will create a document `configConnection` which will be used as part of the vector store configuration.
### Apache Cassandra®
```typescript
const configConnection = {
contactPoints: ['h1', 'h2'],
localDataCenter: 'datacenter1',
credentials: {
username: <...> as string,
password: <...> as string,
},
};
```
### Astra DB
Astra DB is a cloud-native Cassandra-as-a-Service platform.
1. Create an [Astra DB account](https://astra.datastax.com/register).
2. Create a [vector enabled database](https://astra.datastax.com/createDatabase).
3. Create a [token](https://docs.datastax.com/en/astra/docs/manage-application-tokens.html) for your database.
```typescript
const configConnection = {
serviceProviderArgs: {
astra: {
token: <...> as string,
endpoint: <...> as string,
},
},
};
```
Instead of `endpoint:`, you may provide the property `datacenterID:` and, optionally, `regionName:`.
## Usage
import Example from "@examples/memory/cassandra-store.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/mongodb.mdx | ---
hide_table_of_contents: true
sidebar_class_name: node-only
---
import CodeBlock from "@theme/CodeBlock";
# MongoDB Chat Memory
:::tip Compatibility
Only available on Node.js.
You can still create API routes that use MongoDB with Next.js by setting the `runtime` variable to `nodejs` like so:
```typescript
export const runtime = "nodejs";
```
You can read more about Edge runtimes in the Next.js documentation [here](https://nextjs.org/docs/app/building-your-application/rendering/edge-and-nodejs-runtimes).
:::
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a MongoDB instance.
## Setup
You need to install Node MongoDB SDK in your project:
```bash npm2yarn
npm install -S mongodb
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
You will also need a MongoDB instance to connect to.
## Usage
Each chat history session stored in MongoDB must have a unique session id.
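As a minimal sketch (database and collection names are placeholders; the import path may differ between package versions), generating a unique session id and wiring up the history might look like this:

```typescript
import { MongoClient, ObjectId } from "mongodb";
import { MongoDBChatMessageHistory } from "@langchain/community/stores/message/mongodb";

const client = new MongoClient(
  process.env.MONGODB_ATLAS_URI ?? "mongodb://localhost:27017"
);
await client.connect();
const collection = client.db("langchain").collection("memory");

// Each session needs a unique id; an ObjectId string works well.
const sessionId = new ObjectId().toString();
const history = new MongoDBChatMessageHistory({ collection, sessionId });
```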
import Example from "@examples/memory/mongodb.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/zep_memory_cloud.mdx | ---
hide_table_of_contents: true
---
# Zep Cloud Memory
> Recall, understand, and extract data from chat histories. Power personalized AI experiences.
> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.
> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,
> while also reducing hallucinations, latency, and cost.
## How Zep Cloud works
Zep persists and recalls chat histories, and automatically generates summaries and other artifacts from these chat histories.
It also embeds messages and summaries, enabling you to search Zep for relevant context from past conversations.
Zep does all of this asynchronously, ensuring these operations don't impact your user's chat experience.
Data is persisted to a database, allowing you to scale out when growth demands.
Zep also provides a simple, easy to use abstraction for document vector search called Document Collections.
This is designed to complement Zep's core memory features, but is not designed to be a general purpose vector database.
Zep allows you to be more intentional about constructing your prompt:
- automatically adding a few recent messages, with the number customized for your app;
- including a summary of recent conversations prior to the messages above;
- and/or surfacing contextually relevant summaries or messages from the entire chat session;
- and/or including relevant business data from Zep Document Collections.
Zep Cloud offers:
- **Fact Extraction**: Automatically build fact tables from conversations, without having to define a data schema upfront.
- **Dialog Classification**: Instantly and accurately classify chat dialog. Understand user intent and emotion, segment users, and more. Route chains based on semantic context, and trigger events.
- **Structured Data Extraction**: Quickly extract business data from chat conversations using a schema you define. Understand what your Assistant should ask for next in order to complete its task.
## Installation
Sign up for [Zep Cloud](https://app.getzep.com/) and create a project.
Follow the [Zep Cloud Typescript SDK Installation Guide](https://help.getzep.com/sdks) to install and get started with Zep.
You'll need your Zep Cloud Project API Key to use the Zep Cloud Memory. See the [Zep Cloud docs](https://help.getzep.com/projects) for more information.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @getzep/zep-cloud @langchain/openai @langchain/community @langchain/core
```
### ZepCloudChatMessageHistory + RunnableWithMessageHistory usage
import CodeBlock from "@theme/CodeBlock";
import ZepCloudMessageHistoryExample from "@examples/guides/expression_language/zep/zep_cloud_message_history.ts";
<CodeBlock language="typescript">{ZepCloudMessageHistoryExample}</CodeBlock>
### ZepCloudChatMessageHistory + RunnableWithMessageHistory + ZepVectorStore (as retriever) usage
import ZepCloudMessageHistoryWithVectorStoreExample from "@examples/guides/expression_language/zep/zep_cloud_message_history_vector_store.ts";
<CodeBlock language="typescript">
{ZepCloudMessageHistoryWithVectorStoreExample}
</CodeBlock>
### Memory Usage
import ZepCloudMemoryExample from "@examples/memory/zep_cloud.ts";
<CodeBlock language="typescript">{ZepCloudMemoryExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/file.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# File Chat Message History
The `FileChatMessageHistory` uses a JSON file to store chat message history. For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory`.
## Setup
You'll first need to install the [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) package:
```bash npm2yarn
npm install @langchain/community @langchain/core
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
## Usage
import Example from "@examples/memory/file.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/upstash_redis.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Upstash Redis-Backed Chat Memory
Because Upstash Redis works via a REST API, you can use this with [Vercel Edge](https://vercel.com/docs/concepts/functions/edge-functions/edge-runtime), [Cloudflare Workers](https://developers.cloudflare.com/workers/) and other Serverless environments.
This integration is based on the [Redis-Backed Chat Memory](/docs/integrations/memory/redis) integration.
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for an Upstash [Redis](https://redis.io/) instance.
## Setup
You will need to install [@upstash/redis](https://github.com/upstash/upstash-redis) in your project:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core @upstash/redis
```
You will also need an Upstash account and a Redis database to connect to. See the instructions in the [Upstash docs](https://docs.upstash.com/redis) on how to create an HTTP client.
## Usage
Each chat history session stored in Redis must have a unique id. You can provide an optional `sessionTTL` to make sessions expire after a given number of seconds.
The `config` parameter is passed directly into the `new Redis()` constructor of [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview), and takes all the same arguments.
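As a minimal sketch (session id is a placeholder; the REST URL and token come from your Upstash console via environment variables), the two parameters look like this:

```typescript
import { UpstashRedisChatMessageHistory } from "@langchain/community/stores/message/upstash_redis";

const history = new UpstashRedisChatMessageHistory({
  sessionId: "my-session-id", // unique per conversation
  sessionTTL: 300, // optional: expire after 5 minutes
  config: {
    // Passed straight through to `new Redis()` from @upstash/redis.
    url: process.env.UPSTASH_REDIS_REST_URL!,
    token: process.env.UPSTASH_REDIS_REST_TOKEN!,
  },
});
```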
import Example from "@examples/memory/upstash_redis.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Advanced Usage
You can also directly pass in a previously created [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview) client instance:
import AdvancedExample from "@examples/memory/upstash_redis_advanced.ts";
<CodeBlock language="typescript">{AdvancedExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/postgres.mdx | ---
hide_table_of_contents: true
sidebar_class_name: node-only
---
import CodeBlock from "@theme/CodeBlock";
# Postgres Chat Memory
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` for a [Postgres](https://www.postgresql.org/) Database.
## Setup
First install the [node-postgres](https://node-postgres.com/) package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core pg
```
## Usage
Each chat history session is stored in a Postgres database and requires a session id.
The connection to Postgres is handled through a pool. You can either pass an instance of a pool via the `pool` parameter or pass a pool config via the `poolConfig` parameter. See the [node-postgres docs on pools](https://node-postgres.com/apis/pool)
for more information. A provided pool takes precedence; if both a pool instance and a pool config are passed, only the pool will be used.
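As a minimal sketch of the two options (credentials and database names below are placeholders; option names may vary slightly by package version):

```typescript
import pg from "pg";
import { PostgresChatMessageHistory } from "@langchain/community/stores/message/postgres";

// Option 1: let the integration create the pool from a config.
const historyFromConfig = new PostgresChatMessageHistory({
  sessionId: "my-session-id", // unique per conversation
  poolConfig: {
    host: "127.0.0.1",
    port: 5432,
    user: "myuser", // placeholder credentials
    password: "ChangeMe",
    database: "api",
  },
});

// Option 2: pass an existing pool (used in preference to any config).
const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
const historyFromPool = new PostgresChatMessageHistory({
  sessionId: "my-session-id",
  pool,
});
```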
import Example from "@examples/memory/postgres.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/redis.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Redis-Backed Chat Memory
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a [Redis](https://redis.io/) instance.
## Setup
You will need to install [node-redis](https://github.com/redis/node-redis) in your project:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core redis
```
You will also need a Redis instance to connect to. See instructions on [the official Redis website](https://redis.io/docs/getting-started/) for running the server locally.
## Usage
Each chat history session stored in Redis must have a unique id. You can provide an optional `sessionTTL` to make sessions expire after a given number of seconds.
The `config` parameter is passed directly into the `createClient` method of [node-redis](https://github.com/redis/node-redis), and takes all the same arguments.
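As a minimal sketch (session id and URL are placeholders), the parameters described above look like this:

```typescript
import { RedisChatMessageHistory } from "@langchain/community/stores/message/redis";

const history = new RedisChatMessageHistory({
  sessionId: "my-session-id", // unique per conversation
  sessionTTL: 300, // optional: expire after 5 minutes
  config: {
    // Passed straight through to node-redis's `createClient`.
    url: "redis://localhost:6379",
  },
});
```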
import Example from "@examples/memory/redis.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Advanced Usage
You can also directly pass in a previously created [node-redis](https://github.com/redis/node-redis) client instance:
import AdvancedExample from "@examples/memory/redis-advanced.ts";
<CodeBlock language="typescript">{AdvancedExample}</CodeBlock>
### Redis Sentinel Support
You can enable Redis Sentinel-backed chat memory using [ioredis](https://github.com/redis/ioredis).
This requires installing [ioredis](https://github.com/redis/ioredis) in your project.
```bash npm2yarn
npm install ioredis
```
import RedisSentinel from "@examples/memory/redis-sentinel.ts";
<CodeBlock language="typescript">{RedisSentinel}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/planetscale.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# PlanetScale Chat Memory
Because PlanetScale works via a REST API, you can use this with [Vercel Edge](https://vercel.com/docs/concepts/functions/edge-functions/edge-runtime), [Cloudflare Workers](https://developers.cloudflare.com/workers/) and other Serverless environments.
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a PlanetScale [database](https://planetscale.com/) instance.
## Setup
You will need to install [@planetscale/database](https://github.com/planetscale/database-js) in your project:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @planetscale/database @langchain/community @langchain/core
```
You will also need a PlanetScale account and a database to connect to. See instructions on [PlanetScale Docs](https://planetscale.com/docs) on how to create an HTTP client.
## Usage
Each chat history session stored in PlanetScale database must have a unique id.
The `config` parameter is passed directly into the `new Client()` constructor of [@planetscale/database](https://planetscale.com/docs/tutorials/planetscale-serverless-driver), and takes all the same arguments.
import Example from "@examples/memory/planetscale.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Advanced Usage
You can also directly pass in a previously created [@planetscale/database](https://planetscale.com/docs/tutorials/planetscale-serverless-driver) client instance:
import AdvancedExample from "@examples/memory/planetscale_advanced.ts";
<CodeBlock language="typescript">{AdvancedExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/zep_memory.mdx | ---
hide_table_of_contents: true
---
# Zep Open Source Memory
> Recall, understand, and extract data from chat histories. Power personalized AI experiences.
> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.
> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,
> while also reducing hallucinations, latency, and cost.
## How Zep works
Zep persists and recalls chat histories, and automatically generates summaries and other artifacts from these chat histories.
It also embeds messages and summaries, enabling you to search Zep for relevant context from past conversations.
Zep does all of this asynchronously, ensuring these operations don't impact your user's chat experience.
Data is persisted to a database, allowing you to scale out as growth demands.
Zep also provides a simple, easy-to-use abstraction for document vector search called Document Collections.
This is designed to complement Zep's core memory features, but is not designed to be a general purpose vector database.
Zep allows you to be more intentional about constructing your prompt:
- automatically adding a few recent messages, with the number customized for your app;
- a summary of recent conversations prior to the messages above;
- and/or contextually relevant summaries or messages surfaced from the entire chat session;
- and/or relevant business data from Zep Document Collections.
> Interested in Zep Cloud? See [Zep Cloud Installation Guide](https://help.getzep.com/sdks)
## Setup
See the instructions from [Zep Open Source](https://github.com/getzep/zep) for running the server locally or through an automated hosting provider.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/memory/zep.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/memory/cloudflare_d1.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Cloudflare D1-Backed Chat Memory
:::info
This integration is only supported in Cloudflare Workers.
:::
For longer-term persistence across chat sessions, you can swap out the default in-memory `chatHistory` that backs chat memory classes like `BufferMemory` for a Cloudflare D1 instance.
## Setup
You'll need to install the LangChain Cloudflare integration package.
For the below example, we also use Anthropic, but you can use any model you'd like:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/cloudflare @langchain/anthropic @langchain/core
```
Set up a D1 instance for your worker by following [the official documentation](https://developers.cloudflare.com/d1/). Your project's `wrangler.toml` file should
look something like this:
```toml
name = "YOUR_PROJECT_NAME"
main = "src/index.ts"
compatibility_date = "2024-01-10"
[vars]
ANTHROPIC_API_KEY = "YOUR_ANTHROPIC_KEY"
[[d1_databases]]
binding = "DB" # available in your Worker as env.DB
database_name = "YOUR_D1_DB_NAME"
database_id = "YOUR_D1_DB_ID"
```
## Usage
You can then use D1 to store your history as follows:
import Example from "@examples/memory/cloudflare_d1.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting/errors/INVALID_TOOL_RESULTS.ipynb | import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { BaseMessageLike } from "@langchain/core/messages";
const model = new ChatOpenAI({
model: "gpt-4o-mini",
});
const dummyTool = tool(
async () => {
return "action complete!";
},
{
name: "foo",
schema: z.object({}),
}
);
const modelWithTools = model.bindTools([dummyTool]);
const chatHistory: BaseMessageLike[] = [
{
role: "user",
content: `Call tool "foo" twice with no arguments`,
},
];
const responseMessage = await modelWithTools.invoke(chatHistory);
console.log(responseMessage);

const toolResponse1 = await dummyTool.invoke(responseMessage.tool_calls![0]);
chatHistory.push(responseMessage);
chatHistory.push(toolResponse1);
await modelWithTools.invoke(chatHistory);

const toolResponse2 = await dummyTool.invoke(responseMessage.tool_calls![1]);
chatHistory.push(toolResponse2);
await modelWithTools.invoke(chatHistory);

const duplicateToolResponse2 = await dummyTool.invoke(responseMessage.tool_calls![1]);
chatHistory.push(duplicateToolResponse2);
await modelWithTools.invoke(chatHistory);

await modelWithTools.invoke([
  {
    role: "tool",
    content: "action completed!",
    tool_call_id: "dummy",
  },
]);
0 | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting/errors/MODEL_NOT_FOUND.mdx | # MODEL_NOT_FOUND
The model name you have specified is not recognized by your provider.
## Troubleshooting
The following may help resolve this error:
- Double check the model string you are passing in.
- If you are using a proxy or other alternative host with a model wrapper, confirm that the permitted model names are not restricted or altered.
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting/errors/MESSAGE_COERCION_FAILURE.mdx | # MESSAGE_COERCION_FAILURE
Several modules in LangChain take [`MessageLike`](https://api.js.langchain.com/types/_langchain_core.messages.BaseMessageLike.html)
objects in place of formal [`BaseMessage`](/docs/concepts/messages) classes. These include OpenAI style message objects (`{ role: "user", content: "Hello world!" }`),
tuples, and plain strings (which are converted to [`HumanMessages`](/docs/concepts/messages/#humanmessage)).
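For reference, all of the following shapes are accepted message-likes (a minimal sketch of the formats listed above):

```typescript
// Each entry is a valid MessageLike: an OpenAI-style object,
// a [role, content] tuple, and a plain string (coerced to a HumanMessage).
const messageLikes = [
  { role: "user", content: "Hello world!" },
  ["human", "Hello world!"],
  "Hello world!",
];
```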
If one of these modules receives a value outside of one of these formats, you will receive an error like the following:
```ts
const badlyFormattedMessageObject = {
role: "foo",
randomNonContentValue: "bar",
};
await model.invoke([badlyFormattedMessageObject]);
```
```
Error: Unable to coerce message from array: only human, AI, system, or tool message coercion is currently supported.
Received: {
"role": "foo",
"randomNonContentValue": "bar",
}
```
## Troubleshooting
The following may help resolve this error:
- Ensure that all inputs to chat models are an array of LangChain message classes or a supported message-like.
- Check that there is no stringification or other unexpected transformation occurring.
- Check the error's stack trace and add log or debugger statements.
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting/errors/index.mdx | # Error reference
This page contains guides around resolving common errors you may find while building with LangChain.
When thrown in code, the errors referenced below will have an `lc_error_code` property corresponding to one of the following codes.
- [INVALID_PROMPT_INPUT](/docs/troubleshooting/errors/INVALID_PROMPT_INPUT)
- [INVALID_TOOL_RESULTS](/docs/troubleshooting/errors/INVALID_TOOL_RESULTS)
- [MESSAGE_COERCION_FAILURE](/docs/troubleshooting/errors/MESSAGE_COERCION_FAILURE)
- [MODEL_AUTHENTICATION](/docs/troubleshooting/errors/MODEL_AUTHENTICATION)
- [MODEL_NOT_FOUND](/docs/troubleshooting/errors/MODEL_NOT_FOUND)
- [MODEL_RATE_LIMIT](/docs/troubleshooting/errors/MODEL_RATE_LIMIT)
- [OUTPUT_PARSING_FAILURE](/docs/troubleshooting/errors/OUTPUT_PARSING_FAILURE)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting/errors/OUTPUT_PARSING_FAILURE.mdx | # OUTPUT_PARSING_FAILURE
An [output parser](/docs/concepts/output_parsers) was unable to handle model output as expected.
To illustrate this, let's say you have an output parser that expects a chat model to output JSON surrounded by a markdown code tag (triple backticks). Here would be an example of good input:
````ts
AIMessage {
content: "```\n{\"foo\": \"bar\"}\n```"
}
````
Internally, our output parser might try to strip out the markdown fence and newlines and then run `JSON.parse()`.
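As a rough sketch, that internal step might look like the following (the helper below is hypothetical, not LangChain's actual parser):

```typescript
// Hypothetical sketch of the internal parsing step: strip the markdown
// fence and surrounding newlines, then parse the remainder as JSON.
function parseJsonMarkdown(text: string): unknown {
  const stripped = text
    .replace(/^\s*```(?:json)?\s*/, "")
    .replace(/\s*```\s*$/, "");
  // Throws on malformed JSON, which surfaces as an OUTPUT_PARSING_FAILURE
  return JSON.parse(stripped);
}

console.log(parseJsonMarkdown('```\n{"foo": "bar"}\n```')); // { foo: 'bar' }
```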
If instead the chat model generated an output with malformed JSON like this:
````ts
AIMessage {
content: "```\n{\"foo\":\n```"
}
````
When our output parser attempts to parse this, the `JSON.parse()` call will fail.
Note that some prebuilt constructs like [legacy LangChain agents](/docs/how_to/agent_executor) and chains may use output parsers internally,
so you may see this error even if you're not visibly instantiating and using an output parser.
## Troubleshooting
The following may help resolve this error:
- Consider using [tool calling or other structured output techniques](/docs/how_to/structured_output/) if possible without an output parser to reliably output parseable values.
- If you are using a prebuilt chain or agent, use [LangGraph](https://langchain-ai.github.io/langgraphjs/) to compose your logic explicitly instead.
- Add more precise formatting instructions to your prompt. In the above example, adding `"You must always return valid JSON fenced by a markdown code block. Do not return any additional text."` to your input may help steer the model to returning the expected format.
- If you are using a smaller or less capable model, try using a more capable one.
- Add [LLM-powered retries](/docs/how_to/output_parser_fixing/).
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting/errors/MODEL_RATE_LIMIT.mdx | # MODEL_RATE_LIMIT
You have hit the maximum number of requests that a model provider allows over a given time period and are being temporarily blocked.
Generally, this error is temporary and your limit will reset after a certain amount of time.
## Troubleshooting
The following may help resolve this error:
- Contact your model provider and ask for a rate limit increase.
- If many of your incoming requests are the same, utilize [model response caching](/docs/how_to/chat_model_caching/).
- Spread requests across different providers if your application allows it.
- Set a higher number of [max retries](https://api.js.langchain.com/interfaces/_langchain_core.language_models_base.BaseLanguageModelParams.html#maxRetries) when initializing your model.
LangChain will use an exponential backoff strategy for requests that fail in this way, so the retry may occur when your limits have reset.
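The general idea behind that strategy can be sketched as follows (a simplified illustration, not LangChain's actual implementation):

```typescript
// Simplified exponential backoff: retry a failing async call, doubling
// the wait between attempts, until maxRetries is exhausted.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 6
): Promise<T> {
  for (let attempt = 0; ; attempt += 1) {
    try {
      return await fn();
    } catch (e) {
      if (attempt >= maxRetries) throw e; // give up after the final retry
      // Wait 2^attempt * 100ms before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 100));
    }
  }
}
```

Each retry waits twice as long as the previous one, which gives a provider's rate-limit window time to reset before the request is attempted again.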
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx | # INVALID_PROMPT_INPUT
A [prompt template](/docs/concepts/prompt_templates) received missing or invalid input variables.
One unexpected way this can occur is if you add a JSON object directly into a prompt template:
```ts
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
const prompt = PromptTemplate.fromTemplate(`You are a helpful assistant.
Here is an example of how you should respond:
{
"firstName": "John",
"lastName": "Doe",
"age": 21
}
Now, answer the following question:
{question}`);
```
You might think that the above prompt template should require a single input key named `question`, but the JSON object will be
interpreted as additional variables because the curly braces (`{`) are not escaped. To escape a literal brace, double it (`{{`), like this:
```ts
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
const prompt = PromptTemplate.fromTemplate(`You are a helpful assistant.
Here is an example of how you should respond:
{{
"firstName": "John",
"lastName": "Doe",
"age": 21
}}
Now, answer the following question:
{question}`);
```
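To see why the doubled braces work, here is a rough sketch of `{var}` substitution with `{{`/`}}` escaping (a hypothetical formatter that mimics the behavior described above, not LangChain's real template parser):

```typescript
// Hypothetical sketch: substitute {name} variables while treating
// doubled braces ({{ and }}) as escaped literal braces.
function formatTemplate(
  template: string,
  values: Record<string, string>
): string {
  return template
    .replace(/{{/g, "\u0000") // protect escaped opening braces
    .replace(/}}/g, "\u0001") // protect escaped closing braces
    .replace(/{(\w+)}/g, (_, name) => {
      if (!(name in values)) {
        throw new Error(`Missing input variable: ${name}`);
      }
      return values[name];
    })
    .replace(/\u0000/g, "{")
    .replace(/\u0001/g, "}");
}

console.log(formatTemplate('{{"age": 21}} Q: {question}', { question: "Hi?" }));
// {"age": 21} Q: Hi?
```

With single braces, `{"age": 21}` would instead be treated as a variable reference and raise a missing-variable error.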
## Troubleshooting
The following may help resolve this error:
- Double-check your prompt template to ensure that it is correct.
- If you are using default formatting and you are using curly braces `{` anywhere in your template, they should be double escaped like this: `{{`, as shown above.
- If you are using a [`MessagesPlaceholder`](/docs/concepts/prompt_templates/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects.
- If you are using shorthand tuples to declare your prompt template, make sure that the variable name is wrapped in curly braces (`["placeholder", "{messages}"]`).
- Try viewing the inputs into your prompt template using [LangSmith](https://docs.smith.langchain.com/) or log statements to confirm they appear as expected.
- If you are pulling a prompt from the [LangChain Prompt Hub](https://smith.langchain.com/prompts), try pulling and logging it or running it in isolation with a sample input to confirm that it is what you expect.
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting | lc_public_repos/langchainjs/docs/core_docs/docs/troubleshooting/errors/MODEL_AUTHENTICATION.mdx | # MODEL_AUTHENTICATION
Your model provider is denying you access to their service.
## Troubleshooting
The following may help resolve this error:
- Confirm that your API key or other credentials are correct.
- If you are relying on an environment variable to authenticate, confirm that the variable name is correct and that it has a value set.
- Note that some environments, like Cloudflare Workers, do not support environment variables.
- For some models, you can try explicitly passing an `apiKey` parameter to rule out any environment variable issues like this:
```ts
const model = new ChatOpenAI({
apiKey: "YOUR_KEY_HERE",
});
```
- If you are using a proxy or other custom endpoint, make sure that your custom provider does not expect an alternative authentication scheme.
|
0 | lc_public_repos/langchainjs/docs/core_docs | lc_public_repos/langchainjs/docs/core_docs/data/ls_few_shot_example_dataset.json | [
{
"id": "646668fd-f11a-4379-9a29-4b109a2d5b2c",
"created_at": "2024-07-08T22:29:20.545604+00:00",
"modified_at": "2024-08-15T18:02:12.047567+00:00",
"name": "#6466 @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "0d1e8e6e-35d4-4fcc-a467-86df53b6d1db",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
"input": "-(2 + 1/1)",
"system": false
},
"outputs": {
"output": [
{
"id": "be323bd4-13cd-4b46-ba68-28a5ed967d72",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Please return a non-empty answer that includes the numeric response on its own without any clarifications.",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "bd54fd59-521b-45b7-b2e9-08acf3332f5f",
"type": "human",
"content": "-(2 + 1/1)",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-0e266463-50cd-4c1a-b27b-b59ad1ec6159-0",
"type": "ai",
"content": [
{
"text": "Here is how to calculate -(2 + 1/1) using the provided tools:",
"type": "text"
},
{
"id": "toolu_01SNLptzRgnhwTDmxHx4STSe",
"name": "divide",
"type": "tool_use",
"input": {
"a": 1,
"b": 1
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01SNLptzRgnhwTDmxHx4STSe",
"args": {
"a": 1,
"b": 1
},
"name": "divide"
}
],
"usage_metadata": {
"input_tokens": 892,
"total_tokens": 981,
"output_tokens": 89
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01VvfJ3GkFHJZ69ZS7WygFrd",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 892,
"output_tokens": 89
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "95b3e1f7-e72f-497b-a4c9-54b846440628",
"name": "divide",
"type": "tool",
"content": "0.5",
"tool_call_id": "toolu_01SNLptzRgnhwTDmxHx4STSe",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-24405a09-ad67-46c2-b7c3-1c6ae2c1adfb-0",
"type": "ai",
"content": [
{
"id": "toolu_01CdkUVCzwXNyYgkpktpqrtZ",
"name": "add",
"type": "tool_use",
"input": {
"a": 2,
"b": 0.5
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01CdkUVCzwXNyYgkpktpqrtZ",
"args": {
"a": 2,
"b": 0.5
},
"name": "add"
}
],
"usage_metadata": {
"input_tokens": 995,
"total_tokens": 1066,
"output_tokens": 71
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_011Cbd6wmPNL5jiuyC8jiDmr",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 995,
"output_tokens": 71
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "873fdae7-359b-4ef1-82db-b7c322fdd207",
"name": "add",
"type": "tool",
"content": "3.7",
"tool_call_id": "toolu_01CdkUVCzwXNyYgkpktpqrtZ",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-78284a0e-ac49-4c6a-8bba-d29d9458235e-0",
"type": "ai",
"content": [
{
"id": "toolu_01SP9dMetiCT7jsj9gN6pXpW",
"name": "negate",
"type": "tool_use",
"input": {
"a": 3.7
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01SP9dMetiCT7jsj9gN6pXpW",
"args": {
"a": 3.7
},
"name": "negate"
}
],
"usage_metadata": {
"input_tokens": 1080,
"total_tokens": 1135,
"output_tokens": 55
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01X4vR4Mn5hFt3AxnrR5YNr6",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1080,
"output_tokens": 55
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "e154f0cb-fa05-4020-abb9-080f0cd92d79",
"name": "negate",
"type": "tool",
"content": "3.7",
"tool_call_id": "toolu_01SP9dMetiCT7jsj9gN6pXpW",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-5f40b61b-485a-40ae-93ae-a61bbd800f09-0",
"type": "ai",
"content": "So the result of -(2 + 1/1) in this alternate universe is 3.7.",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1149,
"total_tokens": 1177,
"output_tokens": 28
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01TfCqdEytXBEoUQ68JSMkVj",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1149,
"output_tokens": 28
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "90cbb099-abf3-48f9-b5a9-a36324e82fd4",
"type": "human",
"content": "You got the correct answer of 3.7, but your response included additional text. Please return the correct numerical answer of 3.7 to the user.",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-c8521068-27d7-4ba6-a1a3-51e27ef1add1-0",
"type": "ai",
"content": "3.7",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1214,
"total_tokens": 1221,
"output_tokens": 7
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01VWCSzRm46ruZEqZJL9nwf6",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1214,
"output_tokens": 7
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
},
{
"id": "c9c1f569-d456-4246-9956-7cbd9fa1643f",
"created_at": "2024-07-08T23:30:12.484721+00:00",
"modified_at": "2024-08-15T18:01:54.639132+00:00",
"name": "#c9c1 @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "d70c2f5c-c67f-4df5-a171-44fbaaef8973",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
"input": "evaluate negate(-131,778)",
"system": false
},
"outputs": {
"output": [
{
"id": "9c6222d9-c1f5-4134-91bb-585ab9d8c76a",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Your final answer should include the numeric response on its own without any clarifications. Please return a non-empty answer.",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "f6770a24-520e-4007-8445-3fedae7111fb",
"type": "human",
"content": "evaluate negate(-131,778)",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-863b1744-5b04-4e8e-9f24-69ba6103da96-0",
"type": "ai",
"content": [
{
"id": "toolu_01Wof73Bby8miDefBHxtkkqT",
"name": "negate",
"type": "tool_use",
"input": {
"a": -131778
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01Wof73Bby8miDefBHxtkkqT",
"args": {
"a": -131778
},
"name": "negate"
}
],
"usage_metadata": {
"input_tokens": 894,
"total_tokens": 948,
"output_tokens": 54
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01UrTuLYZJwFhPiYSADHWc2D",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 894,
"output_tokens": 54
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "21f5bc1b-975a-406d-903b-eefc374d5497",
"name": "negate",
"type": "tool",
"content": "-131778.0",
"tool_call_id": "toolu_01Wof73Bby8miDefBHxtkkqT",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-8d33b16a-9507-4b83-82ba-5840f85664c1-0",
"type": "ai",
"content": "Therefore, the answer is 131778.0",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 965,
"total_tokens": 979,
"output_tokens": 14
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01TPHCoxNXqTtWXMZddWdrbP",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 965,
"output_tokens": 14
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "88fec757-c12a-4cfa-aa2d-2d2ec27fac2e",
"type": "human",
"content": "Please use the output of the negate tool correctly, and return a numerical answer only to the user.",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-88b2ebbd-c687-435a-a7d0-58e44d5bd13e-0",
"type": "ai",
"content": [
{
"id": "toolu_01JRRLgMWpDjNJ7djQodX8BR",
"name": "negate",
"type": "tool_use",
"input": {
"a": -131778
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01JRRLgMWpDjNJ7djQodX8BR",
"args": {
"a": -131778
},
"name": "negate"
}
],
"usage_metadata": {
"input_tokens": 1003,
"total_tokens": 1057,
"output_tokens": 54
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01NUGNnVXywV29Cx5o93VkHd",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1003,
"output_tokens": 54
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "c30e47d9-845d-43ea-9596-15d53c754fd5",
"name": "negate",
"type": "tool",
"content": "-131778.0",
"tool_call_id": "toolu_01JRRLgMWpDjNJ7djQodX8BR",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-7fdc3c94-27c4-42e4-808a-a4b74fb24265-0",
"type": "ai",
"content": "131778.0",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1074,
"total_tokens": 1082,
"output_tokens": 8
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01CL6Qer4ErNKbUWVhPDqNQ6",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1074,
"output_tokens": 8
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "9da0f050-7b86-4e97-904f-78e374620928",
"type": "human",
"content": "The tool is outputting -131778, but you are returning 131778. Please fix this error and return the correct tool output to the user.",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-349122b2-5cd1-414b-a3e7-0c0586d6e78e-0",
"type": "ai",
"content": "-131778.0",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1116,
"total_tokens": 1124,
"output_tokens": 8
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_018jMmGf3qfaNZptZb5vWUSw",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1116,
"output_tokens": 8
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
},
{
"id": "33243dc4-a558-4ee8-943d-38ba42c5562b",
"created_at": "2024-07-09T18:29:40.803329+00:00",
"modified_at": "2024-08-15T18:01:42.238491+00:00",
"name": "#3324 @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "96448823-4e13-48c4-97c2-87302334d5dc",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
"input": "If I earn 5% monthly interest, and I start with 10 dollars how much money will I have at the end of the year?",
"system": false
},
"outputs": {
"output": [
{
"id": "552c2e8e-8389-4673-9139-c32876717786",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Your final answer should include the numeric response on its own without any clarifications. That means if the answer is 5, you should return \"5\" and not \"The answer is 5\" or \"So the answer to your question is 5\".",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "a7f41758-ca33-4121-8d7f-7fc396314e04",
"type": "human",
"content": "If I earn 5% monthly interest, and I start with 10 dollars how much money will I have at the end of the year?",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-2e8cb241-792a-41e2-84c5-e8e71d7580b5-0",
"type": "ai",
"content": [
{
"text": "Okay, let's break this down step-by-step using the provided tools:",
"type": "text"
},
{
"id": "toolu_01AcUN6M8mAA6dpstTstQAKb",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 10,
"b": 1.05
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01AcUN6M8mAA6dpstTstQAKb",
"args": {
"a": 10,
"b": 1.05
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 943,
"total_tokens": 1031,
"output_tokens": 88
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_017RUX7BLpxFxNSW3kYmTeAs",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 943,
"output_tokens": 88
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "b60c0094-fb6d-44db-9ab0-24bc6fdb4cd7",
"name": "multiply",
"type": "tool",
"content": "11.55",
"tool_call_id": "toolu_01AcUN6M8mAA6dpstTstQAKb",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-f50fef08-89bb-4f6f-909c-8d25c5260d0c-0",
"type": "ai",
"content": [
{
"text": "This calculates the amount after 1 month with 5% interest on the initial $10.\n\nTo calculate for the full year, we can repeatedly apply this operation 12 times:",
"type": "text"
},
{
"id": "toolu_01VQMnv3A6YE1BJMRuL9j1vW",
"name": "power",
"type": "tool_use",
"input": {
"a": 1.05,
"b": 12
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01VQMnv3A6YE1BJMRuL9j1vW",
"args": {
"a": 1.05,
"b": 12
},
"name": "power"
}
],
"usage_metadata": {
"input_tokens": 1046,
"total_tokens": 1158,
"output_tokens": 112
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01QXCFsM79ZEjxZibHv2Y1CF",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1046,
"output_tokens": 112
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "5ed0dbe5-3dd1-4104-84bb-fe487038e971",
"name": "power",
"type": "tool",
"content": "1.9799315994393987",
"tool_call_id": "toolu_01VQMnv3A6YE1BJMRuL9j1vW",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-b423e2c7-8dd2-4e10-a6ca-ff25bfd1a0ec-0",
"type": "ai",
"content": [
{
"text": "Then multiply the initial $10 by this result:",
"type": "text"
},
{
"id": "toolu_01GR5C9CLsAhvStCGNZqcAbF",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 10,
"b": 1.9799315994393987
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01GR5C9CLsAhvStCGNZqcAbF",
"args": {
"a": 10,
"b": 1.9799315994393987
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 1177,
"total_tokens": 1265,
"output_tokens": 88
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_016WrvMUw8i9rLMnaVKdjJY6",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1177,
"output_tokens": 88
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "25be6748-1e78-4818-8b0b-20d3cb532570",
"name": "multiply",
"type": "tool",
"content": "21.779247593833386",
"tool_call_id": "toolu_01GR5C9CLsAhvStCGNZqcAbF",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-7691499f-0a67-4894-9ab4-61bc6d398f6f-0",
"type": "ai",
"content": "So with an initial $10 and 5% monthly interest compounded monthly, the amount at the end of 1 year in this alternate universe is 21.779247593833386.",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1283,
"total_tokens": 1328,
"output_tokens": 45
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01JCK4R6nwNjdiFVt5hV14Wj",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1283,
"output_tokens": 45
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "b7ea9655-b8ec-4be1-a3f2-c6bd6f3f188b",
"type": "human",
"content": "You found the correct answer of 21.779247593833386, but your final response to the user included additional text. Please only return the correct answer.",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-22f8c361-9e93-412f-97f6-9918396bc7a1-0",
"type": "ai",
"content": "21.779247593833386",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1364,
"total_tokens": 1375,
"output_tokens": 11
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01Puvngg6iTza8JT1FjkPbGJ",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1364,
"output_tokens": 11
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
},
{
"id": "034358b8-1f29-437b-b756-b69a98fe84a7",
"created_at": "2024-07-09T19:01:51.89186+00:00",
"modified_at": "2024-08-15T18:01:29.784596+00:00",
"name": "#0343 @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "e53ce0b6-e3c0-4710-be31-d7110c1dd20d",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
"input": "What is the product of the odd numbers between 0-10?",
"system": false
},
"outputs": {
"output": [
{
"id": "5fb109a9-7779-47d2-a93b-30753afd8044",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Your final answer should include the numeric response on its own without any clarifications. That means if the answer is 5, you should return \"5\" and not \"The answer is 5\" or \"So the answer to your question is 5\".",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "c2c64324-10c1-483b-bbe4-40cdbd080161",
"type": "human",
"content": "What is the product of the odd numbers between 0-10?",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-e9a337d1-046f-449b-8aa0-7e26c37430cd-0",
"type": "ai",
"content": [
{
"text": "Here is how to calculate the product of the odd numbers between 0 and 10 using the provided tools:",
"type": "text"
},
{
"id": "toolu_01KNcT7ZQXCfNZgbbbBZQcFi",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 1,
"b": 3
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01KNcT7ZQXCfNZgbbbBZQcFi",
"args": {
"a": 1,
"b": 3
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 927,
"total_tokens": 1020,
"output_tokens": 93
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01F7ZV92AdCE7H8JxT2qYe9m",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 927,
"output_tokens": 93
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "81bf003b-3462-4c83-a48b-e8a898bc0c25",
"name": "multiply",
"type": "tool",
"content": "3.3",
"tool_call_id": "toolu_01KNcT7ZQXCfNZgbbbBZQcFi",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-4a4c9725-66cc-4791-938c-ba5dda946958-0",
"type": "ai",
"content": [
{
"id": "toolu_01SwKfTekPgfMcDPPZVmpeMp",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 3.3,
"b": 5
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01SwKfTekPgfMcDPPZVmpeMp",
"args": {
"a": 3.3,
"b": 5
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 1034,
"total_tokens": 1105,
"output_tokens": 71
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01X2MqYR5G7DmAxkA7tkbJdK",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1034,
"output_tokens": 71
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "c303cb96-5884-4dbb-80e4-52bfb2d8be3d",
"name": "multiply",
"type": "tool",
"content": "18.15",
"tool_call_id": "toolu_01SwKfTekPgfMcDPPZVmpeMp",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-27ff6d22-a454-493c-b6d6-69c30dbf9737-0",
"type": "ai",
"content": [
{
"id": "toolu_01QjKrxNqkAmetaxLgsr7Bo5",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 18.15,
"b": 7
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01QjKrxNqkAmetaxLgsr7Bo5",
"args": {
"a": 18.15,
"b": 7
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 1119,
"total_tokens": 1190,
"output_tokens": 71
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_014yB3FDWkRMrJXbbHoY4wxy",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1119,
"output_tokens": 71
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "4f89ea15-0be0-4a76-b48d-01549ab0930c",
"name": "multiply",
"type": "tool",
"content": "139.755",
"tool_call_id": "toolu_01QjKrxNqkAmetaxLgsr7Bo5",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-4827dd2f-b40b-4e1d-a372-4a146881321f-0",
"type": "ai",
"content": [
{
"id": "toolu_01N9S9eDnznaxu2ZbVvPYdes",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 139.755,
"b": 9
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01N9S9eDnznaxu2ZbVvPYdes",
"args": {
"a": 139.755,
"b": 9
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 1204,
"total_tokens": 1275,
"output_tokens": 71
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_018G5j4ahPH1ymCCBKd9J5zA",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1204,
"output_tokens": 71
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "75a0f193-f3fd-42c0-903c-a38f92469dc8",
"name": "multiply",
"type": "tool",
"content": "1383.5745",
"tool_call_id": "toolu_01N9S9eDnznaxu2ZbVvPYdes",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-643cbd84-57d7-40be-9c3e-c490896b3e0d-0",
"type": "ai",
"content": "So the product of the odd numbers between 0 and 10 is 1383.5745",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1291,
"total_tokens": 1317,
"output_tokens": 26
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01VvwTB4oMXKyCsr1f274qf9",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1291,
"output_tokens": 26
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "8857a93d-6bc9-4689-8be7-5e630a029cb2",
"type": "human",
"content": "You found the correct answer, but your final response to the user included additional text. Please only return the correct answer.",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-4f500072-2e25-4cf5-b53c-231b744da550-0",
"type": "ai",
"content": "1383.5745",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1344,
"total_tokens": 1353,
"output_tokens": 9
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01H19ivooTk6dXCEwF54txmt",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1344,
"output_tokens": 9
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
},
{
"id": "415dd80f-c6fe-4350-aec3-ca35fbb8bafa",
"created_at": "2024-07-09T19:37:14.641556+00:00",
"modified_at": "2024-08-15T18:01:14.492779+00:00",
"name": "#415d @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "3e207d90-6880-4496-ba67-093b41d2bf3d",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
"input": "evaluate the negation of -100",
"system": false
},
"outputs": {
"output": [
{
"id": "43d1f620-1f59-46f5-b99b-96efc5bfebd8",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Your final answer should include the numeric response on its own without any clarifications. That means if the answer is 5, you should return \"5\" and not \"The answer is 5\" or \"So the answer to your question is 5\".",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "df7b244e-98f8-48ea-8e43-cdd7b283bcd3",
"type": "human",
"content": "evaluate the negation of -100",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-f2dcb8ec-45ff-4920-af49-b5a7bfa6a3cc-0",
"type": "ai",
"content": [
{
"id": "toolu_01HTpq4cYNUac6F7omUc2Wz3",
"name": "negate",
"type": "tool_use",
"input": {
"a": -100
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01HTpq4cYNUac6F7omUc2Wz3",
"args": {
"a": -100
},
"name": "negate"
}
],
"usage_metadata": {
"input_tokens": 920,
"total_tokens": 973,
"output_tokens": 53
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01YUXV8e7RLcYf7FLTStX68D",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 920,
"output_tokens": 53
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "492a7f3f-0464-4312-9cfb-7671943e4065",
"name": "negate",
"type": "tool",
"content": "-100.0",
"tool_call_id": "toolu_01HTpq4cYNUac6F7omUc2Wz3",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-88edae55-abb3-4d2c-b096-a85e45ee04b6-0",
"type": "ai",
"content": "So the answer is 100.",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 989,
"total_tokens": 1000,
"output_tokens": 11
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01VPw9da9nFRby75Y937YMnE",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 989,
"output_tokens": 11
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "964f806e-44a9-46d5-9805-8f367fe7f141",
"type": "human",
"content": "100 is incorrect. Please refer to the output of your tool call.",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-e6c73250-c5b2-4805-83af-0b7cea1aba2b-0",
"type": "ai",
"content": [
{
"text": "You're right, my previous answer was incorrect. Let me re-evaluate using the tool output:",
"type": "text"
},
{
"id": "toolu_01XsJQboYghGDygQpPjJkeRq",
"name": "negate",
"type": "tool_use",
"input": {
"a": -100
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01XsJQboYghGDygQpPjJkeRq",
"args": {
"a": -100
},
"name": "negate"
}
],
"usage_metadata": {
"input_tokens": 1019,
"total_tokens": 1093,
"output_tokens": 74
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01287oV1ftb1R2JJKoDjyWvA",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1019,
"output_tokens": 74
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "fbf42cde-f004-41a7-99f1-5641acd8a083",
"name": "negate",
"type": "tool",
"content": "-100.0",
"tool_call_id": "toolu_01XsJQboYghGDygQpPjJkeRq",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-a6bcb848-6966-4bbf-919a-ded95e79dd91-0",
"type": "ai",
"content": "The answer is -100.0",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1108,
"total_tokens": 1119,
"output_tokens": 11
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_016bFgPZvrVEcN6RDyEbVtzz",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1108,
"output_tokens": 11
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "63c94d84-815d-4b56-8c3a-ecc947869822",
"type": "human",
"content": "You have the correct numerical answer but are returning additional text. Please only respond with the numerical answer.",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-eda696da-ad0f-4454-b0fc-00bccb2cbee6-0",
"type": "ai",
"content": "-100.0",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1142,
"total_tokens": 1149,
"output_tokens": 7
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_019zQeQSyTfNaGfYKqMtVYbP",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1142,
"output_tokens": 7
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
},
{
"id": "23b6ef25-b5fa-4399-9682-b08d05ae9688",
"created_at": "2024-07-09T19:21:07.142926+00:00",
"modified_at": "2024-07-09T19:21:07.142926+00:00",
"name": "#23b6 @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "2ce994c9-cc39-4d8b-a145-7f23fd03b945",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
        "input": "if one gazoink is 4 badoinks, each of which is 6 foos, each of which is 3 bars - how many bars in 3 gazoinks?",
"system": false
},
"outputs": {
"output": [
{
"id": "cbe7ed83-86e1-4e46-89de-6646f8b55cef",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Only include the numeric response without any clarifications.",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "04946246-09a8-4465-be95-037efd7dae55",
"type": "human",
            "content": "if one gazoink is 4 badoinks, each of which is 6 foos, each of which is 3 bars - how many bars in 3 gazoinks?",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-d6f0954e-b21b-4ea8-ad98-0ee64cfc824e-0",
"type": "ai",
"content": [
{
"id": "toolu_016RjRHSEyDZRqKhGrb8uvjJ",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 3,
"b": 4
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_016RjRHSEyDZRqKhGrb8uvjJ",
"args": {
"a": 3,
"b": 4
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 916,
"total_tokens": 984,
"output_tokens": 68
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01MBWxgouUBzomwTvXhomGVq",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 916,
"output_tokens": 68
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "3d4c72c4-f009-48ce-b739-1d3f28ee4803",
"name": "multiply",
"type": "tool",
"content": "13.2",
"tool_call_id": "toolu_016RjRHSEyDZRqKhGrb8uvjJ",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-26dd7e83-f5fb-4c70-8ba1-271300ffeb25-0",
"type": "ai",
"content": [
{
"id": "toolu_01PqvszxiuXrVJ9bwgTWaH3q",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 13.2,
"b": 6
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01PqvszxiuXrVJ9bwgTWaH3q",
"args": {
"a": 13.2,
"b": 6
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 999,
"total_tokens": 1070,
"output_tokens": 71
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01VTFvtCxtR3rN58hCmjt2oH",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 999,
"output_tokens": 71
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "ca4e0317-7b3a-4638-933c-1efd98bc4fda",
"name": "multiply",
"type": "tool",
"content": "87.12",
"tool_call_id": "toolu_01PqvszxiuXrVJ9bwgTWaH3q",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-007794ac-3590-4b9e-b678-008f02e40042-0",
"type": "ai",
"content": [
{
"id": "toolu_01LU3RqRUXZRLRoJ2AZNmPed",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 87.12,
"b": 3
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01LU3RqRUXZRLRoJ2AZNmPed",
"args": {
"a": 87.12,
"b": 3
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 1084,
"total_tokens": 1155,
"output_tokens": 71
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_017BEkSqmTsmtJaTxAzfRMEh",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1084,
"output_tokens": 71
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "7f58c121-6f21-4c7b-ba38-aa820e274ff8",
"name": "multiply",
"type": "tool",
"content": "287.496",
"tool_call_id": "toolu_01LU3RqRUXZRLRoJ2AZNmPed",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-51e35afb-7ec6-4738-93e2-92f80b5c9377-0",
"type": "ai",
"content": "287.496",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1169,
"total_tokens": 1176,
"output_tokens": 7
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01Tx9kSNapSg8aUbWZXiS1NL",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1169,
"output_tokens": 7
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
},
{
"id": "f3a5a482-910b-402a-8b49-31dbd38dba69",
"created_at": "2024-07-09T18:31:22.954376+00:00",
"modified_at": "2024-07-09T18:31:22.954376+00:00",
"name": "#f3a5 @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "e313510f-5155-48fb-a1af-8ec998cd7dbd",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
"input": "What is the value of tangent of 15 degrees?",
"system": false
},
"outputs": {
"output": [
{
"id": "e3f864e9-10e4-402e-9a2a-04680152146e",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Your final answer should include the numeric response on its own without any clarifications. That means if the answer is 5, you should return \"5\" and not \"The answer is 5\" or \"So the answer to your question is 5\".",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "88c4febe-5445-4f91-937e-caa8760cec19",
"type": "human",
"content": "What is the value of tangent of 15 degrees?",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-9a440c86-f7f9-4219-b6bc-a35d64bbfc38-0",
"type": "ai",
"content": [
{
"text": "Here is how to calculate the tangent of 15 degrees using the provided tools:",
"type": "text"
},
{
"id": "toolu_01M9AvVXTPvFWLiKni8Sh1T4",
"name": "pi",
"type": "tool_use",
"input": {}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01M9AvVXTPvFWLiKni8Sh1T4",
"args": {},
"name": "pi"
}
],
"usage_metadata": {
"input_tokens": 926,
"total_tokens": 979,
"output_tokens": 53
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01Uk5q5uxL5Y5YGt5BQMwVEU",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 926,
"output_tokens": 53
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "39667772-2baf-4e9e-97ae-961e3670a474",
"name": "pi",
"type": "tool",
"content": "2.718281828459045",
"tool_call_id": "toolu_01M9AvVXTPvFWLiKni8Sh1T4",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-940b9125-591b-41f3-bbd8-6b474953bbd0-0",
"type": "ai",
"content": [
{
"id": "toolu_01JGvtXdtQtLmk7LwERkqDKH",
"name": "divide",
"type": "tool_use",
"input": {
"a": 15,
"b": 180
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01JGvtXdtQtLmk7LwERkqDKH",
"args": {
"a": 15,
"b": 180
},
"name": "divide"
}
],
"usage_metadata": {
"input_tokens": 997,
"total_tokens": 1066,
"output_tokens": 69
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01HwYrEuUKLeZ9WHF8k4XpEm",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 997,
"output_tokens": 69
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "003a2745-230c-4e1a-80b3-62aca06d40ae",
"name": "divide",
"type": "tool",
"content": "0.041666666666666664",
"tool_call_id": "toolu_01JGvtXdtQtLmk7LwERkqDKH",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-e3881f88-26ef-4710-b79d-ec87cedc9b3f-0",
"type": "ai",
"content": [
{
"id": "toolu_012jdaKjUq4Up8EJF7PtbRnU",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 0.041666666666666664,
"b": 2.718281828459045
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_012jdaKjUq4Up8EJF7PtbRnU",
"args": {
"a": 0.041666666666666664,
"b": 2.718281828459045
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 1085,
"total_tokens": 1167,
"output_tokens": 82
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_014gPQ75pqPgox2gDgB5kGiW",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1085,
"output_tokens": 82
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "13c27c72-71d6-4763-843c-9f8ac1fed43f",
"name": "multiply",
"type": "tool",
"content": "0.12458791713770624",
"tool_call_id": "toolu_012jdaKjUq4Up8EJF7PtbRnU",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-913fb3f0-1642-489f-b6af-486ddfb7e599-0",
"type": "ai",
"content": [
{
"id": "toolu_013GcHeUTjjcPm8zyrUb8y7K",
"name": "sin",
"type": "tool_use",
"input": {
"radians": 0.12458791713770624
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_013GcHeUTjjcPm8zyrUb8y7K",
"args": {
"radians": 0.12458791713770624
},
"name": "sin"
}
],
"usage_metadata": {
"input_tokens": 1186,
"total_tokens": 1246,
"output_tokens": 60
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01JagZM5K2BZy3gqbVqNvyfh",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1186,
"output_tokens": 60
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "80035591-053a-41c7-a780-6ef2accb55bd",
"name": "sin",
"type": "tool",
"content": "0.9922489593051885",
"tool_call_id": "toolu_013GcHeUTjjcPm8zyrUb8y7K",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-e8187305-6596-43ba-b53e-fdc3c4bc949c-0",
"type": "ai",
"content": [
{
"id": "toolu_01H8VQ4CY79YWEv7UbNCq6PU",
"name": "cos",
"type": "tool_use",
"input": {
"radians": 0.12458791713770624
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01H8VQ4CY79YWEv7UbNCq6PU",
"args": {
"radians": 0.12458791713770624
},
"name": "cos"
}
],
"usage_metadata": {
"input_tokens": 1265,
"total_tokens": 1325,
"output_tokens": 60
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01BxSVNvzVcLqgw8w9r2TdCy",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1265,
"output_tokens": 60
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "d0bd5b2c-5f3c-453c-ba68-6905a07d1505",
"name": "cos",
"type": "tool",
"content": "0.12426585515647588",
"tool_call_id": "toolu_01H8VQ4CY79YWEv7UbNCq6PU",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-e1452ef5-35c0-4f6e-a73f-838aa5a1d618-0",
"type": "ai",
"content": [
{
"id": "toolu_01RAiD2JXsJXj3AHcodeYyoU",
"name": "divide",
"type": "tool_use",
"input": {
"a": 0.9922489593051885,
"b": 0.12426585515647588
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01RAiD2JXsJXj3AHcodeYyoU",
"args": {
"a": 0.9922489593051885,
"b": 0.12426585515647588
},
"name": "divide"
}
],
"usage_metadata": {
"input_tokens": 1344,
"total_tokens": 1427,
"output_tokens": 83
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01TWTmVU2N9QSCokuFqZsTsH",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1344,
"output_tokens": 83
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "7c5b39e3-b824-4fb0-8522-4e218cc595ca",
"name": "divide",
"type": "tool",
"content": "3.9924440951850615",
"tool_call_id": "toolu_01RAiD2JXsJXj3AHcodeYyoU",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-a3c4b10f-4235-43d8-8d0c-d722d287c99e-0",
"type": "ai",
"content": "3.9924440951850615",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1446,
"total_tokens": 1458,
"output_tokens": 12
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01VzZDqJPcWgwq8tGe8ktHjN",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1446,
"output_tokens": 12
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
},
{
"id": "a738cf19-6e5e-43d8-88b0-51a9f2f7bb32",
"created_at": "2024-07-09T18:00:05.951776+00:00",
"modified_at": "2024-07-09T18:00:05.951776+00:00",
"name": "#a738 @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "08825d00-6d61-4ce9-adae-dd9181dac5dc",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
"input": "For the first 5 days of the week I ran 45 minutes each day, and on the weekend I did one 100 minute run. How many minutes did I spend running this week?",
"system": false
},
"outputs": {
"output": [
{
"id": "39cbdd34-e80d-4cee-b64d-a08b332960de",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Your final answer should include the numeric response on its own without any clarifications. Please return a non-empty answer.",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "88e60ba2-4976-40ab-9cc1-83017f33c36d",
"type": "human",
"content": "For the first 5 days of the week I ran 45 minutes each day, and on the weekend I did one 100 minute run. How many minutes did I spend running this week?",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-26185a0d-97a3-4e5f-baaf-c0c5cdb8624f-0",
"type": "ai",
"content": [
{
"id": "toolu_01XNBmZD1wHJJZQJ6GddFnxZ",
"name": "multiply",
"type": "tool_use",
"input": {
"a": 5,
"b": 45
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01XNBmZD1wHJJZQJ6GddFnxZ",
"args": {
"a": 5,
"b": 45
},
"name": "multiply"
}
],
"usage_metadata": {
"input_tokens": 928,
"total_tokens": 996,
"output_tokens": 68
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01RwjPkwEam2UnV8r6mP4fVd",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 928,
"output_tokens": 68
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "c9769517-674a-434f-9364-abfb267cd83c",
"name": "multiply",
"type": "tool",
"content": "247.5",
"tool_call_id": "toolu_01XNBmZD1wHJJZQJ6GddFnxZ",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-b7870fa1-5427-4199-b270-59d6150da667-0",
"type": "ai",
"content": [
{
"id": "toolu_01BS72aUeWQzbkzJjQwrkx7A",
"name": "add",
"type": "tool_use",
"input": {
"a": 247.5,
"b": 100
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01BS72aUeWQzbkzJjQwrkx7A",
"args": {
"a": 247.5,
"b": 100
},
"name": "add"
}
],
"usage_metadata": {
"input_tokens": 1011,
"total_tokens": 1082,
"output_tokens": 71
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_015h6Re3HMYfDoLJ9QVZftaY",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1011,
"output_tokens": 71
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "250f500d-bbd5-4191-8e2a-f77435841778",
"name": "add",
"type": "tool",
"content": "348.7",
"tool_call_id": "toolu_01BS72aUeWQzbkzJjQwrkx7A",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-fa7b9bb3-533c-44b8-9e34-35af1286c0a6-0",
"type": "ai",
"content": "348.7",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1096,
"total_tokens": 1103,
"output_tokens": 7
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_0145tzF34B7pZBrDCv6FtzWs",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1096,
"output_tokens": 7
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
},
{
"id": "5a6536b6-55ad-45ee-b3e1-ba5f0a89fc03",
"created_at": "2024-07-08T23:26:34.541129+00:00",
"modified_at": "2024-07-08T23:26:34.541129+00:00",
"name": "#5a65 @ multiverse-math-examples-for-few-shot",
"dataset_id": "d64fd4c5-5642-40fe-85d1-d757467415b6",
"source_run_id": "6e4eb059-d73d-4b8b-9b04-4a809e374139",
"metadata": {
"dataset_split": [
"base"
]
},
"inputs": {
"input": "Negate the value of 2 subtracted from 5",
"system": false
},
"outputs": {
"output": [
{
"id": "c01e7746-a71b-4d7c-af5e-9523646ba964",
"type": "system",
"content": "You are requested to solve math questions in an alternate mathematical universe. The operations have been altered to yield different results than expected. Do not guess the answer or rely on your innate knowledge of math. Use the provided tools to answer the question. While associativity and commutativity apply, distributivity does not. Answer the question using the fewest possible tools. Your final answer should include the numeric response on its own without any clarifications. Please return a non-empty answer.",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "48344924-81ec-4987-80d4-c94902e81852",
"type": "human",
"content": "Negate the value of 2 subtracted from 5",
"example": false,
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-f3dd2c24-8ae5-4143-8ad3-3075ba432c17-0",
"type": "ai",
"content": [
{
"id": "toolu_01Qvk84UL1MMQC7UMLXjDEoL",
"name": "subtract",
"type": "tool_use",
"input": {
"a": 5,
"b": 2
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01Qvk84UL1MMQC7UMLXjDEoL",
"args": {
"a": 5,
"b": 2
},
"name": "subtract"
}
],
"usage_metadata": {
"input_tokens": 900,
"total_tokens": 968,
"output_tokens": 68
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01HA93WmfUr1pGKGwh25pNvs",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 900,
"output_tokens": 68
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "6cbd94cc-083d-4756-bb63-703c68af36cd",
"name": "subtract",
"type": "tool",
"content": "0.0",
"tool_call_id": "toolu_01Qvk84UL1MMQC7UMLXjDEoL",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-97a2b647-4141-4285-b1ec-e30ced74fbc5-0",
"type": "ai",
"content": [
{
"id": "toolu_01J6Nw9Nej1QUtvPchRvBoks",
"name": "negate",
"type": "tool_use",
"input": {
"a": 0
}
}
],
"example": false,
"tool_calls": [
{
"id": "toolu_01J6Nw9Nej1QUtvPchRvBoks",
"args": {
"a": 0
},
"name": "negate"
}
],
"usage_metadata": {
"input_tokens": 983,
"total_tokens": 1038,
"output_tokens": 55
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_016G7JgaK9BEbDZtr7Tpyx2C",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 983,
"output_tokens": 55
},
"stop_reason": "tool_use",
"stop_sequence": null
},
"invalid_tool_calls": []
},
{
"id": "064bd7ef-d6e1-4ae1-b7cd-5d879391701a",
"name": "negate",
"type": "tool",
"content": "0.0",
"tool_call_id": "toolu_01J6Nw9Nej1QUtvPchRvBoks",
"additional_kwargs": {},
"response_metadata": {}
},
{
"id": "run-5982b347-0e73-4e91-9cf4-65a03b918b65-0",
"type": "ai",
"content": "0.0",
"example": false,
"tool_calls": [],
"usage_metadata": {
"input_tokens": 1052,
"total_tokens": 1059,
"output_tokens": 7
},
"additional_kwargs": {},
"response_metadata": {
"id": "msg_01QbDwVuMjk4gRzZjv3xUqeU",
"model": "claude-3-sonnet-20240229",
"usage": {
"input_tokens": 1052,
"output_tokens": 7
},
"stop_reason": "end_turn",
"stop_sequence": null
},
"invalid_tool_calls": []
}
]
}
}
]
maintainers:
- login: nfcampos
count: 343
avatarUrl: https://avatars.githubusercontent.com/u/56902?u=fdb30e802c68bc338dd9c0820f713e4fdac75db7&v=4
twitterUsername: nfcampos
url: https://github.com/nfcampos
- login: bracesproul
count: 764
avatarUrl: https://avatars.githubusercontent.com/u/46789226?u=83f467441c4b542b900fe2bb8fe45e26bf918da0&v=4
twitterUsername: bracesproul
url: https://github.com/bracesproul
- login: dqbd
count: 45
avatarUrl: https://avatars.githubusercontent.com/u/1443449?u=fe32372ae8f497065ef0a1c54194d9dff36fb81d&v=4
twitterUsername: __dqbd
url: https://github.com/dqbd
- login: jacoblee93
count: 1226
avatarUrl: https://avatars.githubusercontent.com/u/6952323?u=d785f9406c5a78ebd75922567b2693fb643c3bb0&v=4
twitterUsername: hacubu
url: https://github.com/jacoblee93
- login: hwchase17
count: 73
avatarUrl: https://avatars.githubusercontent.com/u/11986836?u=f4c4f21a82b2af6c9f91e1f1d99ea40062f7a101&v=4
twitterUsername: null
url: https://github.com/hwchase17
top_recent_contributors:
- login: dl102306
count: 2.594246611122073
avatarUrl: https://avatars.githubusercontent.com/u/2979960?v=4
twitterUsername: null
url: https://github.com/dl102306
- login: Anirudh31415926535
count: 2.378611865453971
avatarUrl: https://avatars.githubusercontent.com/u/171019460?v=4
twitterUsername: null
url: https://github.com/Anirudh31415926535
- login: dependabot
count: 2.118005716372584
avatarUrl: https://avatars.githubusercontent.com/in/29110?v=4
twitterUsername: null
url: https://github.com/apps/dependabot
- login: chentschel
count: 1.8419619282928634
avatarUrl: https://avatars.githubusercontent.com/u/319227?u=db20ce1d424f10d7760665ab693791ebc580131a&v=4
twitterUsername: chentschel_
url: https://github.com/chentschel
- login: sinedied
count: 1.5981076527116855
avatarUrl: https://avatars.githubusercontent.com/u/593151?u=08557bbdd96221813b8aec932dd7de895ac040ea&v=4
twitterUsername: sinedied
url: https://github.com/sinedied
- login: miloradvojnovic
count: 1.5406950154046446
avatarUrl: https://avatars.githubusercontent.com/u/11798350?u=a9b931a1a3319787bef5e2d16e1fdec0888cdad5&v=4
twitterUsername: null
url: https://github.com/miloradvojnovic
- login: anthonychu
count: 1.469340114219017
avatarUrl: https://avatars.githubusercontent.com/u/3982077?u=8bbebac42cb84a25c629f83f212b2d099ffa3964&v=4
twitterUsername: nthonyChu
url: https://github.com/anthonychu
- login: josemussa
count: 1.4646097752078864
avatarUrl: https://avatars.githubusercontent.com/u/4422500?u=d676ede0cec8ee5df6879ebf9d8b72d51ea1eb7f&v=4
twitterUsername: null
url: https://github.com/josemussa
- login: ovuruska
count: 1.2073722297602894
avatarUrl: https://avatars.githubusercontent.com/u/75265893?u=7f11152d07f1719da22084388c09b5fc64ab6c89&v=4
twitterUsername: VuruskanerOguz
url: https://github.com/ovuruska
- login: tofuliang
count: 1.192831678125796
avatarUrl: https://avatars.githubusercontent.com/u/1814685?v=4
twitterUsername: null
url: https://github.com/tofuliang
- login: AvaterClasher
count: 1.0900805104055902
avatarUrl: https://avatars.githubusercontent.com/u/116944847?u=102a870b3efed7f30f0a57123391a293eb6f5b08&v=4
twitterUsername: Avater004
url: https://github.com/AvaterClasher
- login: jl4nz
count: 1.071155682903534
avatarUrl: https://avatars.githubusercontent.com/u/94814971?u=266358610eeb54c3393dc127718dd6a997fdbf52&v=4
twitterUsername: jlanzarotti
url: https://github.com/jl4nz
- login: volodymyr-memsql
count: 1.0661641541038527
avatarUrl: https://avatars.githubusercontent.com/u/57520563?v=4
twitterUsername: null
url: https://github.com/volodymyr-memsql
top_contributors:
- login: afirstenberg
count: 22.268597738308145
avatarUrl: https://avatars.githubusercontent.com/u/3507578?v=4
twitterUsername: null
url: https://github.com/afirstenberg
- login: ppramesi
count: 20.877822550346437
avatarUrl: https://avatars.githubusercontent.com/u/6775031?v=4
twitterUsername: null
url: https://github.com/ppramesi
- login: jacobrosenthal
count: 14.267494040569296
avatarUrl: https://avatars.githubusercontent.com/u/455796?v=4
twitterUsername: null
url: https://github.com/jacobrosenthal
- login: sullivan-sean
count: 11.942497805641993
avatarUrl: https://avatars.githubusercontent.com/u/22581534?u=8f88473db2f929a965b6371733efda28e3fa1948&v=4
twitterUsername: null
url: https://github.com/sullivan-sean
- login: sinedied
count: 10.521352610424566
avatarUrl: https://avatars.githubusercontent.com/u/593151?u=08557bbdd96221813b8aec932dd7de895ac040ea&v=4
twitterUsername: sinedied
url: https://github.com/sinedied
- login: tomasonjo
count: 8.253700280332291
avatarUrl: https://avatars.githubusercontent.com/u/19948365?v=4
twitterUsername: tb_tomaz
url: https://github.com/tomasonjo
- login: skarard
count: 7.547501901594376
avatarUrl: https://avatars.githubusercontent.com/u/602085?u=f8a9736cfa9fe8875d19861b0276e24de8f3d0a0&v=4
twitterUsername: skarard
url: https://github.com/skarard
- login: chasemcdo
count: 6.830368980796448
avatarUrl: https://avatars.githubusercontent.com/u/74692158?u=9c25a170d24cc30f10eafc4d44a38067cdf5eed8&v=4
twitterUsername: null
url: https://github.com/chasemcdo
- login: MaximeThoonsen
count: 6.725032283736919
avatarUrl: https://avatars.githubusercontent.com/u/4814551?u=efb35c6a7dc1ce99dfa8ac8f0f1314cdb4fddfe1&v=4
twitterUsername: maxthoon
url: https://github.com/MaximeThoonsen
- login: easwee
count: 6.61248803513189
avatarUrl: https://avatars.githubusercontent.com/u/2518825?u=a24026bc5ed35688174b1a36f3c29eda594d38d7&v=4
twitterUsername: easwee
url: https://github.com/easwee
- login: mieslep
count: 6.565459295836945
avatarUrl: https://avatars.githubusercontent.com/u/5420540?u=8f038c002fbce42427999eb715dc9f868cef1c84&v=4
twitterUsername: null
url: https://github.com/mieslep
- login: ysnows
count: 5.946853120965178
avatarUrl: https://avatars.githubusercontent.com/u/11255869?u=b0b519b6565c43d01795ba092521c8677f30134c&v=4
twitterUsername: enconvo_ai
url: https://github.com/ysnows
- login: tyumentsev4
count: 5.937394473842732
avatarUrl: https://avatars.githubusercontent.com/u/56769451?u=088102b6160822bc68c25a2a5df170080d0b16a2&v=4
twitterUsername: null
url: https://github.com/tyumentsev4
- login: nickscamara
count: 5.789517811503743
avatarUrl: https://avatars.githubusercontent.com/u/20311743?u=29bf2391ae34297a12a88d813731b0bdf289e4a5&v=4
twitterUsername: nickscamara_
url: https://github.com/nickscamara
- login: nigel-daniels
count: 5.7386038052115005
avatarUrl: https://avatars.githubusercontent.com/u/4641452?v=4
twitterUsername: null
url: https://github.com/nigel-daniels
- login: MJDeligan
count: 5.577508609177005
avatarUrl: https://avatars.githubusercontent.com/u/48515433?v=4
twitterUsername: null
url: https://github.com/MJDeligan
- login: jeasonnow
count: 5.487271244993293
avatarUrl: https://avatars.githubusercontent.com/u/16950207?u=ab2d0d4f1574398ac842e6bb3c2ba020ab7711eb&v=4
twitterUsername: null
url: https://github.com/jeasonnow
- login: malandis
count: 4.955686027081205
avatarUrl: https://avatars.githubusercontent.com/u/3690240?v=4
twitterUsername: mlonml
url: https://github.com/malandis
- login: danielchalef
count: 4.393048187010669
avatarUrl: https://avatars.githubusercontent.com/u/131175?u=332fe36f12d9ffe9e4414dc776b381fe801a9c53&v=4
twitterUsername: null
url: https://github.com/danielchalef
- login: Swimburger
count: 4.30247126533074
avatarUrl: https://avatars.githubusercontent.com/u/3382717?u=5a84a173b0e80effc9161502c0848bf06c84bde9&v=4
twitterUsername: RealSwimburger
url: https://github.com/Swimburger
- login: Anush008
count: 4.11180681500838
avatarUrl: https://avatars.githubusercontent.com/u/46051506?u=026f5f140e8b7ba4744bf971f9ebdea9ebab67ca&v=4
twitterUsername: null
url: https://github.com/Anush008
- login: mfortman11
count: 4.008715657220755
avatarUrl: https://avatars.githubusercontent.com/u/6100513?u=c758a02fc05dc36315fcfadfccd6208883436cb8&v=4
twitterUsername: null
url: https://github.com/mfortman11
- login: kwkr
count: 3.876826551715185
avatarUrl: https://avatars.githubusercontent.com/u/20127759?v=4
twitterUsername: zukerpie
url: https://github.com/kwkr
- login: sarangan12
count: 3.814606725038421
avatarUrl: https://avatars.githubusercontent.com/u/602456?u=d39962c60b0ac5fea4e97cb67433a42c736c3c5b&v=4
twitterUsername: null
url: https://github.com/sarangan12
- login: fahreddinozcan
count: 3.6832424135364583
avatarUrl: https://avatars.githubusercontent.com/u/88107904?v=4
twitterUsername: null
url: https://github.com/fahreddinozcan
- login: ewfian
count: 3.6525618490035825
avatarUrl: https://avatars.githubusercontent.com/u/12423122?u=681de0c470e9b349963ee935ddfd6b2e097e7181&v=4
twitterUsername: null
url: https://github.com/ewfian
- login: jl4nz
count: 3.322486583856189
avatarUrl: https://avatars.githubusercontent.com/u/94814971?u=266358610eeb54c3393dc127718dd6a997fdbf52&v=4
twitterUsername: jlanzarotti
url: https://github.com/jl4nz
- login: volodymyr-memsql
count: 3.2697631770291293
avatarUrl: https://avatars.githubusercontent.com/u/57520563?v=4
twitterUsername: null
url: https://github.com/volodymyr-memsql
- login: jasondotparse
count: 3.2458389391317124
avatarUrl: https://avatars.githubusercontent.com/u/13938372?u=0e3f80aa515c41b7d9084b73d761cad378ebdc7a&v=4
twitterUsername: null
url: https://github.com/jasondotparse
- login: mishushakov
count: 3.025937666523869
avatarUrl: https://avatars.githubusercontent.com/u/10400064?u=52b50611d587317f397a96f898753099d65931f1&v=4
twitterUsername: mishushakov
url: https://github.com/mishushakov
- login: kristianfreeman
count: 2.910863456801872
avatarUrl: https://avatars.githubusercontent.com/u/922353?u=212a67ff65d67d39e41c3cb58cd7a7b8b2f89f3e&v=4
twitterUsername: null
url: https://github.com/kristianfreeman
- login: neebdev
count: 2.91018110784209
avatarUrl: https://avatars.githubusercontent.com/u/94310799?u=b6f604bc6c3a6380f0b83025ca94e2e22179ac2a&v=4
twitterUsername: null
url: https://github.com/neebdev
- login: tsg
count: 2.9018860000514564
avatarUrl: https://avatars.githubusercontent.com/u/101817?u=39f31ff29d2589046148c6ed1c1c923982d86b1a&v=4
twitterUsername: tudor_g
url: https://github.com/tsg
- login: lokesh-couchbase
count: 2.8986026968579655
avatarUrl: https://avatars.githubusercontent.com/u/113521973?v=4
twitterUsername: null
url: https://github.com/lokesh-couchbase
- login: nicoloboschi
count: 2.8676287561914933
avatarUrl: https://avatars.githubusercontent.com/u/23314389?u=2014e20e246530fa89bd902fe703b6f9e6ecf833&v=4
twitterUsername: nicoloboschi
url: https://github.com/nicoloboschi
- login: zackproser
count: 2.822841682739776
avatarUrl: https://avatars.githubusercontent.com/u/1769996?u=67913e5af19c6ea2df87f33db0ddd2b6cb805eb5&v=4
twitterUsername: zackproser
url: https://github.com/zackproser
- login: justindra
count: 2.8213473329884717
avatarUrl: https://avatars.githubusercontent.com/u/4289486?v=4
twitterUsername: justindra_
url: https://github.com/justindra
- login: vincelwt
count: 2.653038382181122
avatarUrl: https://avatars.githubusercontent.com/u/5092466?u=713f9947e4315b6f0ef62ec5cccd978133006783&v=4
twitterUsername: vincelwt
url: https://github.com/vincelwt
- login: cwoolum
count: 2.6393729641077233
avatarUrl: https://avatars.githubusercontent.com/u/942415?u=8210ef711d1666ec234db9a0c4a9b32fd9f36593&v=4
twitterUsername: chriswoolum
url: https://github.com/cwoolum
- login: sunner
count: 2.619509428240187
avatarUrl: https://avatars.githubusercontent.com/u/255413?v=4
twitterUsername: null
url: https://github.com/sunner
- login: dl102306
count: 2.594246611122073
avatarUrl: https://avatars.githubusercontent.com/u/2979960?v=4
twitterUsername: null
url: https://github.com/dl102306
- login: rahilvora
count: 2.5339652606929115
avatarUrl: https://avatars.githubusercontent.com/u/5127548?u=0cd74312c28da39646785409fb0a37a9b3d3420a&v=4
twitterUsername: null
url: https://github.com/rahilvora
- login: lukywong
count: 2.519864307838807
avatarUrl: https://avatars.githubusercontent.com/u/1433871?v=4
twitterUsername: null
url: https://github.com/lukywong
- login: mayooear
count: 2.5039401650282382
avatarUrl: https://avatars.githubusercontent.com/u/107035552?u=708ca9b002559f6175803a80a1e47f3e84ba91e2&v=4
twitterUsername: mayowaoshin
url: https://github.com/mayooear
- login: chitalian
count: 2.460732921934039
avatarUrl: https://avatars.githubusercontent.com/u/26822232?u=accedd106a5e9d8335cb631c1bfe84b8cc494083&v=4
twitterUsername: justinstorre
url: https://github.com/chitalian
- login: paaatrrrick
count: 2.4007245461518325
avatarUrl: https://avatars.githubusercontent.com/u/88113528?u=23275c7b8928a38b34195358ea9f4d057fe1e171&v=4
twitterUsername: null
url: https://github.com/paaatrrrick
- login: alexleventer
count: 2.3963442958008176
avatarUrl: https://avatars.githubusercontent.com/u/3254549?u=794d178a761379e162a1092c556e98a9ec5c2410&v=4
twitterUsername: null
url: https://github.com/alexleventer
- login: Anirudh31415926535
count: 2.378611865453971
avatarUrl: https://avatars.githubusercontent.com/u/171019460?v=4
twitterUsername: null
url: https://github.com/Anirudh31415926535
- login: 3eif
count: 2.323209691414097
avatarUrl: https://avatars.githubusercontent.com/u/29833473?u=37b8f7a25883ee98bc6b6bd6029c6d5479724e2f&v=4
twitterUsername: sabziz
url: https://github.com/3eif
- login: BitVoyagerMan
count: 2.2958954742635953
avatarUrl: https://avatars.githubusercontent.com/u/121993229?u=717ed7012c040d5bf3a8ff1fd695a6a4f1ff0626&v=4
twitterUsername: null
url: https://github.com/BitVoyagerMan
- login: xixixao
count: 2.2876291681217342
avatarUrl: https://avatars.githubusercontent.com/u/1473433?u=c4bf1cf9f8699c8647894cd226c0bf9124bdad58&v=4
twitterUsername: null
url: https://github.com/xixixao
- login: ovuruska
count: 2.286474945226641
avatarUrl: https://avatars.githubusercontent.com/u/75265893?u=7f11152d07f1719da22084388c09b5fc64ab6c89&v=4
twitterUsername: VuruskanerOguz
url: https://github.com/ovuruska
- login: jo32
count: 2.2851005981511694
avatarUrl: https://avatars.githubusercontent.com/u/501632?u=a714d65c000d8f489f9fc2363f9a372b0dba05e3&v=4
twitterUsername: null
url: https://github.com/jo32
- login: RohitMidha23
count: 2.232630526458589
avatarUrl: https://avatars.githubusercontent.com/u/38888530?u=5c4b99eff970e551e5b756f270aa5234bc666316&v=4
twitterUsername: null
url: https://github.com/RohitMidha23
- login: karol-f
count: 2.2305403257775134
avatarUrl: https://avatars.githubusercontent.com/u/893082?u=0cda88d40a24ee696580f2e62f5569f49117cf40&v=4
twitterUsername: null
url: https://github.com/karol-f
- login: konstantinov-raft
count: 2.223080433170316
avatarUrl: https://avatars.githubusercontent.com/u/105433902?v=4
twitterUsername: null
url: https://github.com/konstantinov-raft
- login: jameshfisher
count: 2.143515802348958
avatarUrl: https://avatars.githubusercontent.com/u/166966?u=b78059abca798fbce8c9da4f6ddfb72ea03b20bb&v=4
twitterUsername: MrJamesFisher
url: https://github.com/jameshfisher
- login: the-powerpointer
count: 2.1273610063772668
avatarUrl: https://avatars.githubusercontent.com/u/134403026?u=ddd77b62b35c5497ae3d846f8917bdd81e5ef19e&v=4
twitterUsername: null
url: https://github.com/the-powerpointer
- login: davidfant
count: 2.123250038617256
avatarUrl: https://avatars.githubusercontent.com/u/17096641?u=9b935c68c077d53642c1b4aff62f04d08e2ffac7&v=4
twitterUsername: null
url: https://github.com/davidfant
- login: dependabot
count: 2.118005716372584
avatarUrl: https://avatars.githubusercontent.com/in/29110?v=4
twitterUsername: null
url: https://github.com/apps/dependabot
- login: MthwRobinson
count: 2.106176213349417
avatarUrl: https://avatars.githubusercontent.com/u/1635179?u=0631cb84ca580089198114f94d9c27efe730220e&v=4
twitterUsername: null
url: https://github.com/MthwRobinson
- login: SimonPrammer
count: 2.036351588156493
avatarUrl: https://avatars.githubusercontent.com/u/44960995?u=a513117a60e9f1aa09247ec916018ee272897169&v=4
twitterUsername: null
url: https://github.com/SimonPrammer
- login: munkhorgil
count: 1.989209476894206
avatarUrl: https://avatars.githubusercontent.com/u/978987?u=eff77a6f7bc4edbace4929731638d4727923013f&v=4
twitterUsername: null
url: https://github.com/munkhorgil
- login: alx13
count: 1.9438496864726376
avatarUrl: https://avatars.githubusercontent.com/u/1572864?v=4
twitterUsername: null
url: https://github.com/alx13
- login: castroCrea
count: 1.9413681649706493
avatarUrl: https://avatars.githubusercontent.com/u/20707343?u=25e872c764bd31b71148f2dec896f64be5e034ff&v=4
twitterUsername: Pao_Cto
url: https://github.com/castroCrea
- login: samheutmaker
count: 1.8909830007390982
avatarUrl: https://avatars.githubusercontent.com/u/1767032?u=a50f2b3b339eb965b9c812977aa10d64202e2e95&v=4
twitterUsername: 0xSamHogan
url: https://github.com/samheutmaker
- login: archie-swif
count: 1.8879520198010826
avatarUrl: https://avatars.githubusercontent.com/u/2158707?u=8a0aeee45e93ba575321804a7b709bf8897941de&v=4
twitterUsername: null
url: https://github.com/archie-swif
- login: valdo99
count: 1.8815427075304894
avatarUrl: https://avatars.githubusercontent.com/u/41517614?u=ba37c9a21db3068953ae50d90c1cd07c3dec3abd&v=4
twitterUsername: valdozzz1
url: https://github.com/valdo99
- login: chentschel
count: 1.8419619282928634
avatarUrl: https://avatars.githubusercontent.com/u/319227?u=db20ce1d424f10d7760665ab693791ebc580131a&v=4
twitterUsername: chentschel_
url: https://github.com/chentschel
- login: gmpetrov
count: 1.8103159579166737
avatarUrl: https://avatars.githubusercontent.com/u/4693180?u=8cf781d9099d6e2f2d2caf7612a5c2811ba13ef8&v=4
twitterUsername: georges_petrov
url: https://github.com/gmpetrov
- login: mattzcarey
count: 1.8103097806232766
avatarUrl: https://avatars.githubusercontent.com/u/77928207?u=fc8febe2a4b67384046eb4041b325bb34665d59c&v=4
twitterUsername: mattzcarey
url: https://github.com/mattzcarey
- login: albertpurnama
count: 1.8068671958039872
avatarUrl: https://avatars.githubusercontent.com/u/14824254?u=b3acdfc46d3d26d44f66a7312b102172c7ff9722&v=4
twitterUsername: albertpurnama
url: https://github.com/albertpurnama
- login: CahidArda
count: 1.8064422121491948
avatarUrl: https://avatars.githubusercontent.com/u/57228345?v=4
twitterUsername: null
url: https://github.com/CahidArda
- login: yroc92
count: 1.7662681234109807
avatarUrl: https://avatars.githubusercontent.com/u/17517541?u=7405432fa828c094e130e8193be3cae04ac96d11&v=4
twitterUsername: null
url: https://github.com/yroc92
- login: Basti-an
count: 1.7547477174673676
avatarUrl: https://avatars.githubusercontent.com/u/42387209?u=43ac44545861ce4adec99f973aeea3e6cf9a1bc0&v=4
twitterUsername: null
url: https://github.com/Basti-an
- login: CarlosZiegler
count: 1.743607324352666
avatarUrl: https://avatars.githubusercontent.com/u/38855507?u=65c19ae772581fb7367f646ed90be44311e60e70&v=4
twitterUsername: carlosziegler
url: https://github.com/CarlosZiegler
- login: iloveitaly
count: 1.7083954601863427
avatarUrl: https://avatars.githubusercontent.com/u/150855?v=4
twitterUsername: mike_bianco
url: https://github.com/iloveitaly
- login: dilling
count: 1.6352429296591733
avatarUrl: https://avatars.githubusercontent.com/u/5846912?v=4
twitterUsername: null
url: https://github.com/dilling
- login: anselm94
count: 1.6311660935283947
avatarUrl: https://avatars.githubusercontent.com/u/9033201?u=e5f657c3a1657c089d7cb88121e544ae7212e6f1&v=4
twitterUsername: MerbinJAnselm
url: https://github.com/anselm94
- login: aixgeek
count: 1.5850862534183767
avatarUrl: https://avatars.githubusercontent.com/u/9697715?u=d139c5568375c2472ac6142325e6856cd766d88d&v=4
twitterUsername: geekxai
url: https://github.com/aixgeek
- login: gramliu
count: 1.5791775791775793
avatarUrl: https://avatars.githubusercontent.com/u/24856195?u=9f55337506cdcac3146772c56b4634e6b46a5e46&v=4
twitterUsername: gramliu
url: https://github.com/gramliu
- login: jeffchuber
count: 1.564589469083851
avatarUrl: https://avatars.githubusercontent.com/u/891664?u=722172a0061f68ab22819fa88a354ec973f70a63&v=4
twitterUsername: null
url: https://github.com/jeffchuber
- login: ywkim
count: 1.5593229675944416
avatarUrl: https://avatars.githubusercontent.com/u/588581?u=df702e5b817a56476cb0cd8e7587b9be844d2850&v=4
twitterUsername: ywkim
url: https://github.com/ywkim
- login: jirimoravcik
count: 1.5492762061520144
avatarUrl: https://avatars.githubusercontent.com/u/951187?u=e80c215810058f57145042d12360d463e3a53443&v=4
twitterUsername: null
url: https://github.com/jirimoravcik
- login: miloradvojnovic
count: 1.5406950154046446
avatarUrl: https://avatars.githubusercontent.com/u/11798350?u=a9b931a1a3319787bef5e2d16e1fdec0888cdad5&v=4
twitterUsername: null
url: https://github.com/miloradvojnovic
- login: janvi-kalra
count: 1.53306342780027
avatarUrl: https://avatars.githubusercontent.com/u/119091286?u=ed9e9d72bbf9964b80f81e5ba8d1d5b2f860c23f&v=4
twitterUsername: janvikalra_
url: https://github.com/janvi-kalra
- login: yuku
count: 1.5249478139076982
avatarUrl: https://avatars.githubusercontent.com/u/96157?v=4
twitterUsername: yuku_t
url: https://github.com/yuku
- login: conroywhitney
count: 1.514880034611978
avatarUrl: https://avatars.githubusercontent.com/u/249891?u=36703ce68261be59109622877012be08fbc090da&v=4
twitterUsername: conroywhitney
url: https://github.com/conroywhitney
- login: seuha516
count: 1.511875544931005
avatarUrl: https://avatars.githubusercontent.com/u/79067549?u=de7a2688cb44010afafd055d707f3463585494df&v=4
twitterUsername: null
url: https://github.com/seuha516
- login: Czechh
count: 1.4922336407937202
avatarUrl: https://avatars.githubusercontent.com/u/4779936?u=ab072503433effc18c071b31adda307988877d5e&v=4
twitterUsername: null
url: https://github.com/Czechh
- login: adam101
count: 1.4865913414423242
avatarUrl: https://avatars.githubusercontent.com/u/1535782?v=4
twitterUsername: null
url: https://github.com/adam101
- login: OlegIvaniv
count: 1.485600148643627
avatarUrl: https://avatars.githubusercontent.com/u/12657221?v=4
twitterUsername: null
url: https://github.com/OlegIvaniv
- login: jaclar
count: 1.484066966625106
avatarUrl: https://avatars.githubusercontent.com/u/362704?u=52d868cc75c793fa895ef7035ae45516bd915e84&v=4
twitterUsername: jaclar
url: https://github.com/jaclar
- login: TeCHiScy
count: 1.4707811568276683
avatarUrl: https://avatars.githubusercontent.com/u/741195?u=e5937011ef84ff8a4b4b62ac1926a291c04f5d8b&v=4
twitterUsername: null
url: https://github.com/TeCHiScy
- login: anthonychu
count: 1.469340114219017
avatarUrl: https://avatars.githubusercontent.com/u/3982077?u=8bbebac42cb84a25c629f83f212b2d099ffa3964&v=4
twitterUsername: nthonyChu
url: https://github.com/anthonychu
- login: josemussa
count: 1.4646097752078864
avatarUrl: https://avatars.githubusercontent.com/u/4422500?u=d676ede0cec8ee5df6879ebf9d8b72d51ea1eb7f&v=4
twitterUsername: null
url: https://github.com/josemussa
- login: ivoneijr
count: 1.462283158877586
avatarUrl: https://avatars.githubusercontent.com/u/6401435?u=96c11b6333636bd784ffbff72998591f3b3f087b&v=4
twitterUsername: null
url: https://github.com/ivoneijr
- login: tonisives
count: 1.4433959992368206
avatarUrl: https://avatars.githubusercontent.com/u/1083534?v=4
twitterUsername: tonisives
url: https://github.com/tonisives
- login: Njuelle
count: 1.4376108924163842
avatarUrl: https://avatars.githubusercontent.com/u/3192870?u=e126aae39f36565450ebc854b35c6e890b705e71&v=4
twitterUsername: null
url: https://github.com/Njuelle
- login: Roland0511
count: 1.4309446898279385
avatarUrl: https://avatars.githubusercontent.com/u/588050?u=3c91917389117ee84843d961252ab7a2b9097e0e&v=4
twitterUsername: lizhou_zhu
url: https://github.com/Roland0511
- login: SebastjanPrachovskij
count: 1.4270208511853757
avatarUrl: https://avatars.githubusercontent.com/u/86522260?u=66898c89771c7b8ff38958e9fb9563a1cf7f8004&v=4
twitterUsername: null
url: https://github.com/SebastjanPrachovskij
- login: cinqisap
count: 1.425069723409155
avatarUrl: https://avatars.githubusercontent.com/u/158295355?v=4
twitterUsername: null
url: https://github.com/cinqisap
- login: dylanintech
count: 1.4248741912293315
avatarUrl: https://avatars.githubusercontent.com/u/86082012?u=6516bbf39c5af198123d8ed2e35fff5d200f4d2e&v=4
twitterUsername: dxlantxch
url: https://github.com/dylanintech
- login: andrewnguonly
count: 1.4221207958881887
avatarUrl: https://avatars.githubusercontent.com/u/7654246?u=b8599019655adaada3cdc3c3006798df42c44494&v=4
twitterUsername: andrewnguonly
url: https://github.com/andrewnguonly
- login: clemenspeters
count: 1.4063956990937188
avatarUrl: https://avatars.githubusercontent.com/u/13015002?u=059c556d90a2e5639dee42123077d51223c190f0&v=4
twitterUsername: PetersClemens
url: https://github.com/clemenspeters
- login: ShaunBaker
count: 1.4047837363443452
avatarUrl: https://avatars.githubusercontent.com/u/1176557?u=c2e8ecfb45b736fc4d3bbfe182e26936bd519fd3&v=4
twitterUsername: null
url: https://github.com/ShaunBaker
- login: machulav
count: 1.404147024663524
avatarUrl: https://avatars.githubusercontent.com/u/2857712?u=6809bef8bf07c46b39cd2fcd6027ed86e76372cd&v=4
twitterUsername: null
url: https://github.com/machulav
- login: dersia
count: 1.3856560415122312
avatarUrl: https://avatars.githubusercontent.com/u/1537958?u=5da46ca1cd93c6fed927c612fc454ba51d0a36b1&v=4
twitterUsername: null
url: https://github.com/dersia
- login: joshsny
count: 1.3783891982385268
avatarUrl: https://avatars.githubusercontent.com/u/7135900?u=109e43c5e906a8ecc1a2d465c4457f5cf29328a5&v=4
twitterUsername: joshsny
url: https://github.com/joshsny
- login: eactisgrosso
count: 1.372139099542256
avatarUrl: https://avatars.githubusercontent.com/u/2279003?u=d122874eedb211359d4bf0119877d74ea7d5bcab&v=4
twitterUsername: null
url: https://github.com/eactisgrosso
- login: frankolson
count: 1.3622589531680442
avatarUrl: https://avatars.githubusercontent.com/u/6773706?u=738775762205a07fd7de297297c99f781e957c58&v=4
twitterUsername: thinkolson
url: https://github.com/frankolson
- login: uthmanmoh
count: 1.3561536743354925
avatarUrl: https://avatars.githubusercontent.com/u/83053931?u=5c715d2d4f6786fa749276de8eced710be8bfa99&v=4
twitterUsername: null
url: https://github.com/uthmanmoh
- login: Jordan-Gilliam
count: 1.3528138528138527
avatarUrl: https://avatars.githubusercontent.com/u/25993686?u=319a6ed2119197d4d11301614a104ae686f9fc70&v=4
twitterUsername: nolansym
url: https://github.com/Jordan-Gilliam
- login: winor30
count: 1.3481015442826345
avatarUrl: https://avatars.githubusercontent.com/u/12413150?u=691a5e076bdd8c9e9fd637a41496b29e11b0c82f&v=4
twitterUsername: winor30
url: https://github.com/winor30
- login: willemmulder
count: 1.333493258645407
avatarUrl: https://avatars.githubusercontent.com/u/70933?u=206fafc72fd14b4291cb29269c5e1cc8081d043b&v=4
twitterUsername: willemmulder
url: https://github.com/willemmulder
- login: mhart
count: 1.314023891343479
avatarUrl: https://avatars.githubusercontent.com/u/367936?v=4
twitterUsername: hichaelmart
url: https://github.com/mhart
- login: mvaker
count: 1.3064713064713065
avatarUrl: https://avatars.githubusercontent.com/u/5671913?u=2e237cb1dd51f9d0dd01f0deb80003163641fc49&v=4
twitterUsername: null
url: https://github.com/mvaker
- login: vitaly-ps
count: 1.3063658201784487
avatarUrl: https://avatars.githubusercontent.com/u/141448200?u=a3902a9c11399c916f1af2bf0ead901e7afe1a67&v=4
twitterUsername: null
url: https://github.com/vitaly-ps
- login: cbh123
count: 1.3061284427145958
avatarUrl: https://avatars.githubusercontent.com/u/14149230?u=ca710ca2a64391470163ddef6b5ea7633ab26872&v=4
twitterUsername: charliebholtz
url: https://github.com/cbh123
- login: Neverland3124
count: 1.301994301994302
avatarUrl: https://avatars.githubusercontent.com/u/52025513?u=865e861a1abb0d78be587f685d28fe8a00aee8fe&v=4
twitterUsername: null
url: https://github.com/Neverland3124
- login: jasonnathan
count: 1.2936835460554628
avatarUrl: https://avatars.githubusercontent.com/u/780157?u=006e0deda897eb1a4abcc459adcd7242dcbe8fee&v=4
twitterUsername: jason_nathan
url: https://github.com/jasonnathan
- login: Maanethdesilva
count: 1.2846497764530551
avatarUrl: https://avatars.githubusercontent.com/u/94875583?v=4
twitterUsername: null
url: https://github.com/Maanethdesilva
- login: fuleinist
count: 1.2845103789266226
avatarUrl: https://avatars.githubusercontent.com/u/1163738?v=4
twitterUsername: null
url: https://github.com/fuleinist
- login: kwadhwa18
count: 1.2830645771822242
avatarUrl: https://avatars.githubusercontent.com/u/6015244?u=a127081404b8dc16ac0e84a869dfff4ac82bbab2&v=4
twitterUsername: null
url: https://github.com/kwadhwa18
- login: sousousore1
count: 1.274799599198397
avatarUrl: https://avatars.githubusercontent.com/u/624438?v=4
twitterUsername: null
url: https://github.com/sousousore1
- login: seth-25
count: 1.274023659050397
avatarUrl: https://avatars.githubusercontent.com/u/49222652?u=203c2bef6cbb77668a289b8272aea4fb654558d5&v=4
twitterUsername: null
url: https://github.com/seth-25
- login: tomi-mercado
count: 1.2671966816453508
avatarUrl: https://avatars.githubusercontent.com/u/60221771?u=f8c1214535e402b0ff5c3428bfe98b586b517106&v=4
twitterUsername: tomsito_ts
url: https://github.com/tomi-mercado
- login: JHeidinga
count: 1.2602823818293432
avatarUrl: https://avatars.githubusercontent.com/u/1702015?u=fa33fb709707e2429f10fbb824abead61628d50c&v=4
twitterUsername: null
url: https://github.com/JHeidinga
- login: niklas-lohmann
count: 1.2581453634085213
avatarUrl: https://avatars.githubusercontent.com/u/68230177?v=4
twitterUsername: null
url: https://github.com/niklas-lohmann
- login: Durisvk
count: 1.2562437105479827
avatarUrl: https://avatars.githubusercontent.com/u/8467003?u=f07b8c070eaed3ad8972be4f4ca91afb1ae6e2c0&v=4
twitterUsername: null
url: https://github.com/Durisvk
- login: BjoernRave
count: 1.2556762493521911
avatarUrl: https://avatars.githubusercontent.com/u/36173920?u=c3acae11221a037c16254e2187555ea6259d89c3&v=4
twitterUsername: bjoern_rave
url: https://github.com/BjoernRave
- login: LordMsz
count: 1.2519987699876998
avatarUrl: https://avatars.githubusercontent.com/u/33070601?u=ddc6c16156f6397198692c547324e51f94c70ca7&v=4
twitterUsername: null
url: https://github.com/LordMsz
- login: tanyaasharma
count: 1.2499106738092052
avatarUrl: https://avatars.githubusercontent.com/u/140478067?v=4
twitterUsername: null
url: https://github.com/tanyaasharma
- login: crazyurus
count: 1.2498577975251404
avatarUrl: https://avatars.githubusercontent.com/u/2209055?u=b39f7e70f137ff3d1785d261cb15067f0d91ae05&v=4
twitterUsername: null
url: https://github.com/crazyurus
- login: qalqi
count: 1.247286079182631
avatarUrl: https://avatars.githubusercontent.com/u/1781048?u=837879a7e62c6b3736dc39a31ff42873bee2c532&v=4
twitterUsername: null
url: https://github.com/qalqi
- login: katarinasupe
count: 1.246442719702482
avatarUrl: https://avatars.githubusercontent.com/u/61758502?u=20cdcb0bae81b9eb330c94f7cfae462327785219&v=4
twitterUsername: supe_katarina
url: https://github.com/katarinasupe
- login: paul-paliychuk
count: 1.245805528608245
avatarUrl: https://avatars.githubusercontent.com/u/26054637?u=edd1e4f54e91b549f2edb525d43210f4f04d7367&v=4
twitterUsername: null
url: https://github.com/paul-paliychuk
- login: andrewlei
count: 1.2386414411182212
avatarUrl: https://avatars.githubusercontent.com/u/1158058?v=4
twitterUsername: null
url: https://github.com/andrewlei
- login: floomby
count: 1.2297357814599192
avatarUrl: https://avatars.githubusercontent.com/u/3113021?v=4
twitterUsername: null
url: https://github.com/floomby
- login: milanjrodd
count: 1.2234407887448766
avatarUrl: https://avatars.githubusercontent.com/u/121220673?u=55636f26ea48e77e0372008089ff2c38691eaa0a&v=4
twitterUsername: null
url: https://github.com/milanjrodd
- login: NickMandylas
count: 1.2216829174004553
avatarUrl: https://avatars.githubusercontent.com/u/19514618?u=95f8c29ed06696260722c2c6aa7bac3a1136d7a2&v=4
twitterUsername: nicknetau
url: https://github.com/NickMandylas
- login: DravenCat
count: 1.2097773313242928
avatarUrl: https://avatars.githubusercontent.com/u/55412122?v=4
twitterUsername: null
url: https://github.com/DravenCat
- login: Alireza29675
count: 1.2074392093367425
avatarUrl: https://avatars.githubusercontent.com/u/2771377?u=65ec71f9860ac2610e1cb5028173f67713a174d7&v=4
twitterUsername: alireza29675
url: https://github.com/Alireza29675
- login: zhengxs2018
count: 1.2073887489504618
avatarUrl: https://avatars.githubusercontent.com/u/7506913?u=42c32ca59ae2e44532cd45027e5b62d2712cf2a2&v=4
twitterUsername: null
url: https://github.com/zhengxs2018
- login: tofuliang
count: 1.192831678125796
avatarUrl: https://avatars.githubusercontent.com/u/1814685?v=4
twitterUsername: null
url: https://github.com/tofuliang
- login: cmtoomey
count: 1.1890213611525087
avatarUrl: https://avatars.githubusercontent.com/u/12201602?u=ea5cbb8d158980f6050dd41ae41b7f72e0a47337&v=4
twitterUsername: Sock1tToomey
url: https://github.com/cmtoomey
- login: igorshapiro
count: 1.186488541327251
avatarUrl: https://avatars.githubusercontent.com/u/1085209?u=16b60724316a7ed8e8b52af576c121215461922a&v=4
twitterUsername: null
url: https://github.com/igorshapiro
- login: ezynda3
count: 1.1749026654687031
avatarUrl: https://avatars.githubusercontent.com/u/5308871?v=4
twitterUsername: what_the_func
url: https://github.com/ezynda3
- login: more-by-more
count: 1.1723569578526019
avatarUrl: https://avatars.githubusercontent.com/u/67614844?u=d3d818efb3e3e2ddda589d6157f853922a460f5b&v=4
twitterUsername: more__studio
url: https://github.com/more-by-more
- login: noble-varghese
count: 1.169995826128436
avatarUrl: https://avatars.githubusercontent.com/u/109506617?u=c1d2a1813c51bff89bfa85d533633ed4c201ba2e&v=4
twitterUsername: null
url: https://github.com/noble-varghese
- login: SananR
count: 1.1688741721854305
avatarUrl: https://avatars.githubusercontent.com/u/14956384?u=538ff9bf09497059b312067333f68eba75594802&v=4
twitterUsername: null
url: https://github.com/SananR
- login: fraserxu
count: 1.1641274004747006
avatarUrl: https://avatars.githubusercontent.com/u/1183541?v=4
twitterUsername: fraserxu
url: https://github.com/fraserxu
- login: ashvardanian
count: 1.1635835095137421
avatarUrl: https://avatars.githubusercontent.com/u/1983160?u=536f2558c6ac33b74a6d89520dcb27ba46954070&v=4
twitterUsername: ashvardanian
url: https://github.com/ashvardanian
- login: adeelehsan
count: 1.163265306122449
avatarUrl: https://avatars.githubusercontent.com/u/8156837?u=99cacfbd962ff58885bdf68e5fc640fc0d3cb87c&v=4
twitterUsername: null
url: https://github.com/adeelehsan
- login: henriquegdantas
count: 1.1626857749469215
avatarUrl: https://avatars.githubusercontent.com/u/12974790?u=80d76f256a7854da6ae441b6ee078119877398e7&v=4
twitterUsername: null
url: https://github.com/henriquegdantas
- login: evad1n
count: 1.1613760938410924
avatarUrl: https://avatars.githubusercontent.com/u/50718218?u=ee35784971ef8dcdfdb25cfe0a8284ca48724938&v=4
twitterUsername: null
url: https://github.com/evad1n
- login: benjibc
count: 1.1581632653061225
avatarUrl: https://avatars.githubusercontent.com/u/1585539?u=654a21985c875f78a20eda7e4884e8d64de86fba&v=4
twitterUsername: null
url: https://github.com/benjibc
- login: P-E-B
count: 1.1575327745540511
avatarUrl: https://avatars.githubusercontent.com/u/38215315?u=3985b6a3ecb0e8338c5912ea9e20787152d0ad7a&v=4
twitterUsername: null
url: https://github.com/P-E-B
- login: omikader
count: 1.152142614957631
avatarUrl: https://avatars.githubusercontent.com/u/16735699?u=29fc7c7c777c3cabc22449b68bbb01fe2fa0b574&v=4
twitterUsername: null
url: https://github.com/omikader
- login: jasongill
count: 1.1428571428571428
avatarUrl: https://avatars.githubusercontent.com/u/241711?v=4
twitterUsername: null
url: https://github.com/jasongill
- login: Luisotee
count: 1.1353383458646618
avatarUrl: https://avatars.githubusercontent.com/u/50471205?u=059d6ab166e5a32c496ff50ef6e3fb0ca04a50ad&v=4
twitterUsername: null
url: https://github.com/Luisotee
- login: puigde
count: 1.125363627638318
avatarUrl: https://avatars.githubusercontent.com/u/83642160?u=7e76b13b7484e4601bea47dc6e238c89d453a24d&v=4
twitterUsername: polpuigdemont
url: https://github.com/puigde
- login: Adrastopoulos
count: 1.118279569892473
avatarUrl: https://avatars.githubusercontent.com/u/76796897?u=0bd50d301b4c7025f29396af44c8e1829eff1db6&v=4
twitterUsername: null
url: https://github.com/Adrastopoulos
- login: chase-crumbaugh
count: 1.1127079590720925
avatarUrl: https://avatars.githubusercontent.com/u/90289500?u=0129550ecfbb4a92922fff7a406566a47a23dfb0&v=4
twitterUsername: null
url: https://github.com/chase-crumbaugh
- login: Zeneos
count: 1.1119838542518954
avatarUrl: https://avatars.githubusercontent.com/u/95008961?v=4
twitterUsername: null
url: https://github.com/Zeneos
- login: joseanu
count: 1.1101000909918106
avatarUrl: https://avatars.githubusercontent.com/u/2730127?u=9fe1d593bd63c7f116b9c46e9cbd359a2e4304f0&v=4
twitterUsername: jantonioulloa
url: https://github.com/joseanu
- login: JackFener
count: 1.108656849620705
avatarUrl: https://avatars.githubusercontent.com/u/20380671?u=b51d10b71850203e6360655fa59cc679c5a498e6&v=4
twitterUsername: null
url: https://github.com/JackFener
- login: swyxio
count: 1.1079251294305057
avatarUrl: https://avatars.githubusercontent.com/u/6764957?u=97ad815028595b73b06ee4b0510e66bbe391228d&v=4
twitterUsername: swyx
url: https://github.com/swyxio
- login: pczekaj
count: 1.1072005633673014
avatarUrl: https://avatars.githubusercontent.com/u/1460539?u=24c2db4a29757f608a54a062340a466cad843825&v=4
twitterUsername: null
url: https://github.com/pczekaj
- login: devinburnette
count: 1.1044585013472967
avatarUrl: https://avatars.githubusercontent.com/u/13012689?u=7b68c67ea1bbc272c35be7c0bcf1c66a04554179&v=4
twitterUsername: null
url: https://github.com/devinburnette
- login: ananis25
count: 1.1039797561536693
avatarUrl: https://avatars.githubusercontent.com/u/16446513?u=5026326ed39bfee8325c30cdbd24ac20519d21b8&v=4
twitterUsername: ananis25
url: https://github.com/ananis25
- login: joaopcm
count: 1.0963667731359579
avatarUrl: https://avatars.githubusercontent.com/u/58827242?u=3e03812a1074f2ce888b751c48e78a849c7e0aff&v=4
twitterUsername: jopcmelo
url: https://github.com/joaopcm
- login: SalehHindi
count: 1.0950556467797847
avatarUrl: https://avatars.githubusercontent.com/u/15721377?u=37fadd6a7bf9dfa63ceb866bda23ca44a7b2c0c2&v=4
twitterUsername: SalehOfTomorrow
url: https://github.com/SalehHindi
- login: AvaterClasher
count: 1.0900805104055902
avatarUrl: https://avatars.githubusercontent.com/u/116944847?u=102a870b3efed7f30f0a57123391a293eb6f5b08&v=4
twitterUsername: Avater004
url: https://github.com/AvaterClasher
- login: JamsheedMistri
count: 1.0876276783657168
avatarUrl: https://avatars.githubusercontent.com/u/13024750?u=6ae631199ec7c0bb34eb8d56200023cdd94720d3&v=4
twitterUsername: null
url: https://github.com/JamsheedMistri
- login: cmanou
count: 1.0871305418719213
avatarUrl: https://avatars.githubusercontent.com/u/683160?u=e9050e4341c2c9d46b035ea17ea94234634e1b2c&v=4
twitterUsername: null
url: https://github.com/cmanou
- login: micahriggan
count: 1.0791195549507453
avatarUrl: https://avatars.githubusercontent.com/u/3626473?u=508e8c831d8eb804e95985d5191a08c761544fad&v=4
twitterUsername: null
url: https://github.com/micahriggan
- login: w00ing
count: 1.075050709939148
avatarUrl: https://avatars.githubusercontent.com/u/29723695?u=563d4a628c9af35f827f476e38635310f1cec114&v=4
twitterUsername: wooing0306
url: https://github.com/w00ing
- login: madmed88
count: 1.0614216701173222
avatarUrl: https://avatars.githubusercontent.com/u/1579388?u=62ca1bfe7c271b5fd1d77abc470aa5e535b1ed83&v=4
twitterUsername: null
url: https://github.com/madmed88
- login: ardsh
count: 1.0609951845906902
avatarUrl: https://avatars.githubusercontent.com/u/23664687?u=158ef7e156a7881b8647ece63683aca2c28f132e&v=4
twitterUsername: null
url: https://github.com/ardsh
- login: JoeABCDEF
count: 1.0464925848421436
avatarUrl: https://avatars.githubusercontent.com/u/39638510?u=f5fac0a3578572817b37a6dfc00adacb705ec7d0&v=4
twitterUsername: null
url: https://github.com/JoeABCDEF
- login: saul-jb
count: 1.04251968503937
avatarUrl: https://avatars.githubusercontent.com/u/2025187?v=4
twitterUsername: null
url: https://github.com/saul-jb
- login: JTCorrin
count: 1.0404043172236763
avatarUrl: https://avatars.githubusercontent.com/u/73115680?v=4
twitterUsername: null
url: https://github.com/JTCorrin
- login: zandko
count: 1.026159026159026
avatarUrl: https://avatars.githubusercontent.com/u/37948383?u=04ccf6e060b27e39c931c2608381351cf236a28f&v=4
twitterUsername: null
url: https://github.com/zandko
- login: federicoestevez
count: 1.0229885057471264
avatarUrl: https://avatars.githubusercontent.com/u/10424147?v=4
twitterUsername: null
url: https://github.com/federicoestevez
- login: martinseanhunt
count: 1.0161725067385445
avatarUrl: https://avatars.githubusercontent.com/u/65744?u=ddac1e773828d8058a40bca680cf549e955f69ae&v=4
twitterUsername: null
url: https://github.com/martinseanhunt
- login: functorism
count: 1.0136089225416964
avatarUrl: https://avatars.githubusercontent.com/u/17207277?u=4df9bc30a55b4da4b3d6fd20a2956afd722bde24&v=4
twitterUsername: null
url: https://github.com/functorism
- login: erictt
count: 1.0129149331680978
avatarUrl: https://avatars.githubusercontent.com/u/9592198?u=567fa49c73e824525d33eefd836ece16ab9964c8&v=4
twitterUsername: null
url: https://github.com/erictt
- login: WilliamEspegren
count: 1.0091400083090984
avatarUrl: https://avatars.githubusercontent.com/u/131612909?v=4
twitterUsername: WilliamEspegren
url: https://github.com/WilliamEspegren
- login: lesters
count: 1.00853889943074
avatarUrl: https://avatars.githubusercontent.com/u/5798036?u=4eba31d63c3818d17fb8f9aa923599ac63ebfea8&v=4
twitterUsername: null
url: https://github.com/lesters
- login: my8bit
count: 1.0073145245559039
avatarUrl: https://avatars.githubusercontent.com/u/782268?u=d83da3e6269d53a828bbeb6d661049a1ed185cb0&v=4
twitterUsername: null
url: https://github.com/my8bit
- login: erhant
count: 1.0040241448692153
avatarUrl: https://avatars.githubusercontent.com/u/16037166?u=9d056a2f5059684620e22aa4d880e38183309b51&v=4
twitterUsername: 0xerhant
url: https://github.com/erhant
top_reviewers:
- login: afirstenberg
count: 23
avatarUrl: https://avatars.githubusercontent.com/u/3507578?v=4
twitterUsername: null
url: https://github.com/afirstenberg
- login: sullivan-sean
count: 8
avatarUrl: https://avatars.githubusercontent.com/u/22581534?u=8f88473db2f929a965b6371733efda28e3fa1948&v=4
twitterUsername: null
url: https://github.com/sullivan-sean
- login: tomasonjo
count: 8
avatarUrl: https://avatars.githubusercontent.com/u/19948365?v=4
twitterUsername: tb_tomaz
url: https://github.com/tomasonjo
- login: ppramesi
count: 7
avatarUrl: https://avatars.githubusercontent.com/u/6775031?v=4
twitterUsername: null
url: https://github.com/ppramesi
- login: jacobrosenthal
count: 6
avatarUrl: https://avatars.githubusercontent.com/u/455796?v=4
twitterUsername: null
url: https://github.com/jacobrosenthal
- login: sinedied
count: 6
avatarUrl: https://avatars.githubusercontent.com/u/593151?u=08557bbdd96221813b8aec932dd7de895ac040ea&v=4
twitterUsername: sinedied
url: https://github.com/sinedied
- login: mieslep
count: 5
avatarUrl: https://avatars.githubusercontent.com/u/5420540?u=8f038c002fbce42427999eb715dc9f868cef1c84&v=4
twitterUsername: null
url: https://github.com/mieslep
// File: lc_public_repos/langchainjs/docs/core_docs/src/theme/ChatModelTabs.js
/* eslint-disable react/jsx-props-no-spreading, react/destructuring-assignment */
import React from "react";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import CodeBlock from "@theme-original/CodeBlock";
import Npm2Yarn from "@theme/Npm2Yarn";
import Admonition from "@theme/Admonition";
function InstallationInfo({ children }) {
return (
<>
<Admonition type="tip">
<p>
See{" "}
<a href="/docs/how_to/installation/#installing-integration-packages">
this section for general instructions on installing integration
packages
</a>
.
</p>
</Admonition>
<Npm2Yarn>{children}</Npm2Yarn>
</>
);
}
const DEFAULTS = {
openaiParams: `{\n model: "gpt-4o-mini",\n temperature: 0\n}`,
anthropicParams: `{\n model: "claude-3-5-sonnet-20240620",\n temperature: 0\n}`,
fireworksParams: `{\n model: "accounts/fireworks/models/llama-v3p1-70b-instruct",\n temperature: 0\n}`,
mistralParams: `{\n model: "mistral-large-latest",\n temperature: 0\n}`,
groqParams: `{\n model: "mixtral-8x7b-32768",\n temperature: 0\n}`,
vertexParams: `{\n model: "gemini-1.5-flash",\n temperature: 0\n}`,
};
const MODELS_WSO = ["openai", "anthropic", "mistral", "groq", "vertex"];
/**
 * @typedef {Object} ChatModelTabsProps - Component props.
 * @property {string} [openaiParams] - Parameters for OpenAI chat model. Defaults to `"{\n model: "gpt-4o-mini",\n temperature: 0\n}"`
 * @property {string} [anthropicParams] - Parameters for Anthropic chat model. Defaults to `"{\n model: "claude-3-5-sonnet-20240620",\n temperature: 0\n}"`
 * @property {string} [fireworksParams] - Parameters for Fireworks chat model. Defaults to `"{\n model: "accounts/fireworks/models/llama-v3p1-70b-instruct",\n temperature: 0\n}"`
 * @property {string} [mistralParams] - Parameters for Mistral chat model. Defaults to `"{\n model: "mistral-large-latest",\n temperature: 0\n}"`
 * @property {string} [groqParams] - Parameters for Groq chat model. Defaults to `"{\n model: "mixtral-8x7b-32768",\n temperature: 0\n}"`
 * @property {string} [vertexParams] - Parameters for Google VertexAI chat model. Defaults to `"{\n model: "gemini-1.5-flash",\n temperature: 0\n}"`
 *
 * @property {boolean} [hideOpenai] - Whether or not to hide the OpenAI chat model.
 * @property {boolean} [hideAnthropic] - Whether or not to hide the Anthropic chat model.
 * @property {boolean} [hideFireworks] - Whether or not to hide the Fireworks chat model.
 * @property {boolean} [hideMistral] - Whether or not to hide the Mistral chat model.
 * @property {boolean} [hideGroq] - Whether or not to hide the Groq chat model.
 * @property {boolean} [hideVertex] - Whether or not to hide the Google VertexAI chat model.
 *
 * @property {string[]} [providers] - Providers to display, in order. Defaults to all supported providers.
 * @property {string} [additionalDependencies] - Extra packages to append to the install command.
 * @property {string} [customVarName] - Custom variable name for the model. Defaults to `"model"`.
 * @property {boolean} [onlyWso] - Only display models which have `withStructuredOutput` implemented.
 */
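/**
 * Usage sketch (hypothetical MDX page — the prop values below are
 * illustrative, not taken from a real docs page). `providers` limits and
 * orders the tabs shown; `additionalDependencies` is appended to the
 * install command; `customVarName` renames the instantiated model variable:
 *
 *   import ChatModelTabs from "@theme/ChatModelTabs";
 *
 *   <ChatModelTabs
 *     customVarName="llm"
 *     providers={["openai", "anthropic"]}
 *     additionalDependencies="@langchain/core"
 *   />
 */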
/**
* @param {ChatModelTabsProps} props - Component props.
*/
export default function ChatModelTabs(props) {
const { customVarName, additionalDependencies } = props;
const llmVarName = customVarName ?? "model";
const openaiParams = props.openaiParams ?? DEFAULTS.openaiParams;
const anthropicParams = props.anthropicParams ?? DEFAULTS.anthropicParams;
const fireworksParams = props.fireworksParams ?? DEFAULTS.fireworksParams;
const mistralParams = props.mistralParams ?? DEFAULTS.mistralParams;
const groqParams = props.groqParams ?? DEFAULTS.groqParams;
const vertexParams = props.vertexParams ?? DEFAULTS.vertexParams;
const providers = props.providers ?? [
"openai",
"anthropic",
"fireworks",
"mistral",
"groq",
"vertex",
];
const tabs = {
openai: {
value: "openai",
label: "OpenAI",
default: true,
text: `import { ChatOpenAI } from "@langchain/openai";\n\nconst ${llmVarName} = new ChatOpenAI(${openaiParams});`,
envs: `OPENAI_API_KEY=your-api-key`,
dependencies: "@langchain/openai",
},
anthropic: {
value: "anthropic",
label: "Anthropic",
default: false,
text: `import { ChatAnthropic } from "@langchain/anthropic";\n\nconst ${llmVarName} = new ChatAnthropic(${anthropicParams});`,
envs: `ANTHROPIC_API_KEY=your-api-key`,
dependencies: "@langchain/anthropic",
},
fireworks: {
value: "fireworks",
label: "FireworksAI",
default: false,
text: `import { ChatFireworks } from "@langchain/community/chat_models/fireworks";\n\nconst ${llmVarName} = new ChatFireworks(${fireworksParams});`,
envs: `FIREWORKS_API_KEY=your-api-key`,
dependencies: "@langchain/community",
},
mistral: {
value: "mistral",
label: "MistralAI",
default: false,
text: `import { ChatMistralAI } from "@langchain/mistralai";\n\nconst ${llmVarName} = new ChatMistralAI(${mistralParams});`,
envs: `MISTRAL_API_KEY=your-api-key`,
dependencies: "@langchain/mistralai",
},
groq: {
value: "groq",
label: "Groq",
default: false,
text: `import { ChatGroq } from "@langchain/groq";\n\nconst ${llmVarName} = new ChatGroq(${groqParams});`,
envs: `GROQ_API_KEY=your-api-key`,
dependencies: "@langchain/groq",
},
vertex: {
value: "vertex",
label: "VertexAI",
default: false,
text: `import { ChatVertexAI } from "@langchain/google-vertexai";\n\nconst ${llmVarName} = new ChatVertexAI(${vertexParams});`,
envs: `GOOGLE_APPLICATION_CREDENTIALS=credentials.json`,
dependencies: "@langchain/google-vertexai",
},
};
const displayedTabs = (props.onlyWso ? MODELS_WSO : providers).map(
(provider) => tabs[provider]
);
return (
<div>
<h3>Pick your chat model:</h3>
<Tabs groupId="modelTabs">
{displayedTabs.map((tab) => (
<TabItem value={tab.value} label={tab.label} key={tab.value}>
<h4>Install dependencies</h4>
<InstallationInfo>
{[tab.dependencies, additionalDependencies].join(" ")}
</InstallationInfo>
<h4>Add environment variables</h4>
<CodeBlock language="bash">{tab.envs}</CodeBlock>
<h4>Instantiate the model</h4>
<CodeBlock language="typescript">{tab.text}</CodeBlock>
</TabItem>
))}
</Tabs>
</div>
);
}
// File: lc_public_repos/langchainjs/docs/core_docs/src/theme/NotFound.js
/* eslint-disable import/no-extraneous-dependencies */
/* eslint-disable no-nested-ternary */
import React from "react";
import { translate } from "@docusaurus/Translate";
import { PageMetadata } from "@docusaurus/theme-common";
import Layout from "@theme/Layout";
import { useLocation } from "react-router-dom";
function LegacyBadge() {
return <span className="badge badge--secondary">LEGACY</span>;
}
const suggestedLinks = {
"/docs/additional_resources/tutorials/expression_language_cheatsheet/": {
canonical: "/docs/how_to/lcel_cheatsheet/",
alternative: [
"/v0.1/docs/additional_resources/tutorials/expression_language_cheatsheet/",
],
},
"/docs/ecosystem/": {
canonical: "/docs/integrations/platforms/",
alternative: ["/v0.1/docs/ecosystem/"],
},
"/docs/ecosystem/integrations/": {
canonical: "/docs/integrations/platforms/",
alternative: ["/v0.1/docs/ecosystem/integrations/"],
},
"/docs/ecosystem/integrations/databerry/": {
canonical: "/docs/integrations/platforms/",
alternative: ["/v0.1/docs/ecosystem/integrations/databerry/"],
},
"/docs/ecosystem/integrations/helicone/": {
canonical: "/docs/integrations/platforms/",
alternative: ["/v0.1/docs/ecosystem/integrations/helicone/"],
},
"/docs/ecosystem/integrations/lunary/": {
canonical: "/docs/integrations/platforms/",
alternative: ["/v0.1/docs/ecosystem/integrations/lunary/"],
},
"/docs/ecosystem/integrations/makersuite/": {
canonical: "/docs/integrations/platforms/",
alternative: ["/v0.1/docs/ecosystem/integrations/makersuite/"],
},
"/docs/ecosystem/integrations/unstructured/": {
canonical: "/docs/integrations/document_loaders/file_loaders/unstructured/",
alternative: ["/v0.1/docs/ecosystem/integrations/unstructured/"],
},
"/docs/ecosystem/langserve/": {
canonical:
"https://api.js.langchain.com/classes/_langchain_core.runnables_remote.RemoteRunnable.html",
alternative: ["/v0.1/docs/ecosystem/langserve/"],
},
"/docs/expression_language/": {
canonical: "/docs/how_to/#langchain-expression-language-lcel",
alternative: ["/v0.1/docs/expression_language/"],
},
"/docs/expression_language/cookbook/": {
canonical: "/docs/how_to/#langchain-expression-language-lcel",
alternative: ["/v0.1/docs/expression_language/cookbook/"],
},
"/docs/expression_language/cookbook/adding_memory/": {
canonical: "/docs/how_to/message_history",
alternative: ["/v0.1/docs/expression_language/cookbook/adding_memory/"],
},
"/docs/expression_language/cookbook/agents/": {
canonical: "/docs/how_to/agent_executor",
alternative: ["/v0.1/docs/expression_language/cookbook/agents/"],
},
"/docs/expression_language/cookbook/multiple_chains/": {
canonical: "/docs/how_to/parallel",
alternative: ["/v0.1/docs/expression_language/cookbook/multiple_chains/"],
},
"/docs/expression_language/cookbook/prompt_llm_parser/": {
canonical: "/docs/tutorials/llm_chain",
alternative: ["/v0.1/docs/expression_language/cookbook/prompt_llm_parser/"],
},
"/docs/expression_language/cookbook/retrieval/": {
canonical: "/docs/tutorials/rag",
alternative: ["/v0.1/docs/expression_language/cookbook/retrieval/"],
},
"/docs/expression_language/cookbook/sql_db/": {
canonical: "/docs/tutorials/sql_qa",
alternative: ["/v0.1/docs/expression_language/cookbook/sql_db/"],
},
"/docs/expression_language/cookbook/tools/": {
canonical: "/docs/how_to/tool_calling/",
alternative: ["/v0.1/docs/expression_language/cookbook/tools/"],
},
"/docs/expression_language/get_started/": {
canonical: "/docs/how_to/sequence",
alternative: ["/v0.1/docs/expression_language/get_started/"],
},
"/docs/expression_language/how_to/map/": {
canonical: "/docs/how_to/cancel_execution/",
alternative: ["/v0.1/docs/expression_language/how_to/map/"],
},
"/docs/expression_language/how_to/message_history/": {
canonical: "/docs/how_to/message_history",
alternative: ["/v0.1/docs/expression_language/how_to/message_history/"],
},
"/docs/expression_language/how_to/routing/": {
canonical: "/docs/how_to/routing",
alternative: ["/v0.1/docs/expression_language/how_to/routing/"],
},
"/docs/expression_language/how_to/with_history/": {
canonical: "/docs/how_to/message_history",
alternative: ["/v0.1/docs/expression_language/how_to/with_history/"],
},
"/docs/expression_language/interface/": {
canonical: "/docs/how_to/lcel_cheatsheet",
alternative: ["/v0.1/docs/expression_language/interface/"],
},
"/docs/expression_language/streaming/": {
canonical: "/docs/how_to/streaming",
alternative: ["/v0.1/docs/expression_language/streaming/"],
},
"/docs/expression_language/why/": {
canonical: "/docs/concepts/#langchain-expression-language",
alternative: ["/v0.1/docs/expression_language/why/"],
},
"/docs/get_started/": {
canonical: "/docs/introduction/",
alternative: ["/v0.1/docs/get_started/"],
},
"/docs/get_started/installation/": {
canonical: "/docs/tutorials/",
alternative: ["/v0.1/docs/get_started/installation/"],
},
"/docs/get_started/introduction/": {
canonical: "/docs/tutorials/",
alternative: ["/v0.1/docs/get_started/introduction/"],
},
"/docs/get_started/quickstart/": {
canonical: "/docs/tutorials/",
alternative: ["/v0.1/docs/get_started/quickstart/"],
},
"/docs/guides/": {
canonical: "/docs/how_to/",
alternative: ["/v0.1/docs/guides/"],
},
"/docs/guides/debugging/": {
canonical: "/docs/how_to/debugging",
alternative: ["/v0.1/docs/guides/debugging/"],
},
"/docs/guides/deployment/": {
canonical: "https://langchain-ai.github.io/langgraph/cloud/",
alternative: ["/v0.1/docs/guides/deployment/"],
},
"/docs/guides/deployment/nextjs/": {
canonical: "https://github.com/langchain-ai/langchain-nextjs-template",
alternative: ["/v0.1/docs/guides/deployment/nextjs/"],
},
"/docs/guides/deployment/sveltekit/": {
canonical: "https://github.com/langchain-ai/langchain-nextjs-template",
alternative: ["/v0.1/docs/guides/deployment/sveltekit/"],
},
"/docs/guides/evaluation/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/"],
},
"/docs/guides/evaluation/comparison/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/comparison/"],
},
"/docs/guides/evaluation/comparison/pairwise_embedding_distance/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: [
"/v0.1/docs/guides/evaluation/comparison/pairwise_embedding_distance/",
],
},
"/docs/guides/evaluation/comparison/pairwise_string/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/comparison/pairwise_string/"],
},
"/docs/guides/evaluation/examples/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/examples/"],
},
"/docs/guides/evaluation/examples/comparisons/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/examples/comparisons/"],
},
"/docs/guides/evaluation/string/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/string/"],
},
"/docs/guides/evaluation/string/criteria/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/string/criteria/"],
},
"/docs/guides/evaluation/string/embedding_distance/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/string/embedding_distance/"],
},
"/docs/guides/evaluation/trajectory/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/trajectory/"],
},
"/docs/guides/evaluation/trajectory/trajectory_eval/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/evaluation/trajectory/trajectory_eval/"],
},
"/docs/guides/extending_langchain/": {
canonical: "/docs/how_to/#custom",
alternative: ["/v0.1/docs/guides/extending_langchain/"],
},
"/docs/guides/fallbacks/": {
canonical: "/docs/how_to/fallbacks",
alternative: ["/v0.1/docs/guides/fallbacks/"],
},
"/docs/guides/langsmith_evaluation/": {
canonical:
"https://docs.smith.langchain.com/tutorials/Developers/evaluation",
alternative: ["/v0.1/docs/guides/langsmith_evaluation/"],
},
"/docs/guides/migrating/": {
canonical: "https://js.langchain.com/v0.1/docs/guides/migrating/",
alternative: ["/v0.1/docs/guides/migrating/"],
},
"/docs/integrations/chat_memory/": {
canonical: "/docs/integrations/memory",
alternative: ["/v0.1/docs/integrations/chat_memory/"],
},
"/docs/integrations/chat_memory/astradb/": {
canonical: "/docs/integrations/memory/astradb",
alternative: ["/v0.1/docs/integrations/chat_memory/astradb/"],
},
"/docs/integrations/chat_memory/cassandra/": {
canonical: "/docs/integrations/memory/cassandra",
alternative: ["/v0.1/docs/integrations/chat_memory/cassandra/"],
},
"/docs/integrations/chat_memory/cloudflare_d1/": {
canonical: "/docs/integrations/memory/cloudflare_d1",
alternative: ["/v0.1/docs/integrations/chat_memory/cloudflare_d1/"],
},
"/docs/integrations/chat_memory/convex/": {
canonical: "/docs/integrations/memory/convex",
alternative: ["/v0.1/docs/integrations/chat_memory/convex/"],
},
"/docs/integrations/chat_memory/dynamodb/": {
canonical: "/docs/integrations/memory/dynamodb",
alternative: ["/v0.1/docs/integrations/chat_memory/dynamodb/"],
},
"/docs/integrations/chat_memory/firestore/": {
canonical: "/docs/integrations/memory/firestore",
alternative: ["/v0.1/docs/integrations/chat_memory/firestore/"],
},
"/docs/integrations/chat_memory/ipfs_datastore/": {
canonical: "/docs/integrations/memory/ipfs_datastore",
alternative: ["/v0.1/docs/integrations/chat_memory/ipfs_datastore/"],
},
"/docs/integrations/chat_memory/momento/": {
canonical: "/docs/integrations/memory/momento",
alternative: ["/v0.1/docs/integrations/chat_memory/momento/"],
},
"/docs/integrations/chat_memory/mongodb/": {
canonical: "/docs/integrations/memory/mongodb",
alternative: ["/v0.1/docs/integrations/chat_memory/mongodb/"],
},
"/docs/integrations/chat_memory/motorhead_memory/": {
canonical: "/docs/integrations/memory/motorhead_memory",
alternative: ["/v0.1/docs/integrations/chat_memory/motorhead_memory/"],
},
"/docs/integrations/chat_memory/planetscale/": {
canonical: "/docs/integrations/memory/planetscale",
alternative: ["/v0.1/docs/integrations/chat_memory/planetscale/"],
},
"/docs/integrations/chat_memory/postgres/": {
canonical: "/docs/integrations/memory/postgres",
alternative: ["/v0.1/docs/integrations/chat_memory/postgres/"],
},
"/docs/integrations/chat_memory/redis/": {
canonical: "/docs/integrations/memory/redis",
alternative: ["/v0.1/docs/integrations/chat_memory/redis/"],
},
"/docs/integrations/chat_memory/upstash_redis/": {
canonical: "/docs/integrations/memory/upstash_redis",
alternative: ["/v0.1/docs/integrations/chat_memory/upstash_redis/"],
},
"/docs/integrations/chat_memory/xata/": {
canonical: "/docs/integrations/memory/xata",
alternative: ["/v0.1/docs/integrations/chat_memory/xata/"],
},
"/docs/integrations/chat_memory/zep_memory/": {
canonical: "/docs/integrations/memory/zep_memory",
alternative: ["/v0.1/docs/integrations/chat_memory/zep_memory/"],
},
"/docs/integrations/document_compressors/": {
canonical: "/docs/integrations/document_transformers",
alternative: ["/v0.1/docs/integrations/document_compressors/"],
},
"/docs/integrations/llms/togetherai/": {
canonical: "/docs/integrations/llms/together",
alternative: ["/v0.1/docs/integrations/llms/togetherai/"],
},
"/docs/integrations/retrievers/vectorstore/": {
canonical: "/docs/how_to/vectorstore_retriever",
alternative: ["/v0.1/docs/integrations/retrievers/vectorstore/"],
},
"/docs/integrations/vectorstores/azure_cosmosdb/": {
canonical: "/docs/integrations/vectorstores/azure_cosmosdb_mongodb",
alternative: ["/v0.1/docs/integrations/vectorstores/azure_cosmosdb/"],
},
"/docs/langgraph/": {
canonical: "https://langchain-ai.github.io/langgraphjs/",
alternative: ["/v0.1/docs/langgraph/"],
},
"/docs/modules/agents/agent_types/chat_conversation_agent/": {
canonical: "/docs/how_to/migrate_agent",
alternative: [
"/v0.1/docs/modules/agents/agent_types/chat_conversation_agent/",
],
},
"/docs/modules/agents/agent_types/openai_assistant/": {
canonical: "/docs/how_to/migrate_agent",
alternative: ["/v0.1/docs/modules/agents/agent_types/openai_assistant/"],
},
"/docs/modules/agents/agent_types/openai_functions_agent/": {
canonical: "/docs/how_to/migrate_agent",
alternative: [
"/v0.1/docs/modules/agents/agent_types/openai_functions_agent/",
],
},
"/docs/modules/agents/agent_types/openai_tools_agent/": {
canonical: "/docs/how_to/migrate_agent",
alternative: ["/v0.1/docs/modules/agents/agent_types/openai_tools_agent/"],
},
"/docs/modules/agents/agent_types/plan_and_execute/": {
canonical: "/docs/how_to/migrate_agent",
alternative: ["/v0.1/docs/modules/agents/agent_types/plan_and_execute/"],
},
"/docs/modules/agents/agent_types/react/": {
canonical: "/docs/how_to/migrate_agent",
alternative: ["/v0.1/docs/modules/agents/agent_types/react/"],
},
"/docs/modules/agents/agent_types/structured_chat/": {
canonical: "/docs/how_to/migrate_agent",
alternative: ["/v0.1/docs/modules/agents/agent_types/structured_chat/"],
},
"/docs/modules/agents/agent_types/tool_calling/": {
canonical: "/docs/how_to/migrate_agent",
alternative: ["/v0.1/docs/modules/agents/agent_types/tool_calling/"],
},
"/docs/modules/agents/agent_types/xml_legacy/": {
canonical: "/docs/how_to/migrate_agent",
alternative: ["/v0.1/docs/modules/agents/agent_types/xml_legacy/"],
},
"/docs/modules/agents/agent_types/xml/": {
canonical: "/docs/how_to/migrate_agent",
alternative: ["/v0.1/docs/modules/agents/agent_types/xml/"],
},
"/docs/modules/agents/how_to/callbacks/": {
canonical: "/docs/how_to/#callbacks",
alternative: ["/v0.1/docs/modules/agents/how_to/callbacks/"],
},
"/docs/modules/agents/how_to/cancelling_requests/": {
canonical: "/docs/how_to/cancel_execution",
alternative: ["/v0.1/docs/modules/agents/how_to/cancelling_requests/"],
},
"/docs/modules/agents/how_to/custom_agent/": {
canonical:
"https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/",
alternative: ["/v0.1/docs/modules/agents/how_to/custom_agent/"],
},
"/docs/modules/agents/how_to/custom_llm_agent/": {
canonical:
"https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/",
alternative: ["/v0.1/docs/modules/agents/how_to/custom_llm_agent/"],
},
"/docs/modules/agents/how_to/custom_llm_chat_agent/": {
canonical:
"https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/",
alternative: ["/v0.1/docs/modules/agents/how_to/custom_llm_chat_agent/"],
},
"/docs/modules/agents/how_to/custom_mrkl_agent/": {
canonical:
"https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/",
alternative: ["/v0.1/docs/modules/agents/how_to/custom_mrkl_agent/"],
},
"/docs/modules/agents/how_to/handle_parsing_errors/": {
canonical:
"https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling-errors/",
alternative: ["/v0.1/docs/modules/agents/how_to/handle_parsing_errors/"],
},
"/docs/modules/agents/how_to/intermediate_steps/": {
canonical:
"https://langchain-ai.github.io/langgraphjs/how-tos/stream-values/",
alternative: ["/v0.1/docs/modules/agents/how_to/intermediate_steps/"],
},
"/docs/modules/agents/how_to/logging_and_tracing/": {
canonical:
"https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langgraph",
alternative: ["/v0.1/docs/modules/agents/how_to/logging_and_tracing/"],
},
"/docs/modules/agents/how_to/timeouts/": {
canonical: "/docs/how_to/cancel_execution/",
alternative: ["/v0.1/docs/modules/agents/how_to/timeouts/"],
},
"/docs/modules/agents/tools/": {
canonical:
"https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/",
alternative: ["/v0.1/docs/modules/agents/tools/"],
},
"/docs/modules/agents/tools/dynamic/": {
canonical: "/docs/how_to/custom_tools/",
alternative: ["/v0.1/docs/modules/agents/tools/dynamic/"],
},
"/docs/modules/agents/tools/how_to/agents_with_vectorstores/": {
canonical: "/docs/how_to/custom_tools",
alternative: [
"/v0.1/docs/modules/agents/tools/how_to/agents_with_vectorstores/",
],
},
"/docs/modules/agents/tools/toolkits/": {
canonical: "/docs/how_to/tools_builtin",
alternative: ["/v0.1/docs/modules/agents/tools/toolkits/"],
},
"/docs/modules/callbacks/how_to/background_callbacks/": {
canonical: "/docs/how_to/callbacks_backgrounding",
alternative: ["/v0.1/docs/modules/callbacks/how_to/background_callbacks/"],
},
"/docs/modules/callbacks/how_to/create_handlers/": {
canonical: "/docs/how_to/custom_callbacks",
alternative: ["/v0.1/docs/modules/callbacks/how_to/create_handlers/"],
},
"/docs/modules/callbacks/how_to/creating_subclasses/": {
canonical: "/docs/how_to/custom_callbacks",
alternative: ["/v0.1/docs/modules/callbacks/how_to/creating_subclasses/"],
},
"/docs/modules/callbacks/how_to/tags/": {
canonical: "/docs/how_to/#callbacks",
alternative: ["/v0.1/docs/modules/callbacks/how_to/tags/"],
},
"/docs/modules/callbacks/how_to/with_listeners/": {
canonical: "/docs/how_to/#callbacks",
alternative: ["/v0.1/docs/modules/callbacks/how_to/with_listeners/"],
},
"/docs/modules/chains/additional/analyze_document/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/chains/additional/analyze_document/",
alternative: ["/v0.1/docs/modules/chains/additional/analyze_document/"],
},
"/docs/modules/chains/additional/constitutional_chain/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/chains/additional/constitutional_chain/",
alternative: ["/v0.1/docs/modules/chains/additional/constitutional_chain/"],
},
"/docs/modules/chains/additional/cypher_chain/": {
canonical: "/docs/tutorials/graph",
alternative: ["/v0.1/docs/modules/chains/additional/cypher_chain/"],
},
"/docs/modules/chains/additional/moderation/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/chains/additional/moderation/",
alternative: ["/v0.1/docs/modules/chains/additional/moderation/"],
},
"/docs/modules/chains/additional/multi_prompt_router/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/chains/additional/multi_prompt_router/",
alternative: ["/v0.1/docs/modules/chains/additional/multi_prompt_router/"],
},
"/docs/modules/chains/additional/multi_retrieval_qa_router/": {
canonical: "/docs/how_to/multiple_queries",
alternative: [
"/v0.1/docs/modules/chains/additional/multi_retrieval_qa_router/",
],
},
"/docs/modules/chains/additional/openai_functions/": {
canonical: "/docs/how_to/tool_calling",
alternative: ["/v0.1/docs/modules/chains/additional/openai_functions/"],
},
"/docs/modules/chains/additional/openai_functions/extraction/": {
canonical: "/docs/tutorials/extraction",
alternative: [
"/v0.1/docs/modules/chains/additional/openai_functions/extraction/",
],
},
"/docs/modules/chains/additional/openai_functions/openapi/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/chains/additional/openai_functions/openapi/",
alternative: [
"/v0.1/docs/modules/chains/additional/openai_functions/openapi/",
],
},
"/docs/modules/chains/additional/openai_functions/tagging/": {
canonical: "/docs/tutorials/extraction",
alternative: [
"/v0.1/docs/modules/chains/additional/openai_functions/tagging/",
],
},
"/docs/modules/chains/document/": {
canonical:
"https://api.js.langchain.com/functions/langchain.chains_combine_documents.createStuffDocumentsChain.html",
alternative: ["/v0.1/docs/modules/chains/document/"],
},
"/docs/modules/chains/document/map_reduce/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/chains/document/map_reduce/",
alternative: ["/v0.1/docs/modules/chains/document/map_reduce/"],
},
"/docs/modules/chains/document/refine/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/chains/document/refine/",
alternative: ["/v0.1/docs/modules/chains/document/refine/"],
},
"/docs/modules/chains/document/stuff/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/chains/document/stuff/",
alternative: ["/v0.1/docs/modules/chains/document/stuff/"],
},
"/docs/modules/chains/foundational/llm_chain/": {
canonical: "/docs/tutorials/llm_chain",
alternative: ["/v0.1/docs/modules/chains/foundational/llm_chain/"],
},
"/docs/modules/chains/how_to/debugging/": {
canonical: "/docs/how_to/debugging",
alternative: ["/v0.1/docs/modules/chains/how_to/debugging/"],
},
"/docs/modules/chains/how_to/memory/": {
canonical: "/docs/how_to/qa_chat_history_how_to",
alternative: ["/v0.1/docs/modules/chains/how_to/memory/"],
},
"/docs/modules/chains/popular/api/": {
canonical: "https://js.langchain.com/v0.1/docs/modules/chains/popular/api/",
alternative: ["/v0.1/docs/modules/chains/popular/api/"],
},
"/docs/modules/chains/popular/chat_vector_db_legacy/": {
canonical: "/docs/tutorials/rag",
alternative: ["/v0.1/docs/modules/chains/popular/chat_vector_db_legacy/"],
},
"/docs/modules/chains/popular/chat_vector_db/": {
canonical: "/docs/tutorials/rag",
alternative: ["/v0.1/docs/modules/chains/popular/chat_vector_db/"],
},
"/docs/modules/chains/popular/sqlite_legacy/": {
canonical: "/docs/tutorials/sql_qa",
alternative: ["/v0.1/docs/modules/chains/popular/sqlite_legacy/"],
},
"/docs/modules/chains/popular/sqlite/": {
canonical: "/docs/tutorials/sql_qa",
alternative: ["/v0.1/docs/modules/chains/popular/sqlite/"],
},
"/docs/modules/chains/popular/structured_output/": {
canonical: "/docs/how_to/structured_output",
alternative: ["/v0.1/docs/modules/chains/popular/structured_output/"],
},
"/docs/modules/chains/popular/summarize/": {
canonical: "/docs/tutorials/summarization",
alternative: ["/v0.1/docs/modules/chains/popular/summarize/"],
},
"/docs/modules/chains/popular/vector_db_qa_legacy/": {
canonical: "/docs/tutorials/rag",
alternative: ["/v0.1/docs/modules/chains/popular/vector_db_qa_legacy/"],
},
"/docs/modules/chains/popular/vector_db_qa/": {
canonical: "/docs/tutorials/rag",
alternative: ["/v0.1/docs/modules/chains/popular/vector_db_qa/"],
},
"/docs/modules/data_connection/document_loaders/creating_documents/": {
canonical: "/docs/concepts#document",
alternative: [
"/v0.1/docs/modules/data_connection/document_loaders/creating_documents/",
],
},
"/docs/modules/data_connection/document_transformers/contextual_chunk_headers/":
{
canonical:
"/docs/how_to/parent_document_retriever/#with-contextual-chunk-headers",
alternative: [
"/v0.1/docs/modules/data_connection/document_transformers/contextual_chunk_headers/",
],
},
"/docs/modules/data_connection/document_transformers/custom_text_splitter/": {
canonical: "/docs/how_to/#text-splitters",
alternative: [
"/v0.1/docs/modules/data_connection/document_transformers/custom_text_splitter/",
],
},
"/docs/modules/data_connection/document_transformers/token_splitter/": {
canonical: "/docs/how_to/split_by_token",
alternative: [
"/v0.1/docs/modules/data_connection/document_transformers/token_splitter/",
],
},
"/docs/modules/data_connection/experimental/graph_databases/neo4j/": {
canonical: "/docs/tutorials/graph",
alternative: [
"/v0.1/docs/modules/data_connection/experimental/graph_databases/neo4j/",
],
},
"/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/":
{
canonical:
"https://js.langchain.com/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/",
alternative: [
"/v0.1/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai/",
],
},
"/docs/modules/data_connection/retrievers/custom/": {
canonical: "/docs/how_to/custom_retriever",
alternative: ["/v0.1/docs/modules/data_connection/retrievers/custom/"],
},
"/docs/modules/data_connection/retrievers/matryoshka_retriever/": {
canonical: "/docs/how_to/reduce_retrieval_latency",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/matryoshka_retriever/",
],
},
"/docs/modules/data_connection/retrievers/multi-query-retriever/": {
canonical: "/docs/how_to/multiple_queries",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/",
],
},
"/docs/modules/data_connection/retrievers/multi-vector-retriever/": {
canonical: "/docs/how_to/multi_vector",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/",
],
},
"/docs/modules/data_connection/retrievers/parent-document-retriever/": {
canonical: "/docs/how_to/parent_document_retriever",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/",
],
},
"/docs/modules/data_connection/retrievers/self_query/chroma-self-query/": {
canonical: "/docs/integrations/retrievers/self_query/chroma",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/self_query/chroma-self-query/",
],
},
"/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/": {
canonical: "/docs/integrations/retrievers/self_query/hnswlib",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/self_query/hnswlib-self-query/",
],
},
"/docs/modules/data_connection/retrievers/self_query/memory-self-query/": {
canonical: "/docs/integrations/retrievers/self_query/memory",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/self_query/memory-self-query/",
],
},
"/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/": {
canonical: "/docs/integrations/retrievers/self_query/pinecone",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/self_query/pinecone-self-query/",
],
},
"/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/": {
canonical: "/docs/integrations/retrievers/self_query/qdrant",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/self_query/qdrant-self-query/",
],
},
"/docs/modules/data_connection/retrievers/self_query/supabase-self-query/": {
canonical: "/docs/integrations/retrievers/self_query/supabase",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/self_query/supabase-self-query/",
],
},
"/docs/modules/data_connection/retrievers/self_query/vectara-self-query/": {
canonical: "/docs/integrations/retrievers/self_query/vectara",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/self_query/vectara-self-query/",
],
},
"/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/": {
canonical: "/docs/integrations/retrievers/self_query/weaviate",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/self_query/weaviate-self-query/",
],
},
"/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/":
{
canonical:
"https://api.js.langchain.com/classes/langchain.retrievers_score_threshold.ScoreThresholdRetriever.html",
alternative: [
"/v0.1/docs/modules/data_connection/retrievers/similarity-score-threshold-retriever/",
],
},
"/docs/modules/data_connection/text_embedding/api_errors/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/data_connection/text_embedding/api_errors/",
alternative: [
"/v0.1/docs/modules/data_connection/text_embedding/api_errors/",
],
},
"/docs/modules/data_connection/text_embedding/caching_embeddings/": {
canonical: "/docs/how_to/caching_embeddings",
alternative: [
"/v0.1/docs/modules/data_connection/text_embedding/caching_embeddings/",
],
},
"/docs/modules/data_connection/text_embedding/rate_limits/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/data_connection/text_embedding/rate_limits/",
alternative: [
"/v0.1/docs/modules/data_connection/text_embedding/rate_limits/",
],
},
"/docs/modules/data_connection/text_embedding/timeouts/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/data_connection/text_embedding/timeouts/",
alternative: [
"/v0.1/docs/modules/data_connection/text_embedding/timeouts/",
],
},
"/docs/modules/data_connection/vectorstores/custom/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/data_connection/vectorstores/custom/",
alternative: ["/v0.1/docs/modules/data_connection/vectorstores/custom/"],
},
"/docs/modules/experimental/": {
canonical: "https://js.langchain.com/v0.1/docs/modules/experimental/",
alternative: ["/v0.1/docs/modules/experimental/"],
},
"/docs/modules/experimental/mask/": {
canonical:
"https://api.js.langchain.com/modules/langchain.experimental_masking.html",
alternative: ["/v0.1/docs/modules/experimental/mask/"],
},
"/docs/modules/experimental/prompts/custom_formats/": {
canonical:
"https://api.js.langchain.com/classes/langchain.experimental_prompts_handlebars.HandlebarsPromptTemplate.html",
alternative: ["/v0.1/docs/modules/experimental/prompts/custom_formats/"],
},
"/docs/modules/memory/chat_messages/custom/": {
canonical:
"https://js.langchain.com/v0.1/docs/modules/memory/chat_messages/custom/",
alternative: ["/v0.1/docs/modules/memory/chat_messages/custom/"],
},
"/docs/modules/memory/types/buffer_memory_chat/": {
canonical:
"https://api.js.langchain.com/classes/langchain.memory.BufferMemory.html",
alternative: ["/v0.1/docs/modules/memory/types/buffer_memory_chat/"],
},
"/docs/modules/memory/types/buffer_window/": {
canonical:
"https://api.js.langchain.com/classes/langchain.memory.BufferWindowMemory.html",
alternative: ["/v0.1/docs/modules/memory/types/buffer_window/"],
},
"/docs/modules/memory/types/entity_summary_memory/": {
canonical:
"https://api.js.langchain.com/classes/langchain.memory.EntityMemory.html",
alternative: ["/v0.1/docs/modules/memory/types/entity_summary_memory/"],
},
"/docs/modules/memory/types/multiple_memory/": {
canonical:
"https://api.js.langchain.com/classes/langchain.memory.CombinedMemory.html",
alternative: ["/v0.1/docs/modules/memory/types/multiple_memory/"],
},
"/docs/modules/memory/types/summary_buffer/": {
canonical:
"https://api.js.langchain.com/classes/langchain.memory.ConversationSummaryBufferMemory.html",
alternative: ["/v0.1/docs/modules/memory/types/summary_buffer/"],
},
"/docs/modules/memory/types/summary/": {
canonical:
"https://api.js.langchain.com/classes/langchain.memory.ConversationSummaryMemory.html",
alternative: ["/v0.1/docs/modules/memory/types/summary/"],
},
"/docs/modules/memory/types/vectorstore_retriever_memory/": {
canonical:
"https://api.js.langchain.com/classes/langchain.memory.VectorStoreRetrieverMemory.html",
alternative: [
"/v0.1/docs/modules/memory/types/vectorstore_retriever_memory/",
],
},
"/docs/modules/model_io/chat/caching/": {
canonical: "/docs/how_to/chat_model_caching",
alternative: ["/v0.1/docs/modules/model_io/chat/caching/"],
},
"/docs/modules/model_io/chat/cancelling_requests/": {
canonical: "/docs/how_to/cancel_execution",
alternative: ["/v0.1/docs/modules/model_io/chat/cancelling_requests/"],
},
"/docs/modules/model_io/chat/custom_chat/": {
canonical: "/docs/how_to/custom_chat",
alternative: ["/v0.1/docs/modules/model_io/chat/custom_chat/"],
},
"/docs/modules/model_io/chat/dealing_with_api_errors/": {
canonical: "/docs/how_to/fallbacks",
alternative: ["/v0.1/docs/modules/model_io/chat/dealing_with_api_errors/"],
},
"/docs/modules/model_io/chat/dealing_with_rate_limits/": {
canonical: "/docs/how_to/fallbacks",
alternative: ["/v0.1/docs/modules/model_io/chat/dealing_with_rate_limits/"],
},
"/docs/modules/model_io/chat/subscribing_events/": {
canonical: "/docs/how_to/custom_callbacks",
alternative: ["/v0.1/docs/modules/model_io/chat/subscribing_events/"],
},
"/docs/modules/model_io/chat/timeouts/": {
canonical: "/docs/how_to/custom_callbacks",
alternative: ["/v0.1/docs/modules/model_io/chat/timeouts/"],
},
"/docs/modules/model_io/llms/cancelling_requests/": {
canonical: "/docs/how_to/cancel_execution",
alternative: ["/v0.1/docs/modules/model_io/llms/cancelling_requests/"],
},
"/docs/modules/model_io/llms/dealing_with_api_errors/": {
canonical: "/docs/how_to/fallbacks",
alternative: ["/v0.1/docs/modules/model_io/llms/dealing_with_api_errors/"],
},
"/docs/modules/model_io/llms/dealing_with_rate_limits/": {
canonical: "/docs/how_to/fallbacks",
alternative: ["/v0.1/docs/modules/model_io/llms/dealing_with_rate_limits/"],
},
"/docs/modules/model_io/llms/subscribing_events/": {
canonical: "/docs/how_to/custom_callbacks",
alternative: ["/v0.1/docs/modules/model_io/llms/subscribing_events/"],
},
"/docs/modules/model_io/llms/timeouts/": {
canonical: "/docs/how_to/cancel_execution",
alternative: ["/v0.1/docs/modules/model_io/llms/timeouts/"],
},
"/docs/modules/model_io/output_parsers/types/bytes/": {
canonical:
"https://api.js.langchain.com/modules/_langchain_core.output_parsers.html",
alternative: ["/v0.1/docs/modules/model_io/output_parsers/types/bytes/"],
},
"/docs/modules/model_io/output_parsers/types/combining_output_parser/": {
canonical:
"https://api.js.langchain.com/classes/langchain.output_parsers.CombiningOutputParser.html",
alternative: [
"/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/",
],
},
"/docs/modules/model_io/output_parsers/types/csv/": {
canonical:
"https://api.js.langchain.com/classes/_langchain_core.output_parsers.CommaSeparatedListOutputParser.html",
alternative: ["/v0.1/docs/modules/model_io/output_parsers/types/csv/"],
},
"/docs/modules/model_io/output_parsers/types/custom_list_parser/": {
canonical:
"https://api.js.langchain.com/classes/_langchain_core.output_parsers.CustomListOutputParser.html",
alternative: [
"/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/",
],
},
"/docs/modules/model_io/output_parsers/types/http_response/": {
canonical:
"https://api.js.langchain.com/classes/langchain.output_parsers.HttpResponseOutputParser.html",
alternative: [
"/v0.1/docs/modules/model_io/output_parsers/types/http_response/",
],
},
"/docs/modules/model_io/output_parsers/types/json_functions/": {
canonical:
"https://api.js.langchain.com/classes/langchain.output_parsers.JsonOutputFunctionsParser.html",
alternative: [
"/v0.1/docs/modules/model_io/output_parsers/types/json_functions/",
],
},
"/docs/modules/model_io/output_parsers/types/string/": {
canonical:
"https://api.js.langchain.com/classes/_langchain_core.output_parsers.StringOutputParser.html",
alternative: ["/v0.1/docs/modules/model_io/output_parsers/types/string/"],
},
"/docs/modules/model_io/prompts/example_selector_types/": {
canonical: "/docs/how_to/#example-selectors",
alternative: [
"/v0.1/docs/modules/model_io/prompts/example_selector_types/",
],
},
"/docs/modules/model_io/prompts/example_selector_types/length_based/": {
canonical: "/docs/how_to/example_selectors_length_based",
alternative: [
"/v0.1/docs/modules/model_io/prompts/example_selector_types/length_based/",
],
},
"/docs/modules/model_io/prompts/example_selector_types/similarity/": {
canonical: "/docs/how_to/example_selectors_similarity",
alternative: [
"/v0.1/docs/modules/model_io/prompts/example_selector_types/similarity/",
],
},
"/docs/modules/model_io/prompts/few_shot/": {
canonical: "/docs/how_to/few_shot_examples",
alternative: ["/v0.1/docs/modules/model_io/prompts/few_shot/"],
},
"/docs/modules/model_io/prompts/pipeline/": {
canonical: "/docs/how_to/prompts_composition",
alternative: ["/v0.1/docs/modules/model_io/prompts/pipeline/"],
},
"/docs/production/deployment/": {
canonical: "https://langchain-ai.github.io/langgraph/cloud/",
alternative: ["/v0.1/docs/production/deployment/"],
},
"/docs/production/tracing/": {
canonical:
"https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain",
alternative: ["/v0.1/docs/production/tracing/"],
},
"/docs/use_cases/agent_simulations/": {
canonical:
"https://js.langchain.com/v0.1/docs/use_cases/agent_simulations/",
alternative: ["/v0.1/docs/use_cases/agent_simulations/"],
},
"/docs/use_cases/agent_simulations/generative_agents/": {
canonical:
"https://js.langchain.com/v0.1/docs/use_cases/agent_simulations/generative_agents/",
alternative: ["/v0.1/docs/use_cases/agent_simulations/generative_agents/"],
},
"/docs/use_cases/agent_simulations/violation_of_expectations_chain/": {
canonical:
"https://js.langchain.com/v0.1/docs/use_cases/agent_simulations/violation_of_expectations_chain/",
alternative: [
"/v0.1/docs/use_cases/agent_simulations/violation_of_expectations_chain/",
],
},
"/docs/use_cases/api/": {
canonical: "https://js.langchain.com/v0.1/docs/use_cases/api/",
alternative: ["/v0.1/docs/use_cases/api/"],
},
"/docs/use_cases/autonomous_agents/": {
canonical:
"https://js.langchain.com/v0.1/docs/use_cases/autonomous_agents/",
alternative: ["/v0.1/docs/use_cases/autonomous_agents/"],
},
"/docs/use_cases/autonomous_agents/auto_gpt/": {
canonical:
"https://js.langchain.com/v0.1/docs/use_cases/autonomous_agents/auto_gpt/",
alternative: ["/v0.1/docs/use_cases/autonomous_agents/auto_gpt/"],
},
"/docs/use_cases/autonomous_agents/baby_agi/": {
canonical:
"https://js.langchain.com/v0.1/docs/use_cases/autonomous_agents/baby_agi/",
alternative: ["/v0.1/docs/use_cases/autonomous_agents/baby_agi/"],
},
"/docs/use_cases/autonomous_agents/sales_gpt/": {
canonical:
"https://js.langchain.com/v0.1/docs/use_cases/autonomous_agents/sales_gpt/",
alternative: ["/v0.1/docs/use_cases/autonomous_agents/sales_gpt/"],
},
"/docs/use_cases/graph/construction/": {
canonical: "/docs/tutorials/graph",
alternative: ["/v0.1/docs/use_cases/graph/construction/"],
},
"/docs/use_cases/media/": {
canonical: "/docs/how_to/multimodal_prompts",
alternative: ["/v0.1/docs/use_cases/media/"],
},
"/docs/use_cases/query_analysis/how_to/constructing_filters/": {
canonical: "/docs/tutorials/query_analysis",
alternative: [
"/v0.1/docs/use_cases/query_analysis/how_to/constructing_filters/",
],
},
"/docs/use_cases/tabular/": {
canonical: "/docs/tutorials/sql_qa",
alternative: ["/v0.1/docs/use_cases/tabular/"],
},
};
export default function NotFound() {
const location = useLocation();
const pathname = location.pathname.endsWith("/")
? location.pathname
: `${location.pathname}/`; // Ensure the path matches the keys in suggestedLinks
const { canonical, alternative } = suggestedLinks[pathname] || {};
return (
<>
<PageMetadata
title={translate({
id: "theme.NotFound.title",
message: "Page Not Found",
})}
/>
<Layout>
<main className="container margin-vert--xl">
<div className="row">
<div className="col col--6 col--offset-3">
<h1 className="hero__title">
{canonical
? "Page Moved"
: alternative
? "Page Removed"
: "Page Not Found"}
</h1>
{canonical ? (
<h3>
You can find the new location <a href={canonical}>here</a>.
</h3>
) : alternative ? (
<p>The page you were looking for has been removed.</p>
) : (
<p>We could not find what you were looking for.</p>
)}
{alternative && (
<p>
<details>
<summary>Alternative pages</summary>
<ul>
{alternative.map((alt, index) => (
// eslint-disable-next-line react/no-array-index-key
<li key={index}>
<a href={alt}>{alt}</a>
{alt.startsWith("/v0.1/") && (
<>
{" "}
<LegacyBadge />
</>
)}
</li>
))}
</ul>
</details>
</p>
)}
<p>
Please contact the owner of the site that linked you to the
original URL and let them know their link{" "}
{canonical
? "has moved."
: alternative
? "has been removed."
: "is broken."}
</p>
</div>
</div>
</main>
</Layout>
</>
);
}
|
0 | lc_public_repos/langchainjs/docs/core_docs/src | lc_public_repos/langchainjs/docs/core_docs/src/theme/People.js | import React from "react";
import PeopleData from "../../data/people.yml";
function renderPerson({ login, avatarUrl, url }) {
return (
<div
key={`person:${login}`}
style={{
display: "flex",
flexDirection: "column",
alignItems: "center",
padding: "18px",
}}
>
<a href={url} target="_blank" rel="noreferrer">
<img
src={avatarUrl}
alt={`Avatar for ${login}`}
style={{ borderRadius: "50%", width: "128px", height: "128px" }}
/>
</a>
<a
href={url}
target="_blank"
rel="noreferrer"
style={{ fontSize: "18px", fontWeight: "700" }}
>
@{login}
</a>
</div>
);
}
export default function People({ type, count }) {
let people = PeopleData[type] ?? [];
if (count !== undefined) {
people = people.slice(0, parseInt(count, 10));
}
const html = people.map((person) => renderPerson(person));
return (
<div
style={{
display: "flex",
flexWrap: "wrap",
padding: "10px",
justifyContent: "space-around",
}}
>
{html}
</div>
);
}
|
0 | lc_public_repos/langchainjs/docs/core_docs/src | lc_public_repos/langchainjs/docs/core_docs/src/theme/Feedback.js | /* eslint-disable no-return-assign, react/jsx-props-no-spreading, no-console */
import React, { useState, useEffect } from "react";
import { createClient } from "@supabase/supabase-js";
import useDocusaurusContext from "@docusaurus/useDocusaurusContext";
import { v4 as uuidv4 } from "uuid";
const useCookie = () => {
  /**
   * Sets a cookie.
   * @param {string} name The name of the cookie to set
   * @param {string} value The value of the cookie
   * @param {number} days The number of days until the cookie expires
   */
const setCookie = (name, value, days) => {
const d = new Date();
d.setTime(d.getTime() + days * 24 * 60 * 60 * 1000);
const expires = `expires=${d.toUTCString()}`;
document.cookie = `${name}=${value};${expires};path=/`;
};
/**
* Function to get a cookie by name
* @param {string} name The name of the cookie to get
* @returns {string} The value of the cookie
*/
const getCookie = (name) => {
const ca = document.cookie.split(";");
const caLen = ca.length;
const cookieName = `${name}=`;
let c;
for (let i = 0; i < caLen; i += 1) {
c = ca[i].replace(/^\s+/g, "");
if (c.indexOf(cookieName) === 0) {
return c.substring(cookieName.length, c.length);
}
}
return "";
};
/**
* Function to check cookie existence
* @param {string} name The name of the cookie to check for
* @returns {boolean} Whether or not the cookie exists
*/
  const checkCookie = (name) => getCookie(name) !== "";
return { setCookie, checkCookie };
};
function SvgThumbsUp() {
return (
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth="1.5"
stroke="#166534"
style={{ width: "24px", height: "24px" }}
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M6.633 10.25c.806 0 1.533-.446 2.031-1.08a9.041 9.041 0 0 1 2.861-2.4c.723-.384 1.35-.956 1.653-1.715a4.498 4.498 0 0 0 .322-1.672V2.75a.75.75 0 0 1 .75-.75 2.25 2.25 0 0 1 2.25 2.25c0 1.152-.26 2.243-.723 3.218-.266.558.107 1.282.725 1.282m0 0h3.126c1.026 0 1.945.694 2.054 1.715.045.422.068.85.068 1.285a11.95 11.95 0 0 1-2.649 7.521c-.388.482-.987.729-1.605.729H13.48c-.483 0-.964-.078-1.423-.23l-3.114-1.04a4.501 4.501 0 0 0-1.423-.23H5.904m10.598-9.75H14.25M5.904 18.5c.083.205.173.405.27.602.197.4-.078.898-.523.898h-.908c-.889 0-1.713-.518-1.972-1.368a12 12 0 0 1-.521-3.507c0-1.553.295-3.036.831-4.398C3.387 9.953 4.167 9.5 5 9.5h1.053c.472 0 .745.556.5.96a8.958 8.958 0 0 0-1.302 4.665c0 1.194.232 2.333.654 3.375Z"
/>
</svg>
);
}
function SvgThumbsDown() {
return (
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
strokeWidth="1.5"
stroke="#991b1b"
style={{ width: "24px", height: "24px" }}
>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M7.498 15.25H4.372c-1.026 0-1.945-.694-2.054-1.715a12.137 12.137 0 0 1-.068-1.285c0-2.848.992-5.464 2.649-7.521C5.287 4.247 5.886 4 6.504 4h4.016a4.5 4.5 0 0 1 1.423.23l3.114 1.04a4.5 4.5 0 0 0 1.423.23h1.294M7.498 15.25c.618 0 .991.724.725 1.282A7.471 7.471 0 0 0 7.5 19.75 2.25 2.25 0 0 0 9.75 22a.75.75 0 0 0 .75-.75v-.633c0-.573.11-1.14.322-1.672.304-.76.93-1.33 1.653-1.715a9.04 9.04 0 0 0 2.86-2.4c.498-.634 1.226-1.08 2.032-1.08h.384m-10.253 1.5H9.7m8.075-9.75c.01.05.027.1.05.148.593 1.2.925 2.55.925 3.977 0 1.487-.36 2.89-.999 4.125m.023-8.25c-.076-.365.183-.75.575-.75h.908c.889 0 1.713.518 1.972 1.368.339 1.11.521 2.287.521 3.507 0 1.553-.295 3.036-.831 4.398-.306.774-1.086 1.227-1.918 1.227h-1.053c-.472 0-.745-.556-.5-.96a8.95 8.95 0 0 0 .303-.54"
/>
</svg>
);
}
/**
* Generated type for the Supabase DB schema.
* @typedef {import('../supabase').Database} Database
*/
const FEEDBACK_COOKIE_PREFIX = "feedbackSent";
/** @type {Database["public"]["Enums"]["project_type"]} */
const LANGCHAIN_PROJECT_NAME = "langchain_js_docs";
/**
* @returns {Promise<string>}
*/
const getIpAddress = async () => {
const response = await fetch("https://api.ipify.org?format=json");
return (await response.json()).ip;
};
export default function Feedback() {
const { setCookie, checkCookie } = useCookie();
const [feedbackId, setFeedbackId] = useState(null);
const [feedbackSent, setFeedbackSent] = useState(false);
const [feedbackDetailsSent, setFeedbackDetailsSent] = useState(false);
const { siteConfig } = useDocusaurusContext();
const [pathname, setPathname] = useState("");
/** @param {"good" | "bad"} feedback */
const handleFeedback = async (feedback) => {
if (process.env.NODE_ENV !== "production") {
console.log("Feedback (dev)");
return;
}
const cookieName = `${FEEDBACK_COOKIE_PREFIX}_${window.location.pathname}`;
if (checkCookie(cookieName)) {
return;
}
    /** @type {import("@supabase/supabase-js").SupabaseClient<Database>} */
    const supabase = createClient(
siteConfig.customFields.supabaseUrl,
siteConfig.customFields.supabasePublicKey
);
try {
const ipAddress = await getIpAddress();
const rowId = uuidv4();
setFeedbackId(rowId);
      /**
       * "created_at" is generated automatically by Supabase; "id" is set
       * explicitly here so it can be reused when submitting feedback details.
       * @type {Omit<Database["public"]["Tables"]["feedback"]["Row"], "created_at">}
       */
const params = {
id: rowId,
is_good: feedback === "good",
url: window.location.pathname,
user_ip: ipAddress,
project: LANGCHAIN_PROJECT_NAME,
};
const { error } = await supabase.from("feedback").insert(params);
if (error) {
throw error;
}
} catch (e) {
console.error("Failed to send feedback", e);
return;
}
// Set a cookie to prevent feedback from being sent multiple times
setCookie(cookieName, window.location.pathname, 1);
setFeedbackSent(true);
};
const handleFeedbackDetails = async (e) => {
e.preventDefault();
if (!feedbackId) {
setFeedbackDetailsSent(true);
return;
}
const details = e.target.elements
.namedItem("details")
?.value.slice(0, 1024);
if (!details) {
return;
}
const supabase = createClient(
siteConfig.customFields.supabaseUrl,
siteConfig.customFields.supabasePublicKey
);
const { error } = await supabase.from("feedback_details").insert({
feedback_id: feedbackId,
details,
});
if (error) {
console.error("Failed to add feedback details", error);
return;
}
setFeedbackDetailsSent(true);
};
  useEffect(() => {
    if (typeof window !== "undefined") {
      // If the cookie exists, mark feedback as already sent so the user
      // cannot submit it again (cookies expire after 24 hours).
      const cookieName = `${FEEDBACK_COOKIE_PREFIX}_${window.location.pathname}`;
      setFeedbackSent(checkCookie(cookieName));
      setPathname(window.location.pathname);
    }
  }, []);
const defaultFields = {
style: {
display: "flex",
alignItems: "center",
paddingTop: "10px",
paddingBottom: "10px",
paddingLeft: "22px",
paddingRight: "22px",
border: "1px solid gray",
borderRadius: "6px",
gap: "10px",
cursor: "pointer",
fontSize: "16px",
fontWeight: "600",
},
onMouseEnter: (e) => (e.currentTarget.style.backgroundColor = "#f0f0f0"),
onMouseLeave: (e) =>
(e.currentTarget.style.backgroundColor = "transparent"),
onMouseDown: (e) => (e.currentTarget.style.backgroundColor = "#d0d0d0"),
onMouseUp: (e) => (e.currentTarget.style.backgroundColor = "#f0f0f0"),
};
const newGithubIssueURL = pathname
? `https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CIssue+related+to+${pathname}%3E`
: "https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E";
return (
<div style={{ display: "flex", flexDirection: "column" }}>
<hr />
{feedbackSent ? (
<>
<h4>Thanks for your feedback!</h4>
{!feedbackDetailsSent && feedbackId && (
<form
style={{ display: "flex", flexDirection: "column" }}
onSubmit={handleFeedbackDetails}
>
<h4>Do you have any specific comments?</h4>
<textarea
name="details"
style={{ width: "480px", height: "120px" }}
/>
<button
style={{
width: "72px",
marginLeft: "408px",
marginTop: "12px",
}}
type="submit"
>
Submit
</button>
</form>
)}
</>
) : (
<>
<h4>Was this page helpful?</h4>
<div style={{ display: "flex", gap: "5px" }}>
<div
{...defaultFields}
            role="button" // Expose as an interactive element to assistive tech
            tabIndex={0} // Make it keyboard-focusable
onKeyDown={async (e) => {
// Handle keyboard interaction
if (e.key === "Enter" || e.key === " ") {
e.preventDefault();
await handleFeedback("good");
}
}}
onClick={async (e) => {
e.preventDefault();
await handleFeedback("good");
}}
>
<SvgThumbsUp />
</div>
<div
{...defaultFields}
role="button" // Make it recognized as an interactive element
tabIndex={0} // Make it focusable
onKeyDown={async (e) => {
// Handle keyboard interaction
if (e.key === "Enter" || e.key === " ") {
e.preventDefault();
await handleFeedback("bad");
}
}}
onClick={async (e) => {
e.preventDefault();
await handleFeedback("bad");
}}
>
<SvgThumbsDown />
</div>
</div>
</>
)}
<br />
<h4>
You can also leave detailed feedback{" "}
<a target="_blank" href={newGithubIssueURL} rel="noreferrer">
on GitHub
</a>
.
</h4>
</div>
);
}
|
0 | lc_public_repos/langchainjs/docs/core_docs/src | lc_public_repos/langchainjs/docs/core_docs/src/theme/FeatureTables.js | /* eslint-disable import/no-extraneous-dependencies */
/* eslint-disable prefer-template */
import React from "react";
import { useCurrentSidebarCategory } from "@docusaurus/theme-common";
import { useDocById } from "@docusaurus/theme-common/internal";
const FEATURE_TABLES = {
chat: {
link: "/docs/integrations/chat",
columns: [
{
title: "Provider",
formatter: (item) => <a href={item.link}>{item.name}</a>,
},
{
title: <a href="/docs/how_to/tool_calling">Tool calling</a>,
formatter: (item) => (item.tool_calling ? "✅" : "❌"),
},
{
title: <a href="/docs/how_to/structured_output/">Structured output</a>,
formatter: (item) => (item.structured_output ? "✅" : "❌"),
},
{
title: "JSON mode",
formatter: (item) => (item.json_mode ? "✅" : "❌"),
},
{ title: "Local", formatter: (item) => (item.local ? "✅" : "❌") },
{
title: <a href="/docs/how_to/multimodal_inputs/">Multimodal</a>,
formatter: (item) => (item.multimodal ? "✅" : "❌"),
},
{
title: "Package",
formatter: (item) => <a href={item.apiLink}>{item.package}</a>,
},
],
items: [
{
name: "ChatAnthropic",
package: "langchain-anthropic",
link: "anthropic/",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: true,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html#langchain_anthropic.chat_models.ChatAnthropic",
},
{
name: "ChatMistralAI",
package: "langchain-mistralai",
link: "mistralai/",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: false,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html#langchain_mistralai.chat_models.ChatMistralAI",
},
{
name: "ChatFireworks",
package: "langchain-fireworks",
link: "fireworks/",
structured_output: true,
tool_calling: true,
json_mode: true,
multimodal: false,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_fireworks.chat_models.ChatFireworks.html#langchain_fireworks.chat_models.ChatFireworks",
},
{
name: "AzureChatOpenAI",
package: "langchain-openai",
link: "azure_chat_openai/",
structured_output: true,
tool_calling: true,
json_mode: true,
multimodal: true,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html#langchain_openai.chat_models.azure.AzureChatOpenAI",
},
{
name: "ChatOpenAI",
package: "langchain-openai",
link: "openai/",
structured_output: true,
tool_calling: true,
json_mode: true,
multimodal: true,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI",
},
{
name: "ChatTogether",
package: "langchain-together",
link: "together/",
structured_output: true,
tool_calling: true,
json_mode: true,
multimodal: false,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_together.chat_models.ChatTogether.html#langchain_together.chat_models.ChatTogether",
},
{
name: "ChatVertexAI",
package: "langchain-google-vertexai",
link: "google_vertex_ai_palm/",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: true,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html#langchain_google_vertexai.chat_models.ChatVertexAI",
},
{
name: "ChatGoogleGenerativeAI",
package: "langchain-google-genai",
link: "google_generative_ai/",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: true,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html#langchain_google_genai.chat_models.ChatGoogleGenerativeAI",
},
{
name: "ChatGroq",
package: "langchain-groq",
link: "groq/",
structured_output: true,
tool_calling: true,
json_mode: true,
multimodal: false,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_groq.chat_models.ChatGroq.html#langchain_groq.chat_models.ChatGroq",
},
{
name: "ChatCohere",
package: "langchain-cohere",
link: "cohere/",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: false,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_cohere.chat_models.ChatCohere.html#langchain_cohere.chat_models.ChatCohere",
},
{
name: "ChatBedrock",
package: "langchain-aws",
link: "bedrock/",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: false,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html#langchain_aws.chat_models.bedrock.ChatBedrock",
},
{
name: "ChatHuggingFace",
package: "langchain-huggingface",
link: "huggingface/",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: false,
local: true,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html#langchain_huggingface.chat_models.huggingface.ChatHuggingFace",
},
{
name: "ChatNVIDIA",
package: "langchain-nvidia-ai-endpoints",
link: "nvidia_ai_endpoints/",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: false,
local: true,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html#langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA",
},
{
name: "ChatOllama",
package: "langchain-ollama",
link: "ollama/",
structured_output: true,
tool_calling: true,
json_mode: true,
multimodal: false,
local: true,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_ollama.chat_models.ChatOllama.html#langchain_ollama.chat_models.ChatOllama",
},
{
name: "ChatLlamaCpp",
package: "langchain-community",
link: "llamacpp",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: false,
local: true,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.llamacpp.ChatLlamaCpp.html#langchain_community.chat_models.llamacpp.ChatLlamaCpp",
},
{
name: "ChatAI21",
package: "langchain-ai21",
link: "ai21",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: false,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_ai21.chat_models.ChatAI21.html#langchain_ai21.chat_models.ChatAI21",
},
{
name: "ChatUpstage",
package: "langchain-upstage",
link: "upstage",
structured_output: true,
tool_calling: true,
json_mode: false,
multimodal: false,
local: false,
apiLink:
"https://api.python.langchain.com/en/latest/chat_models/langchain_upstage.chat_models.ChatUpstage.html#langchain_upstage.chat_models.ChatUpstage",
},
{
name: "Arcjet Redact",
package: "langchain-community",
link: "arcjet",
structured_output: false,
tool_calling: false,
json_mode: false,
multimodal: false,
local: true,
},
],
},
llms: {
link: "/docs/integrations/llms",
columns: [
{
title: "Provider",
formatter: (item) => <a href={item.link}>{item.name}</a>,
},
{
title: "Package",
formatter: (item) => <a href={item.apiLink}>{item.package}</a>,
},
],
items: [
{
name: "AI21LLM",
link: "ai21",
package: "langchain-ai21",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_ai21.llms.AI21LLM.html#langchain_ai21.llms.AI21LLM",
},
{
name: "Arcjet Redact",
link: "arcjet",
package: "langchain-community",
},
{
name: "AnthropicLLM",
link: "anthropic",
package: "langchain-anthropic",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_anthropic.llms.AnthropicLLM.html#langchain_anthropic.llms.AnthropicLLM",
},
{
name: "AzureOpenAI",
link: "azure_openai",
package: "langchain-openai",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.azure.AzureOpenAI.html#langchain_openai.llms.azure.AzureOpenAI",
},
{
name: "BedrockLLM",
link: "bedrock",
package: "langchain-aws",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_aws.llms.bedrock.BedrockLLM.html#langchain_aws.llms.bedrock.BedrockLLM",
},
{
name: "CohereLLM",
link: "cohere",
package: "langchain-cohere",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_cohere.llms.Cohere.html#langchain_cohere.llms.Cohere",
},
{
name: "FireworksLLM",
link: "fireworks",
package: "langchain-fireworks",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_fireworks.llms.Fireworks.html#langchain_fireworks.llms.Fireworks",
},
{
name: "OllamaLLM",
link: "ollama",
package: "langchain-ollama",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_ollama.llms.OllamaLLM.html#langchain_ollama.llms.OllamaLLM",
},
{
name: "OpenAILLM",
link: "openai",
package: "langchain-openai",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_openai.llms.base.OpenAI.html#langchain_openai.llms.base.OpenAI",
},
{
name: "TogetherLLM",
link: "together",
package: "langchain-together",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_together.llms.Together.html#langchain_together.llms.Together",
},
{
name: "VertexAILLM",
link: "google_vertexai",
package: "langchain-google_vertexai",
apiLink:
"https://api.python.langchain.com/en/latest/llms/langchain_google_vertexai.llms.VertexAI.html#langchain_google_vertexai.llms.VertexAI",
},
],
},
text_embedding: {
link: "/docs/integrations/text_embedding",
columns: [
{
title: "Provider",
formatter: (item) => <a href={item.link}>{item.name}</a>,
},
{
title: "Package",
formatter: (item) => <a href={item.apiLink}>{item.package}</a>,
},
],
items: [
{
name: "AzureOpenAI",
link: "azureopenai",
package: "langchain-openai",
apiLink:
"https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.azure.AzureOpenAIEmbeddings.html#langchain_openai.embeddings.azure.AzureOpenAIEmbeddings",
},
{
name: "Ollama",
link: "ollama",
package: "langchain-ollama",
apiLink:
"https://api.python.langchain.com/en/latest/embeddings/langchain_ollama.embeddings.OllamaEmbeddings.html#langchain_ollama.embeddings.OllamaEmbeddings",
},
{
name: "AI21",
link: "ai21",
package: "langchain-ai21",
apiLink:
"https://api.python.langchain.com/en/latest/embeddings/langchain_ai21.embeddings.AI21Embeddings.html#langchain_ai21.embeddings.AI21Embeddings",
},
{
name: "Fake",
link: "fake",
package: "langchain-core",
apiLink:
"https://api.python.langchain.com/en/latest/embeddings/langchain_core.embeddings.fake.FakeEmbeddings.html#langchain_core.embeddings.fake.FakeEmbeddings",
},
{
name: "OpenAI",
link: "openai",
package: "langchain-openai",
apiLink:
          "https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html#langchain_openai.embeddings.base.OpenAIEmbeddings",
},
{
name: "Together",
link: "together",
package: "langchain-together",
apiLink:
"https://api.python.langchain.com/en/latest/embeddings/langchain_together.embeddings.TogetherEmbeddings.html#langchain_together.embeddings.TogetherEmbeddings",
},
{
name: "Fireworks",
link: "fireworks",
package: "langchain-fireworks",
apiLink:
"https://api.python.langchain.com/en/latest/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html#langchain_fireworks.embeddings.FireworksEmbeddings",
},
{
name: "MistralAI",
link: "mistralai",
package: "langchain-mistralai",
apiLink:
"https://api.python.langchain.com/en/latest/embeddings/langchain_mistralai.embeddings.MistralAIEmbeddings.html#langchain_mistralai.embeddings.MistralAIEmbeddings",
},
{
name: "Cohere",
link: "cohere",
package: "langchain-cohere",
apiLink:
"https://api.python.langchain.com/en/latest/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html#langchain_cohere.embeddings.CohereEmbeddings",
},
],
},
document_retrievers: {
    link: "/docs/integrations/retrievers",
columns: [
{
title: "Retriever",
formatter: (item) => <a href={item.link}>{item.name}</a>,
},
{
title: "Self-host",
formatter: (item) => (item.selfHost ? "✅" : "❌"),
},
{
title: "Cloud offering",
formatter: (item) => (item.cloudOffering ? "✅" : "❌"),
},
{
title: "Package",
formatter: (item) => <a href={item.apiLink}>{item.package}</a>,
},
],
items: [
{
name: "AmazonKnowledgeBasesRetriever",
link: "bedrock",
selfHost: false,
cloudOffering: true,
apiLink:
"https://api.python.langchain.com/en/latest/retrievers/langchain_aws.retrievers.bedrock.AmazonKnowledgeBasesRetriever.html",
package: "langchain_aws",
},
{
name: "AzureAISearchRetriever",
link: "azure_ai_search",
selfHost: false,
cloudOffering: true,
apiLink:
"https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html",
package: "langchain_community",
},
{
name: "ElasticsearchRetriever",
link: "elasticsearch_retriever",
selfHost: true,
cloudOffering: true,
apiLink:
"https://api.python.langchain.com/en/latest/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html",
package: "langchain_elasticsearch",
},
{
name: "MilvusCollectionHybridSearchRetriever",
link: "milvus_hybrid_search",
selfHost: true,
cloudOffering: false,
apiLink:
"https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html",
package: "langchain_milvus",
},
{
name: "VertexAISearchRetriever",
link: "google_vertex_ai_search",
selfHost: false,
cloudOffering: true,
apiLink:
"https://api.python.langchain.com/en/latest/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html",
package: "langchain_google_community",
},
],
},
external_retrievers: {
    link: "/docs/integrations/retrievers",
columns: [
{
title: "Retriever",
formatter: (item) => <a href={item.link}>{item.name}</a>,
},
{ title: "Source", formatter: (item) => item.source },
{
title: "Package",
formatter: (item) => <a href={item.apiLink}>{item.package}</a>,
},
],
items: [
{
name: "ArxivRetriever",
link: "arxiv",
source: (
<>
Scholarly articles on <a href="https://arxiv.org/">arxiv.org</a>
</>
),
apiLink:
"https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html",
package: "langchain_community",
},
{
name: "TavilySearchAPIRetriever",
link: "tavily",
source: "Internet search",
apiLink:
"https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html",
package: "langchain_community",
},
{
name: "WikipediaRetriever",
link: "wikipedia",
source: (
<>
<a href="https://www.wikipedia.org/">Wikipedia</a> articles
</>
),
apiLink:
"https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html",
package: "langchain_community",
},
],
},
document_loaders: {
    link: "/docs/integrations/loaders",
columns: [],
items: [],
},
vectorstores: {
    link: "/docs/integrations/vectorstores",
columns: [
{
title: "Vectorstore",
formatter: (item) => <a href={item.link}>{item.name}</a>,
},
{
title: "Delete by ID",
formatter: (item) => (item.deleteById ? "✅" : "❌"),
},
{
title: "Filtering",
formatter: (item) => (item.filtering ? "✅" : "❌"),
},
{
title: "Search by Vector",
formatter: (item) => (item.searchByVector ? "✅" : "❌"),
},
{
title: "Search with score",
formatter: (item) => (item.searchWithScore ? "✅" : "❌"),
},
{ title: "Async", formatter: (item) => (item.async ? "✅" : "❌") },
{
title: "Passes Standard Tests",
formatter: (item) => (item.passesStandardTests ? "✅" : "❌"),
},
{
title: "Multi Tenancy",
formatter: (item) => (item.multiTenancy ? "✅" : "❌"),
},
{
title: "IDs in add Documents",
formatter: (item) => (item.idsInAddDocuments ? "✅" : "❌"),
},
{
title: "Local/Cloud",
formatter: (item) => (item.local ? "Local" : "Cloud"),
},
],
items: [
{
name: "AstraDBVectorStore",
link: "astradb",
deleteById: true,
filtering: true,
searchByVector: true,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "Chroma",
link: "chroma",
deleteById: true,
filtering: true,
searchByVector: true,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "Clickhouse",
link: "clickhouse",
deleteById: true,
filtering: true,
searchByVector: false,
searchWithScore: true,
async: false,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "CouchbaseVectorStore",
link: "couchbase",
deleteById: true,
filtering: true,
searchByVector: false,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "ElasticsearchStore",
link: "elasticsearch",
deleteById: true,
filtering: true,
searchByVector: true,
searchWithScore: false,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "FAISS",
link: "faiss",
deleteById: true,
filtering: true,
searchByVector: true,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "InMemoryVectorStore",
link: "in_memory",
deleteById: true,
filtering: true,
searchByVector: false,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "Milvus",
link: "milvus",
deleteById: true,
filtering: true,
searchByVector: false,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "MongoDBAtlasVectorSearch",
link: "mongodb_atlas",
deleteById: true,
filtering: true,
searchByVector: false,
searchWithScore: false,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "PGVector",
link: "pg_vector",
deleteById: true,
filtering: true,
searchByVector: true,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "PineconeVectorStore",
link: "pinecone",
deleteById: true,
filtering: true,
searchByVector: true,
searchWithScore: false,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "QdrantVectorStore",
link: "qdrant",
deleteById: true,
filtering: true,
searchByVector: true,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
{
name: "Redis",
link: "redis",
deleteById: true,
filtering: true,
searchByVector: true,
searchWithScore: true,
async: true,
passesStandardTests: false,
multiTenancy: false,
local: true,
idsInAddDocuments: false,
},
],
},
};
const DEPRECATED_DOC_IDS = [
"integrations/chat/anthropic_tools",
"integrations/chat/baidu_wenxin",
"integrations/chat/google_palm",
"integrations/chat/ni_bittensor",
"integrations/chat/ollama_functions",
"integrations/chat/prompt_layer_openai",
"integrations/llms/google_palm",
"integrations/llms/ni_bittensor",
"integrations/llms/prompt_layer_openai",
"integrations/text_embedding/google_palm",
"integrations/retrievers/chatgpt-retriever-plugin",
"integrations/tools/aiplugin-tool",
"integrations/tools/zapier_agent",
];
function toTable(columns, items) {
const headers = columns.map((col) => col.title);
return (
<table>
<thead>
<tr>
{headers.map((header, i) => (
// eslint-disable-next-line react/no-array-index-key
<th key={`header-${i}`}>{header}</th>
))}
</tr>
</thead>
<tbody>
{items.map((item, i) => (
// eslint-disable-next-line react/no-array-index-key
<tr key={`row-${i}`}>
{columns.map((col, j) => (
// eslint-disable-next-line react/no-array-index-key
<td key={`cell-${i}-${j}`}>{col.formatter(item)}</td>
))}
</tr>
))}
</tbody>
</table>
);
}
export function CategoryTable({ category }) {
const cat = FEATURE_TABLES[category];
const rtn = toTable(cat.columns, cat.items);
return rtn;
}
export function ItemTable({ category, item }) {
const cat = FEATURE_TABLES[category];
const row = cat.items.find((i) => i.name === item);
if (!row) {
throw new Error(`Item ${item} not found in category ${category}`);
}
const rtn = toTable(cat.columns, [row]);
return rtn;
}
function truncate(str, n) {
return str.length > n ? str.substring(0, n - 1) + "..." : str;
}
export function IndexTable() {
const { items } = useCurrentSidebarCategory();
const rows = items
.filter(
(item) =>
!item.docId?.endsWith?.("/index") &&
!DEPRECATED_DOC_IDS.includes(item.docId)
)
.map((item) => ({
...item,
// eslint-disable-next-line react-hooks/rules-of-hooks
description: useDocById(item.docId ?? undefined)?.description,
}));
const rtn = toTable(
[
{
title: "Name",
formatter: (item) => <a href={item.href}>{item.label}</a>,
},
{
title: "Description",
formatter: (item) => truncate(item.description ?? "", 70),
},
],
rows
);
return rtn;
}
|
0 | lc_public_repos/langchainjs/docs/core_docs/src | lc_public_repos/langchainjs/docs/core_docs/src/theme/Npm2Yarn.js | import React from "react";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import CodeBlock from "@theme-original/CodeBlock";
// Substitute component for Jupyter notebooks since Quarto has trouble
// parsing built-in npm2yarn markdown blocks
export default function Npm2Yarn({ children }) {
return (
<Tabs groupId="npm2yarn">
<TabItem value="npm" label="npm">
<CodeBlock language="bash">npm i {children}</CodeBlock>
</TabItem>
<TabItem value="yarn" label="yarn" default>
<CodeBlock language="bash">yarn add {children}</CodeBlock>
</TabItem>
<TabItem value="pnpm" label="pnpm">
<CodeBlock language="bash">pnpm add {children}</CodeBlock>
</TabItem>
</Tabs>
);
}
|
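The Npm2Yarn component above renders the same package name into three install commands, one per package manager tab. A small sketch of that mapping as plain strings (hypothetical `installCommands` helper; the real component wraps each command in a `<CodeBlock>` inside `<Tabs>`):

```javascript
// Sketch of the command variants Npm2Yarn renders for a given package.
// Hypothetical helper; output strings match the component's CodeBlock contents.
function installCommands(pkg) {
  return {
    npm: `npm i ${pkg}`,
    yarn: `yarn add ${pkg}`,
    pnpm: `pnpm add ${pkg}`,
  };
}

console.log(installCommands("@langchain/core"));
```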
0 | lc_public_repos/langchainjs/docs/core_docs/src | lc_public_repos/langchainjs/docs/core_docs/src/theme/RedirectAnchors.js | // eslint-disable-next-line no-unused-vars
import React from "react";
function RedirectAnchors() {
if (typeof window === "undefined") return null;
// get # anchor from url
const lookup = {
"#conceptual-guide": "/docs/concepts",
"#architecture": "/docs/concepts/architecture",
"#langchaincore": "/docs/concepts/architecture/#langchaincore",
"#langchain": "/docs/concepts/architecture/#langchain",
"#langchaincommunity": "/docs/concepts/architecture/#langchaincommunity",
"#partner-packages": "/docs/concepts/architecture/#integration-packages",
"#langgraph": "/docs/concepts/architecture/#langchainlanggraph",
"#langsmith": "/docs/concepts/architecture/#langsmith",
"#langchain-expression-language-lcel": "/docs/concepts/lcel",
"#langchain-expression-language": "/docs/concepts/lcel",
"#runnable-interface": "/docs/concepts/runnables",
"#components": "/docs/concepts/",
"#chat-models": "/docs/concepts/chat_models",
"#multimodality": "/docs/concepts/multimodality",
"#llms": "/docs/concepts/chat_models",
"#messages": "/docs/concepts/messages",
"#message-types": "/docs/concepts/messages",
"#humanmessage": "/docs/concepts/messages/#humanmessage",
"#aimessage": "/docs/concepts/messages/#aimessage",
"#systemmessage": "/docs/concepts/messages/#systemmessage",
"#toolmessage": "/docs/concepts/messages/#toolmessage",
"#legacy-functionmessage":
"/docs/concepts/messages/#legacy-functionmessage",
"#prompt-templates": "/docs/concepts/prompt_templates",
"#string-prompttemplates": "/docs/concepts/prompt_templates",
"#chatprompttemplates": "/docs/concepts/prompt_templates",
"#messagesplaceholder": "/docs/concepts/prompt_templates",
"#example-selectors": "/docs/concepts/example_selectors",
"#output-parsers": "/docs/concepts/output_parsers",
"#chat-history": "/docs/concepts/chat_history",
"#documents":
"https://api.js.langchain.com/classes/_langchain_core.documents.Document.html",
"#document":
"https://api.js.langchain.com/classes/_langchain_core.documents.Document.html",
"#document-loaders": "/docs/concepts/document_loaders",
"#text-splitters": "/docs/concepts/text_splitters",
"#embedding-models": "/docs/concepts/embedding_models",
"#vector-stores": "/docs/concepts/vectorstores",
"#vectorstore": "/docs/concepts/vectorstores",
"#retrievers": "/docs/concepts/retrievers",
"#keyvalue-stores": "/docs/concepts/key_value_stores",
"#interface": "/docs/concepts/runnables",
"#tools": "/docs/concepts/tools",
"#invoke-with-just-the-arguments": "/docs/concepts/tools",
"#invoke-with-toolcall": "/docs/concepts/tools",
"#best-practices": "/docs/concepts/tools/#best-practices",
"#related": "/docs/concepts/tools",
    "#toolkits": "/docs/concepts/tools/#toolkits",
    "#initialize-a-toolkit": "/docs/concepts/tools/#toolkits",
    "#get-list-of-tools": "/docs/concepts/tools/#toolkits",
"#agents": "/docs/concepts/agents",
"#react-agents": "/docs/concepts/agents",
"#callbacks": "/docs/concepts/callbacks",
"#callback-events": "/docs/concepts/callbacks/#callback-events",
"#callback-handlers": "/docs/concepts/callbacks/#callback-handlers",
"#passing-callbacks": "/docs/concepts/callbacks/#passing-callbacks",
"#techniques": "/docs/concepts/",
"#streaming": "/docs/concepts/streaming",
"#stream": "/docs/concepts/streaming#stream",
"#streamevents": "/docs/concepts/streaming#streamevents",
"#tokens": "/docs/concepts/tokens",
"#functiontool-calling": "/docs/concepts/tool_calling",
"#tool-usage": "/docs/concepts/tool_calling",
"#structured-output": "/docs/concepts/structured_outputs",
"#withstructuredoutput": "/docs/concepts/structured_outputs",
"#raw-prompting": "/docs/concepts/structured_outputs",
"#json-mode": "/docs/concepts/structured_outputs/#json-mode",
"#tool-calling-structuredoutputtoolcalling":
"/docs/concepts/structured_outputs",
"#fewshot-prompting": "/docs/concepts/few_shot_prompting",
"#1-generating-examples":
"/docs/concepts/few_shot_prompting/#1-generating-examples",
"#2-number-of-examples":
"/docs/concepts/few_shot_prompting/#2-number-of-examples",
"#3-selecting-examples":
"/docs/concepts/few_shot_prompting/#3-selecting-examples",
"#4-formatting-examples":
"/docs/concepts/few_shot_prompting/#4-formatting-examples",
"#retrieval": "/docs/concepts/retrieval",
"#query-translation": "/docs/concepts/retrieval/#query-re-writing",
"#routing": "/docs/concepts/",
"#query-construction": "/docs/concepts/retrieval/#query-construction",
"#indexing": "/docs/concepts/retrieval/",
"#postprocessing": "/docs/concepts/retrieval/",
"#generation": "/docs/concepts/rag",
    "#text-splitting": "/docs/concepts/text_splitters",
"#evaluation": "/docs/concepts/evaluation",
"#tracing": "/docs/concepts/tracing",
"#few-shot-prompting": "/docs/concepts/few_shot_prompting",
};
const hash = window?.location?.hash;
if (hash) {
if (lookup[hash]) {
window.location.href = lookup[hash];
return null;
}
}
return null;
}
export default RedirectAnchors;
|
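RedirectAnchors above is a pure lookup: it reads the URL hash, and if the hash appears in the map it redirects, otherwise it does nothing. The core logic, extracted into a testable function (hypothetical `resolveRedirect` name; it takes the hash and map as arguments instead of reading `window.location`), is:

```javascript
// Sketch of the anchor-redirect decision used by RedirectAnchors above.
// Hypothetical helper: returns the redirect target, or null to stay put.
function resolveRedirect(hash, lookup) {
  if (!hash) return null; // no anchor in the URL, nothing to do
  return lookup[hash] ?? null; // unknown anchors fall through unchanged
}

// A couple of entries from the real lookup table:
const lookup = {
  "#chat-models": "/docs/concepts/chat_models",
  "#streaming": "/docs/concepts/streaming",
};

console.log(resolveRedirect("#streaming", lookup)); // → "/docs/concepts/streaming"
console.log(resolveRedirect("#unknown", lookup)); // → null
```

Factoring the lookup out this way also avoids the component's direct `window` access, which is why the real code has to guard with `typeof window === "undefined"` for server-side rendering.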