index int64 0 0 | repo_id stringclasses 596 values | file_path stringlengths 31 168 | content stringlengths 1 6.2M |
|---|---|---|---|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/replicate.mdx | import CodeBlock from "@theme/CodeBlock";
# Replicate
Here's an example of calling a Replicate model as an LLM:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install replicate @langchain/community @langchain/core
```
import ReplicateLlama2 from "@examples/models/llm/replicate_llama2.ts";
<CodeBlock language="typescript">{ReplicateLlama2}</CodeBlock>
You can run other models through Replicate by changing the `model` parameter.
You can find a full list of models on [Replicate's website](https://replicate.com/explore).
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/yandex.mdx | # YandexGPT
LangChain.js supports calling [YandexGPT](https://cloud.yandex.com/en/services/yandexgpt) LLMs.
## Setup
First, you should [create a service account](https://cloud.yandex.com/en/docs/iam/operations/sa/create) with the `ai.languageModels.user` role.
Next, you have two authentication options:
- [IAM token](https://cloud.yandex.com/en/docs/iam/operations/iam-token/create-for-sa).
You can specify the token in a constructor parameter `iam_token` or in an environment variable `YC_IAM_TOKEN`.
- [API key](https://cloud.yandex.com/en/docs/iam/operations/api-key/create)
You can specify the key in a constructor parameter `api_key` or in an environment variable `YC_API_KEY`.
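The precedence between these two options can be sketched as a tiny helper. Note that `resolveYandexAuth` is a hypothetical name used only for illustration (it is not part of `@langchain/yandex`); it shows the usual convention that an explicit constructor parameter wins over the corresponding environment variable:

```typescript
// Hypothetical sketch (not part of @langchain/yandex): resolve credentials,
// preferring explicit constructor parameters over environment variables.
type YandexAuth = { type: "iam" | "api-key"; value: string };

function resolveYandexAuth(
  opts: { iamToken?: string; apiKey?: string },
  env: Record<string, string | undefined>
): YandexAuth {
  // An IAM token passed directly takes priority, then the env var fallback.
  const iamToken = opts.iamToken ?? env.YC_IAM_TOKEN;
  if (iamToken) return { type: "iam", value: iamToken };
  const apiKey = opts.apiKey ?? env.YC_API_KEY;
  if (apiKey) return { type: "api-key", value: apiKey };
  throw new Error("Provide an IAM token or an API key.");
}

console.log(resolveYandexAuth({}, { YC_API_KEY: "my-key" })); // { type: "api-key", value: "my-key" }
```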
## Usage
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/yandex @langchain/core
```
import CodeBlock from "@theme/CodeBlock";
import YandexGPTExample from "@examples/models/llm/yandex.ts";
<CodeBlock language="typescript">{YandexGPTExample}</CodeBlock>
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/ollama.ipynb | import { Ollama } from "@langchain/ollama"
const llm = new Ollama({
model: "llama3", // Default value
temperature: 0,
maxRetries: 2,
// other params...
})

const inputText = "Ollama is an AI company that "
const completion = await llm.invoke(inputText)
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = PromptTemplate.fromTemplate("How to say {input} in {output_language}:\n")
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
)

import { Ollama } from "@langchain/ollama";
import * as fs from "node:fs/promises";
const imageData = await fs.readFile("../../../../../examples/hotdog.jpg");
const model = new Ollama({
model: "llava",
}).bind({
images: [imageData.toString("base64")],
});
const res = await model.invoke("What's in this image?");
console.log(res); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/chrome_ai.mdx | ---
sidebar_label: ChromeAI
---
import CodeBlock from "@theme/CodeBlock";
# ChromeAI
:::info
This feature is **experimental** and is subject to change.
:::
:::note
The `Built-in AI Early Preview Program` by Google is currently in beta. To apply for access or find more information, please visit [this link](https://developer.chrome.com/docs/ai/built-in).
:::
ChromeAI leverages Gemini Nano to run LLMs directly in the browser or in a [worker](https://developer.mozilla.org/en-US/docs/Web/API/Worker),
without the need for an internet connection. This allows for running faster, more private models without data ever leaving the consumer's device.
## Getting started
Once you've been granted access to the program, follow Google's provided instructions to download the model.
Once downloaded, you can start using `ChromeAI` in the browser as follows:
```typescript
import { ChromeAI } from "@langchain/community/experimental/llms/chrome_ai";
const model = new ChromeAI({
temperature: 0.5, // Optional, defaults to 0.5
topK: 40, // Optional, defaults to 40
});
const response = await model.invoke("Write me a short poem please");
/*
In the realm where moonlight weaves its hue,
Where dreams and secrets gently intertwine,
There's a place of tranquility and grace,
Where whispers of the night find their place.
Beneath the canopy of starlit skies,
Where dreams take flight and worries cease,
A haven of tranquility, pure and true,
Where the heart finds solace, finding dew.
In this realm where dreams find their release,
Where the soul finds peace, at every peace,
Let us wander, lost in its embrace,
Finding solace in this tranquil space.
*/
```
### Streaming
`ChromeAI` also supports streaming outputs:
```typescript
import { ChromeAI } from "@langchain/community/experimental/llms/chrome_ai";
const model = new ChromeAI({
temperature: 0.5, // Optional, defaults to 0.5
topK: 40, // Optional, defaults to 40
});
for await (const chunk of await model.stream("How are you?")) {
console.log(chunk);
}
/*
As
an
AI
language
model
,
I
don
'
t
have
personal
experiences
or
the
ability
to
experience
emotions
.
Therefore
,
I
cannot
directly
answer
the
question
"
How
are
you
?".
May
I
suggest
answering
something
else
?
*/
```
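Since each streamed chunk is a small string, you can stitch the stream back together into the full completion. The `collect` helper below is a hypothetical sketch (not part of `@langchain/community`), shown here with a mock async generator standing in for `model.stream(...)`:

```typescript
// Hypothetical helper: accumulate streamed string chunks into one completion.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk; // each chunk is a token-sized string fragment
  }
  return out;
}

// Mock stream standing in for `model.stream(...)` in this sketch.
async function* mockStream() {
  yield "Hello";
  yield ", ";
  yield "world";
}

console.log(await collect(mockStream())); // "Hello, world"
```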
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/ni_bittensor.mdx | ---
sidebar_class_name: hidden
---
# NIBittensor
:::warning
This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.
:::
LangChain.js offers experimental support for Neural Internet's Bittensor LLM models.
Here's an example:
```typescript
import { NIBittensorLLM } from "langchain/experimental/llms/bittensor";
const model = new NIBittensorLLM();
const res = await model.invoke(`What is Bittensor?`);
console.log({ res });
/*
{
res: "\nBittensor is opensource protocol..."
}
*/
```
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/ai21.mdx | # AI21
You can get started with AI21 Labs' Jurassic family of models, as well as see a full list of available foundational models, by signing up for an API key [on their website](https://www.ai21.com/).
Here's an example of initializing an instance in LangChain.js:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
import CodeBlock from "@theme/CodeBlock";
import AI21Example from "@examples/models/llm/ai21.ts";
<CodeBlock language="typescript">{AI21Example}</CodeBlock>
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/google_vertex_ai.ipynb | import { VertexAI } from "@langchain/google-vertexai-web"
const llm = new VertexAI({
model: "gemini-pro",
temperature: 0,
maxRetries: 2,
// other params...
})

const inputText = "VertexAI is an AI company that "
const completion = await llm.invoke(inputText)
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = PromptTemplate.fromTemplate("How to say {input} in {output_language}:\n")
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
) |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/writer.mdx | # Writer
LangChain.js supports calling [Writer](https://writer.com/) LLMs.
## Setup
First, you'll need to sign up for an account at https://writer.com/. Create a service account and note your API key.
Next, you'll need to install the official package as a peer dependency:
```bash npm2yarn
yarn add @writerai/writer-sdk
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import WriterExample from "@examples/models/llm/writer.ts";
<CodeBlock language="typescript">{WriterExample}</CodeBlock>
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/raycast.mdx | # RaycastAI
> **Note:** This is a community-built integration and is not officially supported by Raycast.
You can use LangChain's `RaycastAI` class within the [Raycast Environment](https://developers.raycast.com/api-reference/ai) to enhance your Raycast extension with LangChain's capabilities.
- The RaycastAI class is only available in the Raycast environment and only to [Raycast Pro](https://www.raycast.com/pro) users as of August 2023. You may check how to create an extension for Raycast [here](https://developers.raycast.com/).
- There is a rate limit of approximately 10 requests per minute for each Raycast Pro user. If you exceed this limit, you will receive an error. You can set your desired requests-per-minute limit by passing `rateLimitPerMinute` to the `RaycastAI` constructor as shown in the example, as this rate limit may change in the future.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
import CodeBlock from "@theme/CodeBlock";
```ts
import { RaycastAI } from "@langchain/community/llms/raycast";
import { Tool } from "@langchain/core/tools";
const model = new RaycastAI({
rateLimitPerMinute: 10, // It is 10 by default so you can omit this line
model: "<model_name>",
creativity: 0, // `creativity` is a term used by Raycast which is equivalent to `temperature` in some other LLMs
});
```
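As noted above, the service enforces roughly 10 requests per minute. A client-side guard can keep you under that budget; the `MinuteRateLimiter` class below is a hypothetical sketch (not part of the `RaycastAI` class) of a sliding-window limiter you could wrap around your calls:

```typescript
// Hypothetical client-side guard: track request timestamps within a sliding
// 60-second window and refuse requests that would exceed the budget.
class MinuteRateLimiter {
  private timestamps: number[] = [];
  constructor(private limitPerMinute: number) {}

  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps older than 60 seconds, then check the remaining budget.
    this.timestamps = this.timestamps.filter((t) => now - t < 60_000);
    if (this.timestamps.length >= this.limitPerMinute) return false;
    this.timestamps.push(now);
    return true;
  }
}

const limiter = new MinuteRateLimiter(10);
let granted = 0;
for (let i = 0; i < 12; i++) {
  // 12 attempts within the same minute: only the first 10 should pass.
  if (limiter.tryAcquire(1_000 + i)) granted++;
}
console.log(granted); // 10
```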
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/fireworks.ipynb | import { Fireworks } from "@langchain/community/llms/fireworks"
const llm = new Fireworks({
model: "accounts/fireworks/models/llama-v3-70b-instruct",
temperature: 0,
maxTokens: undefined,
timeout: undefined,
maxRetries: 2,
// other params...
})

const inputText = "Fireworks is an AI company that "
const completion = await llm.invoke(inputText)
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = PromptTemplate.fromTemplate("How to say {input} in {output_language}:\n")
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
) |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/gradient_ai.mdx | ---
sidebar_class_name: node-only
---
# Gradient AI
LangChain.js supports integration with Gradient AI. Check out [Gradient AI](https://docs.gradient.ai/docs) for a list of available models.
## Setup
You'll need to install the official Gradient Node SDK as a peer dependency:
```bash npm2yarn
npm i @gradientai/nodejs-sdk
```
You will need to set the following environment variables for using the Gradient AI API.
1. `GRADIENT_ACCESS_TOKEN`
2. `GRADIENT_WORKSPACE_ID`
Alternatively, these can be set as `gradientAccessKey` and `workspaceId` respectively when instantiating the `GradientLLM` class.
For example:
```typescript
const model = new GradientLLM({
  gradientAccessKey: "My secret Access Token",
  workspaceId: "My secret workspace id",
});
```
## Usage
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
import CodeBlock from "@theme/CodeBlock";
import GradientLLMBaseExample from "@examples/llms/gradient_ai-base.ts";
import GradientLLMAdapterExample from "@examples/llms/gradient_ai-adapter.ts";
### Using Gradient's Base Models
<CodeBlock language="typescript">{GradientLLMBaseExample}</CodeBlock>
### Using your own fine-tuned Adapters
To use your own custom adapter, simply set `adapterId` during setup.
<CodeBlock language="typescript">{GradientLLMAdapterExample}</CodeBlock>
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/aws_sagemaker.mdx | # AWS SageMakerEndpoint
LangChain.js supports integration with AWS SageMaker-hosted endpoints. Check [Amazon SageMaker JumpStart](https://aws.amazon.com/sagemaker/jumpstart/) for a list of available models, and how to deploy your own.
## Setup
You'll need to install the official SageMaker SDK as a peer dependency:
```bash npm2yarn
npm install @aws-sdk/client-sagemaker-runtime
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import SageMakerEndpointExample from "@examples/models/llm/sagemaker_endpoint.ts";
<CodeBlock language="typescript">{SageMakerEndpointExample}</CodeBlock>
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/index.mdx | ---
sidebar_position: 0
sidebar_class_name: hidden
---
# LLMs
:::caution
You are currently on a page documenting the use of [text completion models](/docs/concepts/text_llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/chat_models).
Unless you are specifically using more advanced prompting techniques, you are probably looking for [this page instead](/docs/integrations/chat/).
:::
[LLMs](/docs/concepts/#llms) are language models that take a string as input and return a string as output.
:::info
If you'd like to write your own LLM, see [this how-to](/docs/how_to/custom_llm). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::
## All LLMs
import { IndexTable } from "@theme/FeatureTables";
<IndexTable />
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/jigsawstack.mdx | # JigsawStack Prompt Engine
LangChain.js supports calling JigsawStack [Prompt Engine](https://docs.jigsawstack.com/api-reference/prompt-engine/run-direct) LLMs.
## Setup
- Set up an [account](https://jigsawstack.com/dashboard) (Get started for free)
- Create and retrieve your [API key](https://jigsawstack.com/dashboard)
## Credentials
```bash
export JIGSAWSTACK_API_KEY="your-api-key"
```
## Usage
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/jigsawstack
```
import CodeBlock from "@theme/CodeBlock";
```ts
import { JigsawStackPromptEngine } from "@langchain/jigsawstack";
export const run = async () => {
const model = new JigsawStackPromptEngine();
const res = await model.invoke(
"Tell me about the leaning tower of pisa?\nAnswer:"
);
console.log({ res });
};
```
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/prompt_layer_openai.mdx | ---
sidebar_class_name: hidden
---
# PromptLayer OpenAI
:::warning
This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.
:::
LangChain integrates with PromptLayer for logging and debugging prompts and responses. To add support for PromptLayer:
1. Create a PromptLayer account here: [https://promptlayer.com](https://promptlayer.com).
2. Create an API token and pass it either as `promptLayerApiKey` argument in the `PromptLayerOpenAI` constructor or in the `PROMPTLAYER_API_KEY` environment variable.
```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";
const model = new PromptLayerOpenAI({
temperature: 0.9,
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});
const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
```
# Azure PromptLayerOpenAI
LangChain also integrates with PromptLayer for Azure-hosted OpenAI instances:
```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";
const model = new PromptLayerOpenAI({
temperature: 0.9,
azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiCompletionsDeploymentName:
"YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
azureOpenAIApiEmbeddingsDeploymentName:
"YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
azureOpenAIBasePath: "YOUR-AZURE-OPENAI-BASE-PATH", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});
const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
```
The request and the response will be logged in the [PromptLayer dashboard](https://promptlayer.com/home).
> **_Note:_** In streaming mode PromptLayer will not log the response.
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/openai.ipynb | import { OpenAI } from "@langchain/openai"
const llm = new OpenAI({
model: "gpt-3.5-turbo-instruct",
temperature: 0,
maxTokens: undefined,
timeout: undefined,
maxRetries: 2,
apiKey: process.env.OPENAI_API_KEY,
// other params...
})

const inputText = "OpenAI is an AI company that "
const completion = await llm.invoke(inputText)
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = new PromptTemplate({
template: "How to say {input} in {output_language}:\n",
inputVariables: ["input", "output_language"],
})
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
)

const llmCustomURL = new OpenAI({
temperature: 0.9,
configuration: {
baseURL: "https://your_custom_url.com",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/friendli.mdx | # Friendli
> [Friendli](https://friendli.ai/) enhances AI application performance and optimizes cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.
This tutorial guides you through integrating `Friendli` with LangChain.
## Setup
Ensure `@langchain/community` is installed.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token, and set it as the `FRIENDLI_TOKEN` environment variable.
You can optionally set your team ID as the `FRIENDLI_TEAM` environment variable.
You can initialize a Friendli model by selecting the model you want to use. The default model is `mixtral-8x7b-instruct-v0-1`. You can check the available models at [docs.friendli.ai](https://docs.friendli.ai/guides/serverless_endpoints/pricing#text-generation-models).
## Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/models/llm/friendli.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/cloudflare_workersai.ipynb | // @lc-docs-hide-cell
// @ts-expect-error Deno is not recognized
const CLOUDFLARE_ACCOUNT_ID = Deno.env.get("CLOUDFLARE_ACCOUNT_ID");
// @ts-expect-error Deno is not recognized
const CLOUDFLARE_API_TOKEN = Deno.env.get("CLOUDFLARE_API_TOKEN");

import { CloudflareWorkersAI } from "@langchain/cloudflare";
const llm = new CloudflareWorkersAI({
model: "@cf/meta/llama-3.1-8b-instruct", // Default value
cloudflareAccountId: CLOUDFLARE_ACCOUNT_ID,
cloudflareApiToken: CLOUDFLARE_API_TOKEN,
// Pass a custom base URL to use Cloudflare AI Gateway
// baseUrl: `https://gateway.ai.cloudflare.com/v1/{YOUR_ACCOUNT_ID}/{GATEWAY_NAME}/workers-ai/`,
});

const inputText = "Cloudflare is an AI company that "
const completion = await llm.invoke(inputText);
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = PromptTemplate.fromTemplate("How to say {input} in {output_language}:\n")
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
) |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/azure.ipynb | import { AzureOpenAI } from "@langchain/openai"
const llm = new AzureOpenAI({
model: "gpt-3.5-turbo-instruct",
azureOpenAIApiKey: "<your_key>", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: "<your_instance_name>", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiDeploymentName: "<your_deployment_name>", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiVersion: "<api_version>", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
temperature: 0,
maxTokens: undefined,
timeout: undefined,
maxRetries: 2,
// other params...
})

const inputText = "AzureOpenAI is an AI company that "
const completion = await llm.invoke(inputText)
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = new PromptTemplate({
template: "How to say {input} in {output_language}:\n",
inputVariables: ["input", "output_language"],
})
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
)

import {
DefaultAzureCredential,
getBearerTokenProvider,
} from "@azure/identity";
import { AzureOpenAI } from "@langchain/openai";
const credentials = new DefaultAzureCredential();
const azureADTokenProvider = getBearerTokenProvider(
credentials,
"https://cognitiveservices.azure.com/.default"
);
const managedIdentityLLM = new AzureOpenAI({
azureADTokenProvider,
azureOpenAIApiInstanceName: "<your_instance_name>",
azureOpenAIApiDeploymentName: "<your_deployment_name>",
azureOpenAIApiVersion: "<api_version>",
});
import { AzureOpenAI } from "@langchain/openai";
const differentDomainLLM = new AzureOpenAI({
azureOpenAIApiKey: "<your_key>", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiDeploymentName: "<your_deployment_name>", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiVersion: "<api_version>", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
azureOpenAIBasePath:
"https://westeurope.api.microsoft.com/openai/deployments", // In Node.js defaults to process.env.AZURE_OPENAI_BASE_PATH
});
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/mistral.ipynb | import { MistralAI } from "@langchain/mistralai"
const llm = new MistralAI({
model: "codestral-latest",
temperature: 0,
maxTokens: undefined,
maxRetries: 2,
// other params...
})

const inputText = "MistralAI is an AI company that "
const completion = await llm.invoke(inputText)
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = PromptTemplate.fromTemplate("How to say {input} in {output_language}:\n")
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
)

const suffixResponse = await llm.invoke(
"You can print 'hello world' to the console in javascript like this:\n```javascript", {
suffix: "```"
}
);
console.log(suffixResponse);

import { MistralAI } from "@langchain/mistralai";
const llmForFillInCompletion = new MistralAI({
model: "codestral-latest",
temperature: 0,
});
const suffix = "```";
const customOutputParser = (input: string) => {
if (input.includes(suffix)) {
return input.split(suffix)[0];
}
throw new Error("Input does not contain suffix.")
};
const resWithParser = await llmForFillInCompletion.invoke(
"You can print 'hello world' to the console in javascript like this:\n```javascript", {
suffix,
}
);
console.log(customOutputParser(resWithParser));

const beforeRequestHook = (req: Request): Request | void | Promise<Request | void> => {
// Code to run before a request is processed by Mistral
};
const requestErrorHook = (err: unknown, req: Request): void | Promise<void> => {
// Code to run when an error occurs as Mistral is processing a request
};
const responseHook = (res: Response, req: Request): void | Promise<void> => {
// Code to run before Mistral sends a successful response
};

import { ChatMistralAI } from "@langchain/mistralai"
const modelWithHooks = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0,
maxRetries: 2,
beforeRequestHooks: [ beforeRequestHook ],
requestErrorHooks: [ requestErrorHook ],
responseHooks: [ responseHook ],
// other params...
});

import { ChatMistralAI } from "@langchain/mistralai"
const model = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0,
maxRetries: 2,
// other params...
});
model.beforeRequestHooks = [ ...model.beforeRequestHooks, beforeRequestHook ];
model.requestErrorHooks = [ ...model.requestErrorHooks, requestErrorHook ];
model.responseHooks = [ ...model.responseHooks, responseHook ];
model.addAllHooksToHttpClient();

model.removeHookFromHttpClient(beforeRequestHook);
model.removeAllHooksFromHttpClient(); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/llama_cpp.mdx | ---
sidebar_class_name: node-only
---
# Llama CPP
:::tip Compatibility
Only available on Node.js.
:::
This module is based on the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) Node.js bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp), allowing you to work with a locally running LLM. This lets you use a much smaller quantized model capable of running in a laptop environment, ideal for testing and sketching out ideas without running up a bill!
## Setup
You'll need to install major version `3` of the [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) module to communicate with your local model.
```bash npm2yarn
npm install -S node-llama-cpp@3
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
You will also need a local Llama 2 model (or a model supported by [node-llama-cpp](https://github.com/withcatai/node-llama-cpp)). You will need to pass the path to this model to the LlamaCpp module as a part of the parameters (see example).
Out of the box, `node-llama-cpp` is tuned for running on macOS with support for the Metal GPU of Apple M-series processors. If you need to turn this off, or need support for the CUDA architecture, refer to the documentation at [node-llama-cpp](https://withcatai.github.io/node-llama-cpp/).
A note to LangChain.js contributors: if you want to run the tests associated with this module you will need to put the path to your local model in the environment variable `LLAMA_PATH`.
## Guide to installing Llama2
Getting a local Llama 2 model running on your machine is a prerequisite, so this is a quick guide to getting and building Llama 7B (the smallest) and then quantizing it so that it will run comfortably on a laptop. To do this you will need `python3` on your machine (3.11 is recommended), as well as `gcc` and `make` so that `llama.cpp` can be built.
### Getting the Llama2 models
To get a copy of Llama 2 you need to visit [Meta AI](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and request access to their models. Once Meta AI grants you access, you will receive an email containing a unique URL to access the files; this will be needed in the next steps.
Now create a directory to work in, for example:
```
mkdir llama2
cd llama2
```
Now we need to get the Meta AI `llama` repo in place so we can download the model.
```
git clone https://github.com/facebookresearch/llama.git
```
Once we have this in place we can change into this directory and run the downloader script to get the model we will be working with. Note: from here on it's assumed that the model in use is `llama-2-7b`; if you select a different model, don't forget to change the references to the model accordingly.
```
cd llama
/bin/bash ./download.sh
```
This script will ask you for the URL that Meta AI sent to you (see above), and you will also select the model to download; in this case we used `llama-2-7b`. Once this step has completed successfully (this can take some time; the `llama-2-7b` model is around 13.5Gb), there should be a new `llama-2-7b` directory containing the model and other files.
### Converting and quantizing the model
In this step we need to use `llama.cpp` so we need to download that repo.
```
cd ..
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
```
Now we need to build the `llama.cpp` tools and set up our `python` environment. In these steps it's assumed that your install of python can be run using `python3` and that the virtual environment can be called `llama2`, adjust accordingly for your own situation.
```
make
python3 -m venv llama2
source llama2/bin/activate
```
After activating your llama2 environment you should see `(llama2)` prefixing your command prompt to let you know this is the active environment. Note: if you need to come back later to build another model or re-quantize one, don't forget to activate the environment again; also, if you update `llama.cpp`, you will need to rebuild the tools and possibly install new or updated dependencies! Now that we have an active python environment, we need to install the python dependencies.
```
python3 -m pip install -r requirements.txt
```
Having done this, we can start converting and quantizing the Llama2 model for local use via `llama.cpp`.
First, we need to convert the model; prior to the conversion, let's create a directory to store it in.
```
mkdir models/7B
python3 convert.py --outfile models/7B/gguf-llama2-f16.bin --outtype f16 ../../llama2/llama/llama-2-7b --vocab-dir ../../llama2/llama/llama-2-7b
```
This should create a converted model called `gguf-llama2-f16.bin` in the directory we just created. Note that this is only a converted (not yet quantized) model, so it is still around 13.5 GB in size; in the next step we will quantize it down to around 4 GB.
```
./quantize ./models/7B/gguf-llama2-f16.bin ./models/7B/gguf-llama2-q4_0.bin q4_0
```
Running this should result in a new model being created in the `models/7B` directory, called `gguf-llama2-q4_0.bin`; this is the model we can use with LangChain. You can validate that this model is working by testing it with the `llama.cpp` tools.
```
./main -m ./models/7B/gguf-llama2-q4_0.bin -n 1024 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt
```
Running this command fires up the model for an interactive chat session. If you are running out of disk space, this small quantized model is the only one we need, so you can back up and/or delete the original and converted 13.5 GB models.
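The sizes quoted above follow from simple parameter-count arithmetic. A rough sketch (an approximation only; real model files also store metadata and the vocabulary):

```typescript
// llama-2-7b has roughly 7 billion parameters.
const params = 7e9;

// f16 stores 2 bytes per parameter.
const f16Gb = (params * 2) / 1024 ** 3;

// q4_0 works out to roughly 4.5 bits per parameter once block scales are included.
const q4Gb = (params * 4.5) / 8 / 1024 ** 3;

console.log(f16Gb.toFixed(1)); // ≈ 13.0
console.log(q4Gb.toFixed(1)); // ≈ 3.7
```

This is why the converted f16 file is around 13.5 GB while the quantized q4_0 file shrinks to around 4 GB.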
## Usage
import CodeBlock from "@theme/CodeBlock";
import LlamaCppExample from "@examples/models/llm/llama_cpp.ts";
<CodeBlock language="typescript">{LlamaCppExample}</CodeBlock>
## Streaming
import LlamaCppStreamExample from "@examples/models/llm/llama_cpp_stream.ts";
<CodeBlock language="typescript">{LlamaCppStreamExample}</CodeBlock>;
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/ibm.ipynb | import { WatsonxLLM } from "@langchain/community/llms/ibm";
const props = {
decoding_method: "sample",
maxNewTokens: 100,
minNewTokens: 1,
temperature: 0.5,
topK: 50,
topP: 1,
};
const instance = new WatsonxLLM({
version: "YYYY-MM-DD",
serviceUrl: process.env.API_URL,
projectId: "<PROJECT_ID>",
spaceId: "<SPACE_ID>",
idOrName: "<DEPLOYMENT_ID>",
model: "<MODEL_ID>",
...props,
});

const result = await instance.invoke("Print hello world.");
console.log(result);
const results = await instance.generate([
"Print hello world.",
"Print bye, bye world!",
]);
console.log(results);

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = PromptTemplate.fromTemplate("How to say {input} in {output_language}:\n")
const chain = prompt.pipe(instance);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
)

const result2 = await instance.invoke("Print hello world.", {
parameters: {
maxNewTokens: 100,
},
});
console.log(result2);

const tokens = await instance.getNumTokens("Print hello world.");
console.log(tokens); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/arcjet.ipynb | import {
ArcjetRedact,
ArcjetSensitiveInfoType,
} from "@langchain/community/llms/arcjet";
import { OpenAI } from "@langchain/openai";
// Create an instance of another LLM for Arcjet to wrap
const openai = new OpenAI({
modelName: "gpt-3.5-turbo-instruct",
openAIApiKey: process.env.OPENAI_API_KEY,
});
const arcjetRedactOptions = {
// Specify a LLM that Arcjet Redact will call once it has redacted the input.
llm: openai,
// Specify the list of entities that should be redacted.
// If this isn't specified then all entities will be redacted.
entities: ["email", "phone-number", "ip-address", "credit-card"] as ArcjetSensitiveInfoType[],
// You can provide a custom detect function to detect entities that we don't support yet.
// It takes a list of tokens and you return a list of identified types or undefined.
// The undefined types that you return should be added to the entities list if used.
detect: (tokens: string[]) => {
return tokens.map((t) => t === "some-sensitive-info" ? "custom-entity" : undefined)
},
// The number of tokens to provide to the custom detect function. This defaults to 1.
// It can be used to provide additional context when detecting custom entity types.
contextWindowSize: 1,
// This allows you to provide custom replacements when redacting. Please ensure
// that the replacements are unique so that unredaction works as expected.
replace: (identifiedType: string) => {
return identifiedType === "email" ? "redacted@example.com" : undefined;
},
};
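// Note: unredaction relies on each replacement being unique. If you want
// guaranteed-unique replacements for every entity type, a counter-based
// replace function (a sketch, not part of the Arcjet SDK) could look like:
const replacementCounters: Record<string, number> = {};
const uniqueReplace = (identifiedType: string): string => {
  replacementCounters[identifiedType] =
    (replacementCounters[identifiedType] ?? 0) + 1;
  return `<${identifiedType}#${replacementCounters[identifiedType]}>`;
};
// e.g. the first "email" becomes "<email#1>", the next "<email#2>", and so on.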
const arcjetRedact = new ArcjetRedact(arcjetRedactOptions);
const response = await arcjetRedact.invoke(
"My email address is test@example.com, here is some-sensitive-info"
); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/bedrock.ipynb | // @lc-docs-hide-cell
// Deno requires these imports, and way of loading env vars.
// we don't want to expose in the docs.
// Below this cell we have a typescript markdown codeblock with
// the node code.
import "@aws-sdk/credential-provider-node";
import "@smithy/protocol-http";
import "@aws-crypto/sha256-js";
import "@smithy/protocol-http";
import "@smithy/signature-v4";
import "@smithy/eventstream-codec";
import "@smithy/util-utf8";
import "@aws-sdk/types";
import { Bedrock } from "@langchain/community/llms/bedrock"
import { getEnvironmentVariable } from "@langchain/core/utils/env";
const llm = new Bedrock({
model: "anthropic.claude-v2",
region: "us-east-1",
// endpointUrl: "custom.amazonaws.com",
credentials: {
accessKeyId: getEnvironmentVariable("BEDROCK_AWS_ACCESS_KEY_ID"),
secretAccessKey: getEnvironmentVariable("BEDROCK_AWS_SECRET_ACCESS_KEY"),
},
temperature: 0,
maxTokens: undefined,
maxRetries: 2,
// other params...
})

const inputText = "Human: Bedrock is an AI company that\nAssistant: "
const completion = await llm.invoke(inputText)
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = PromptTemplate.fromTemplate("Human: How to say {input} in {output_language}:\nAssistant:")
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
) |
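// Claude text models on Bedrock expect the "Human: ... Assistant:" turn
// format used in `inputText` above. A small illustrative helper (not part
// of the Bedrock integration itself) to wrap raw user text in that shape:
const toClaudePrompt = (userText: string): string =>
  `Human: ${userText}\nAssistant: `;

const claudePrompt = toClaudePrompt("Bedrock is an AI company that");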
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/aleph_alpha.mdx | # AlephAlpha
LangChain.js supports AlephAlpha's Luminous family of models. You'll need to sign up for an API key [on their website](https://www.aleph-alpha.com/).
Here's an example:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
import CodeBlock from "@theme/CodeBlock";
import AlephAlphaExample from "@examples/models/llm/aleph_alpha.ts";
<CodeBlock language="typescript">{AlephAlphaExample}</CodeBlock>
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/together.ipynb | import { TogetherAI } from "@langchain/community/llms/togetherai";
const llm = new TogetherAI({
model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
maxTokens: 256,
});

const inputText = "Together is an AI company that "
const completion = await llm.invoke(inputText)
completion

import { PromptTemplate } from "@langchain/core/prompts"
const prompt = PromptTemplate.fromTemplate("How to say {input} in {output_language}:\n")
const chain = prompt.pipe(llm);
await chain.invoke(
{
output_language: "German",
input: "I love programming.",
}
) |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/layerup_security.mdx | import CodeBlock from "@theme/CodeBlock";
# Layerup Security
The [Layerup Security](https://uselayerup.com) integration allows you to secure your calls to any LangChain LLM, LLM chain or LLM agent. The LLM object wraps around any existing LLM object, allowing for a secure layer between your users and your LLMs.
While the Layerup Security object is designed as an LLM, it is not actually an LLM itself; it simply wraps around an LLM, allowing it to expose the same functionality as the underlying model.
## Setup
First, you'll need a Layerup Security account from the Layerup [website](https://uselayerup.com).
Next, create a project via the [dashboard](https://dashboard.uselayerup.com), and copy your API key. We recommend putting your API key in your project's environment.
Install the Layerup Security SDK:
```bash npm2yarn
npm install @layerup/layerup-security
```
And install LangChain Community:
```bash npm2yarn
npm install @langchain/community @langchain/core
```
And now you're ready to start protecting your LLM calls with Layerup Security!
import LayerupSecurityExampleCode from "@examples/llms/layerup_security.ts";
<CodeBlock language="typescript">{LayerupSecurityExampleCode}</CodeBlock>
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/llms/huggingface_inference.mdx | # HuggingFaceInference
Here's an example of calling a HuggingFaceInference model as an LLM:
```bash npm2yarn
npm install @langchain/community @langchain/core @huggingface/inference@2
```
import UnifiedModelParamsTooltip from "@mdx_components/unified_model_params_tooltip.mdx";
<UnifiedModelParamsTooltip></UnifiedModelParamsTooltip>
```typescript
import { HuggingFaceInference } from "@langchain/community/llms/hf";
const model = new HuggingFaceInference({
model: "gpt2",
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
});
const res = await model.invoke("1 + 1 =");
console.log({ res });
```
## Related
- LLM [conceptual guide](/docs/concepts/text_llms)
- LLM [how-to guides](/docs/how_to/#llms)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/google_places.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Google Places Tool
The Google Places Tool allows your agent to utilize the Google Places API in order to find addresses,
phone numbers, websites, etc. from text about a location listed on Google Places.
## Setup
You will need to get an API key from [Google here](https://developers.google.com/maps/documentation/places/web-service/overview)
and [enable the new Places API](https://console.cloud.google.com/apis/library/places.googleapis.com). Then, set your API key
as `process.env.GOOGLE_PLACES_API_KEY` or pass it in as an `apiKey` constructor argument.
## Usage
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
import ToolExample from "@examples/tools/google_places.ts";
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/connery.mdx | import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/tools/connery.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
# Connery Action Tool
Using this tool, you can integrate an individual Connery Action into your LangChain agent.
:::note
If you want to use more than one Connery Action in your agent,
check out the [Connery Toolkit](/docs/integrations/toolkits/connery) documentation.
:::
## What is Connery?
Connery is an open-source plugin infrastructure for AI.
With Connery, you can easily create a custom plugin with a set of actions and seamlessly integrate them into your LangChain agent.
Connery will take care of critical aspects such as runtime, authorization, secret management, access management, audit logs, and other vital features.
Furthermore, Connery, supported by our community, provides a diverse collection of ready-to-use open-source plugins for added convenience.
Learn more about Connery:
- GitHub: https://github.com/connery-io/connery
- Documentation: https://docs.connery.io
## Prerequisites
To use Connery Actions in your LangChain agent, you need to do some preparation:
1. Set up the Connery runner using the [Quickstart](https://docs.connery.io/docs/runner/quick-start/) guide.
2. Install all the plugins with the actions you want to use in your agent.
3. Set environment variables `CONNERY_RUNNER_URL` and `CONNERY_RUNNER_API_KEY` so the toolkit can communicate with the Connery Runner.
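A quick fail-fast check of these variables at startup makes misconfiguration obvious. The helper below is a hypothetical sketch (not part of the Connery SDK), and the values assigned are examples for illustration only:

```typescript
// Fail fast when a required environment variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example values for illustration only.
process.env.CONNERY_RUNNER_URL = "http://localhost:4201";
process.env.CONNERY_RUNNER_API_KEY = "example-api-key";

const runnerUrl = requireEnv("CONNERY_RUNNER_URL");
const runnerApiKey = requireEnv("CONNERY_RUNNER_API_KEY");
console.log(runnerUrl, runnerApiKey);
```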
## Example of using Connery Action Tool
### Setup
To use the Connery Action Tool you need to install the following official peer dependency:
```bash npm2yarn
npm install @langchain/community @langchain/core
```
<IntegrationInstallTooltip></IntegrationInstallTooltip>
### Usage
In the example below, we fetch an action by its ID from the Connery Runner and then call it with the specified parameters.
Here, we use the ID of the **Send email** action from the [Gmail](https://github.com/connery-io/gmail) plugin.
:::info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/c4b6723d-f91c-440c-8682-16ec8297a602/r).
:::
<CodeBlock language="typescript">{Example}</CodeBlock>
:::note
Connery Action is a structured tool, so you can only use it in the agents supporting structured tools.
:::
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/wolframalpha.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# WolframAlpha Tool
The WolframAlpha tool connects your agents and chains to WolframAlpha's state-of-the-art computational intelligence engine.
## Setup
You'll need to create an app from the [WolframAlpha portal](https://developer.wolframalpha.com/) and obtain an `appid`.
## Usage
import ToolExample from "@examples/tools/wolframalpha.ts";
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/exa_search.ipynb | import { ExaSearchResults } from "@langchain/exa"
import Exa from "exa-js";
// @lc-ts-ignore
const client = new Exa(process.env.EXASEARCH_API_KEY)
const tool = new ExaSearchResults({
// @lc-ts-ignore
client,
searchArgs: {
numResults: 2,
}
})

await tool.invoke("what is the weather in wailea?")

// This is usually generated by a model, but we'll create a tool call directly for demo purposes.
const modelGeneratedToolCall = {
args: {
input: "what is the weather in wailea"
},
id: "1",
name: tool.name,
type: "tool_call",
}
await tool.invoke(modelGeneratedToolCall)

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai"
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
})

import { ChatPromptTemplate } from "@langchain/core/prompts"
import { RunnableConfig } from "@langchain/core/runnables"
import { AIMessage } from "@langchain/core/messages"
const prompt = ChatPromptTemplate.fromMessages(
[
["system", "You are a helpful assistant."],
["human", "{user_input}"],
["placeholder", "{messages}"],
]
)
// specifying tool_choice will force the model to call this tool.
const llmWithTools = llm.bindTools([tool], {
tool_choice: tool.name
})
const llmChain = prompt.pipe(llmWithTools);
const toolChain = async (userInput: string, config?: RunnableConfig): Promise<AIMessage> => {
const input_ = { user_input: userInput };
const aiMsg = await llmChain.invoke(input_, config);
const toolMsgs = await tool.batch(aiMsg.tool_calls, config);
return llmChain.invoke({ ...input_, messages: [aiMsg, ...toolMsgs] }, config);
};
const toolChainResult = await toolChain("What is Anthropic's estimated revenue for 2024?");

const { tool_calls, content } = toolChainResult;
console.log("AIMessage", JSON.stringify({
tool_calls,
content
}, null, 2))

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llmForAgent = new ChatOpenAI({
model: "gpt-4o",
temperature: 0
})

import Exa from "exa-js";
import { createRetrieverTool } from "langchain/tools/retriever";
import { ExaRetriever } from "@langchain/exa";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
// @lc-ts-ignore
const agentClient = new Exa(process.env.EXASEARCH_API_KEY);
const exaRetrieverForAgent = new ExaRetriever({
// @lc-ts-ignore
client: agentClient,
searchArgs: {
numResults: 2,
},
});
// Convert the ExaRetriever into a tool
const searchToolForAgent = createRetrieverTool(exaRetrieverForAgent, {
name: "search",
description: "Get the contents of a webpage given a string search query.",
});
const toolsForAgent = [searchToolForAgent];
const agentExecutor = createReactAgent({
llm: llmForAgent,
tools: toolsForAgent,
})

const exampleQuery = "Summarize for me a fascinating article about cats."
const events = await agentExecutor.stream(
{ messages: [
[
"system",
`You are a web researcher who answers user questions by looking up information on the internet and retrieving contents of helpful documents. Cite your sources.`,
],
["human", exampleQuery],
] },
{ streamMode: "values", }
)
for await (const event of events) {
const lastMsg = event.messages[event.messages.length - 1];
if (lastMsg.tool_calls?.length) {
console.dir(lastMsg.tool_calls, { depth: null });
} else if (lastMsg.content) {
console.log(lastMsg.content);
}
} |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/discord.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Discord Tool
The Discord Tool gives your agent the ability to search, read, and write messages to discord channels.
It is useful for when you need to interact with a discord channel.
## Setup
To use the Discord Tool you need to install the following official peer dependency:
```bash npm2yarn
npm install discord.js
```
## Usage, standalone
import ToolExample from "@examples/tools/discord.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Usage, in an Agent
import AgentExample from "@examples/agents/discord.ts";
<CodeBlock language="typescript">{AgentExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/stagehand.mdx | ---
sidebar_label: Stagehand AI Web Automation Toolkit
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/agents/stagehand_ai_web_browser.ts";
import { StagehandToolkit } from "@langchain/community/agents/toolkits/stagehand";
# Stagehand Toolkit
The Stagehand Toolkit equips your AI agent with the following capabilities:
- **navigate()**: Navigate to a specific URL.
- **act()**: Perform browser automation actions like clicking, typing, and navigation.
- **extract()**: Extract structured data from web pages using Zod schemas.
- **observe()**: Get a list of possible actions and elements on the current page.
## Setup
1. Install the required packages:
```bash
npm install @langchain/langgraph @langchain/community @langchain/core
```
2. Create a Stagehand Instance
If you plan to run the browser locally, you'll also need to install Playwright's browser dependencies.
```bash
npx playwright install
```
3. Set up your model provider credentials:
For OpenAI:
```bash
export OPENAI_API_KEY="your-openai-api-key"
```
For Anthropic:
```bash
export ANTHROPIC_API_KEY="your-anthropic-api-key"
```
## Usage, Standalone, Local Browser
```typescript
import { StagehandToolkit } from "@langchain/community/agents/toolkits/stagehand";
import { ChatOpenAI } from "@langchain/openai";
import { Stagehand } from "@browserbasehq/stagehand";
// Specify your Browserbase credentials.
process.env.BROWSERBASE_API_KEY = "";
process.env.BROWSERBASE_PROJECT_ID = "";
// Specify OpenAI API key.
process.env.OPENAI_API_KEY = "";
const stagehand = new Stagehand({
env: "LOCAL",
headless: false,
verbose: 2,
debugDom: true,
enableCaching: false,
});
// Create a Stagehand Toolkit with all the available actions from the Stagehand.
const stagehandToolkit = await StagehandToolkit.fromStagehand(stagehand);
const navigateTool = stagehandToolkit.tools.find(
(t) => t.name === "stagehand_navigate"
);
if (!navigateTool) {
throw new Error("Navigate tool not found");
}
await navigateTool.invoke("https://www.google.com");
const actionTool = stagehandToolkit.tools.find(
(t) => t.name === "stagehand_act"
);
if (!actionTool) {
throw new Error("Action tool not found");
}
await actionTool.invoke('Search for "OpenAI"');
const observeTool = stagehandToolkit.tools.find(
(t) => t.name === "stagehand_observe"
);
if (!observeTool) {
throw new Error("Observe tool not found");
}
const result = await observeTool.invoke(
"What actions can be performed on the current page?"
);
const observations = JSON.parse(result);
// Handle observations as needed
console.log(observations);
const currentUrl = stagehand.page.url();
// Verify we landed on the expected search results page.
if (!currentUrl.includes("google.com/search?q=OpenAI")) {
  throw new Error(`Unexpected URL: ${currentUrl}`);
}
```
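The repeated `find`-then-null-check pattern in the example above can be factored into a small generic helper. This is a sketch layered on top of the example (not part of the Stagehand API); the stand-in tool list below only mirrors the tool names used above:

```typescript
interface NamedTool {
  name: string;
}

// Look up a tool by name, throwing a descriptive error when it is absent.
function getTool<T extends NamedTool>(tools: T[], name: string): T {
  const tool = tools.find((t) => t.name === name);
  if (!tool) {
    throw new Error(`Tool not found: ${name}`);
  }
  return tool;
}

// Stand-in list mirroring the toolkit's tool names from the example.
const exampleTools: NamedTool[] = [
  { name: "stagehand_navigate" },
  { name: "stagehand_act" },
  { name: "stagehand_observe" },
];

console.log(getTool(exampleTools, "stagehand_act").name);
```

With a real toolkit you would pass `stagehandToolkit.tools` instead of the stand-in list.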
## Usage with LangGraph Agents
<CodeBlock language="typescript">{Example}</CodeBlock>
## Usage on Browserbase - remote headless browser
If you want to run the browser remotely, you can use the Browserbase platform.
You need to set the `BROWSERBASE_API_KEY` environment variable to your Browserbase API key.
```bash
export BROWSERBASE_API_KEY="your-browserbase-api-key"
```
You also need to set `BROWSERBASE_PROJECT_ID` to your Browserbase project ID.
```bash
export BROWSERBASE_PROJECT_ID="your-browserbase-project-id"
```
Then initialize the Stagehand instance with the `BROWSERBASE` environment.
```typescript
const stagehand = new Stagehand({
env: "BROWSERBASE",
});
```
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
- [Stagehand Documentation](https://github.com/browserbase/stagehand#readme)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/duckduckgo_search.ipynb | import { DuckDuckGoSearch } from "@langchain/community/tools/duckduckgo_search"
const tool = new DuckDuckGoSearch({ maxResults: 1 })

await tool.invoke("what is the current weather in sf?")

// This is usually generated by a model, but we'll create a tool call directly for demo purposes.
const modelGeneratedToolCall = {
args: {
input: "what is the current weather in sf?"
},
id: "tool_call_id",
name: tool.name,
type: "tool_call",
}
await tool.invoke(modelGeneratedToolCall)

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai"
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
})

import { HumanMessage } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";
const prompt = ChatPromptTemplate.fromMessages(
[
["system", "You are a helpful assistant."],
["placeholder", "{messages}"],
]
)
const llmWithTools = llm.bindTools([tool]);
const chain = prompt.pipe(llmWithTools);
const toolChain = RunnableLambda.from(
async (userInput: string, config) => {
const humanMessage = new HumanMessage(userInput);
const aiMsg = await chain.invoke({
messages: [new HumanMessage(userInput)],
}, config);
const toolMsgs = await tool.batch(aiMsg.tool_calls, config);
return chain.invoke({
messages: [humanMessage, aiMsg, ...toolMsgs],
}, config);
}
);
const toolChainResult = await toolChain.invoke("how many people have climbed mount everest?");

const { tool_calls, content } = toolChainResult;
console.log("AIMessage", JSON.stringify({
tool_calls,
content,
}, null, 2)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/aiplugin-tool.mdx | ---
hide_table_of_contents: true
sidebar_class_name: hidden
---
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/agents/aiplugin-tool.ts";
# ChatGPT Plugins
:::warning
OpenAI has [deprecated plugins](https://openai.com/index/chatgpt-plugins/).
:::
This example shows how to use ChatGPT Plugins within LangChain abstractions.
Note 1: This currently only works for plugins with no auth.
Note 2: There are almost certainly other ways to do this, this is just a first pass. If you have better ideas, please open a PR!
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{Example}</CodeBlock>
````
Entering new agent_executor chain...
Thought: Klarna is a payment provider, not a store. I need to check if there is a Klarna Shopping API that I can use to search for t-shirts.
Action:
```
{
"action": "KlarnaProducts",
"action_input": ""
}
```
Usage Guide: Use the Klarna plugin to get relevant product suggestions for any shopping or researching purpose. The query to be sent should not include stopwords like articles, prepositions and determinants. The api works best when searching for words that are related to products, like their name, brand, model or category. Links will always be returned and should be shown to the user.
OpenAPI Spec: {"openapi":"3.0.1","info":{"version":"v0","title":"Open AI Klarna product Api"},"servers":[{"url":"https://www.klarna.com/us/shopping"}],"tags":[{"name":"open-ai-product-endpoint","description":"Open AI Product Endpoint. Query for products."}],"paths":{"/public/openai/v0/products":{"get":{"tags":["open-ai-product-endpoint"],"summary":"API for fetching Klarna product information","operationId":"productsUsingGET","parameters":[{"name":"q","in":"query","description":"query, must be between 2 and 100 characters","required":true,"schema":{"type":"string"}},{"name":"size","in":"query","description":"number of products returned","required":false,"schema":{"type":"integer"}},{"name":"budget","in":"query","description":"maximum price of the matching product in local currency, filters results","required":false,"schema":{"type":"integer"}}],"responses":{"200":{"description":"Products found","content":{"application/json":{"schema":{"$ref":"#/components/schemas/ProductResponse"}}}},"503":{"description":"one or more services are unavailable"}},"deprecated":false}}},"components":{"schemas":{"Product":{"type":"object","properties":{"attributes":{"type":"array","items":{"type":"string"}},"name":{"type":"string"},"price":{"type":"string"},"url":{"type":"string"}},"title":"Product"},"ProductResponse":{"type":"object","properties":{"products":{"type":"array","items":{"$ref":"#/components/schemas/Product"}}},"title":"ProductResponse"}}}}
Now that I know there is a Klarna Shopping API, I can use it to search for t-shirts. I will make a GET request to the API with the query parameter "t-shirt".
Action:
```
{
"action": "requests_get",
"action_input": "https://www.klarna.com/us/shopping/public/openai/v0/products?q=t-shirt"
}
```
{"products":[{"name":"Psycho Bunny Mens Copa Gradient Logo Graphic Tee","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203663222/Clothing/Psycho-Bunny-Mens-Copa-Gradient-Logo-Graphic-Tee/?source=openai","price":"$35.00","attributes":["Material:Cotton","Target Group:Man","Color:White,Blue,Black,Orange"]},{"name":"T-shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203506327/Clothing/T-shirt/?source=openai","price":"$20.45","attributes":["Material:Cotton","Target Group:Man","Color:Gray,White,Blue,Black,Orange"]},{"name":"Palm Angels Bear T-shirt - Black","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201090513/Clothing/Palm-Angels-Bear-T-shirt-Black/?source=openai","price":"$168.36","attributes":["Material:Cotton","Target Group:Man","Color:Black"]},{"name":"Tommy Hilfiger Essential Flag Logo T-shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3201840629/Clothing/Tommy-Hilfiger-Essential-Flag-Logo-T-shirt/?source=openai","price":"$22.52","attributes":["Material:Cotton","Target Group:Man","Color:Red,Gray,White,Blue,Black","Pattern:Solid Color","Environmental Attributes :Organic"]},{"name":"Coach Outlet Signature T Shirt","url":"https://www.klarna.com/us/shopping/pl/cl10001/3203005573/Clothing/Coach-Outlet-Signature-T-Shirt/?source=openai","price":"$75.00","attributes":["Material:Cotton","Target Group:Man","Color:Gray"]}]}
Finished chain.
{
result: {
output: 'The available t-shirts in Klarna are Psycho Bunny Mens Copa Gradient Logo Graphic Tee, T-shirt, Palm Angels Bear T-shirt - Black, Tommy Hilfiger Essential Flag Logo T-shirt, and Coach Outlet Signature T Shirt.',
intermediateSteps: [ [Object], [Object] ]
}
}
````
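Per the OpenAPI spec in the trace above, the Klarna endpoint is a plain GET where `q` is required (2-100 characters) and `size` and `budget` are optional. Building that request URL can be sketched as:

```typescript
const base = "https://www.klarna.com/us/shopping/public/openai/v0/products";

// `q` is required; `size` and `budget` are optional query parameters.
const query = new URLSearchParams({ q: "t-shirt", size: "5" });
const url = `${base}?${query.toString()}`;

console.log(url);
// https://www.klarna.com/us/shopping/public/openai/v0/products?q=t-shirt&size=5
```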
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/serpapi.ipynb | import { SerpAPI } from "@langchain/community/tools/serpapi";
const tool = new SerpAPI();

await tool.invoke({
input: "what is the current weather in SF?"
});

// This is usually generated by a model, but we'll create a tool call directly for demo purposes.
const modelGeneratedToolCall = {
args: {
input: "what is the current weather in SF?"
},
id: "1",
name: tool.name,
type: "tool_call",
}
await tool.invoke(modelGeneratedToolCall)

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai"
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
})

import { HumanMessage } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";
const prompt = ChatPromptTemplate.fromMessages(
[
["system", "You are a helpful assistant."],
["placeholder", "{messages}"],
]
)
const llmWithTools = llm.bindTools([tool]);
const chain = prompt.pipe(llmWithTools);
const toolChain = RunnableLambda.from(
async (userInput: string, config) => {
const humanMessage = new HumanMessage(userInput);
const aiMsg = await chain.invoke({
messages: [new HumanMessage(userInput)],
}, config);
const toolMsgs = await tool.batch(aiMsg.tool_calls, config);
return chain.invoke({
messages: [humanMessage, aiMsg, ...toolMsgs],
}, config);
}
);
const toolChainResult = await toolChain.invoke("what is the current weather in sf?");

const { tool_calls, content } = toolChainResult;
console.log("AIMessage", JSON.stringify({
tool_calls,
content,
}, null, 2)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/gmail.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Gmail Tool
The Gmail Tool allows your agent to create and view messages from a linked email account.
## Setup
You will need to get an API key from [Google here](https://developers.google.com/gmail/api/guides)
and [enable the new Gmail API](https://console.cloud.google.com/apis/library/gmail.googleapis.com).
Then, set the `GMAIL_CLIENT_EMAIL` environment variable, along with either `GMAIL_PRIVATE_KEY` or `GMAIL_KEYFILE`.
To use the Gmail Tool you need to install the following official peer dependency:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core googleapis
```
## Usage
import ToolExample from "@examples/tools/gmail.ts";
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/google_calendar.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Google Calendar Tool
The Google Calendar Tools allow your agent to create and view Google Calendar events from a linked calendar.
## Setup
To use the Google Calendar Tools you need to install the following official peer dependency:
```bash npm2yarn
npm install googleapis
```
## Usage
import ToolExample from "@examples/tools/google_calendar.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/stackexchange.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# StackExchange Tool
The StackExchange tool connects your agents and chains to StackExchange's API.
## Usage
import ToolExample from "@examples/tools/stackexchange.ts";
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/index.mdx | ---
sidebar_position: 0
sidebar_class_name: hidden
---
# Tools and Toolkits
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
[Tools](/docs/concepts/tools) are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models.
A [toolkit](/docs/concepts/tools/#toolkits) is a collection of tools meant to be used together. For a list of toolkit integrations, see [this page](/docs/integrations/toolkits/).
:::info
If you'd like to write your own tool, see [this how-to](/docs/how_to/custom_tools/). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::
## All Tools
<IndexTable />
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/jigsawstack.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# JigsawStack Tool
The JigsawStack Tool provides your agent with the following capabilities:
- JigsawStackAIScrape: Scrape web content using advanced AI.
- JigsawStackAISearch: Perform AI-powered web searches and retrieve high-quality results.
- JigsawStackSpeechToText: Transcribe video and audio files using the Whisper large V3 AI model.
- JigsawStackVOCR: Recognize, describe, and extract data from images using a prompt.
- JigsawStackTextToSQL: Generate semantically correct SQL queries from text.
## Setup
- Set up an [account](https://jigsawstack.com/dashboard) (Get started for free)
- Create and retrieve your [API key](https://jigsawstack.com/dashboard)
## Credentials
```bash
export JIGSAWSTACK_API_KEY="your-api-key"
```
## Usage, standalone
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai
```
```js
import {
JigsawStackAIScrape,
JigsawStackAISearch,
JigsawStackSpeechToText,
JigsawStackVOCR,
JigsawStackTextToSQL,
} from "@langchain/jigsawstack";
export const run = async () => {
// AI Scrape Tool
const aiScrapeTool = new JigsawStackAIScrape({
params: {
element_prompts: ["Pro plan"],
},
});
const result = await aiScrapeTool.invoke("https://jigsawstack.com/pricing");
console.log({ result });
// AI Search Tool
const aiSearchTool = new JigsawStackAISearch();
const doc = await aiSearchTool.invoke("The leaning tower of pisa");
console.log({ doc });
// VOCR Tool
const vocrTool = new JigsawStackVOCR({
params: {
prompt: "Describe the image in detail",
},
});
const data = await vocrTool.invoke(
"https://rogilvkqloanxtvjfrkm.supabase.co/storage/v1/object/public/demo/Collabo%201080x842.jpg?t=2024-03-22T09%3A22%3A48.442Z"
);
console.log({ data });
// Speech-to-Text Tool
const sttTool = new JigsawStackSpeechToText();
await sttTool.invoke(
"https://rogilvkqloanxtvjfrkm.supabase.co/storage/v1/object/public/demo/Video%201737458382653833217.mp4?t=2024-03-22T09%3A50%3A49.894"
);
// Text-to-SQL Tool
const sqlTool = new JigsawStackTextToSQL({
params: {
sql_schema:
"CREATE TABLE Transactions (transaction_id INT PRIMARY KEY, user_id INT NOT NULL,total_amount DECIMAL(10, 2 NOT NULL, transaction_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,status VARCHAR(20) DEFAULT 'pending',FOREIGN KEY(user_id) REFERENCES Users(user_id))",
},
});
await sqlTool.invoke(
"Generate a query to get transactions that amount exceed 10000 and sort by when created"
);
};
```
## Usage, in an Agent
```js
import { ChatOpenAI } from "@langchain/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import {
JigsawStackAIScrape,
JigsawStackAISearch,
JigsawStackVOCR,
JigsawStackSpeechToText,
JigsawStackTextToSQL,
} from "@langchain/jigsawstack";
const model = new ChatOpenAI({
temperature: 0,
});
// add the tools that you need
const tools = [
new JigsawStackAIScrape(),
new JigsawStackAISearch(),
new JigsawStackVOCR(),
new JigsawStackSpeechToText(),
new JigsawStackTextToSQL(),
];
const executor = await initializeAgentExecutorWithOptions(tools, model, {
agentType: "zero-shot-react-description",
verbose: true,
});
const res = await executor.invoke({
input: `Kokkalo Restaurant Santorini`,
});
console.log(res.output);
/*
{
"query": "Kokkalo Restaurant Santorini",
"ai_overview": "Kokkalo Restaurant, located in Fira, Santorini, offers a unique dining experience that blends traditional Greek cuisine with modern culinary trends. Here are some key details about the restaurant:\n\n- **Location**: Situated on the main road of Firostefani, Kokkalo is surrounded by the picturesque Cycladic architecture and provides stunning views of the Aegean Sea.\n- **Cuisine**: The restaurant specializes in authentic Greek dishes, crafted from high-quality, locally sourced ingredients. The menu is designed to engage all senses and features a variety of Mediterranean flavors.\n- **Ambiance**: Kokkalo boasts a chic and modern décor, creating a welcoming atmosphere for guests. The staff is known for their professionalism and attentiveness, enhancing the overall dining experience.\n- **Culinary Experience**: The name \"Kokkalo,\" meaning \"bone\" in Greek, symbolizes the strong foundation of the restaurant's culinary philosophy. Guests can expect a bold and unforgettable culinary journey.\n- **Cooking Classes**: Kokkalo also offers cooking lessons, allowing visitors to learn how to prepare traditional Greek dishes, providing a unique souvenir of their time in Santorini.\n- **Contact Information**: \n - Address: 25 Martiou str, Fira, Santorini 84 700, Cyclades, Greece\n - Phone: +30 22860 25407\n - Email: reservation@kokkalosantorini.com\n\nFor more information, you can visit their [official website](https://www.santorini-view.com/restaurants/kokkalo-restaurant/) or their [Facebook page](https://www.facebook.com/kokkalorestaurant/).",
"is_safe": true,
"results": [
{
"title": "Kokkalo restaurant, Restaurants in Firostefani Santorini Greece",
"url": "http://www.travel-to-santorini.com/restaurants/firostefani/thebonerestaurant/",
"description": "Details Contact : George Grafakos Address : Firostefani, Opposite of Fira Primary School Zipcode : 84700 City : Santorni Phone : +30 22860 25407 Send an email",
"content": null,
"site_name": "Travel-to-santorini",
"site_long_name": "travel-to-santorini.com",
"language": "en",
"is_safe": true,
"favicon": "https://t1.gstatic.com/faviconV2?client=SOCIAL&type=FAVICON&fallback_opts=TYPE,SIZE,URL&url=http://travel-to-santorini.com&size=96"
}
]
}
*/
```
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/azure_dynamic_sessions.mdx | # Azure Container Apps Dynamic Sessions
> [Azure Container Apps dynamic sessions](https://learn.microsoft.com/azure/container-apps/sessions) provide fast access to secure sandboxed environments that are ideal for running code or applications that require strong isolation from other workloads.
You can learn more about Azure Container Apps dynamic sessions and its code interpretation capabilities on [this page](https://learn.microsoft.com/azure/container-apps/sessions). If you don't have an Azure account, you can [create a free account](https://azure.microsoft.com/free/) to get started.
## Setup
You'll first need to install the [`@langchain/azure-dynamic-sessions`](https://www.npmjs.com/package/@langchain/azure-dynamic-sessions) package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/azure-dynamic-sessions @langchain/core
```
You'll also need to have a code interpreter session pool instance running. You can deploy a version using [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli) following [this guide](https://learn.microsoft.com/azure/container-apps/sessions-code-interpreter).
Once you have your instance running, you need to make sure you have properly set up the Azure Entra authentication for it. You can find the instructions on how to do that [here](https://learn.microsoft.com/azure/container-apps/sessions?tabs=azure-cli#authentication).
After you've added the role for your identity, you need to retrieve the **session pool management endpoint**. You can find it in the Azure Portal, under the "Overview" section of your instance. Then you need to set the following environment variable:
import CodeBlock from "@theme/CodeBlock";
import EnvVars from "@examples/tools/azure_dynamic_sessions/.env.example";
<CodeBlock language="text">{EnvVars}</CodeBlock>
## Usage example
Below is a simple example that creates a new Python code interpreter session, invokes the tool, and prints the result.
import Example from "@examples/tools/azure_dynamic_sessions/azure_dynamic_sessions.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
Here is a complete example where we use an Azure OpenAI chat model to call the Python code interpreter session tool to execute the code and get the result:
import AgentExample from "@examples/tools/azure_dynamic_sessions/azure_dynamic_sessions-agent.ts";
<CodeBlock language="typescript">{AgentExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/lambda_agent.mdx | ---
sidebar_label: Agent with AWS Lambda
hide_table_of_contents: true
---
# Agent with AWS Lambda Integration
Full docs here: https://docs.aws.amazon.com/lambda/index.html
**AWS Lambda** is a serverless computing service provided by Amazon Web Services (AWS), designed to allow developers to build and run applications and services without the need for provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications.
By including an `AWSLambda` tool in the list of tools provided to an Agent, you can grant your Agent the ability to invoke code running in the AWS Cloud for whatever purpose you need.
When an Agent uses the AWSLambda tool, it will provide an argument of type `string` which will in turn be passed into the Lambda function via the `event` parameter.
This quick start will demonstrate how an Agent could use a Lambda function to send an email via [Amazon Simple Email Service](https://aws.amazon.com/ses/). The Lambda code which sends the email is not provided, but if you'd like to learn how this could be done, see [here](https://repost.aws/knowledge-center/lambda-send-email-ses). Keep in mind this is an intentionally simple example; Lambda can be used to execute code for a near-infinite number of other purposes (including running other LangChain chains)!
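On the Lambda side, the string the agent supplies arrives directly as the `event` parameter. A minimal handler might look like the following sketch (the handler body and return shape are illustrative assumptions; the actual SES call is omitted):

```typescript
// Hypothetical receiving Lambda function (sketch only): the agent's string
// input arrives as `event`, and the handler decides what to do with it.
const handler = async (event: string) => {
  // `event` contains whatever text the agent decided to pass to the tool.
  const summary = `Email body requested by agent: ${event}`;
  // ...call SES here to actually send the email...
  return { statusCode: 200, body: summary };
};
```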
### Note about credentials:
- If you have not run [`aws configure`](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) via the AWS CLI, the `region`, `accessKeyId`, and `secretAccessKey` must be provided to the AWSLambda constructor.
- The IAM role corresponding to those credentials must have permission to invoke the lambda function.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
```typescript
import { OpenAI } from "@langchain/openai";
import { SerpAPI } from "langchain/tools";
import { AWSLambda } from "langchain/tools/aws_lambda";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
const model = new OpenAI({ temperature: 0 });
const emailSenderTool = new AWSLambda({
name: "email-sender",
// tell the Agent precisely what the tool does
description:
"Sends an email with the specified content to testing123@gmail.com",
region: "us-east-1", // optional: AWS region in which the function is deployed
accessKeyId: "abc123", // optional: access key id for a IAM user with invoke permissions
secretAccessKey: "xyz456", // optional: secret access key for that IAM user
functionName: "SendEmailViaSES", // the function name as seen in AWS Console
});
const tools = [emailSenderTool, new SerpAPI("api_key_goes_here")];
const executor = await initializeAgentExecutorWithOptions(tools, model, {
agentType: "zero-shot-react-description",
});
const input = `Find out the capital of Croatia. Once you have it, email the answer to testing123@gmail.com.`;
const result = await executor.invoke({ input });
console.log(result);
```
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/searxng.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Searxng Search tool
The `SearxngSearch` tool connects your agents and chains to the internet.
A wrapper around the SearxNG API, this tool is useful for performing meta-search engine queries using the SearxNG API. It is particularly helpful in answering questions about current events.
## Usage
import ToolExample from "@examples/tools/searxng_search.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/searchapi.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# SearchApi tool
The `SearchApi` tool connects your agents and chains to the internet.
A wrapper around the Search API. This tool is handy when you need to answer questions about current events.
## Usage
Input should be a search query.
import ToolExample from "@examples/tools/searchapi_google_news.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/webbrowser.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Web Browser Tool
The Webbrowser Tool gives your agent the ability to visit a website and extract information. It is described to the agent as
```
useful for when you need to find something on or summarize a webpage. input should be a comma separated list of "valid URL including protocol","what you want to find on the page or empty string for a summary".
```
It exposes two modes of operation:
- when called by the Agent with only a URL it produces a summary of the website contents
- when called by the Agent with a URL and a description of what to find it will instead use an in-memory Vector Store to find the most relevant snippets and summarize those
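Since the tool's input is a single comma-separated string, the URL and the task have to be recovered from it. A rough sketch of that split (an illustration of the input convention only, not the tool's actual parsing code):

```typescript
// Hypothetical helper showing how the comma-separated input convention
// can be split into a URL and a task description.
const parseBrowserInput = (input: string): [string, string] => {
  const strip = (s: string) => s.trim().replace(/^"|"$/g, "");
  const [url, ...rest] = input.split(",");
  // Everything after the first comma is the task (it may itself contain commas).
  return [strip(url), strip(rest.join(","))];
};
```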
## Setup
To use the Webbrowser Tool you need to install the dependencies:
```bash npm2yarn
npm install cheerio axios
```
## Usage, standalone
import ToolExample from "@examples/tools/webbrowser.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Usage, in an Agent
import AgentExample from "@examples/agents/mrkl_browser.ts";
<CodeBlock language="typescript">{AgentExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/google_routes.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Google Routes Tool
The Google Routes Tool allows your agent to utilize the Google Routes API to find a route between
two or more destinations. Routes can be computed for travel on foot or by transit, car, motorcycle, or bicycle.
## Setup
You will need to get an API key from [Google here](https://developers.google.com/maps/documentation/places/web-service/overview)
and [enable the Routes API](https://console.cloud.google.com/apis/library/routes.googleapis.com). Then, set your API key
as `process.env.GOOGLE_ROUTES_API_KEY` or pass it in as an `apiKey` constructor argument.
## Usage
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
import ToolExample from "@examples/tools/google_routes.ts";
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/tavily_search.ipynb | import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
const tool = new TavilySearchResults({
maxResults: 2,
// ...
});

await tool.invoke({
input: "what is the current weather in SF?"
});

// This is usually generated by a model, but we'll create a tool call directly for demo purposes.
const modelGeneratedToolCall = {
args: {
input: "what is the current weather in SF?"
},
id: "1",
name: tool.name,
type: "tool_call",
}
await tool.invoke(modelGeneratedToolCall);

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai"
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});

import { HumanMessage } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";
const prompt = ChatPromptTemplate.fromMessages(
[
["system", "You are a helpful assistant."],
["placeholder", "{messages}"],
]
)
const llmWithTools = llm.bindTools([tool]);
const chain = prompt.pipe(llmWithTools);
const toolChain = RunnableLambda.from(
async (userInput: string, config) => {
const humanMessage = new HumanMessage(userInput);
const aiMsg = await chain.invoke({
messages: [new HumanMessage(userInput)],
}, config);
const toolMsgs = await tool.batch(aiMsg.tool_calls, config);
return chain.invoke({
messages: [humanMessage, aiMsg, ...toolMsgs],
}, config);
}
);
const toolChainResult = await toolChain.invoke("what is the current weather in sf?");

const { tool_calls, content } = toolChainResult;
console.log("AIMessage", JSON.stringify({
tool_calls,
content,
}, null, 2)); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/pyinterpreter.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Python interpreter tool
:::warning
This tool executes code and can potentially perform destructive actions. Be careful that you trust any code passed to it!
:::
LangChain offers an experimental tool for executing arbitrary Python code.
This can be useful in combination with an LLM that can generate code to perform more powerful computations.
## Usage
import ToolExample from "@examples/tools/pyinterpreter.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/dalle.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
import ToolExample from "@examples/tools/dalle_image_generation.ts";
# Dall-E Tool
The Dall-E tool allows your agent to create images using OpenAI's Dall-E image generation model.
## Setup
You will need an OpenAI API key, which you can get from the [OpenAI website](https://openai.com),
and then set the `OPENAI_API_KEY` environment variable to the key you just created.
To use the Dall-E Tool you need to install the LangChain OpenAI integration package:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/wikipedia.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# Wikipedia tool
The `WikipediaQueryRun` tool connects your agents and chains to Wikipedia.
## Usage
import ToolExample from "@examples/tools/wikipedia.ts";
<CodeBlock language="typescript">{ToolExample}</CodeBlock>
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/tools/zapier_agent.mdx | ---
sidebar_class_name: hidden
---
# Agent with Zapier NLA Integration
:::warning
This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.
:::
Full docs here: https://nla.zapier.com/start/
**Zapier Natural Language Actions** gives you access to the 5k+ apps and 20k+ actions on Zapier's platform through a natural language API interface.
NLA supports apps like Gmail, Salesforce, Trello, Slack, Asana, HubSpot, Google Sheets, Microsoft Teams, and thousands more apps: https://zapier.com/apps
Zapier NLA handles ALL the underlying API auth and translation from natural language --> underlying API call --> return simplified output for LLMs. The key idea is you, or your users, expose a set of actions via an oauth-like setup window, which you can then query and execute via a REST API.
NLA offers both API Key and OAuth for signing NLA API requests.
- Server-side (API Key): for quickly getting started, testing, and production scenarios where LangChain will only use actions exposed in the developer's Zapier account (and will use the developer's connected accounts on Zapier.com)
- User-facing (OAuth): for production scenarios where you are deploying an end-user facing application and LangChain needs access to the end-user's exposed actions and connected accounts on Zapier.com
Attach NLA credentials via either an environment variable (`ZAPIER_NLA_OAUTH_ACCESS_TOKEN` or `ZAPIER_NLA_API_KEY`) or refer to the params argument in the API reference for `ZapierNLAWrapper`.
Review [auth docs](https://nla.zapier.com/docs/authentication/) for more details.
The example below demonstrates how to use the Zapier integration as an Agent:
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
```typescript
import { OpenAI } from "@langchain/openai";
import { ZapierNLAWrapper } from "langchain/tools";
import {
initializeAgentExecutorWithOptions,
ZapierToolKit,
} from "langchain/agents";
const model = new OpenAI({ temperature: 0 });
const zapier = new ZapierNLAWrapper();
const toolkit = await ZapierToolKit.fromZapierNLAWrapper(zapier);
const executor = await initializeAgentExecutorWithOptions(
toolkit.tools,
model,
{
agentType: "zero-shot-react-description",
verbose: true,
}
);
console.log("Loaded agent.");
const input = `Summarize the last email I received regarding Silicon Valley Bank. Send the summary to the #test-zapier Slack channel.`;
console.log(`Executing with input "${input}"...`);
const result = await executor.invoke({ input });
console.log(`Got output ${result.output}`);
```
## Related
- Tool [conceptual guide](/docs/concepts/tools)
- Tool [how-to guides](/docs/how_to/#tools)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/kendra-retriever.ipynb | import { AmazonKendraRetriever } from "@langchain/aws";
const retriever = new AmazonKendraRetriever({
topK: 10,
indexId: "YOUR_INDEX_ID",
region: "us-east-2", // Your region
clientOptions: {
credentials: {
accessKeyId: "YOUR_ACCESS_KEY_ID",
secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
},
},
});const query = "..."
await retriever.invoke(query);// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
});

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => doc.pageContent).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: retriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);

await ragChain.invoke(query);
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/dria.mdx | ---
hide_table_of_contents: true
---
# Dria Retriever
The [Dria](https://dria.co/profile) retriever allows an agent to perform a text-based search across a comprehensive knowledge hub.
## Setup
To use Dria retriever, first install Dria JS client:
```bash npm2yarn
npm install dria
```
You need to provide two things to the retriever:
- **API Key**: you can get yours at your [profile page](https://dria.co/profile) when you create an account.
- **Contract ID**: accessible at the top of the page when viewing a knowledge or in its URL.
For example, the Bitcoin whitepaper is uploaded on Dria at https://dria.co/knowledge/2KxNbEb040GKQ1DSDNDsA-Fsj_BlQIEAlzBNuiapBR0, so its contract ID is `2KxNbEb040GKQ1DSDNDsA-Fsj_BlQIEAlzBNuiapBR0`.
The contract ID can be omitted during instantiation and set later via `dria.contractId = "your-contract"`.
The Dria retriever also exposes the underlying [Dria client](https://npmjs.com/package/dria); refer to the [Dria documentation](https://github.com/firstbatchxyz/dria-js-client?tab=readme-ov-file#usage) to learn more about the client.
## Usage
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install dria @langchain/community @langchain/core
```
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/dria.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/hyde.mdx | ---
hide_table_of_contents: true
---
# HyDE Retriever
This example shows how to use the HyDE Retriever, which implements Hypothetical Document Embeddings (HyDE) as described in [this paper](https://arxiv.org/abs/2212.10496).
At a high level, HyDE is an embedding technique that takes a query, generates a hypothetical answer to it, and then embeds that generated document, using the resulting embedding to retrieve real documents.
In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLM that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own, which should have a single input variable `{question}`.
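The flow can be sketched as follows (the type names and prompt wording here are illustrative assumptions, not the library's API):

```typescript
// Minimal HyDE sketch: generate a hypothetical answer document with an LLM,
// then embed that document instead of the raw query.
type AnswerGenerator = (prompt: string) => Promise<string>;
type Embedder = (text: string) => Promise<number[]>;

const hydeEmbed = async (
  question: string,
  generate: AnswerGenerator,
  embed: Embedder
): Promise<number[]> => {
  // 1. Ask the LLM to write a plausible answer passage.
  const hypotheticalDoc = await generate(`Write a passage answering: ${question}`);
  // 2. Embed the generated passage; this vector is then used for retrieval.
  return embed(hypotheticalDoc);
};
```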
## Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/hyde.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/time-weighted-retriever.mdx | # Time-Weighted Retriever
A Time-Weighted Retriever is a retriever that takes into account recency in addition to similarity. The scoring algorithm is:
```typescript
let score = (1.0 - this.decayRate) ** hoursPassed + vectorRelevance;
```
Notably, `hoursPassed` above refers to the time since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain "fresh" and score higher.
`this.decayRate` is a configurable decimal number between 0 and 1. A lower number means that documents will be "remembered" for longer, while a higher number strongly weights more recently accessed documents.
Note that setting a decay rate of exactly 0 or 1 makes `hoursPassed` irrelevant and makes this retriever equivalent to a standard vector lookup.
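To make the effect of `decayRate` concrete, here is a small standalone calculation of the formula above (the numeric values are illustrative, not part of the retriever's API):

```typescript
// Standalone illustration of the time-weighted scoring formula.
const score = (
  decayRate: number,
  hoursPassed: number,
  vectorRelevance: number
): number => (1.0 - decayRate) ** hoursPassed + vectorRelevance;

// Low decay rate: a document last accessed 24 hours ago keeps most of its
// recency bonus.
console.log(score(0.005, 24, 0.5)); // ≈ 1.387
// High decay rate: the recency term is effectively gone after 24 hours.
console.log(score(0.9, 24, 0.5)); // ≈ 0.5
// Decay rate of exactly 0: the recency term is always 1, so hoursPassed
// no longer matters.
console.log(score(0, 24, 0.5)); // 1.5
```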
## Usage
This example shows how to initialize a `TimeWeightedVectorStoreRetriever` with a vector store.
It is important to note that due to required metadata, all documents must be added to the backing vector store using the `addDocuments` method on the **retriever**, not the vector store itself.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/time-weighted-retriever.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/bedrock-knowledge-bases.ipynb | // process.env.LANGSMITH_API_KEY = "<YOUR API KEY HERE>";
// process.env.LANGSMITH_TRACING = "true";
import { AmazonKnowledgeBaseRetriever } from "@langchain/aws";
const retriever = new AmazonKnowledgeBaseRetriever({
topK: 10,
knowledgeBaseId: process.env.AWS_KNOWLEDGE_BASE_ID,
region: "us-east-2",
clientOptions: {
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
},
},
});
const query = "...";
await retriever.invoke(query);
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => doc.pageContent).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: retriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);
await ragChain.invoke("..."); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/index.mdx | ---
sidebar_position: 0
sidebar_class_name: hidden
---
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
# Retrievers
A [retriever](/docs/concepts/retrievers) is an interface that returns documents given an unstructured query.
It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
Retrievers accept a string query as input and return a list of Documents.
For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
Note that all [vector stores](/docs/concepts/#vectorstores) can be [cast to retrievers](/docs/how_to/vectorstore_retriever/).
Refer to the vector store [integration docs](/docs/integrations/vectorstores/) for available vector store retrievers.
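As an illustrative sketch (these are stand-in types, not LangChain's actual classes), the retriever contract boils down to a string query in and a list of documents out:

```typescript
// Minimal stand-in types for illustration; LangChain's real interfaces
// live in @langchain/core.
interface Doc {
  pageContent: string;
  metadata: Record<string, unknown>;
}
interface RetrieverLike {
  invoke(query: string): Promise<Doc[]>;
}

// A toy keyword retriever satisfying the contract -- a vector store's
// `.asRetriever()` returns something shaped like this.
const makeKeywordRetriever = (corpus: Doc[]): RetrieverLike => ({
  async invoke(query: string): Promise<Doc[]> {
    return corpus.filter((d) =>
      d.pageContent.toLowerCase().includes(query.toLowerCase())
    );
  },
});

const retrieverSketch = makeKeywordRetriever([
  { pageContent: "Retrievers return documents", metadata: {} },
  { pageContent: "Vector stores store embeddings", metadata: {} },
]);
retrieverSketch.invoke("documents").then((docs) => console.log(docs.length)); // 1
```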
:::info
If you'd like to write your own retriever, see [this how-to](/docs/how_to/custom_retriever/). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::
## All retrievers
<IndexTable />
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/zep-retriever.mdx | ---
hide_table_of_contents: true
---
# Zep Open Source Retriever
> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.
> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,
> while also reducing hallucinations, latency, and cost.
> Interested in Zep Cloud? See [Zep Cloud Installation Guide](https://help.getzep.com/sdks)
This example shows how to use the Zep Retriever in a retrieval chain to retrieve documents from the Zep Open Source memory store.
## Installation
Follow the [Zep Open Source Quickstart Guide](https://docs.getzep.com/deployment/quickstart/) to install and get started with Zep.
## Setup
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm i @getzep/zep-js @langchain/community @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/zep.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/supabase-hybrid.mdx | # Supabase Hybrid Search
LangChain supports hybrid search with a Supabase Postgres database. Hybrid search combines the Postgres `pgvector` extension (similarity search) with Full-Text Search (keyword search) to retrieve documents. You can add documents via the SupabaseVectorStore `addDocuments` function. SupabaseHybridKeyWordSearch accepts an embeddings object, a Supabase client, the number of results for similarity search, and the number of results for keyword search as parameters. The `getRelevantDocuments` function produces a list of documents with duplicates removed, sorted by relevance score.
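The dedupe-and-sort behavior can be sketched as follows. This is an illustrative stand-in, not the library's internal implementation; the `Hit` field names are assumptions mirroring the SQL functions below, and "keep the higher score on duplicates" is an assumed tie-breaking rule:

```typescript
// Hypothetical shape of a search hit; field names are assumptions for
// illustration.
type Hit = { id: number; content: string; similarity: number };

// Merge similarity-search and keyword-search results: drop duplicates by id
// (keeping the higher-scoring copy) and sort by relevance score descending.
const mergeHybridResults = (semantic: Hit[], keyword: Hit[]): Hit[] => {
  const byId = new Map<number, Hit>();
  for (const hit of [...semantic, ...keyword]) {
    const existing = byId.get(hit.id);
    if (!existing || hit.similarity > existing.similarity) {
      byId.set(hit.id, hit);
    }
  }
  return [...byId.values()].sort((a, b) => b.similarity - a.similarity);
};
```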
## Setup
### Install the library with
```bash npm2yarn
npm install -S @supabase/supabase-js
```
### Create a table and search functions in your database
Run this in your database:
```sql
-- Enable the pgvector extension to work with embedding vectors
create extension vector;
-- Create a table to store your documents
create table documents (
id bigserial primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);
-- Create a function to similarity search for documents
create function match_documents (
query_embedding vector(1536),
match_count int DEFAULT null,
filter jsonb DEFAULT '{}'
) returns table (
id bigint,
content text,
metadata jsonb,
similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding
limit match_count;
end;
$$;
-- Create a function to keyword search for documents
create function kw_match_documents(query_text text, match_count int)
returns table (id bigint, content text, metadata jsonb, similarity real)
as $$
begin
return query execute
format('select id, content, metadata, ts_rank(to_tsvector(content), plainto_tsquery($1)) as similarity
from documents
where to_tsvector(content) @@ plainto_tsquery($1)
order by similarity desc
limit $2')
using query_text, match_count;
end;
$$ language plpgsql;
```
## Usage
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/supabase_hybrid.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/bm25.ipynb | import { BM25Retriever } from "@langchain/community/retrievers/bm25";
const retriever = BM25Retriever.fromDocuments([
{ pageContent: "Buildings are made out of brick", metadata: {} },
{ pageContent: "Buildings are made out of wood", metadata: {} },
{ pageContent: "Buildings are made out of stone", metadata: {} },
{ pageContent: "Cars are made out of metal", metadata: {} },
{ pageContent: "Cars are made out of plastic", metadata: {} },
{ pageContent: "mitochondria is the powerhouse of the cell", metadata: {} },
{ pageContent: "mitochondria is made of lipids", metadata: {} },
], { k: 4 });
// Will return the 4 documents reranked by the BM25 algorithm
await retriever.invoke("mitochondria"); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/tavily.ipynb | import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";
const retriever = new TavilySearchAPIRetriever({
k: 3,
});
const query = "what is the current weather in SF?";
await retriever.invoke(query);
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => doc.pageContent).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: retriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);
await ragChain.invoke(query); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/vespa-retriever.mdx | # Vespa Retriever
This example shows how to use Vespa.ai as a LangChain retriever.
Vespa.ai is a platform for highly efficient structured text and vector search.
Please refer to [Vespa.ai](https://vespa.ai) for more information.
The following sets up a retriever that fetches results from Vespa's documentation search:
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/vespa.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
Here, up to 5 results are retrieved from the `content` field in the `paragraph` document type,
using `documentation` as the ranking method. The `userQuery()` is replaced with the actual query
passed from LangChain.
Please refer to the [pyvespa documentation](https://pyvespa.readthedocs.io/en/latest/getting-started-pyvespa.html#Query)
for more information.
The URL is the endpoint of the Vespa application.
You can connect to any Vespa endpoint, either a remote service or a local instance using Docker.
However, most Vespa Cloud instances are protected with mTLS.
If this is your case, you can, for instance, set up a [CloudFlare Worker](https://cloud.vespa.ai/en/security/cloudflare-workers)
that contains the necessary credentials to connect to the instance.
Now you can return the results and continue using them in LangChain.
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/exa.ipynb | import { ExaRetriever } from "@langchain/exa";
import Exa from "exa-js";
const retriever = new ExaRetriever({
// @lc-ts-ignore
client: new Exa(
process.env.EXASEARCH_API_KEY // default API key
),
searchArgs: {
numResults: 2,
}
});
const query = "What did the speaker say about Justice Breyer in the 2022 State of the Union?";
await retriever.invoke(query);
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => doc.pageContent).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: retriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);
await ragChain.invoke("What did the speaker say about Justice Breyer in the 2022 State of the Union?"); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/chatgpt-retriever-plugin.mdx | ---
hide_table_of_contents: true
sidebar_class_name: hidden
---
# ChatGPT Plugin Retriever
:::warning
This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.
:::
This example shows how to use the ChatGPT Retriever Plugin within LangChain.
To set up the ChatGPT Retriever Plugin, please follow instructions [here](https://github.com/openai/chatgpt-retrieval-plugin).
## Usage
```typescript
import { ChatGPTPluginRetriever } from "langchain/retrievers/remote";
const retriever = new ChatGPTPluginRetriever({
url: "http://0.0.0.0:8000",
auth: {
bearer: "super-secret-jwt-token-with-at-least-32-characters-long",
},
});
const docs = await retriever.invoke("hello world");
console.log(docs);
```
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/chaindesk-retriever.mdx | # Chaindesk Retriever
This example shows how to use the Chaindesk Retriever in a retrieval chain to retrieve documents from a Chaindesk.ai datastore.
## Usage
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/chaindesk.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/zep-cloud-retriever.mdx | ---
hide_table_of_contents: true
---
# Zep Cloud Retriever
> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps.
> With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant,
> while also reducing hallucinations, latency, and cost.
This example shows how to use the Zep Cloud Retriever in a retrieval chain to retrieve documents from the Zep Cloud memory store.
## Installation
Sign up for [Zep Cloud](https://app.getzep.com/) and create a project.
Follow the [Zep Cloud Typescript SDK Installation Guide](https://help.getzep.com/sdks) to install and get started with Zep.
You'll need your Zep Cloud Project API Key to use the ZepCloudRetriever. See the [Zep Cloud docs](https://help.getzep.com/projects) for more information.
## Setup
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm i @getzep/zep-cloud @langchain/community @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/zep_cloud.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/metal-retriever.mdx | ---
hide_table_of_contents: true
---
# Metal Retriever
This example shows how to use the Metal Retriever in a retrieval chain to retrieve documents from a Metal index.
## Setup
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm i @getmetal/metal-sdk @langchain/community @langchain/core
```
## Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/metal.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Related
- Retriever [conceptual guide](/docs/concepts/retrievers)
- Retriever [how-to guides](/docs/how_to/#retrievers)
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/hnswlib.ipynb | import { OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const embeddings = new OpenAIEmbeddings();
const vectorStore = await HNSWLib.fromDocuments(docs, embeddings);
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});
import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { FunctionalTranslator } from "@langchain/core/structured_query";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
/**
* We need to create a basic translator that translates the queries into a
 * filter format that the vector store can understand. We provide a basic
 * translator here, but you can create your own translator by extending the BaseTranslator
* abstract class. Note that the vector store needs to support filtering on the metadata
* attributes you want to query on.
*/
structuredQueryTranslator: new FunctionalTranslator(),
});
await selfQueryRetriever.invoke(
"Which movies are rated higher than 8.5?"
);
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);
await ragChain.invoke("Which movies are rated higher than 8.5?");
const selfQueryRetrieverWithDefaults = SelfQueryRetriever.fromLLM({
llm,
vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo,
structuredQueryTranslator: new FunctionalTranslator(),
searchParams: {
filter: (doc: Document) => doc.metadata && doc.metadata.rating > 8.5,
mergeFiltersOperator: "and",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/memory.ipynb | import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const embeddings = new OpenAIEmbeddings();
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});
import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { FunctionalTranslator } from "@langchain/core/structured_query";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
/**
* We need to create a basic translator that translates the queries into a
 * filter format that the vector store can understand. We provide a basic
 * translator here, but you can create your own translator by extending the BaseTranslator
* abstract class. Note that the vector store needs to support filtering on the metadata
* attributes you want to query on.
*/
structuredQueryTranslator: new FunctionalTranslator(),
});
await selfQueryRetriever.invoke(
"Which movies are rated higher than 8.5?"
);
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);
await ragChain.invoke("Which movies are rated higher than 8.5?");
const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new FunctionalTranslator(),
searchParams: {
filter: (doc: Document) => doc.metadata && doc.metadata.rating > 8.5,
mergeFiltersOperator: "and",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/qdrant.ipynb | import { OpenAIEmbeddings } from "@langchain/openai";
import { QdrantVectorStore } from "@langchain/qdrant";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
import { QdrantClient } from "@qdrant/js-client-rest";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const client = new QdrantClient({ url: process.env.QDRANT_URL });
const embeddings = new OpenAIEmbeddings();
const vectorStore = await QdrantVectorStore.fromDocuments(docs, embeddings, {
client,
collectionName: "movie-collection",
});
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});
import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { QdrantTranslator } from "@langchain/community/structured_query/qdrant";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new QdrantTranslator(),
});
await selfQueryRetriever.invoke(
"Which movies are rated higher than 8.5?"
);
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);

await ragChain.invoke("Which movies are rated higher than 8.5?");

const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new QdrantTranslator(),
searchParams: {
filter: {
must: [
{
key: "metadata.rating",
range: {
gt: 8.5,
},
},
],
},
mergeFiltersOperator: "and",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/index.mdx | ---
sidebar_position: 0
---
# Self-querying retrievers
Learn about how self-querying retrievers work [here](/docs/how_to/self_query).
import DocCardList from "@theme/DocCardList";
<DocCardList />
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/pinecone.ipynb | import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const pinecone = new Pinecone();
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);
const embeddings = new OpenAIEmbeddings();
const vectorStore = await PineconeStore.fromDocuments(docs, embeddings, {
pineconeIndex: pineconeIndex,
});

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});

import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { PineconeTranslator } from "@langchain/pinecone";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
/**
* We need to create a basic translator that translates the queries into a
 * filter format that the vector store can understand. We provide a basic
 * translator here, but you can create your own translator by extending the BaseTranslator
* abstract class. Note that the vector store needs to support filtering on the metadata
* attributes you want to query on.
*/
structuredQueryTranslator: new PineconeTranslator(),
});

await selfQueryRetriever.invoke(
"Which movies are rated higher than 8.5?"
);

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);

await ragChain.invoke("Which movies are rated higher than 8.5?");

const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new PineconeTranslator(),
searchParams: {
filter: {
rating: {
$gt: 8.5,
},
},
mergeFiltersOperator: "and",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/weaviate.ipynb | import { OpenAIEmbeddings } from "@langchain/openai";
import { WeaviateStore } from "@langchain/weaviate";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
import weaviate from "weaviate-ts-client";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const client = (weaviate as any).client({
scheme: process.env.WEAVIATE_SCHEME || "https",
host: process.env.WEAVIATE_HOST || "localhost",
apiKey: process.env.WEAVIATE_API_KEY
? // eslint-disable-next-line @typescript-eslint/no-explicit-any
new (weaviate as any).ApiKey(process.env.WEAVIATE_API_KEY)
: undefined,
});
const embeddings = new OpenAIEmbeddings();
const vectorStore = await WeaviateStore.fromDocuments(docs, embeddings, {
client,
indexName: "Test",
textKey: "text",
metadataKeys: ["year", "director", "rating", "genre"],
});

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});

import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { WeaviateTranslator } from "@langchain/weaviate";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new WeaviateTranslator(),
});

await selfQueryRetriever.invoke(
"Which movies are rated higher than 8.5?"
);

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);

await ragChain.invoke("Which movies are rated higher than 8.5?");

const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new WeaviateTranslator(),
searchParams: {
filter: {
where: {
operator: "Equal",
path: ["type"],
valueText: "movie",
},
},
mergeFiltersOperator: "or",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/chroma.ipynb | import { OpenAIEmbeddings } from "@langchain/openai";
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const embeddings = new OpenAIEmbeddings();
const vectorStore = await Chroma.fromDocuments(docs, embeddings, {
collectionName: "movie-collection",
});

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0,
});

import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { ChromaTranslator } from "@langchain/community/structured_query/chroma";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm,
vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo,
/**
* We need to create a basic translator that translates the queries into a
 * filter format that the vector store can understand. We provide a basic
 * translator here, but you can create your own translator by extending the BaseTranslator
* abstract class. Note that the vector store needs to support filtering on the metadata
* attributes you want to query on.
*/
structuredQueryTranslator: new ChromaTranslator(),
});

await selfQueryRetriever.invoke(
"Which movies are rated higher than 8.5?"
);

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);

await ragChain.invoke("Which movies are rated higher than 8.5?");

const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
llm,
vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo,
structuredQueryTranslator: new ChromaTranslator(),
searchParams: {
filter: {
rating: {
$gt: 8.5,
}
},
mergeFiltersOperator: "and",
}
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/vectara.ipynb | import { VectaraStore } from "@langchain/community/vectorstores/vectara";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
// Vectara provides embeddings
import { FakeEmbeddings } from "@langchain/core/utils/testing";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
// Vectara provides embeddings
const embeddings = new FakeEmbeddings();
const vectorStore = await VectaraStore.fromDocuments(docs, embeddings, {
customerId: Number(process.env.VECTARA_CUSTOMER_ID),
corpusId: Number(process.env.VECTARA_CORPUS_ID),
apiKey: String(process.env.VECTARA_API_KEY),
});

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});

import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { VectaraTranslator } from "@langchain/community/structured_query/vectara";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new VectaraTranslator(),
});

await selfQueryRetriever.invoke(
"Which movies are rated higher than 8.5?"
);

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);

await ragChain.invoke("Which movies are rated higher than 8.5?");

const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
llm,
vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo,
/**
* We need to use a translator that translates the queries into a
* filter format that the vector store can understand. LangChain provides one here.
*/
structuredQueryTranslator: new VectaraTranslator(),
searchParams: {
filter: {
filter: "( doc.genre = 'science fiction' ) and ( doc.rating > 8.5 )",
},
mergeFiltersOperator: "and",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/retrievers/self_query/supabase.ipynb | import { OpenAIEmbeddings } from "@langchain/openai";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
import { createClient } from "@supabase/supabase-js";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const client = createClient(
process.env.SUPABASE_URL,
process.env.SUPABASE_PRIVATE_KEY
);
const embeddings = new OpenAIEmbeddings();
const vectorStore = await SupabaseVectorStore.fromDocuments(docs, embeddings, {
client,
});

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});

import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { SupabaseTranslator } from "@langchain/community/structured_query/supabase";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new SupabaseTranslator(),
});

await selfQueryRetriever.invoke(
"Which movies are rated higher than 8.5?"
);

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
}
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);

await ragChain.invoke("Which movies are rated higher than 8.5?");

import type { SupabaseFilter } from "@langchain/community/vectorstores/supabase";
const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new SupabaseTranslator(),
searchParams: {
filter: (rpc: SupabaseFilter) => rpc.filter("metadata->>type", "eq", "movie"),
mergeFiltersOperator: "and",
},
}); |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_transformers/openai_metadata_tagger.mdx | # OpenAI functions metadata tagger
It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.
The `MetadataTagger` document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable OpenAI Functions-powered chain under the hood, so if you pass a custom LLM instance, it must be an OpenAI model with functions support.
**Note:** This document transformer works best with complete documents, so it's best to run it first with whole documents before doing any other splitting or processing!
### Usage
For example, let's say you wanted to index a set of movie reviews. You could initialize the document transformer as follows:
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_transformers/metadata_tagger.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/openai @langchain/core
```
<CodeBlock language="typescript">{Example}</CodeBlock>
There is an additional `createMetadataTagger` method that accepts a valid JSON Schema object as well.
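As a rough sketch, a JSON Schema for the movie review example might look like the following. The property names (`movie_title`, `critic`, `tone`) are illustrative assumptions, not part of the LangChain API:

```typescript
// A sketch of a JSON Schema you could pass to createMetadataTagger.
// The property names here are illustrative, not prescribed by LangChain.
const schema = {
  type: "object",
  properties: {
    movie_title: { type: "string" },
    critic: { type: "string" },
    tone: { type: "string", enum: ["positive", "negative"] },
  },
  required: ["movie_title", "critic", "tone"],
};

// Assumed usage, with an OpenAI functions-capable chat model:
// const tagger = createMetadataTagger(schema, { llm });
```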
### Customization
You can pass the underlying tagging chain the standard LLMChain arguments in the second options parameter.
For example, if you wanted to ask the LLM to focus on specific details in the input documents, or extract metadata in a certain style, you could pass in a custom prompt:
import CustomExample from "@examples/document_transformers/metadata_tagger_custom_prompt.ts";
<CodeBlock language="typescript">{CustomExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_transformers/mozilla_readability.mdx | # @mozilla/readability
When ingesting HTML documents for later retrieval, we are often interested only in the actual content of the webpage rather than semantics.
Stripping HTML tags from documents with the MozillaReadabilityTransformer can result in more content-rich chunks, making retrieval more effective.
## Setup
You'll need to install the [`@mozilla/readability`](https://www.npmjs.com/package/@mozilla/readability) and the [`jsdom`](https://www.npmjs.com/package/jsdom) npm package:
```bash npm2yarn
npm install @mozilla/readability jsdom
```
Though not required for the transformer by itself, the below usage examples require [`cheerio`](https://www.npmjs.com/package/cheerio) for scraping:
```bash npm2yarn
npm install cheerio
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
## Usage
The below example scrapes a Hacker News thread, splits it based on HTML tags to group chunks based on the semantic information from the tags,
then extracts content from the individual chunks:
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_transformers/mozilla_readability.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Customization
You can pass the transformer any [arguments accepted by the `@mozilla/readability` package](https://www.npmjs.com/package/@mozilla/readability) to customize how it works.
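For instance, a hedged sketch of such an options object — the keys shown (`charThreshold`, `nbTopCandidates`, `keepClasses`) are assumed to match `@mozilla/readability`'s `Readability` constructor options; check the package README for the authoritative list:

```typescript
// Options assumed to be forwarded to @mozilla/readability's Readability
// constructor — verify the key names against the package README.
const readabilityOptions = {
  // Minimum character count an article must have to be returned.
  charThreshold: 500,
  // Number of top candidate nodes considered when scoring content.
  nbTopCandidates: 5,
  // Drop class attributes from the extracted markup.
  keepClasses: false,
};

// Assumed usage:
// const transformer = new MozillaReadabilityTransformer(readabilityOptions);
```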
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_transformers/html-to-text.mdx | # html-to-text
When ingesting HTML documents for later retrieval, we are often interested only in the actual content of the webpage rather than semantics.
Stripping HTML tags from documents with the HtmlToTextTransformer can result in more content-rich chunks, making retrieval more effective.
## Setup
You'll need to install the [`html-to-text`](https://www.npmjs.com/package/html-to-text) npm package:
```bash npm2yarn
npm install html-to-text
```
Though not required for the transformer by itself, the below usage examples require [`cheerio`](https://www.npmjs.com/package/cheerio) for scraping:
```bash npm2yarn
npm install cheerio
```
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core
```
## Usage
The below example scrapes a Hacker News thread, splits it based on HTML tags to group chunks based on the semantic information from the tags,
then extracts content from the individual chunks:
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_transformers/html_to_text.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Customization
You can pass the transformer any [arguments accepted by the `html-to-text` package](https://www.npmjs.com/package/html-to-text) to customize how it works.
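For instance, a hedged sketch of such an options object — `wordwrap` and `selectors` are documented options of the `html-to-text` package, but verify against its docs before relying on them:

```typescript
// Options assumed to be forwarded to html-to-text's convert() function —
// check the package docs for the full, authoritative option list.
const htmlToTextOptions = {
  // Disable hard-wrapping of output lines.
  wordwrap: false,
  selectors: [
    // Keep link text but drop the URLs.
    { selector: "a", options: { ignoreHref: true } },
    // Skip images entirely.
    { selector: "img", format: "skip" },
  ],
};

// Assumed usage:
// const transformer = new HtmlToTextTransformer(htmlToTextOptions);
```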
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/index.mdx | ---
sidebar_position: 0
---
# Document loaders
[Document loaders](/docs/concepts/document_loaders) load data into LangChain's expected format for use-cases such as [retrieval-augmented generation (RAG)](/docs/tutorials/rag).
LangChain.js categorizes document loaders in two different ways:
- [File loaders](/docs/integrations/document_loaders/file_loaders/), which load data into LangChain formats from your local filesystem.
- [Web loaders](/docs/integrations/document_loaders/web_loaders/), which load data from remote sources.
See the individual pages for more on each category.
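Whichever category a loader falls into, the output shape is the same: an array of documents, each with page content and a metadata record. A minimal sketch (the field names follow `@langchain/core`'s `Document`; the `loadFromText` helper itself is illustrative, not a LangChain API):

```typescript
// A minimal sketch of what document loaders produce. The loadFromText helper
// below is illustrative only — real loaders come from the file/web loader
// integrations linked above.
interface LoadedDocument {
  pageContent: string;
  metadata: Record<string, unknown>;
}

const loadFromText = (text: string, source: string): LoadedDocument[] =>
  text
    .split("\n\n")
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0)
    .map((pageContent) => ({ pageContent, metadata: { source } }));
```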
:::info
If you'd like to write your own document loader, see [this how-to](/docs/how_to/document_loader_custom/). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/notionapi.mdx | ---
sidebar_class_name: node-only
hide_table_of_contents: true
---
# Notion API
This guide will take you through the steps required to load documents from Notion pages and databases using the Notion API.
## Overview
Notion is a versatile productivity platform that consolidates note-taking, task management, and data organization tools into one interface.
This document loader is able to take full Notion pages and databases and turn them into LangChain Documents ready to be integrated into your projects.
## Setup
1. You will first need to install the official Notion client and the [notion-to-md](https://www.npmjs.com/package/notion-to-md) package as peer dependencies:
```bash npm2yarn
npm install @langchain/community @langchain/core @notionhq/client notion-to-md
```
2. Create a [Notion integration](https://www.notion.so/my-integrations) and securely record the Internal Integration Secret (also known as `NOTION_INTEGRATION_TOKEN`).
3. Add a connection to your new integration on your page or database. To do this, open your Notion page, go to the `•••` settings menu in the top right, scroll down to `Add connections`, and select your new integration.
4. Get the `PAGE_ID` or `DATABASE_ID` for the page or database you want to load.
> The 32-character hex string in the URL path is the `ID`. For example:
> PAGE_ID: [https://www.notion.so/skarard/LangChain-Notion-API-`b34ca03f219c4420a6046fc4bdfdf7b4`](https://www.notion.so/skarard/LangChain-Notion-API-b34ca03f219c4420a6046fc4bdfdf7b4)
> DATABASE_ID: [https://www.notion.so/skarard/`c393f19c3903440da0d34bf9c6c12ff2`?v=9c70a0f4e174498aa0f9021e0a9d52de](https://www.notion.so/skarard/c393f19c3903440da0d34bf9c6c12ff2?v=9c70a0f4e174498aa0f9021e0a9d52de)
> REGEX: `/(?<!=)[0-9a-f]{32}/`
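As a sketch, the regex above can be applied to the sample URLs from this guide in plain Node.js (the variable names here are illustrative):

```javascript
// Extract the 32-char hex ID from a Notion URL using the regex above.
const ID_REGEX = /(?<!=)[0-9a-f]{32}/;

const pageUrl =
  "https://www.notion.so/skarard/LangChain-Notion-API-b34ca03f219c4420a6046fc4bdfdf7b4";
const databaseUrl =
  "https://www.notion.so/skarard/c393f19c3903440da0d34bf9c6c12ff2?v=9c70a0f4e174498aa0f9021e0a9d52de";

// The negative lookbehind `(?<!=)` skips query-string values such as `?v=...`.
const pageId = pageUrl.match(ID_REGEX)?.[0];
const databaseId = databaseUrl.match(ID_REGEX)?.[0];

console.log(pageId); // b34ca03f219c4420a6046fc4bdfdf7b4
console.log(databaseId); // c393f19c3903440da0d34bf9c6c12ff2
```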
## Example Usage
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/notionapi.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription.mdx | ---
hide_table_of_contents: true
---
import CodeBlock from "@theme/CodeBlock";
# AssemblyAI Audio Transcript
This covers how to load audio (and video) transcripts as document objects from a file using the [AssemblyAI API](https://www.assemblyai.com/docs/api-reference/transcripts/submit?utm_source=langchainjs).
## Usage
First, you'll need to install the official AssemblyAI package:
```bash npm2yarn
npm install @langchain/community @langchain/core assemblyai
```
To use the loaders you need an [AssemblyAI account](https://www.assemblyai.com/dashboard/signup?utm_source=langchainjs) and
[get your AssemblyAI API key from the dashboard](https://www.assemblyai.com/app/account?utm_source=langchainjs).
Then, configure the API key as the `ASSEMBLYAI_API_KEY` environment variable or the `apiKey` options parameter.
import TranscriptExample from "@examples/document_loaders/assemblyai_audio_transcription.ts";
<CodeBlock language="typescript">{TranscriptExample}</CodeBlock>
:::info

- You can use the `AudioTranscriptParagraphsLoader` or `AudioTranscriptSentencesLoader` to split the transcript into paragraphs or sentences.
- The `audio` parameter can be a URL, a local file path, a buffer, or a stream.
- The `audio` can also be a video file. See the [list of supported file types in the FAQ doc](https://www.assemblyai.com/docs/concepts/faq?utm_source=langchainjs#:~:text=file%20types%20are%20supported).
- If you don't pass in the `apiKey` option, the loader will use the `ASSEMBLYAI_API_KEY` environment variable.
- You can add more properties in addition to `audio`. Find the full list of request parameters in the [AssemblyAI API docs](https://www.assemblyai.com/docs/api-reference/transcripts/submit?utm_source=langchainjs#create-a-transcript).

:::
You can also use the `AudioSubtitleLoader` to get `srt` or `vtt` subtitles as a document.
import SubtitleExample from "@examples/document_loaders/assemblyai_subtitles.ts";
<CodeBlock language="typescript">{SubtitleExample}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription.mdx | ---
hide_table_of_contents: true
sidebar_class_name: node-only
---
# Sonix Audio
:::tip Compatibility
Only available on Node.js.
:::
This covers how to load document objects from an audio file using the [Sonix](https://sonix.ai/) API.
## Setup
To run this loader, you will need to create an account at https://sonix.ai/ and obtain an auth key from the https://my.sonix.ai/api page.
You'll also need to install the `sonix-speech-recognition` library:
```bash npm2yarn
npm install @langchain/community @langchain/core sonix-speech-recognition
```
## Usage
Once the auth key is configured, you can use the loader to create transcriptions and then convert them into Documents.
In the `request` parameter, you can either specify a local file by setting `audioFilePath` or a remote file using `audioUrl`.
You will also need to specify the audio language. See the list of supported languages [here](https://sonix.ai/docs/api#languages).
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/sonix_audio_transcription.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file.mdx | ---
hide_table_of_contents: true
sidebar_class_name: node-only
---
# Azure Blob Storage File
:::tip Compatibility
Only available on Node.js.
:::
This covers how to load an Azure Blob Storage file into LangChain documents.
## Setup
To use this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally.
See the docs [here](/docs/integrations/document_loaders/file_loaders/unstructured) for information on how to do that.
You'll also need to install the official Azure Storage Blob client library:
```bash npm2yarn
npm install @langchain/community @langchain/core @azure/storage-blob
```
## Usage
Once Unstructured is configured, you can use the Azure Blob Storage File loader to load files and then convert them into a Document.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/azure_blob_storage_file.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/figma.mdx | ---
hide_table_of_contents: true
---
# Figma
This example goes over how to load data from a Figma file.
You will need a Figma access token in order to get started.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/figma.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
You can find your Figma file's key and node ids by opening the file in your browser and extracting them from the URL:
```
https://www.figma.com/file/<YOUR FILE KEY HERE>/LangChainJS-Test?type=whiteboard&node-id=<YOUR NODE ID HERE>&t=e6lqWkKecuYQRyRg-0
```
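For illustration, a small helper (not part of the loader) can pull both values out of such a URL with the standard `URL` API; the sample key and node id below are placeholders:

```javascript
// Hypothetical helper: extract the file key and node id from a Figma file URL.
function parseFigmaUrl(rawUrl) {
  const url = new URL(rawUrl);
  // The path has the shape /file/<file key>/<file name slug>
  const [, , fileKey] = url.pathname.split("/");
  const nodeId = url.searchParams.get("node-id");
  return { fileKey, nodeId };
}

const { fileKey, nodeId } = parseFigmaUrl(
  "https://www.figma.com/file/abc123DEF456/LangChainJS-Test?type=whiteboard&node-id=0-1&t=e6lqWkKecuYQRyRg-0"
);

console.log(fileKey); // abc123DEF456
console.log(nodeId); // 0-1
```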
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/recursive_url_loader.ipynb | import { RecursiveUrlLoader } from "@langchain/community/document_loaders/web/recursive_url"
import { compile } from "html-to-text";
const compiledConvert = compile({ wordwrap: 130 }); // returns (text: string) => string;
const loader = new RecursiveUrlLoader("https://langchain.com/", {
extractor: compiledConvert,
maxDepth: 1,
excludeDirs: ["/docs/api/"],
})const docs = await loader.load()
docs[0]console.log(docs[0].metadata) |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/gitbook.mdx | ---
hide_table_of_contents: true
---
# GitBook
This example goes over how to load data from any GitBook, using Cheerio. One document will be created for each page.
## Setup
```bash npm2yarn
npm install @langchain/community @langchain/core cheerio
```
## Load from single GitBook page
```typescript
import { GitbookLoader } from "@langchain/community/document_loaders/web/gitbook";
const loader = new GitbookLoader(
"https://docs.gitbook.com/product-tour/navigation"
);
const docs = await loader.load();
```
## Load from all paths in a given GitBook
For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have `shouldLoadAllPaths` set to `true`.
```typescript
import { GitbookLoader } from "@langchain/community/document_loaders/web/gitbook";
const loader = new GitbookLoader("https://docs.gitbook.com", {
shouldLoadAllPaths: true,
});
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/web_playwright.mdx | ---
sidebar_position: 3
hide_table_of_contents: true
sidebar_class_name: node-only
sidebar_label: Playwright
---
# Webpages, with Playwright
:::tip Compatibility
Only available on Node.js.
:::
This example goes over how to load data from webpages using Playwright. One document will be created for each webpage.
Playwright is a Node.js library that provides a high-level API for controlling multiple browser engines, including Chromium, Firefox, and WebKit. You can use Playwright to automate web page interactions, including extracting data from dynamic web pages that require JavaScript to render.
If you want a lighter-weight solution, and the webpages you want to load do not require JavaScript to render, you can use the [`CheerioWebBaseLoader`](/docs/integrations/document_loaders/web_loaders/web_cheerio) instead.
## Setup
```bash npm2yarn
npm install @langchain/community @langchain/core playwright
```
## Usage
```typescript
import { PlaywrightWebBaseLoader } from "@langchain/community/document_loaders/web/playwright";
/**
* Loader uses `page.content()`
* as default evaluate function
**/
const loader = new PlaywrightWebBaseLoader("https://www.tabnews.com.br/");
const docs = await loader.load();
```
## Options
Here's an explanation of the parameters you can pass to the PlaywrightWebBaseLoader constructor using the PlaywrightWebBaseLoaderOptions interface:
```typescript
type PlaywrightWebBaseLoaderOptions = {
launchOptions?: LaunchOptions;
gotoOptions?: PlaywrightGotoOptions;
evaluate?: PlaywrightEvaluate;
};
```
1. `launchOptions`: an optional object that specifies additional options to pass to the playwright.chromium.launch() method. This can include options such as the headless flag to launch the browser in headless mode.
2. `gotoOptions`: an optional object that specifies additional options to pass to the page.goto() method. This can include options such as the timeout option to specify the maximum navigation time in milliseconds, or the waitUntil option to specify when to consider the navigation as successful.
3. `evaluate`: an optional function that can be used to evaluate JavaScript code on the page using a custom evaluation function. This can be useful for extracting data from the page, interacting with page elements, or handling specific HTTP responses. The function should return a Promise that resolves to a string containing the result of the evaluation.
By passing these options to the `PlaywrightWebBaseLoader` constructor, you can customize the behavior of the loader and use Playwright's powerful features to scrape and interact with web pages.
Here is a basic example to do it:
```typescript
import {
PlaywrightWebBaseLoader,
Page,
Browser,
} from "@langchain/community/document_loaders/web/playwright";
const url = "https://www.tabnews.com.br/";
const loader = new PlaywrightWebBaseLoader(url);
const docs = await loader.load();
// raw HTML page content
const extractedContents = docs[0].pageContent;
```
And a more advanced example:
```typescript
import {
PlaywrightWebBaseLoader,
Page,
Browser,
} from "@langchain/community/document_loaders/web/playwright";
const loader = new PlaywrightWebBaseLoader("https://www.tabnews.com.br/", {
launchOptions: {
headless: true,
},
gotoOptions: {
waitUntil: "domcontentloaded",
},
/** Pass custom evaluate, in this case you get page and browser instances */
async evaluate(page: Page, browser: Browser, response: Response | null) {
await page.waitForResponse("https://www.tabnews.com.br/va/view");
const result = await page.evaluate(() => document.body.innerHTML);
return result;
},
});
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/hn.mdx | ---
hide_table_of_contents: true
---
# Hacker News
This example goes over how to load data from the Hacker News website, using Cheerio. One document will be created for each page.
## Setup
```bash npm2yarn
npm install @langchain/community @langchain/core cheerio
```
## Usage
```typescript
import { HNLoader } from "@langchain/community/document_loaders/web/hn";
const loader = new HNLoader("https://news.ycombinator.com/item?id=34817881");
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/college_confidential.mdx | ---
hide_table_of_contents: true
---
# College Confidential
This example goes over how to load data from the College Confidential website, using Cheerio. One document will be created for each page.
## Setup
```bash npm2yarn
npm install @langchain/community @langchain/core cheerio
```
## Usage
```typescript
import { CollegeConfidentialLoader } from "@langchain/community/document_loaders/web/college_confidential";
const loader = new CollegeConfidentialLoader(
"https://www.collegeconfidential.com/colleges/brown-university/"
);
const docs = await loader.load();
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/index.mdx | ---
label: "Web Loaders"
hide_table_of_contents: true
---
# Web Loaders
These loaders are used to load web resources. They do not involve the local file system.
import { IndexTable } from "@theme/FeatureTables";
:::info
If you'd like to write your own document loader, see [this how-to](/docs/how_to/document_loader_custom/). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::
## All web loaders
<IndexTable />
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/web_cheerio.ipynb | import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio"
const loader = new CheerioWebBaseLoader("https://news.ycombinator.com/item?id=34817881", {
// optional params: ...
})const docs = await loader.load()
docs[0]console.log(docs[0].metadata)import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio"
const loaderWithSelector = new CheerioWebBaseLoader("https://news.ycombinator.com/item?id=34817881", {
selector: "p",
});
const docsWithSelector = await loaderWithSelector.load();
docsWithSelector[0].pageContent; |
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/browserbase.mdx | # Browserbase Loader
## Description
[Browserbase](https://browserbase.com) is a developer platform to reliably run, manage, and monitor headless browsers.
Power your AI data retrievals with:
- [Serverless Infrastructure](https://docs.browserbase.com/under-the-hood) providing reliable browsers to extract data from complex UIs
- [Stealth Mode](https://docs.browserbase.com/features/stealth-mode) with included fingerprinting tactics and automatic captcha solving
- [Session Debugger](https://docs.browserbase.com/features/sessions) to inspect your Browser Session with networks timeline and logs
- [Live Debug](https://docs.browserbase.com/guides/session-debug-connection/browser-remote-control) to quickly debug your automation
## Installation
- Get an API key and Project ID from [browserbase.com](https://browserbase.com) and set them in environment variables (`BROWSERBASE_API_KEY`, `BROWSERBASE_PROJECT_ID`).
- Install the [Browserbase SDK](http://github.com/browserbase/js-sdk):
```bash npm2yarn
npm i @langchain/community @langchain/core @browserbasehq/sdk
```
## Example
Utilize the `BrowserbaseLoader` as follows to allow your agent to load websites:
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/browserbase.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
## Arguments
- `urls`: Required. List of URLs to load.
## Options
- `textContent`: Optional. Retrieve only text content. Default is `false`.
- `sessionId`: Optional. Provide an existing Session ID.
- `proxy`: Optional. Enable/Disable Proxies.
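Put together, an options object might look like the following sketch, which would typically be passed alongside `urls` when constructing the loader. The values are placeholders, not defaults:

```javascript
// Illustrative options object for BrowserbaseLoader, using the fields above.
const options = {
  textContent: true, // retrieve only text content instead of raw HTML
  sessionId: "an-existing-session-id", // hypothetical existing Session ID
  proxy: false, // disable proxies
};

console.log(options.textContent); // true
```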
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container.mdx | ---
hide_table_of_contents: true
sidebar_class_name: node-only
---
# Azure Blob Storage Container
:::tip Compatibility
Only available on Node.js.
:::
This covers how to load a container on Azure Blob Storage into LangChain documents.
## Setup
To run this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally.
See the docs [here](/docs/integrations/document_loaders/file_loaders/unstructured) for information on how to do that.
You'll also need to install the official Azure Storage Blob client library:
```bash npm2yarn
npm install @langchain/community @langchain/core @azure/storage-blob
```
## Usage
Once Unstructured is configured, you can use the Azure Blob Storage Container loader to load files and then convert them into a Document.
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/azure_blob_storage_container.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/couchbase.mdx | ---
hide_table_of_contents: true
sidebar_class_name: node-only
---
# Couchbase
[Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications.
This guide shows how to load documents from a Couchbase database.

## Installation
```bash npm2yarn
npm install @langchain/community @langchain/core couchbase
```
## Usage
### Querying for Documents from Couchbase
For more details on connecting to a Couchbase cluster, please check the [Node.js SDK documentation](https://docs.couchbase.com/nodejs-sdk/current/howtos/managing-connections.html#connection-strings).
For help with querying for documents using SQL++ (SQL for JSON), please check the [documentation](https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/index.html).
```typescript
import { CouchbaseDocumentLoader } from "@langchain/community/document_loaders/web/couchbase";
import { Cluster } from "couchbase";
const connectionString = "couchbase://localhost"; // valid couchbase connection string
const dbUsername = "Administrator"; // valid database user with read access to the bucket being queried
const dbPassword = "Password"; // password for the database user
// query is a valid SQL++ query
const query = `
SELECT h.* FROM \`travel-sample\`.inventory.hotel h
WHERE h.country = 'United States'
LIMIT 1
`;
```
### Connect to Couchbase Cluster
```typescript
const couchbaseClient = await Cluster.connect(connectionString, {
username: dbUsername,
password: dbPassword,
configProfile: "wanDevelopment",
});
```
### Create the Loader
```typescript
const loader = new CouchbaseDocumentLoader(
couchbaseClient, // The connected couchbase cluster client
query // A valid SQL++ query which will return the required data
);
```
### Load Documents
You can fetch the documents by calling the `load` method of the loader. It will return a list with all the documents. If you want to avoid this blocking call, you can call the `lazyLoad` method, which returns an async iterator.
```typescript
// using load method
docs = await loader.load();
console.log(docs);
```
```typescript
// using lazyLoad
for await (const doc of loader.lazyLoad()) {
  console.log(doc);
  break; // break based on required condition
}
```
### Specifying Fields with Content and Metadata
The fields that are part of the Document content can be specified using the `pageContentFields` parameter.
The metadata fields for the Document can be specified using the `metadataFields` parameter.
```typescript
const loaderWithSelectedFields = new CouchbaseDocumentLoader(
couchbaseClient,
query,
// pageContentFields
[
"address",
"name",
"city",
"phone",
"country",
"geo",
"description",
"reviews",
],
["id"] // metadataFields
);
const filteredDocs = await loaderWithSelectedFields.load();
console.log(filteredDocs);
```
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/searchapi.mdx | ---
hide_table_of_contents: true
---
# SearchApi Loader
This guide shows how to use SearchApi with LangChain to load web search results.
## Overview
[SearchApi](https://www.searchapi.io/) is a real-time API that grants developers access to results from a variety of search engines, including engines like [Google Search](https://www.searchapi.io/docs/google),
[Google News](https://www.searchapi.io/docs/google-news), [Google Scholar](https://www.searchapi.io/docs/google-scholar), [YouTube Transcripts](https://www.searchapi.io/docs/youtube-transcripts), or any other engine covered in their documentation.
This API enables developers and businesses to scrape and extract meaningful data directly from the result pages of all these search engines, providing valuable insights for different use-cases.
This guide shows how to load web search results using the `SearchApiLoader` in LangChain. The `SearchApiLoader` simplifies the process of loading and processing web search results from SearchApi.
## Setup
You'll need to sign up and retrieve your [SearchApi API key](https://www.searchapi.io/).
## Usage
Here's an example of how to use the `SearchApiLoader`:
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/searchapi.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core @langchain/openai
```
<CodeBlock language="typescript">{Example}</CodeBlock>
In this example, the `SearchApiLoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. A retrieval chain is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SearchApiLoader` can streamline the process of loading and processing web search results.
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/s3.mdx | ---
hide_table_of_contents: true
sidebar_class_name: node-only
---
# S3 File
:::tip Compatibility
Only available on Node.js.
:::
This covers how to load document objects from an s3 file object.
## Setup
To run this loader, you'll need to have Unstructured already set up and ready to use at an available URL endpoint. It can also be configured to run locally.
See the docs [here](/docs/integrations/document_loaders/file_loaders/unstructured) for information on how to do that.
You'll also need to install the official AWS SDK:
```bash npm2yarn
npm install @langchain/community @langchain/core @aws-sdk/client-s3
```
## Usage
Once Unstructured is configured, you can use the S3 loader to load files and then convert them into a Document.
You can optionally provide an `s3Config` parameter to specify your bucket region, access key, and secret access key. If these are not provided, you will need to have them in your environment (e.g., by running `aws configure`).
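For reference, here is a hedged sketch of what such an `s3Config` object can look like. The field names follow the AWS SDK v3 S3 client configuration, and all values are placeholders:

```javascript
// Illustrative s3Config shape, mirroring @aws-sdk/client-s3's S3ClientConfig.
const s3Config = {
  region: "us-east-1",
  credentials: {
    accessKeyId: "YOUR_ACCESS_KEY_ID", // placeholder
    secretAccessKey: "YOUR_SECRET_ACCESS_KEY", // placeholder
  },
};

console.log(s3Config.region); // us-east-1
```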
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/s3.ts";
<CodeBlock language="typescript">{Example}</CodeBlock>
|
0 | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders | lc_public_repos/langchainjs/docs/core_docs/docs/integrations/document_loaders/web_loaders/serpapi.mdx | ---
hide_table_of_contents: true
---
# SerpAPI Loader
This guide shows how to use SerpAPI with LangChain to load web search results.
## Overview
[SerpAPI](https://serpapi.com/) is a real-time API that provides access to search results from various search engines. It is commonly used for tasks like competitor analysis and rank tracking. It empowers businesses to scrape, extract, and make sense of data from all search engines' result pages.
This guide shows how to load web search results using the `SerpAPILoader` in LangChain. The `SerpAPILoader` simplifies the process of loading and processing web search results from SerpAPI.
## Setup
You'll need to sign up and retrieve your [SerpAPI API key](https://serpapi.com/dashboard).
## Usage
Here's an example of how to use the `SerpAPILoader`:
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/document_loaders/serpapi.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
<IntegrationInstallTooltip></IntegrationInstallTooltip>
```bash npm2yarn
npm install @langchain/community @langchain/core @langchain/openai
```
<CodeBlock language="typescript">{Example}</CodeBlock>
In this example, the `SerpAPILoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. A retrieval chain is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SerpAPILoader` can streamline the process of loading and processing web search results.
|