lc_public_repos/langchainjs/docs/core_docs/docs/security.md
# Security

LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.

## Best Practices

When building such applications developers should remember to follow good security practices:

- [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.
- **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
- [**Defense in Depth**](<https://en.wikipedia.org/wiki/Defense_in_depth_(computing)>): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.

Risks of not doing so include, but are not limited to:

- Data corruption or loss.
- Unauthorized access to confidential information.
- Compromised performance or availability of critical resources.

Example scenarios with mitigation strategies:

- A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
- A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
- A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.

If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.

## Reporting a Vulnerability

Please report security vulnerabilities by email to security@langchain.dev. This will ensure the issue is promptly triaged and acted upon as needed.
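The directory-scoping mitigation described above can be sketched as a simple path guard that an agent's file tools call before every read or write. This is a minimal illustration, not a LangChain API; `SANDBOX_ROOT` and `isPathAllowed` are hypothetical names:

```typescript
import * as path from "node:path";

// Hypothetical sandbox root for the agent's file operations.
const SANDBOX_ROOT = path.resolve("/srv/agent-workspace");

function isPathAllowed(requested: string, root: string = SANDBOX_ROOT): boolean {
  // path.resolve collapses ".." segments and keeps absolute inputs absolute,
  // so escapes like "../../etc/passwd" or "/etc/passwd" resolve outside root.
  const resolved = path.resolve(root, requested);
  return resolved === root || resolved.startsWith(root + path.sep);
}
```

Such a check is only one layer: running the agent in a container (or issuing read-only credentials) adds an independent second layer, in line with the defense-in-depth advice above.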
lc_public_repos/langchainjs/docs/core_docs/docs/packages.mdx
# 📕 Package Versioning

As of now, LangChain has an ad hoc release process: releases are cut with high frequency by a maintainer and published to [npm](https://www.npmjs.com/).

The different packages are versioned slightly differently.

## `@langchain/core`

`@langchain/core` is currently on version `0.1.x`.

As `@langchain/core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception for this is anything marked with the `beta` decorator (you can see this in the API reference and will see warnings when using such functionality). The reason for beta features is that given the rate of change of the field, being able to move quickly is still a priority.

Minor version increases will occur for:

- Breaking changes for any public interfaces NOT marked as `beta`.

Patch version increases will occur for:

- Bug fixes
- New features
- Any changes to private interfaces
- Any changes to `beta` features

## `langchain`

`langchain` is currently on version `0.1.x`.

Minor version increases will occur for:

- Breaking changes for any public interfaces NOT marked as `beta`.

Patch version increases will occur for:

- Bug fixes
- New features
- Any changes to private interfaces
- Any changes to `beta` features

We are working on the `langchain` v0.2 release, which will have some breaking changes to legacy Chains and Agents. Additionally, we will remove `@langchain/community` as a dependency and stop re-exporting integrations that have been moved to `@langchain/community`.

## `@langchain/community`

`@langchain/community` is currently on version `0.0.x`.

All changes will be accompanied by a patch version increase.

## Partner Packages

Partner packages are versioned independently.
lc_public_repos/langchainjs/docs/core_docs/docs/community.mdx
# Community navigator

Hi! Thanks for being here. We're lucky to have a community of so many passionate developers building with LangChain–we have so much to teach and learn from each other. Community members contribute code, host meetups, write blog posts, amplify each other's work, become each other's customers and collaborators, and so much more.

Whether you're new to LangChain, looking to go deeper, or just want to get more exposure to the world of building with LLMs, this page can point you in the right direction.

- **🦜 Contribute to LangChain**
- **🌍 Meetups, Events, and Hackathons**
- **📣 Help Us Amplify Your Work**
- **💬 Stay in the loop**

# 🦜 Contribute to LangChain

LangChain is the product of over 5,000+ contributions by 1,500+ contributors, and there is **still** so much to do together. Here are some ways to get involved:

- **[Open a pull request](https://github.com/langchain-ai/langchainjs/issues):** we'd appreciate all forms of contributions–new features, infrastructure improvements, better documentation, bug fixes, etc. If you have an improvement or an idea, we'd love to work on it with you.
- **[Read our contributor guidelines:](https://github.com/langchain-ai/langchainjs/blob/main/CONTRIBUTING.md)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions.
- **Become an expert:** our experts help the community by answering product questions in Discord. If that's a role you'd like to play, we'd be so grateful! (And we have some special experts-only goodies/perks we can tell you more about.) Send us an email to introduce yourself at hello@langchain.dev and we'll take it from there!
- **Integrate with LangChain:** if your product integrates with LangChain–or aspires to–we want to help make sure the experience is as smooth as possible for you and end users. Send us an email at hello@langchain.dev and tell us what you're working on.
- **Become an Integration Maintainer:** Partner with our team to ensure your integration stays up-to-date and talk directly with users (and answer their inquiries) in our Discord. Introduce yourself at hello@langchain.dev if you'd like to explore this role.

# 🌍 Meetups, Events, and Hackathons

One of our favorite things about working in AI is how much enthusiasm there is for building together. We want to help make that as easy and impactful for you as possible!

- **Find a meetup, hackathon, or webinar:** you can find the one for you on our [global events calendar](https://mirror-feeling-d80.notion.site/0bc81da76a184297b86ca8fc782ee9a3?v=0d80342540df465396546976a50cfb3f).
- **Submit an event to our calendar:** email us at events@langchain.dev with a link to your event page! We can also help you spread the word with our local communities.
- **Host a meetup:** If you want to bring a group of builders together, we want to help! We can publicize your event on our event calendar/Twitter, share with our local communities in Discord, send swag, or potentially hook you up with a sponsor. Email us at events@langchain.dev to tell us about your event!
- **Become a meetup sponsor:** we often hear from groups of builders that want to get together, but are blocked or limited on some dimension (space to host, budget for snacks, prizes to distribute, etc.). If you'd like to help, send us an email at events@langchain.dev and we can share more about how it works!
- **Speak at an event:** meetup hosts are always looking for great speakers, presenters, and panelists. If you'd like to do that at an event, send us an email to hello@langchain.dev with more information about yourself, what you want to talk about, and what city you're based in and we'll try to match you with an upcoming event!
- **Tell us about your LLM community:** If you host or participate in a community that would welcome support from LangChain and/or our team, send us an email at hello@langchain.dev and let us know how we can help.

# 📣 Help Us Amplify Your Work

If you're working on something you're proud of, and think the LangChain community would benefit from knowing about it, we want to help you show it off.

- **Post about your work and mention us:** we love hanging out on Twitter to see what people in the space are talking about and working on. If you tag [@langchainai](https://twitter.com/LangChainAI), we'll almost certainly see it and can show you some love.
- **Publish something on our blog:** if you're writing about your experience building with LangChain, we'd love to post (or crosspost) it on our blog! E-mail hello@langchain.dev with a draft of your post, or even an idea for something you want to write about.
- **Get your product onto our [integrations hub](https://integrations.langchain.com/):** Many developers take advantage of our seamless integrations with other products, and come to our integrations hub to find out who those are. If you want to get your product up there, tell us about it (and how it works with LangChain) at hello@langchain.dev.

# ☀️ Stay in the loop

Here's where our team hangs out, talks shop, spotlights cool work, and shares what we're up to. We'd love to see you there too.

- **[Twitter](https://twitter.com/LangChainAI):** we post about what we're working on and what cool things we're seeing in the space. If you tag @langchainai in your post, we'll almost certainly see it, and can show you some love!
- **[GitHub](https://github.com/langchain-ai/langchainjs):** open pull requests, contribute to a discussion, and/or contribute code.
- **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice-a-month email roundup of the coolest things going on in our orbit.
lc_public_repos/langchainjs/docs/core_docs/docs/people.mdx
---
hide_table_of_contents: true
---

import People from "@theme/People";

# People

There are some incredible humans from all over the world who have been instrumental in helping the LangChain community flourish 🌐!

This page highlights a few of those folks who have dedicated their time to the open-source repo in the form of direct contributions and reviews.

## Top reviewers

As LangChain has grown, the amount of surface area that maintainers cover has grown as well.

Thank you to the following folks who have gone above and beyond in reviewing incoming PRs 🙏!

<People type="top_reviewers"></People>

## Top recent contributors

The list below contains contributors who have had the most PRs merged in the last three months, weighted (imperfectly) by impact.

Thank you all so much for your time and efforts in making LangChain better ❤️!

<People type="top_recent_contributors" count="20"></People>

## Core maintainers

Hello there 👋! We're LangChain's core maintainers. If you've spent time in the community, you've probably crossed paths with at least one of us already.

<People type="maintainers"></People>

## Top all-time contributors

And finally, this is an all-time list of all-stars who have made significant contributions to the framework 🌟:

<People type="top_contributors"></People>

We're so thankful for your support!

And one more thank you to [@tiangolo](https://github.com/tiangolo) for inspiration via FastAPI's [excellent people page](https://fastapi.tiangolo.com/fastapi-people).
lc_public_repos/langchainjs/docs/core_docs/docs/introduction.mdx
---
sidebar_position: 0
---

# Introduction

**LangChain** is a framework for developing applications powered by large language models (LLMs).

LangChain simplifies every stage of the LLM application lifecycle:

- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts/lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/platforms/). Use [LangGraph.js](/docs/concepts/architecture#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).

import ThemedImage from "@theme/ThemedImage";
import useBaseUrl from "@docusaurus/useBaseUrl";

<ThemedImage
  alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
  sources={{
    light: useBaseUrl("/svg/langchain_stack_062024.svg"),
    dark: useBaseUrl("/svg/langchain_stack_062024_dark.svg"),
  }}
  title="LangChain Framework Overview"
  style={{ width: "100%" }}
/>

Concretely, the framework consists of the following open-source libraries:

- **`@langchain/core`**: Base abstractions and LangChain Expression Language.
- **`@langchain/community`**: Third party integrations.
  - Partner packages (e.g. **`@langchain/openai`**, **`@langchain/anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`@langchain/core`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[LangGraph.js](https://langchain-ai.github.io/langgraphjs/)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[LangSmith](https://docs.smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications.

:::note

These docs focus on the JavaScript LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library.

:::

## [Tutorials](/docs/tutorials)

If you're looking to build something specific or are more of a hands-on learner, check out our [tutorials](/docs/tutorials). This is the best place to get started:

- [Build a Simple LLM Application](/docs/tutorials/llm_chain)
- [Build a Chatbot](/docs/tutorials/chatbot)
- [Build an Agent](https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/)
- [LangGraph.js quickstart](https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/)

Explore the full list of LangChain tutorials [here](/docs/tutorials), and check out other [LangGraph tutorials here](https://langchain-ai.github.io/langgraphjs/tutorials/).

## [How-To Guides](/docs/how_to/)

[Here](/docs/how_to/) you'll find short answers to "How do I….?" types of questions. These how-to guides don't cover topics in depth - you'll find that material in the [Tutorials](/docs/tutorials) and the [API Reference](https://api.js.langchain.com). However, these guides will help you quickly accomplish common tasks.

Check out [LangGraph-specific how-tos here](https://langchain-ai.github.io/langgraphjs/how-tos/).

## [Conceptual Guide](/docs/concepts)

Introductions to all the key parts of LangChain you'll need to know! [Here](/docs/concepts) you'll find high level explanations of all LangChain concepts.

For a deeper dive into LangGraph concepts, check out [this page](https://langchain-ai.github.io/langgraph/concepts/).

## [API reference](https://api.js.langchain.com)

Head to the reference section for full documentation of all classes and methods in the LangChain JavaScript packages.

## Ecosystem

### [🦜🛠️ LangSmith](https://docs.smith.langchain.com)

Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.

### [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraphjs/)

Build stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.

## Additional resources

### [Security](/docs/security)

Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.

### [Integrations](/docs/integrations/platforms/)

LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/platforms/).

### [Contributing](/docs/contributing)

Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
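The "building blocks" composition mentioned above means small components piped together into chains. As a rough plain-TypeScript illustration of that shape — not the actual `Runnable` interface from `@langchain/core`, which also handles async execution, batching, and streaming; all names here are illustrative:

```typescript
// A step maps an input to an output; a chain is just steps composed in order.
type Step<I, O> = (input: I) => O;

function pipe<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
  return (input) => second(first(input));
}

// A "prompt template" step and a stand-in "model" step, chained together.
const formatPrompt: Step<{ topic: string }, string> = ({ topic }) =>
  `Tell me a joke about ${topic}`;
const fakeModel: Step<string, string> = (prompt) => `LLM response to: ${prompt}`;

const chain = pipe(formatPrompt, fakeModel);
console.log(chain({ topic: "bears" }));
```

In the real library the same shape appears as `prompt.pipe(model)`, with each component implementing a shared interface so chains can be invoked, streamed, and traced uniformly.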
lc_public_repos/langchainjs/docs/core_docs/docs/additional_resources/tutorials.mdx
# External guides

Below are links to external tutorials and courses on LangChain.js. For other written guides on common use cases for LangChain.js, check out the [tutorials](/docs/tutorials/) and [how to](/docs/how_to/) sections.

---

## Deeplearning.ai

We've partnered with [Deeplearning.ai](https://deeplearning.ai) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng) on a LangChain.js short course. It covers LCEL and other building blocks you can combine to build more complex chains, as well as fundamentals around loading data for retrieval augmented generation (RAG). Try it for free below:

- [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js)

## Scrimba interactive guides

[Scrimba](https://scrimba.com) is a code-learning platform that allows you to interactively edit and run code while watching a video walkthrough. We've partnered with Scrimba on course materials (called "scrims") that teach the fundamentals of building with LangChain.js - check them out below, and check back for more as they become available!

### Learn LangChain.js

- [Learn LangChain.js on Scrimba](https://scrimba.com/learn/langchain)

A full end-to-end course that walks through how to build a chatbot that can answer questions about a provided document. A great introduction to LangChain and a great first project for learning how to use LangChain Expression Language primitives to perform retrieval!

### LangChain Expression Language (LCEL)

- [The basics (PromptTemplate + LLM)](https://v2.scrimba.com/s05iemh)
- [Adding an output parser](https://scrimba.com/scrim/co6ae44248eacc1abd87ae3dc)
- [Attaching function calls to a model](https://scrimba.com/scrim/cof5449f5bc972f8c90be6a82)
- [Composing multiple chains](https://scrimba.com/scrim/co14344c29595bfb29c41f12a)
- [Retrieval chains](https://scrimba.com/scrim/co0e040d09941b4000244db46)
- [Conversational retrieval chains ("Chat with Docs")](https://scrimba.com/scrim/co3ed4a9eb4c6c6d0361a507c)

### Deeper dives

- [Setting up a new `PromptTemplate`](https://scrimba.com/scrim/cbGwRwuV)
- [Setting up `ChatOpenAI` parameters](https://scrimba.com/scrim/cEgbBBUw)
- [Attaching stop sequences](https://scrimba.com/scrim/co9704e389428fe2193eb955c)

## Neo4j GraphAcademy

[Neo4j](https://neo4j.com) has put together a hands-on, practical course that shows how to build a movie-recommending chatbot in Next.js. It covers retrieval-augmented generation (RAG), tracking history, and more. Check it out below:

- [Build a Neo4j-backed Chatbot with TypeScript](https://graphacademy.neo4j.com/courses/llm-chatbot-typescript/?ref=langchainjs)

## LangChain.js x AI SDK

How to use LangChain.js with AI SDK and React Server Components.

- [Streaming agentic data to the client](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/README.md)
- [Streaming tool responses to the client](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/tools/README.md)

---
lc_public_repos/langchainjs/docs/core_docs/docs/versions/release_policy.mdx
---
sidebar_position: 2
sidebar_label: Release Policy
---

# LangChain releases

The LangChain ecosystem is composed of different component packages (e.g., `@langchain/core`, `langchain`, `@langchain/community`, `@langchain/langgraph`, partner packages, etc.).

## Versioning

### `langchain` and `@langchain/core`

`langchain` and `@langchain/core` follow [semantic versioning](https://semver.org/) in the format of 0.**Y**.**Z**. The packages are under rapid development, and so are currently versioned with a major version of 0.

Minor version increases will occur for:

- Breaking changes for any public interfaces NOT marked as `beta`.

Patch version increases will occur for:

- Bug fixes
- New features
- Any changes to private interfaces
- Any changes to `beta` features

When upgrading between minor versions, users should review the list of breaking changes and deprecations.

From time to time, we will version packages as **release candidates**. These are versions that are intended to be released as stable versions, but we want to get feedback from the community before doing so. Release candidates will be versioned as 0.**Y**.**Z**-rc**.N**. For example, `0.2.0-rc.1`. If no issues are found, the release candidate will be released as a stable version with the same version number. If issues are found, we will release a new release candidate with an incremented `N` value (e.g., `0.2.0-rc.2`).

### Other packages in the langchain ecosystem

Other packages in the ecosystem (including user packages) can follow a different versioning scheme, but are generally expected to pin to specific minor versions of `langchain` and `@langchain/core`.

## Release cadence

We expect to space out **minor** releases (e.g., from 0.2.0 to 0.3.0) of `langchain` and `@langchain/core` by at least 2-3 months, as such releases may contain breaking changes.

Patch versions are released frequently as they contain bug fixes and new features.

## API stability

The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `@langchain/core` will continue to evolve to better serve the needs of our users.

Even though both `langchain` and `@langchain/core` are currently in a pre-1.0 state, we are committed to maintaining API stability in these packages.

- Breaking changes to the public API will result in a minor version bump (the second digit)
- Any bug fixes or new features will result in a patch version bump (the third digit)

We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed.

### Stability of other packages

The stability of other packages in the LangChain ecosystem may vary:

- `@langchain/community` is a community maintained package that contains 3rd party integrations. While we do our best to review and test changes in `@langchain/community`, `@langchain/community` is expected to experience more breaking changes than `langchain` and `@langchain/core` as it contains many community contributions.
- Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable.

### What is "API stability"?

API stability means:

- All the public APIs (everything in this documentation) will not be moved or renamed without providing backwards-compatible aliases.
- If new features are added to these APIs – which is quite possible – they will not break or change the meaning of existing methods. In other words, "stable" does not (necessarily) mean "complete."
- If, for some reason, an API declared stable must be removed or replaced, it will be declared deprecated but will remain in the API for at least two minor releases. Warnings will be issued when the deprecated method is called.

### APIs marked as internal

Certain APIs are explicitly marked as "internal" in a couple of ways:

- Some documentation refers to internals and mentions them as such. If the documentation says that something is internal, it may change.
- Functions, methods, and other objects prefixed by a leading underscore (**`_`**). If any method starts with a single **`_`**, it's an internal API.
  - **Exception:** Certain methods are prefixed with `_`, but do not contain an implementation. These methods are _meant_ to be overridden by sub-classes that provide the implementation. Such methods are generally part of the **Public API** of LangChain.

## Deprecation policy

We will generally avoid deprecating features until a better alternative is available.

When a feature is deprecated, it will continue to work in the current and next minor version of `langchain` and `@langchain/core`. After that, the feature will be removed.

Since we're expecting to space out minor releases by at least 2-3 months, this means that a feature can be removed within 2-6 months of being deprecated.

In some situations, we may allow the feature to remain in the code base for longer periods of time, if it's not causing issues in the packages, to reduce the burden on users.
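The expectation that downstream packages pin to specific minor versions of `langchain` and `@langchain/core` can be expressed with tilde ranges in `package.json`. The version numbers below are illustrative; `~0.2.0` accepts patch releases such as `0.2.5` but not `0.3.0`:

```json
{
  "dependencies": {
    "@langchain/core": "~0.2.0",
    "langchain": "~0.2.0"
  }
}
```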
lc_public_repos/langchainjs/docs/core_docs/docs/versions/migrating_memory/conversation_buffer_window_memory.ipynb
```typescript
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

const messages = [
  new SystemMessage("you're a good assistant, you always respond with a joke."),
  new HumanMessage("i wonder why it's called langchain"),
  new AIMessage(
    'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!'
  ),
  new HumanMessage("and who is harrison chasing anyways"),
  new AIMessage(
    "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"
  ),
  new HumanMessage("why is 42 always the answer?"),
  new AIMessage(
    "Because it's the only number that's constantly right, even when it doesn't add up!"
  ),
  new HumanMessage("What did the cow say?"),
];
```

```typescript
import { trimMessages } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

const selectedMessages = await trimMessages(messages, {
  // Please see API reference for trimMessages for other ways to specify a token counter.
  tokenCounter: new ChatOpenAI({ model: "gpt-4o" }),
  maxTokens: 80, // <-- token limit
  // The startOn is specified
  // to make sure we do not generate a sequence where
  // a ToolMessage that contains the result of a tool invocation
  // appears before the AIMessage that requested a tool invocation
  // as this will cause some chat models to raise an error.
  startOn: "human",
  strategy: "last",
  includeSystem: true, // <-- Keep the system message
});

for (const msg of selectedMessages) {
  console.log(msg);
}
```

```typescript
import { v4 as uuidv4 } from "uuid";
import { ChatOpenAI } from "@langchain/openai";
import {
  StateGraph,
  MessagesAnnotation,
  END,
  START,
  MemorySaver,
} from "@langchain/langgraph";
import { trimMessages } from "@langchain/core/messages";

// Define a chat model
const model = new ChatOpenAI({ model: "gpt-4o" });

// Define the function that calls the model
const callModel = async (
  state: typeof MessagesAnnotation.State
): Promise<Partial<typeof MessagesAnnotation.State>> => {
  // highlight-start
  const selectedMessages = await trimMessages(state.messages, {
    tokenCounter: (messages) => messages.length, // Simple message count instead of token count
    maxTokens: 5, // Allow up to 5 messages
    strategy: "last",
    startOn: "human",
    includeSystem: true,
    allowPartial: false,
  });
  // highlight-end

  const response = await model.invoke(selectedMessages);

  // With LangGraph, we're able to return a single message, and LangGraph will concatenate
  // it to the existing list
  return { messages: [response] };
};

// Define a new graph with a single model node
const workflow = new StateGraph(MessagesAnnotation)
  .addNode("model", callModel)
  .addEdge(START, "model")
  .addEdge("model", END);

const app = workflow.compile({
  // Adding memory is straightforward in LangGraph!
  // Just pass a checkpointer to the compile method.
  checkpointer: new MemorySaver(),
});

// The thread id is a unique key that identifies this particular conversation
// ---
// NOTE: this must be `thread_id` and not `threadId` as the LangGraph internals expect `thread_id`
// ---
const thread_id = uuidv4();
const config = { configurable: { thread_id }, streamMode: "values" as const };

const inputMessage = {
  role: "user",
  content: "hi! I'm bob",
};

for await (const event of await app.stream({ messages: [inputMessage] }, config)) {
  const lastMessage = event.messages[event.messages.length - 1];
  console.log(lastMessage.content);
}

// Here, let's confirm that the AI remembers our name!
const followUpMessage = {
  role: "user",
  content: "what was my name?",
};

// ---
// NOTE: You must pass the same thread id to continue the conversation
// we do that here by passing the same `config` object to the `.stream` call.
// ---
for await (const event of await app.stream({ messages: [followUpMessage] }, config)) {
  const lastMessage = event.messages[event.messages.length - 1];
  console.log(lastMessage.content);
}
```

```typescript
import { z } from "zod";
import { v4 as uuidv4 } from "uuid";
import { BaseMessage, trimMessages } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { MemorySaver } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const getUserAge = tool(
  (name: string): string => {
    // This is a placeholder for the actual implementation
    if (name.toLowerCase().includes("bob")) {
      return "42 years old";
    }
    return "41 years old";
  },
  {
    name: "get_user_age",
    description: "Use this tool to find the user's age.",
    schema: z.string().describe("the name of the user"),
  }
);

const memory = new MemorySaver();
const model2 = new ChatOpenAI({ model: "gpt-4o" });

// highlight-start
const stateModifier = async (messages: BaseMessage[]): Promise<BaseMessage[]> => {
  // We're using the message processor defined above.
  return trimMessages(messages, {
    tokenCounter: (msgs) => msgs.length, // <-- .length will simply count the number of messages rather than tokens
    maxTokens: 5, // <-- allow up to 5 messages.
    strategy: "last",
    // The startOn is specified
    // to make sure we do not generate a sequence where
    // a ToolMessage that contains the result of a tool invocation
    // appears before the AIMessage that requested a tool invocation
    // as this will cause some chat models to raise an error.
    startOn: "human",
    includeSystem: true, // <-- Keep the system message
    allowPartial: false,
  });
};
// highlight-end

const app2 = createReactAgent({
  llm: model2,
  tools: [getUserAge],
  checkpointSaver: memory,
  // highlight-next-line
  messageModifier: stateModifier,
});

// The thread id is a unique key that identifies
// this particular conversation.
// We'll just generate a random uuid here.
const threadId2 = uuidv4();
const config2 = { configurable: { thread_id: threadId2 }, streamMode: "values" as const };

// Tell the AI that our name is Bob, and ask it to use a tool to confirm
// that it's capable of working like an agent.
const inputMessage2 = {
  role: "user",
  content: "hi! I'm bob. What is my age?",
};

for await (const event of await app2.stream({ messages: [inputMessage2] }, config2)) {
  const lastMessage = event.messages[event.messages.length - 1];
  console.log(lastMessage.content);
}

// Confirm that the chat bot has access to previous conversation
// and can respond to the user saying that the user's name is Bob.
const followUpMessage2 = {
  role: "user",
  content: "do you remember my name?",
};

for await (const event of await app2.stream({ messages: [followUpMessage2] }, config2)) {
  const lastMessage = event.messages[event.messages.length - 1];
  console.log(lastMessage.content);
}
```

```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
  BaseMessage,
  trimMessages,
} from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const model3 = new ChatOpenAI({ model: "gpt-4o" });

const whatDidTheCowSay = tool(
  (): string => {
    return "foo";
  },
  {
    name: "what_did_the_cow_say",
    description: "Check to see what the cow said.",
    schema: z.object({}),
  }
);

// highlight-start
const messageProcessor = trimMessages({
  tokenCounter: (msgs) => msgs.length, // <-- .length will simply count the number of messages rather than tokens
  maxTokens: 5, // <-- allow up to 5 messages.
  strategy: "last",
  // The startOn is specified
  // to make sure we do not generate a sequence where
  // a ToolMessage that contains the result of a tool invocation
  // appears before the AIMessage that requested a tool invocation
  // as this will cause some chat models to raise an error.
  startOn: "human",
  includeSystem: true, // <-- Keep the system message
  allowPartial: false,
});
// highlight-end

// Note that we bind tools to the model first!
const modelWithTools = model3.bindTools([whatDidTheCowSay]);

// highlight-next-line
const modelWithPreprocessor = messageProcessor.pipe(modelWithTools);

const fullHistory = [
  new SystemMessage("you're a good assistant, you always respond with a joke."),
  new HumanMessage("i wonder why it's called langchain"),
  new AIMessage(
    'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!'
  ),
  new HumanMessage("and who is harrison chasing anyways"),
  new AIMessage(
    "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"
  ),
  new HumanMessage("why is 42 always the answer?"),
  new AIMessage(
    "Because it's the only number that's constantly right, even when it doesn't add up!"
  ),
  new HumanMessage("What did the cow say?"),
];

// We pass it explicitly to the modelWithPreprocessor for illustrative purposes.
// If you're using `RunnableWithMessageHistory` the history will be automatically
// read from the source that you configure.
const result = await modelWithPreprocessor.invoke(fullHistory);
console.log(result);
```
lc_public_repos/langchainjs/docs/core_docs/docs/versions/migrating_memory/index.mdx
--- sidebar_position: 1 --- # How to migrate to LangGraph memory As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of LangGraph persistence to incorporate `memory` into their LangChain application. - Users that rely on `RunnableWithMessageHistory` or `BaseChatMessageHistory` do **not** need to make any changes, but are encouraged to consider using LangGraph for more complex use cases. - Users that rely on deprecated memory abstractions from LangChain 0.0.x should follow this guide to upgrade to the new LangGraph persistence feature in LangChain 0.3.x. ## Why use LangGraph for memory? The main advantages of persistence in LangGraph are: - Built-in support for multiple users and conversations, which is a typical requirement for real-world conversational AI applications. - Ability to save and resume complex conversations at any point. This helps with: - Error recovery - Allowing human intervention in AI workflows - Exploring different conversation paths ("time travel") - Full compatibility with both traditional [language models](/docs/concepts/text_llms) and modern [chat models](/docs/concepts/chat_models). Early memory implementations in LangChain weren't designed for newer chat model APIs, causing issues with features like tool-calling. LangGraph memory can persist any custom state. - Highly customizable, allowing you to fully control how memory works and use different storage backends. ## Evolution of memory in LangChain The concept of memory has evolved significantly in LangChain since its initial release. ### LangChain 0.0.x memory Broadly speaking, LangChain 0.0.x memory was used to handle three main use cases: | Use Case | Example | | ------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------- | | Managing conversation history | Keep only the last `n` turns of the conversation between the user and the AI. 
| | Extraction of structured information | Extract structured information from the conversation history, such as a list of facts learned about the user. | | Composite memory implementations | Combine multiple memory sources, e.g., a list of known facts about the user along with facts learned during a given conversation. | While the LangChain 0.0.x memory abstractions were useful, they were limited in their capabilities and not well suited for real-world conversational AI applications. These memory abstractions lacked built-in support for multi-user, multi-conversation scenarios, which are essential for practical conversational AI systems. Most of these implementations have been officially deprecated in LangChain 0.3.x in favor of LangGraph persistence. ### RunnableWithMessageHistory and BaseChatMessageHistory :::note Please see [How to use BaseChatMessageHistory with LangGraph](./chat_history), if you would like to use `BaseChatMessageHistory` (with or without `RunnableWithMessageHistory`) in LangGraph. ::: As of LangChain v0.1, we started recommending that users rely primarily on [BaseChatMessageHistory](https://api.js.langchain.com/classes/_langchain_core.chat_history.BaseChatMessageHistory.html). `BaseChatMessageHistory` serves as a simple persistence for storing and retrieving messages in a conversation. At that time, the only option for orchestrating LangChain chains was via [LCEL](/docs/how_to/#langchain-expression-language-lcel). To incorporate memory with `LCEL`, users had to use the [RunnableWithMessageHistory](https://api.js.langchain.com/classes/_langchain_core.runnables.RunnableWithMessageHistory.html) interface. While sufficient for basic chat applications, many users found the API unintuitive and challenging to use. 
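The pattern behind `BaseChatMessageHistory` and the `getMessageHistory` lookup used by `RunnableWithMessageHistory` is worth seeing in isolation. The sketch below is a hypothetical, dependency-free re-implementation of the idea — a per-session, append-only message store keyed by session id — not the actual LangChain classes:

```typescript
// A minimal, dependency-free sketch of the idea behind BaseChatMessageHistory:
// each session id maps to its own append-only list of messages.
type StoredMessage = { role: "human" | "ai" | "system"; content: string };

class InMemoryHistory {
  private messages: StoredMessage[] = [];

  addMessage(message: StoredMessage): void {
    this.messages.push(message);
  }

  getMessages(): StoredMessage[] {
    return [...this.messages];
  }
}

// RunnableWithMessageHistory-style lookup: one history object per session id.
const histories: Record<string, InMemoryHistory> = {};

function getMessageHistory(sessionId: string): InMemoryHistory {
  if (!histories[sessionId]) {
    histories[sessionId] = new InMemoryHistory();
  }
  return histories[sessionId];
}

getMessageHistory("session-1").addMessage({ role: "human", content: "hi! I'm bob" });
getMessageHistory("session-2").addMessage({ role: "human", content: "hello" });

console.log(getMessageHistory("session-1").getMessages().length); // 1 — separate per session
```

Note that this store has no notion of trimming, summarization, or resumable state — which is exactly the gap LangGraph persistence fills.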
As of LangChain v0.3, we recommend that **new** code takes advantage of LangGraph for both orchestration and persistence: - Orchestration: In LangGraph, users define [graphs](https://langchain-ai.github.io/langgraphjs/concepts/low_level/) that specify the flow of the application. This allows users to keep using `LCEL` within individual nodes when `LCEL` is needed, while making it easy to define complex orchestration logic that is more readable and maintainable. - Persistence: Users can rely on LangGraph's persistence to store and retrieve data. LangGraph persistence is extremely flexible and can support a much wider range of use cases than the `RunnableWithMessageHistory` interface. :::important If you have been using `RunnableWithMessageHistory` or `BaseChatMessageHistory`, you do not need to make any changes. We do not plan on deprecating either functionality in the near future. This functionality is sufficient for simple chat applications and any code that uses `RunnableWithMessageHistory` will continue to work as expected. ::: ## Migrations :::info Prerequisites These guides assume some familiarity with the following concepts: - [LangGraph](https://langchain-ai.github.io/langgraphjs/) - [v0.0.x Memory](https://js.langchain.com/v0.1/docs/modules/memory/) - [How to add persistence ("memory") to your graph](https://langchain-ai.github.io/langgraphjs/how-tos/persistence/) ::: ### 1. Managing conversation history The goal of managing conversation history is to store and retrieve the history in a way that is optimal for a chat model to use. Often this involves trimming and / or summarizing the conversation history to keep the most relevant parts of the conversation while having the conversation fit inside the context window of the chat model. 
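The trimming half of this can be sketched in plain TypeScript. This is an illustrative re-implementation of the strategy (keep the system message, keep the most recent messages, and make sure the kept window starts on a human turn so a tool result never precedes the request that produced it) — not the library's `trimMessages` code:

```typescript
type Msg = { role: "system" | "human" | "ai" | "tool"; content: string };

// Keep the system message plus at most `maxMessages` of the most recent
// messages, dropping leading messages until the window starts on a human turn.
function trimHistory(messages: Msg[], maxMessages: number): Msg[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  let window = rest.slice(-maxMessages);
  while (window.length > 0 && window[0].role !== "human") {
    window = window.slice(1);
  }
  return [...system, ...window];
}

const history: Msg[] = [
  { role: "system", content: "you are a helpful assistant" },
  { role: "human", content: "hi! I'm bob" },
  { role: "ai", content: "hi bob!" },
  { role: "human", content: "what's my name?" },
  { role: "ai", content: "bob" },
];

console.log(trimHistory(history, 3).map((m) => m.role)); // ["system", "human", "ai"]
```

The real `trimMessages` helper additionally supports token-based counting, partial messages, and other strategies.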
Memory classes that fall into this category include: | Memory Type | How to Migrate | Description | | --------------------------------- | :----------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `ConversationTokenBufferMemory` | [Link to Migration Guide](conversation_buffer_window_memory) | Keeps only the most recent messages in the conversation under the constraint that the total number of tokens in the conversation does not exceed a certain limit. | | `ConversationSummaryMemory` | [Link to Migration Guide](conversation_summary_memory) | Continually summarizes the conversation history. The summary is updated after each conversation turn. The abstraction returns the summary of the conversation history. | | `ConversationSummaryBufferMemory` | [Link to Migration Guide](conversation_summary_memory) | Provides a running summary of the conversation together with the most recent messages in the conversation under the constraint that the total number of tokens in the conversation does not exceed a certain limit. | ### 2. Extraction of structured information from the conversation history Memory classes that fall into this category include: | Memory Type | Description | | ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `BaseEntityStore` | An abstract interface that resembles a key-value store. It was used for storing structured information learned during the conversation. The information had to be represented as an object of key-value pairs. 
And specific backend implementations of abstractions:

| Memory Type           | Description                                                                                              |
| --------------------- | -------------------------------------------------------------------------------------------------------- |
| `InMemoryEntityStore` | An implementation of `BaseEntityStore` that stores the information in the literal computer memory (RAM). |

These abstractions have received little development since their initial release. To be useful, they typically require significant specialization for a particular application, so they are not as widely used as the conversation history management abstractions.

For this reason, there are no migration guides for these abstractions. If you're struggling to migrate an application that relies on them, please open an issue on the LangChain GitHub repository, explain your use case, and we'll try to provide more guidance on how to migrate.

The general strategy for extracting structured information from the conversation history is to use a chat model with tool calling capabilities to do the extraction. The extracted information can then be saved into an appropriate data structure (e.g., an object), and information from it can be retrieved and added into the prompt as needed.

### 3. Implementations that provide composite logic on top of one or more memory implementations

Memory classes that fall into this category include:

| Memory Type      | Description                                                                                                                    |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `CombinedMemory` | This abstraction accepted a list of `BaseMemory` and fetched relevant memory information from each of them based on the input. |

These implementations did not seem to be used widely or provide significant value.
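Both the entity-store and extraction patterns above are straightforward to rebuild in custom code. As a hypothetical sketch (not LangChain's implementation), assume a tool-calling chat model has already returned a batch of extracted key-value facts; merging them into a plain object and rendering them into a prompt looks like this:

```typescript
// Hypothetical shape of facts extracted by a tool-calling chat model.
type ExtractedFacts = Record<string, string>;

// A minimal stand-in for the old InMemoryEntityStore: a key-value object.
const entityStore: ExtractedFacts = {};

function mergeFacts(store: ExtractedFacts, incoming: ExtractedFacts): void {
  // Later extractions overwrite earlier ones for the same key.
  Object.assign(store, incoming);
}

function renderFactsForPrompt(store: ExtractedFacts): string {
  return Object.entries(store)
    .map(([key, value]) => `- ${key}: ${value}`)
    .join("\n");
}

// Facts from two successive conversation turns.
mergeFacts(entityStore, { name: "Bob" });
mergeFacts(entityStore, { age: "42", name: "Bob Smith" });

console.log(renderFactsForPrompt(entityStore));
// - name: Bob Smith
// - age: 42
```

The rendered string can then be injected into the system prompt on each turn, which is the same role the old entity memory classes played.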
Users should be able to re-implement these without too much difficulty in custom code.

## Related Resources

Explore persistence with LangGraph:

- [LangGraph quickstart tutorial](https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/)
- [How to add persistence ("memory") to your graph](https://langchain-ai.github.io/langgraphjs/how-tos/persistence/)
- [How to manage conversation history](https://langchain-ai.github.io/langgraphjs/how-tos/manage-conversation-history/)
- [How to add summary of the conversation history](https://langchain-ai.github.io/langgraphjs/how-tos/add-summary-conversation-history/)

Add persistence with simple LCEL (favor LangGraph for more complex use cases):

- [How to add message history](/docs/how_to/message_history/)

Working with message history:

- [How to trim messages](/docs/how_to/trim_messages)
- [How to filter messages](/docs/how_to/filter_messages/)
- [How to merge message runs](/docs/how_to/merge_message_runs/)
lc_public_repos/langchainjs/docs/core_docs/docs/versions/migrating_memory/chat_history.ipynb
import { InMemoryChatMessageHistory } from "@langchain/core/chat_history"; const chatsBySessionId: Record<string, InMemoryChatMessageHistory> = {} const getChatHistory = (sessionId: string) => { let chatHistory: InMemoryChatMessageHistory | undefined = chatsBySessionId[sessionId] if (!chatHistory) { chatHistory = new InMemoryChatMessageHistory() chatsBySessionId[sessionId] = chatHistory } return chatHistory }import { v4 as uuidv4 } from "uuid"; import { ChatAnthropic } from "@langchain/anthropic"; import { StateGraph, MessagesAnnotation, END, START } from "@langchain/langgraph"; import { HumanMessage } from "@langchain/core/messages"; import { RunnableConfig } from "@langchain/core/runnables"; // Define a chat model const model = new ChatAnthropic({ modelName: "claude-3-haiku-20240307" }); // Define the function that calls the model const callModel = async ( state: typeof MessagesAnnotation.State, config: RunnableConfig ): Promise<Partial<typeof MessagesAnnotation.State>> => { if (!config.configurable?.sessionId) { throw new Error( "Make sure that the config includes the following information: {'configurable': {'sessionId': 'some_value'}}" ); } const chatHistory = getChatHistory(config.configurable.sessionId as string); let messages = [...(await chatHistory.getMessages()), ...state.messages]; if (state.messages.length === 1) { // First message, ensure it's in the chat history await chatHistory.addMessage(state.messages[0]); } const aiMessage = await model.invoke(messages); // Update the chat history await chatHistory.addMessage(aiMessage); return { messages: [aiMessage] }; }; // Define a new graph const workflow = new StateGraph(MessagesAnnotation) .addNode("model", callModel) .addEdge(START, "model") .addEdge("model", END); const app = workflow.compile(); // Create a unique session ID to identify the conversation const sessionId = uuidv4(); const config = { configurable: { sessionId }, streamMode: "values" as const }; const inputMessage = new HumanMessage("hi! 
I'm bob"); for await (const event of await app.stream({ messages: [inputMessage] }, config)) { const lastMessage = event.messages[event.messages.length - 1]; console.log(lastMessage.content); } // Here, let's confirm that the AI remembers our name! const followUpMessage = new HumanMessage("what was my name?"); for await (const event of await app.stream({ messages: [followUpMessage] }, config)) { const lastMessage = event.messages[event.messages.length - 1]; console.log(lastMessage.content); }
lc_public_repos/langchainjs/docs/core_docs/docs/versions/v0_2/index.mdx
---
sidebar_position: 1
sidebar_label: v0.2
---

# LangChain v0.2

LangChain v0.2 was released in May 2024. This release includes a number of breaking changes and deprecations. This document contains a guide on upgrading to 0.2.x, as well as a list of deprecations and breaking changes.

:::note Reference

- [Migrating to Astream Events v2](/docs/versions/v0_2/migrating_astream_events)

:::

## Migration

This documentation will help you upgrade your code to LangChain `0.2.x`. To prepare for migration, we first recommend you take the following steps:

1. Install the 0.2.x versions of `@langchain/core` and `langchain`, and upgrade to recent versions of other packages that you may be using (e.g. `@langchain/langgraph`, `@langchain/community`, `@langchain/openai`, etc.)
2. Verify that your code runs properly with the new packages (e.g., unit tests pass).
3. Install a recent version of `@langchain/scripts`, and use the tool to replace old imports used by your code with the new imports. (See instructions below.)
4. Manually resolve any remaining deprecation warnings.
5. Re-run unit tests.

### Upgrade to new imports

We created a tool to help migrate your code. This tool is still in **beta** and may not cover all cases, but we hope that it will help you migrate your code more quickly.

The migration script has the following limitations:

1. It's limited to helping users move from old imports to new imports. It doesn't help address other deprecations.
2. It can't handle imports that involve `as`.
3. New imports are always placed in global scope, even if the old import that was replaced was located inside some local scope (e.g., a function body).
4. It will likely miss some deprecated imports.
Here is an example of the import changes that the migration script can help apply automatically:

| From Package           | To Package                 | Deprecated Import                                                          | New Import                                                                       |
| ---------------------- | -------------------------- | -------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
| `langchain`            | `@langchain/community`     | `import { UpstashVectorStore } from "langchain/vectorstores/upstash"`      | `import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash"` |
| `@langchain/community` | `@langchain/openai`        | `import { ChatOpenAI } from "@langchain/community/chat_models/openai"`     | `import { ChatOpenAI } from "@langchain/openai"`                                 |
| `langchain`            | `@langchain/core`          | `import { Document } from "langchain/schema/document"`                     | `import { Document } from "@langchain/core/documents"`                           |
| `langchain`            | `@langchain/textsplitters` | `import { RecursiveCharacterTextSplitter } from "langchain/text_splitter"` | `import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters"`      |

#### Deprecation timeline

We have two main types of deprecations:

1. Code that was moved from `langchain` into another package (e.g., `@langchain/community`). If you try to import it from `langchain`, the import will fail since the entrypoint has been removed.
2. Code that has better alternatives available and will eventually be removed, so there's only a single way to do things (e.g., the `predictMessages` method on chat models has been deprecated in favor of `invoke`).

Many of these were marked for removal in 0.2. We have bumped the removal to 0.3.

#### Installation

:::note
The 0.2.x migration script is only available in version `0.0.14-rc.1` or later.
:::

```bash npm2yarn
npm i @langchain/scripts@0.0.14-rc.1
```

#### Usage

Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`).
For example, say your code still uses `import ChatOpenAI from "@langchain/community/chat_models/openai";`: Invoking the migration script will replace this import with `import ChatOpenAI from "@langchain/openai";`. ```typescript import { updateEntrypointsFrom0_x_xTo0_2_x } from "@langchain/scripts/migrations"; const pathToMyProject = "..."; // This path is used in the following glob pattern: `${projectPath}/**/*.{ts,tsx,js,jsx}`. updateEntrypointsFrom0_x_xTo0_2_x({ projectPath: pathToMyProject, shouldLog: true, }); ``` #### Other options ```typescript updateEntrypointsFrom0_x_xTo0_2_x({ projectPath: pathToMyProject, tsConfigPath: "tsconfig.json", // Path to the tsConfig file. This will be used to load all the project files into the script. testRun: true, // If true, the script will not save any changes, but will log the changes that would be made. files: ["..."], // A list of .ts file paths to check. If this is provided, the script will only check these files. }); ```
lc_public_repos/langchainjs/docs/core_docs/docs/versions/v0_2/migrating_astream_events.mdx
---
sidebar_position: 2
sidebar_label: Migrating to streamEvents v2
---

# Migrating to streamEvents v2

:::danger

This migration guide is a work in progress and is not complete.

:::

We've added a `v2` of the [`streamEvents`](/docs/how_to/streaming#using-stream-events) API with the release of `0.2.0`. You can see this [PR](https://github.com/langchain-ai/langchainjs/pull/5539/) for more details.

The `v2` version is a re-write of the `v1` version, and should be more efficient, with more consistent output for the events. The `v1` version of the API will be deprecated in favor of the `v2` version and will be removed in `0.4.0`.

Below is a list of changes between the `v1` and `v2` versions of the API.

### output for `on_chat_model_end`

In `v1`, the outputs associated with `on_chat_model_end` changed depending on whether the chat model was run as a root level runnable or as part of a chain.

As a root level runnable the output was:

```ts
{
  data: {
    output: AIMessageChunk({ content: "hello world!", id: "some id" })
  }
}
```

As part of a chain the output was:

```ts
{
  data: {
    output: {
      generations: [
        [
          {
            generationInfo: undefined,
            message: AIMessageChunk({ content: "hello world!", id: "some id" }),
            text: "hello world!",
          }
        ]
      ],
    }
  },
}
```

As of `v2`, the output will always be the simpler representation:

```ts
{
  data: {
    output: AIMessageChunk({ content: "hello world!", id: "some id" })
  }
}
```

:::note
Non chat models (i.e., regular LLMs) will be consistently associated with the more verbose format for now.
:::

### output for `on_retriever_end`

`on_retriever_end` output will always return a list of `Documents`.

This was the output in `v1`:

```ts
{
  data: {
    output: {
      documents: [
        Document(...),
        Document(...),
        ...
      ]
    }
  }
}
```

And here is the new output for `v2`:

```ts
{
  data: {
    output: [
      Document(...),
      Document(...),
      ...
    ]
  }
}
```

### Removed `on_retriever_stream`

The `on_retriever_stream` event was an artifact of the implementation and has been removed.
Full information associated with the event is already available in the `on_retriever_end` event. Please use `on_retriever_end` instead. ### Removed `on_tool_stream` The `on_tool_stream` event was an artifact of the implementation and has been removed. Full information associated with the event is already available in the `on_tool_end` event. Please use `on_tool_end` instead. ### Propagating Names Names of runnables have been updated to be more consistent. If you're filtering by event names, check if you need to update your filters.
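If your code filters the event stream by event name, centralizing that filtering makes this kind of rename a one-line update. The helper below is an illustrative sketch over a generic event stream — not a LangChain API — using only the rough shape of a stream event:

```typescript
// The rough shape of a streamEvents event (illustrative, not the full type).
type StreamEvent = { event: string; name: string; data: unknown };

// Filter an event stream down to the event types you care about, so a
// v1 -> v2 change (e.g. dropping on_tool_stream) only needs one edit.
async function* filterEvents(
  events: AsyncIterable<StreamEvent>,
  allowed: Set<string>
): AsyncGenerator<StreamEvent> {
  for await (const ev of events) {
    if (allowed.has(ev.event)) {
      yield ev;
    }
  }
}

// A stand-in event stream for demonstration purposes.
async function* demoStream(): AsyncGenerator<StreamEvent> {
  yield { event: "on_chat_model_stream", name: "model", data: "hel" };
  yield { event: "on_tool_stream", name: "my_tool", data: "partial" }; // gone in v2
  yield { event: "on_tool_end", name: "my_tool", data: "done" };
}

async function main(): Promise<string[]> {
  const kept: string[] = [];
  // In v2, listen for on_tool_end instead of on_tool_stream.
  for await (const ev of filterEvents(demoStream(), new Set(["on_tool_end"]))) {
    kept.push(ev.event);
  }
  return kept;
}

main().then((kept) => console.log(kept)); // logs ["on_tool_end"]
```

Swapping the allow-list from `on_tool_stream` to `on_tool_end` is then the entire migration for that consumer.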
lc_public_repos/langchainjs/docs/core_docs/docs/versions/v0_3/index.mdx
--- sidebar_position: 0 sidebar_label: v0.3 --- # LangChain v0.3 _Last updated: 09.14.24_ ## What's changed - All LangChain packages now have `@langchain/core` as a peer dependency instead of a direct dependency to help avoid type errors around [core version conflicts](/docs/how_to/installation/#installing-integration-packages). - You will now need to explicitly install `@langchain/core` rather than relying on an internally resolved version from other packages. - Callbacks are now backgrounded and non-blocking by default rather than blocking. - This means that if you are using e.g. LangSmith for tracing in a serverless environment, you will need to [await the callbacks to ensure they finish before your function ends](/docs/how_to/callbacks_serverless). - Removed deprecated document loader and self-query entrypoints from `langchain` in favor of entrypoints in [`@langchain/community`](https://www.npmjs.com/package/@langchain/community) and integration packages. - Removed deprecated Google PaLM entrypoints from community in favor of entrypoints in [`@langchain/google-vertexai`](https://www.npmjs.com/package/@langchain/google-vertexai) and [`@langchain/google-genai`](https://www.npmjs.com/package/@langchain/google-genai). - Deprecated using objects with a `"type"` as a [`BaseMessageLike`](https://v03.api.js.langchain.com/types/_langchain_core.messages.BaseMessageLike.html) in favor of the more OpenAI-like [`MessageWithRole`](https://v03.api.js.langchain.com/types/_langchain_core.messages.MessageFieldWithRole.html) ## What’s new The following features have been added during the development of 0.2.x: - Simplified tool definition and usage. Read more [here](https://blog.langchain.dev/improving-core-tool-interfaces-and-docs-in-langchain/). - Added a [generalized chat model constructor](https://js.langchain.com/docs/how_to/chat_models_universal_init/). - Added the ability to [dispatch custom events](https://js.langchain.com/docs/how_to/callbacks_custom_events/). 
- Released LangGraph.js 0.2.0 and made it the [recommended way to create agents](https://js.langchain.com/docs/how_to/migrate_agent) with LangChain.js. - Revamped integration docs and API reference. Read more [here](https://blog.langchain.dev/langchain-integration-docs-revamped/). ## How to update your code If you're using `langchain` / `@langchain/community` / `@langchain/core` 0.0 or 0.1, we recommend that you first [upgrade to 0.2](https://js.langchain.com/v0.2/docs/versions/v0_2/). If you're using `@langchain/langgraph`, upgrade to `@langchain/langgraph>=0.2.3`. This will work with either 0.2 or 0.3 versions of all the base packages. Here is a complete list of all packages that have been released and what we recommend upgrading your version constraints to in your `package.json`. Any package that now supports `@langchain/core` 0.3 had a minor version bump. ### Base packages | Package | Latest | Recommended `package.json` constraint | | ------------------------ | ------ | ------------------------------------- | | langchain | 0.3.0 | >=0.3.0 <0.4.0 | | @langchain/community | 0.3.0 | >=0.3.0 <0.4.0 | | @langchain/textsplitters | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/core | 0.3.0 | >=0.3.0 <0.4.0 | ### Downstream packages | Package | Latest | Recommended `package.json` constraint | | -------------------- | ------ | ------------------------------------- | | @langchain/langgraph | 0.2.3 | >=0.2.3 <0.3 | ### Integration packages | Package | Latest | Recommended `package.json` constraint | | --------------------------------- | ------ | ------------------------------------- | | @langchain/anthropic | 0.3.0 | >=0.3.0 <0.4.0 | | @langchain/aws | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/azure-cosmosdb | 0.2.0 | >=0.2.0 <0.3.0 | | @langchain/azure-dynamic-sessions | 0.2.0 | >=0.2.0 <0.3.0 | | @langchain/baidu-qianfan | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/cloudflare | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/cohere | 0.3.0 | >=0.3.0 <0.4.0 | | @langchain/exa | 0.1.0 | >=0.1.0 
<0.2.0 | | @langchain/google-genai | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/google-vertexai | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/google-vertexai-web | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/groq | 0.1.1 | >=0.1.1 <0.2.0 | | @langchain/mistralai | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/mixedbread-ai | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/mongodb | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/nomic | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/ollama | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/openai | 0.3.0 | >=0.3.0 <0.4.0 | | @langchain/pinecone | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/qdrant | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/redis | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/weaviate | 0.1.0 | >=0.1.0 <0.2.0 | | @langchain/yandex | 0.1.0 | >=0.1.0 <0.2.0 | Once you've updated to recent versions of the packages, you will need to explicitly install `@langchain/core` if you haven't already: ```bash npm2yarn npm install @langchain/core ``` We also suggest checking your lockfile or running the [appropriate package manager command](/docs/how_to/installation/#installing-integration-packages) to make sure that your package manager only has one version of `@langchain/core` installed. If you are currently running your code in a serverless environment (e.g., a Cloudflare Worker, Edge function, or AWS Lambda function) and you are using LangSmith tracing or other callbacks, you will need to [await callbacks to ensure they finish before your function ends](/docs/how_to/callbacks_serverless). 
Here's a quick example: ```ts import { RunnableLambda } from "@langchain/core/runnables"; import { awaitAllCallbacks } from "@langchain/core/callbacks/promises"; const runnable = RunnableLambda.from(() => "hello!"); const customHandler = { handleChainEnd: async () => { await new Promise((resolve) => setTimeout(resolve, 2000)); console.log("Call finished"); }, }; const startTime = new Date().getTime(); await runnable.invoke({ number: "2" }, { callbacks: [customHandler] }); console.log(`Elapsed time: ${new Date().getTime() - startTime}ms`); await awaitAllCallbacks(); console.log(`Final elapsed time: ${new Date().getTime() - startTime}ms`); ``` ``` Elapsed time: 1ms Call finished Final elapsed time: 2164ms ``` You can also set `LANGCHAIN_CALLBACKS_BACKGROUND` to `"false"` to make all callbacks blocking: ```ts process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "false"; const startTimeBlocking = new Date().getTime(); await runnable.invoke({ number: "2" }, { callbacks: [customHandler] }); console.log( `Initial elapsed time: ${new Date().getTime() - startTimeBlocking}ms` ); ``` ``` Call finished Initial elapsed time: 2002ms ```
lc_public_repos/langchainjs/docs/core_docs/docs/_static/css/custom.css
pre { white-space: break-spaces; } @media (min-width: 1200px) { .container, .container-lg, .container-md, .container-sm, .container-xl { max-width: 2560px !important; } } #my-component-root *, #headlessui-portal-root * { z-index: 10000; } .content-container p { margin: revert; }
lc_public_repos/langchainjs/docs/core_docs/docs/mdx_components/integration_install_tooltip.mdx
:::tip See [this section for general instructions on installing integration packages](/docs/how_to/installation#installing-integration-packages). :::
lc_public_repos/langchainjs/docs/core_docs/docs/mdx_components/unified_model_params_tooltip.mdx
:::tip We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys. :::
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/migrate_agent.ipynb
// process.env.OPENAI_API_KEY = "..."; // Optional, add tracing in LangSmith // process.env.LANGCHAIN_API_KEY = "ls..."; // process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true"; // process.env.LANGCHAIN_TRACING_V2 = "true"; // process.env.LANGCHAIN_PROJECT = "How to migrate: LangGraphJS"; // Reduce tracing latency if you are not in a serverless environment // process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";import { tool } from "@langchain/core/tools"; import { z } from "zod"; import { ChatOpenAI } from "@langchain/openai"; const llm = new ChatOpenAI({ model: "gpt-4o-mini", }); const magicTool = tool(async ({ input }: { input: number }) => { return `${input + 2}`; }, { name: "magic_function", description: "Applies a magic function to an input.", schema: z.object({ input: z.number(), }), }); const tools = [magicTool]; const query = "what is the value of magic_function(3)?";import { ChatPromptTemplate, } from "@langchain/core/prompts"; import { createToolCallingAgent } from "langchain/agents"; import { AgentExecutor } from "langchain/agents"; const prompt = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], ["placeholder", "{chat_history}"], ["human", "{input}"], ["placeholder", "{agent_scratchpad}"], ]); const agent = createToolCallingAgent({ llm, tools, prompt }); const agentExecutor = new AgentExecutor({ agent, tools, }); await agentExecutor.invoke({ input: query });import { createReactAgent } from "@langchain/langgraph/prebuilt"; const app = createReactAgent({ llm, tools, }); let agentOutput = await app.invoke({ messages: [ { role: "user", content: query }, ], }); console.log(agentOutput);const messageHistory = agentOutput.messages; const newQuery = "Pardon?"; agentOutput = await app.invoke({ messages: [ ...messageHistory, { role: "user", content: newQuery } ], }); const spanishPrompt = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant. 
Respond only in Spanish."], ["placeholder", "{chat_history}"], ["human", "{input}"], ["placeholder", "{agent_scratchpad}"], ]); const spanishAgent = createToolCallingAgent({ llm, tools, prompt: spanishPrompt, }); const spanishAgentExecutor = new AgentExecutor({ agent: spanishAgent, tools, }); await spanishAgentExecutor.invoke({ input: query }); const systemMessage = "You are a helpful assistant. Respond only in Spanish."; // This could also be a SystemMessage object // const systemMessage = new SystemMessage("You are a helpful assistant. Respond only in Spanish."); const appWithSystemMessage = createReactAgent({ llm, tools, messageModifier: systemMessage, }); agentOutput = await appWithSystemMessage.invoke({ messages: [ { role: "user", content: query } ], }); agentOutput.messages[agentOutput.messages.length - 1];import { BaseMessage, SystemMessage, HumanMessage } from "@langchain/core/messages"; const modifyMessages = (messages: BaseMessage[]) => { return [ new SystemMessage("You are a helpful assistant. Respond only in Spanish."), ...messages, new HumanMessage("Also say 'Pandemonium!' after the answer."), ]; }; const appWithMessagesModifier = createReactAgent({ llm, tools, messageModifier: modifyMessages, }); agentOutput = await appWithMessagesModifier.invoke({ messages: [{ role: "user", content: query }], }); console.log({ input: query, output: agentOutput.messages[agentOutput.messages.length - 1].content, });import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory"; import { RunnableWithMessageHistory } from "@langchain/core/runnables"; const memory = new ChatMessageHistory(); const agentExecutorWithMemory = new RunnableWithMessageHistory({ runnable: agentExecutor, getMessageHistory: () => memory, inputMessagesKey: "input", historyMessagesKey: "chat_history", }); const config = { configurable: { sessionId: "test-session" } }; agentOutput = await agentExecutorWithMemory.invoke( { input: "Hi, I'm polly! 
What's the output of magic_function of 3?" }, config, ); console.log(agentOutput.output); agentOutput = await agentExecutorWithMemory.invoke( { input: "Remember my name?" }, config, ); console.log("---"); console.log(agentOutput.output); console.log("---"); agentOutput = await agentExecutorWithMemory.invoke( { input: "what was that output again?" }, config, ); console.log(agentOutput.output);import { MemorySaver } from "@langchain/langgraph"; const checkpointer = new MemorySaver(); const appWithMemory = createReactAgent({ llm: llm, tools: tools, checkpointSaver: checkpointer }); const langGraphConfig = { configurable: { thread_id: "test-thread", }, }; agentOutput = await appWithMemory.invoke( { messages: [ { role: "user", content: "Hi, I'm polly! What's the output of magic_function of 3?", } ], }, langGraphConfig, ); console.log(agentOutput.messages[agentOutput.messages.length - 1].content); console.log("---"); agentOutput = await appWithMemory.invoke( { messages: [ { role: "user", content: "Remember my name?" } ] }, langGraphConfig, ); console.log(agentOutput.messages[agentOutput.messages.length - 1].content); console.log("---"); agentOutput = await appWithMemory.invoke( { messages: [ { role: "user", content: "what was that output again?" 
} ] }, langGraphConfig, ); console.log(agentOutput.messages[agentOutput.messages.length - 1].content);const langChainStream = await agentExecutor.stream({ input: query }); for await (const step of langChainStream) { console.log(step); }const langGraphStream = await app.stream( { messages: [{ role: "user", content: query }] }, { streamMode: "updates" }, ); for await (const step of langGraphStream) { console.log(step); }const agentExecutorWithIntermediateSteps = new AgentExecutor({ agent, tools, returnIntermediateSteps: true, }); const result = await agentExecutorWithIntermediateSteps.invoke({ input: query, }); console.log(result.intermediateSteps); agentOutput = await app.invoke({ messages: [ { role: "user", content: query }, ] }); console.log(agentOutput.messages);const badMagicTool = tool(async ({ input: _input }) => { return "Sorry, there was a temporary error. Please try again with the same input."; }, { name: "magic_function", description: "Applies a magic function to an input.", schema: z.object({ input: z.string(), }), }); const badTools = [badMagicTool]; const spanishAgentExecutorWithMaxIterations = new AgentExecutor({ agent: createToolCallingAgent({ llm, tools: badTools, prompt: spanishPrompt, }), tools: badTools, verbose: true, maxIterations: 2, }); await spanishAgentExecutorWithMaxIterations.invoke({ input: query });import { GraphRecursionError } from "@langchain/langgraph"; const RECURSION_LIMIT = 2 * 2 + 1; const appWithBadTools = createReactAgent({ llm, tools: badTools }); try { await appWithBadTools.invoke({ messages: [ { role: "user", content: query } ] }, { recursionLimit: RECURSION_LIMIT, }); } catch (e) { if (e instanceof GraphRecursionError) { console.log("Recursion limit reached."); } else { throw e; } }
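The `RECURSION_LIMIT = 2 * 2 + 1` expression above encodes a simple rule: each agent iteration costs two graph transitions (one model call plus one tool execution), and the final answer needs one more model call. A minimal sketch of that mapping (the helper name `maxIterationsToRecursionLimit` is ours, not part of LangGraph):

```typescript
// Convert an AgentExecutor-style maxIterations budget into a LangGraph
// recursionLimit: each iteration is one model call plus one tool call,
// plus one final model call to produce the answer.
function maxIterationsToRecursionLimit(maxIterations: number): number {
  return 2 * maxIterations + 1;
}

// For maxIterations: 2, as used in the AgentExecutor example above:
console.log(maxIterationsToRecursionLimit(2)); // 5
```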
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/trim_messages.ipynb
import { AIMessage, HumanMessage, SystemMessage, trimMessages } from "@langchain/core/messages"; import { ChatOpenAI } from "@langchain/openai"; const messages = [ new SystemMessage("you're a good assistant, you always respond with a joke."), new HumanMessage("i wonder why it's called langchain"), new AIMessage( 'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!' ), new HumanMessage("and who is harrison chasing anyways"), new AIMessage( "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!" ), new HumanMessage("what do you call a speechless parrot"), ]; const trimmed = await trimMessages( messages, { maxTokens: 45, strategy: "last", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }), } ); console.log(trimmed.map((x) => JSON.stringify({ role: x._getType(), content: x.content, }, null, 2)).join("\n\n"));await trimMessages( messages, { maxTokens: 45, strategy: "last", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }), includeSystem: true } );await trimMessages( messages, { maxTokens: 50, strategy: "last", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }), includeSystem: true, allowPartial: true } );await trimMessages( messages, { maxTokens: 60, strategy: "last", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }), includeSystem: true, startOn: "human" } );await trimMessages( messages, { maxTokens: 45, strategy: "first", tokenCounter: new ChatOpenAI({ modelName: "gpt-4" }), } );import { encodingForModel } from '@langchain/core/utils/tiktoken'; import { BaseMessage, HumanMessage, AIMessage, ToolMessage, SystemMessage, MessageContent, MessageContentText } from '@langchain/core/messages'; async function strTokenCounter(messageContent: MessageContent): Promise<number> { if (typeof messageContent === 'string') { return ( await encodingForModel("gpt-4") ).encode(messageContent).length; } else { if (messageContent.every((x) => x.type === "text" && x.text)) { return ( await 
encodingForModel("gpt-4") ).encode((messageContent as MessageContentText[]).map(({ text }) => text).join("")).length; } throw new Error(`Unsupported message content ${JSON.stringify(messageContent)}`); } } async function tiktokenCounter(messages: BaseMessage[]): Promise<number> { let numTokens = 3; // every reply is primed with <|start|>assistant<|message|> const tokensPerMessage = 3; const tokensPerName = 1; for (const msg of messages) { let role: string; if (msg instanceof HumanMessage) { role = 'user'; } else if (msg instanceof AIMessage) { role = 'assistant'; } else if (msg instanceof ToolMessage) { role = 'tool'; } else if (msg instanceof SystemMessage) { role = 'system'; } else { throw new Error(`Unsupported message type ${msg.constructor.name}`); } numTokens += tokensPerMessage + (await strTokenCounter(role)) + (await strTokenCounter(msg.content)); if (msg.name) { numTokens += tokensPerName + (await strTokenCounter(msg.name)); } } return numTokens; } await trimMessages(messages, { maxTokens: 45, strategy: 'last', tokenCounter: tiktokenCounter, });import { ChatOpenAI } from "@langchain/openai"; import { trimMessages } from "@langchain/core/messages"; const llm = new ChatOpenAI({ model: "gpt-4o" }) // Notice we don't pass in messages. 
This creates // a RunnableLambda that takes messages as input const trimmer = trimMessages({ maxTokens: 45, strategy: "last", tokenCounter: llm, includeSystem: true, }) const chain = trimmer.pipe(llm); await chain.invoke(messages)await trimmer.invoke(messages)import { InMemoryChatMessageHistory } from "@langchain/core/chat_history"; import { RunnableWithMessageHistory } from "@langchain/core/runnables"; import { HumanMessage, trimMessages } from "@langchain/core/messages"; import { ChatOpenAI } from "@langchain/openai"; const chatHistory = new InMemoryChatMessageHistory(messages.slice(0, -1)) const dummyGetSessionHistory = async (sessionId: string) => { if (sessionId !== "1") { throw new Error("Session not found"); } return chatHistory; } const llm = new ChatOpenAI({ model: "gpt-4o" }); const trimmer = trimMessages({ maxTokens: 45, strategy: "last", tokenCounter: llm, includeSystem: true, }); const chain = trimmer.pipe(llm); const chainWithHistory = new RunnableWithMessageHistory({ runnable: chain, getMessageHistory: dummyGetSessionHistory, }) await chainWithHistory.invoke( [new HumanMessage("what do you call a speechless parrot")], { configurable: { sessionId: "1"} }, )
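The `strategy: "last"` behavior above can be pictured as a budgeted walk backwards through the history: keep whole messages from the end while they fit, optionally always keeping the leading system message. A simplified illustration (the `Msg` type, `trimLast` helper, and word-based token counter are ours — a real setup would use a tokenizer or a chat model as the counter, as shown above):

```typescript
// Simplified model of trimMessages({ strategy: "last", includeSystem: true }).
type Msg = { role: string; content: string };

// Stand-in token counter: one "token" per whitespace-separated word.
const countTokens = (m: Msg) => m.content.split(/\s+/).length;

function trimLast(messages: Msg[], maxTokens: number, includeSystem = false): Msg[] {
  // Optionally reserve the leading system message and its token cost.
  const system =
    includeSystem && messages[0]?.role === "system" ? [messages[0]] : [];
  let budget = maxTokens - system.reduce((n, m) => n + countTokens(m), 0);

  // Walk backwards, keeping whole messages while they fit the budget.
  const kept: Msg[] = [];
  for (let i = messages.length - 1; i >= system.length; i--) {
    const cost = countTokens(messages[i]);
    if (cost > budget) break;
    kept.unshift(messages[i]);
    budget -= cost;
  }
  return [...system, ...kept];
}

const history: Msg[] = [
  { role: "system", content: "be brief" },
  { role: "human", content: "one two three" },
  { role: "ai", content: "four five" },
  { role: "human", content: "six" },
];

// Budget of 4 "tokens": the system message (2) plus the last human turn (1) fit.
console.log(trimLast(history, 4, true));
```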
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tool_calls_multimodal.ipynb
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const imageUrl =
  "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg";

const weatherTool = tool(
  async ({ weather }) => {
    console.log(weather);
    return weather;
  },
  {
    name: "describe_weather",
    description: "Describe the weather",
    schema: z.object({
      weather: z.enum(["sunny", "cloudy", "rainy"]),
    }),
  }
);
```

```typescript
import { HumanMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
}).bindTools([weatherTool]);

const message = new HumanMessage({
  content: [
    { type: "text", text: "describe the weather in this image" },
    { type: "image_url", image_url: { url: imageUrl } },
  ],
});

const response = await model.invoke([message]);
console.log(response.tool_calls);
```

```typescript
import * as fs from "node:fs/promises";
import { ChatAnthropic } from "@langchain/anthropic";
import { HumanMessage } from "@langchain/core/messages";

const imageData = await fs.readFile("../../data/sunny_day.jpeg");

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
}).bindTools([weatherTool]);

const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "describe the weather in this image",
    },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    },
  ],
});

const response = await model.invoke([message]);
console.log(response.tool_calls);
```

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import axios from "axios";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { HumanMessage } from "@langchain/core/messages";

const axiosRes = await axios.get(imageUrl, { responseType: "arraybuffer" });
const base64 = btoa(
  new Uint8Array(axiosRes.data).reduce(
    (data, byte) => data + String.fromCharCode(byte),
    ""
  )
);

const model = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro-latest",
}).bindTools([weatherTool]);

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "describe the weather in this image"],
  new MessagesPlaceholder("message"),
]);

const response = await prompt.pipe(model).invoke({
  message: new HumanMessage({
    content: [
      {
        type: "media",
        mimeType: "image/jpeg",
        data: base64,
      },
    ],
  }),
});
console.log(response.tool_calls);
```

```typescript
import { SystemMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";

const summaryTool = tool(
  (input) => {
    return input.summary;
  },
  {
    name: "summary_tool",
    description: "Log the summary of the content",
    schema: z.object({
      summary: z.string().describe("The summary of the content to log"),
    }),
  }
);

const audioUrl =
  "https://www.pacdv.com/sounds/people_sound_effects/applause-1.wav";

const axiosRes = await axios.get(audioUrl, { responseType: "arraybuffer" });
const base64 = btoa(
  new Uint8Array(axiosRes.data).reduce(
    (data, byte) => data + String.fromCharCode(byte),
    ""
  )
);

const model = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro-latest",
}).bindTools([summaryTool]);

const response = await model.invoke([
  new SystemMessage(
    "Summarize this content. always use the summary_tool in your response"
  ),
  new HumanMessage({
    content: [
      {
        type: "media",
        mimeType: "audio/wav",
        data: base64,
      },
    ],
  }),
]);
console.log(response.tool_calls);
```
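The examples above base64-encode raw bytes with a `btoa` over a byte-by-byte `reduce`. In Node.js, `Buffer` gives the same result more directly; a small check that the two approaches agree (the sample bytes here are arbitrary):

```javascript
// Two ways to base64-encode raw bytes: the btoa + reduce pattern used in the
// examples above, and Node's Buffer API.
const bytes = new Uint8Array([72, 101, 108, 108, 111]); // "Hello"

const viaBtoa = btoa(
  bytes.reduce((data, byte) => data + String.fromCharCode(byte), "")
);
const viaBuffer = Buffer.from(bytes).toString("base64");

console.log(viaBtoa, viaBuffer); // both "SGVsbG8="
```

The `Buffer` route avoids building an intermediate binary string, which matters for large payloads like images or audio.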
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/convert_runnable_to_tool.ipynb
import { RunnableLambda } from "@langchain/core/runnables"; import { z } from "zod"; const schema = z.object({ a: z.number(), b: z.array(z.number()), }); const runnable = RunnableLambda.from((input: z.infer<typeof schema>) => { return input.a * Math.max(...input.b); }); const asTool = runnable.asTool({ name: "My tool", description: "Explanation of when to use the tool.", schema, }); asTool.descriptionawait asTool.invoke({ a: 3, b: [1, 2] })const firstRunnable = RunnableLambda.from<string, string>((input) => { return input + "a"; }) const secondRunnable = RunnableLambda.from<string, string>((input) => { return input + "z"; }) const runnable = firstRunnable.pipe(secondRunnable) const asTool = runnable.asTool({ name: "append_letters", description: "Adds letters to a string.", schema: z.string(), }) asTool.description;await asTool.invoke("b")import { ChatOpenAI } from "@langchain/openai"; const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 }) import { Document } from "@langchain/core/documents" import { MemoryVectorStore } from "langchain/vectorstores/memory"; import { OpenAIEmbeddings } from "@langchain/openai"; const documents = [ new Document({ pageContent: "Dogs are great companions, known for their loyalty and friendliness.", }), new Document({ pageContent: "Cats are independent pets that often enjoy their own space.", }), ] const vectorstore = await MemoryVectorStore.fromDocuments( documents, new OpenAIEmbeddings(), ); const retriever = vectorstore.asRetriever({ k: 1, searchType: "similarity", });import { createReactAgent } from "@langchain/langgraph/prebuilt"; const tools = [ retriever.asTool({ name: "pet_info_retriever", description: "Get information about pets.", schema: z.string(), }) ]; const agent = createReactAgent({ llm: llm, tools }); const stream = await agent.stream({"messages": [["human", "What are dogs known for?"]]}); for await (const chunk of stream) { // Log output from the agent or tools node if (chunk.agent) { 
console.log("AGENT:", chunk.agent.messages[0]); } else if (chunk.tools) { console.log("TOOLS:", chunk.tools.messages[0]); } console.log("----"); }import { StringOutputParser } from "@langchain/core/output_parsers"; import { ChatPromptTemplate } from "@langchain/core/prompts"; import { RunnableSequence } from "@langchain/core/runnables"; const SYSTEM_TEMPLATE = ` You are an assistant for question-answering tasks. Use the below context to answer the question. If you don't know the answer, say you don't know. Use three sentences maximum and keep the answer concise. Answer in the style of {answer_style}. Context: {context}`; const prompt = ChatPromptTemplate.fromMessages([ ["system", SYSTEM_TEMPLATE], ["human", "{question}"], ]); const ragChain = RunnableSequence.from([ { context: (input, config) => retriever.invoke(input.question, config), question: (input) => input.question, answer_style: (input) => input.answer_style, }, prompt, llm, new StringOutputParser(), ]);const ragTool = ragChain.asTool({ name: "pet_expert", description: "Get information about pets.", schema: z.object({ context: z.string(), question: z.string(), answer_style: z.string(), }), }); const agent = createReactAgent({ llm: llm, tools: [ragTool] }); const stream = await agent.stream({ messages: [ ["human", "What would a pirate say dogs are known for?"] ] }); for await (const chunk of stream) { // Log output from the agent or tools node if (chunk.agent) { console.log("AGENT:", chunk.agent.messages[0]); } else if (chunk.tools) { console.log("TOOLS:", chunk.tools.messages[0]); } console.log("----"); }
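The `firstRunnable.pipe(secondRunnable)` composition shown above is, at its core, left-to-right function composition. A dependency-free stand-in (the `pipe` helper here is ours, not the LangChain Runnable API):

```typescript
// Left-to-right composition: the output of f feeds g, mirroring
// firstRunnable.pipe(secondRunnable) from the example above.
const pipe =
  <A, B, C>(f: (a: A) => B, g: (b: B) => C) =>
  (a: A): C =>
    g(f(a));

const appendLetters = pipe(
  (input: string) => input + "a",
  (input: string) => input + "z"
);

console.log(appendLetters("b")); // "baz"
```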
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/multimodal_prompts.ipynb
```typescript
import axios from "axios";

const imageUrl =
  "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg";

const axiosRes = await axios.get(imageUrl, { responseType: "arraybuffer" });
const base64 = btoa(
  new Uint8Array(axiosRes.data).reduce(
    (data, byte) => data + String.fromCharCode(byte),
    ""
  )
);
```

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });
```

```typescript
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "Describe the image provided"],
  [
    "user",
    [{ type: "image_url", image_url: "data:image/jpeg;base64,{base64}" }],
  ],
]);

const chain = prompt.pipe(model);

const response = await chain.invoke({ base64 });
console.log(response.content);
```

```typescript
const promptWithMultipleImages = ChatPromptTemplate.fromMessages([
  ["system", "compare the two pictures provided"],
  [
    "user",
    [
      {
        type: "image_url",
        image_url: "data:image/jpeg;base64,{imageData1}",
      },
      {
        type: "image_url",
        image_url: "data:image/jpeg;base64,{imageData2}",
      },
    ],
  ],
]);

const chainWithMultipleImages = promptWithMultipleImages.pipe(model);

const res = await chainWithMultipleImages.invoke({
  imageData1: base64,
  imageData2: base64,
});
console.log(res.content);
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/stream_tool_client.mdx
# How to stream structured output to the client

This guide will walk you through how we stream agent data to the client using [React Server Components](https://react.dev/reference/rsc/server-components) inside this directory. The code in this doc is taken from the `page.tsx` and `action.ts` files in this directory. To view the full, uninterrupted code, click [here for the actions file](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/tools/action.ts) and [here for the client file](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/tools/page.tsx).

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [LangChain Expression Language](/docs/concepts/lcel)
- [Chat models](/docs/concepts/chat_models)
- [Tool calling](/docs/concepts/tool_calling)

:::

## Setup

First, install the necessary LangChain & AI SDK packages:

```bash npm2yarn
npm install @langchain/openai @langchain/core ai zod zod-to-json-schema
```

Next, we'll create our server file. This will contain all the logic for making tool calls and sending the data back to the client.

Start by adding the necessary imports & the `"use server"` directive:

```typescript
"use server";

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStreamableValue } from "ai/rsc";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools";
```

After that, we'll define our tool schema. For this example we'll use a simple demo weather schema:

```typescript
const Weather = z
  .object({
    city: z.string().describe("City to search for weather"),
    state: z.string().describe("State abbreviation to search for weather"),
  })
  .describe("Weather search parameters");
```

Once our schema is defined, we can implement our `executeTool` function. This function takes a single `string` input and contains all the logic for our tool and for streaming data back to the client:

```typescript
export async function executeTool(
  input: string,
) {
  "use server";

  const stream = createStreamableValue();
```

The `createStreamableValue` function is important, as it is what we'll use to actually stream all the data back to the client.

For the main logic, we'll wrap it in an async function. Start by defining our prompt and chat model:

```typescript
  (async () => {
    const prompt = ChatPromptTemplate.fromMessages([
      [
        "system",
        `You are a helpful assistant. Use the tools provided to best assist the user.`,
      ],
      ["human", "{input}"],
    ]);

    const llm = new ChatOpenAI({
      model: "gpt-4o-2024-05-13",
      temperature: 0,
    });
```

After defining our chat model, we'll define our runnable chain using LCEL. We start by binding the `Weather` tool we defined earlier to the model:

```typescript
    const modelWithTools = llm.bind({
      tools: [
        {
          type: "function" as const,
          function: {
            name: "get_weather",
            description: Weather.description,
            parameters: zodToJsonSchema(Weather),
          },
        },
      ],
    });
```

Next, we'll use LCEL to pipe each component together, starting with the prompt, then the model with tools, and finally the output parser:

```typescript
    const chain = prompt.pipe(modelWithTools).pipe(
      new JsonOutputKeyToolsParser<z.infer<typeof Weather>>({
        keyName: "get_weather",
        zodSchema: Weather,
      })
    );
```

Finally, we'll call `.stream` on our chain and, similarly to the [streaming agent](/docs/how_to/stream_agent_client) example, iterate over the stream, stringify + parse the data, and update the stream value:

```typescript
    const streamResult = await chain.stream({
      input,
    });

    for await (const item of streamResult) {
      stream.update(JSON.parse(JSON.stringify(item, null, 2)));
    }

    stream.done();
  })();

  return { streamData: stream.value };
}
```
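The `stream.update(...)`/`stream.done()` pair used above is a producer/consumer pattern: the server pushes values as they arrive, and the client iterates them. A dependency-free sketch of that pattern using an async iterator (the `makeStream` helper is ours, not the `ai` package API — it only illustrates the shape of the interaction):

```typescript
// A toy version of the streamable-value pattern: the producer calls update()
// and finally done(); the consumer iterates values as they arrive.
function makeStream<T>() {
  const queue: T[] = [];
  let closed = false;
  let wake: (() => void) | null = null;

  return {
    update(value: T) {
      queue.push(value);
      wake?.(); // notify a waiting consumer
    },
    done() {
      closed = true;
      wake?.();
    },
    async *[Symbol.asyncIterator]() {
      while (true) {
        if (queue.length > 0) {
          yield queue.shift()!;
        } else if (closed) {
          return;
        } else {
          // Park until the producer pushes or closes.
          await new Promise<void>((resolve) => (wake = resolve));
          wake = null;
        }
      }
    },
  };
}

// Producer pushes, consumer collects — mirroring stream.update / readStreamableValue.
const s = makeStream<number>();
s.update(1);
s.update(2);
s.done();

const received: number[] = [];
for await (const v of s) received.push(v);
console.log(received); // [1, 2]
```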
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/callbacks_constructor.ipynb
```typescript
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

const handler = new ConsoleCallbackHandler();

const prompt = ChatPromptTemplate.fromTemplate(`What is 1 + {number}?`);
const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  callbacks: [handler],
});

const chain = prompt.pipe(model);

await chain.invoke({ number: "2" });
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/callbacks_attach.ipynb
```typescript
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

const handler = new ConsoleCallbackHandler();

const prompt = ChatPromptTemplate.fromTemplate(`What is 1 + {number}?`);
const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const chainWithCallbacks = prompt.pipe(model).withConfig({
  callbacks: [handler],
});

await chainWithCallbacks.invoke({ number: "2" });
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/stream_agent_client.mdx
# How to stream agent data to the client

This guide will walk you through how we stream agent data to the client using [React Server Components](https://react.dev/reference/rsc/server-components) inside this directory. The code in this doc is taken from the `page.tsx` and `action.ts` files in this directory. To view the full, uninterrupted code, click [here for the actions file](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/action.ts) and [here for the client file](https://github.com/langchain-ai/langchain-nextjs-template/blob/main/app/ai_sdk/agent/page.tsx).

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [LangChain Expression Language](/docs/concepts/lcel)
- [Chat models](/docs/concepts/chat_models)
- [Tool calling](/docs/concepts/tool_calling)
- [Agents](/docs/concepts/agents)

:::

## Setup

First, install the necessary LangChain & AI SDK packages:

```bash npm2yarn
npm install langchain @langchain/core @langchain/community ai
```

In this demo we'll be using the `TavilySearchResults` tool, which requires an API key. You can get one [here](https://app.tavily.com/), or you can swap it out for another tool of your choice, like [`WikipediaQueryRun`](/docs/integrations/tools/wikipedia), which doesn't require an API key.

If you choose to use `TavilySearchResults`, set your API key like so:

```bash
export TAVILY_API_KEY=your_api_key
```

## Get started

The first step is to create a new RSC file and add the imports we'll use for running our agent. In this demo, we'll name it `action.ts`:

```typescript action.ts
"use server";

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { createStreamableValue } from "ai/rsc";
```

Next, we'll define a `runAgent` function. This function takes a single `string` input and contains all the logic for our agent and for streaming data back to the client:

```typescript action.ts
export async function runAgent(input: string) {
  "use server";
}
```

Next, inside our function, we'll define our chat model of choice:

```typescript action.ts
  const llm = new ChatOpenAI({
    model: "gpt-4o-2024-05-13",
    temperature: 0,
  });
```

Next, we'll use the `createStreamableValue` helper function provided by the `ai` package to create a streamable value:

```typescript action.ts
  const stream = createStreamableValue();
```

This will be very important later on when we start streaming data back to the client.

Next, let's define the async function which contains the agent logic:

```typescript action.ts
  (async () => {
    const tools = [new TavilySearchResults({ maxResults: 1 })];

    const prompt = await pull<ChatPromptTemplate>(
      "hwchase17/openai-tools-agent",
    );

    const agent = createToolCallingAgent({
      llm,
      tools,
      prompt,
    });

    const agentExecutor = new AgentExecutor({
      agent,
      tools,
    });
```

:::tip

As of `langchain` version `0.2.8`, the `createToolCallingAgent` function now supports [OpenAI-formatted tools](https://api.js.langchain.com/interfaces/langchain_core.language_models_base.ToolDefinition.html).

:::

Here we're doing a few things: first, we define our list of tools (in this case just one) and pull our prompt from the LangChain prompt hub. After that, we pass our LLM, tools, and prompt to the `createToolCallingAgent` function, which constructs and returns a runnable agent. This is then passed into the `AgentExecutor` class, which handles the execution and streaming of our agent.

Finally, we'll call `.streamEvents` and pass our streamed data back to the `stream` variable we defined above:

```typescript action.ts
    const streamingEvents = agentExecutor.streamEvents(
      { input },
      { version: "v2" },
    );

    for await (const item of streamingEvents) {
      stream.update(JSON.parse(JSON.stringify(item, null, 2)));
    }

    stream.done();
  })();
```

As you can see above, we stringify and then parse each item before updating the stream. This works around a bug in the RSC streaming code; serializing the data this way avoids the issue.

Finally, at the bottom of the function, return the stream value:

```typescript action.ts
  return { streamData: stream.value };
```

Once we've implemented our server action, we can add a couple of lines of code in our client function to request and stream this data.

First, add the necessary imports:

```typescript page.tsx
"use client";

import { useState } from "react";
import { readStreamableValue } from "ai/rsc";
import { runAgent } from "./action";
```

Then, inside our `Page` function, calling the `runAgent` function is straightforward:

```typescript page.tsx
export default function Page() {
  const [input, setInput] = useState("");
  const [data, setData] = useState<StreamEvent[]>([]);

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();

    const { streamData } = await runAgent(input);
    for await (const item of readStreamableValue(streamData)) {
      setData((prev) => [...prev, item]);
    }
  }
}
```

That's it! You've successfully built an agent that streams data back to the client. You can now run your application and see the data streaming in real time.
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/graph_mapping.ipynb
import "neo4j-driver"; import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph"; const url = process.env.NEO4J_URI; const username = process.env.NEO4J_USER; const password = process.env.NEO4J_PASSWORD; const graph = await Neo4jGraph.initialize({ url, username, password }); // Import movie information const moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv' AS row MERGE (m:Movie {id:row.movieId}) SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating) FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m)) FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m)) FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))` await graph.query(moviesQuery);import { ChatPromptTemplate } from "@langchain/core/prompts"; import { ChatOpenAI } from "@langchain/openai"; import { z } from "zod"; const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 }) const entitySchema = z.object({ names: z.array(z.string()).describe("All the person or movies appearing in the text"), }).describe("Identifying information about entities."); const prompt = ChatPromptTemplate.fromMessages( [ [ "system", "You are extracting person and movies from the text." ], [ "human", "Use the given format to extract information from the following\ninput: {question}" ] ] ); const entityChain = prompt.pipe(llm.withStructuredOutput(entitySchema));const entities = await entityChain.invoke({ question: "Who played in Casino movie?" 
}) entitiesconst matchQuery = ` MATCH (p:Person|Movie) WHERE p.name CONTAINS $value OR p.title CONTAINS $value RETURN coalesce(p.name, p.title) AS result, labels(p)[0] AS type LIMIT 1` const matchToDatabase = async (values) => { let result = "" for (const entity of values.names) { const response = await graph.query(matchQuery, { value: entity }) if (response.length > 0) { result += `${entity} maps to ${response[0]["result"]} ${response[0]["type"]} in database\n` } } return result } await matchToDatabase(entities)import { StringOutputParser } from "@langchain/core/output_parsers"; import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables"; // Generate Cypher statement based on natural language input const cypherTemplate = `Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question: {schema} Entities in the question map to the following database values: {entities_list} Question: {question} Cypher query:` const cypherPrompt = ChatPromptTemplate.fromMessages( [ [ "system", "Given an input question, convert it to a Cypher query. No pre-amble.", ], ["human", cypherTemplate] ] ) const llmWithStop = llm.bind({ stop: ["\nCypherResult:"] }) const cypherResponse = RunnableSequence.from([ RunnablePassthrough.assign({ names: entityChain }), RunnablePassthrough.assign({ entities_list: async (x) => matchToDatabase(x.names), schema: async (_) => graph.getSchema(), }), cypherPrompt, llmWithStop, new StringOutputParser(), ])const cypher = await cypherResponse.invoke({"question": "Who played in Casino movie?"}) cypher
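The mapping string that `matchToDatabase` builds has a simple, testable shape. A dependency-free sketch of just the formatting step, with a stubbed lookup standing in for the live `graph.query` call (the `formatMatches` helper and the stub data are ours):

```typescript
// Formats entity → database matches the way matchToDatabase does above,
// but against an in-memory lookup instead of a Neo4j query.
type Row = { result: string; type: string };

function formatMatches(names: string[], lookup: (name: string) => Row[]): string {
  let result = "";
  for (const entity of names) {
    const response = lookup(entity);
    if (response.length > 0) {
      // Same template as the matchToDatabase example above.
      result += `${entity} maps to ${response[0].result} ${response[0].type} in database\n`;
    }
  }
  return result;
}

// Stubbed rows standing in for the Person|Movie CONTAINS query.
const stub = (name: string): Row[] =>
  name === "Casino" ? [{ result: "Casino", type: "Movie" }] : [];

console.log(formatMatches(["Casino", "Nobody"], stub));
// "Casino maps to Casino Movie in database\n" — unmatched names are skipped
```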
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/chatbots_memory.ipynb
// @lc-docs-hide-cell import { ChatOpenAI } from "@langchain/openai"; const llm = new ChatOpenAI({ model: "gpt-4o" })import { HumanMessage, AIMessage } from "@langchain/core/messages"; import { ChatPromptTemplate, MessagesPlaceholder, } from "@langchain/core/prompts"; const prompt = ChatPromptTemplate.fromMessages([ [ "system", "You are a helpful assistant. Answer all questions to the best of your ability.", ], new MessagesPlaceholder("messages"), ]); const chain = prompt.pipe(llm); await chain.invoke({ messages: [ new HumanMessage( "Translate this sentence from English to French: I love programming." ), new AIMessage("J'adore la programmation."), new HumanMessage("What did you just say?"), ], });import { START, END, MessagesAnnotation, StateGraph, MemorySaver } from "@langchain/langgraph"; // Define the function that calls the model const callModel = async (state: typeof MessagesAnnotation.State) => { const systemPrompt = "You are a helpful assistant. " + "Answer all questions to the best of your ability."; const messages = [{ role: "system", content: systemPrompt }, ...state.messages]; const response = await llm.invoke(messages); return { messages: response }; }; const workflow = new StateGraph(MessagesAnnotation) // Define the node and edge .addNode("model", callModel) .addEdge(START, "model") .addEdge("model", END); // Add simple in-memory checkpointer // highlight-start const memory = new MemorySaver(); const app = workflow.compile({ checkpointer: memory }); // highlight-endawait app.invoke( { messages: [ { role: "user", content: "Translate to French: I love programming." } ] }, { configurable: { thread_id: "1" } } );await app.invoke( { messages: [ { role: "user", content: "What did I just ask you?" } ] }, { configurable: { thread_id: "1" } } );const demoEphemeralChatHistory = [ { role: "user", content: "Hey there! I'm Nemo." }, { role: "assistant", content: "Hello!" }, { role: "user", content: "How are you today?" 
}, { role: "assistant", content: "Fine thanks!" }, ]; await app.invoke( { messages: [ ...demoEphemeralChatHistory, { role: "user", content: "What's my name?" } ] }, { configurable: { thread_id: "2" } } );import { START, END, MessagesAnnotation, StateGraph, MemorySaver } from "@langchain/langgraph"; import { trimMessages } from "@langchain/core/messages"; // Define trimmer // highlight-start // count each message as 1 "token" (tokenCounter: (msgs) => msgs.length) and keep only the last two messages const trimmer = trimMessages({ strategy: "last", maxTokens: 2, tokenCounter: (msgs) => msgs.length }); // highlight-end // Define the function that calls the model const callModel2 = async (state: typeof MessagesAnnotation.State) => { // highlight-start const trimmedMessages = await trimmer.invoke(state.messages); const systemPrompt = "You are a helpful assistant. " + "Answer all questions to the best of your ability."; const messages = [{ role: "system", content: systemPrompt }, ...trimmedMessages]; // highlight-end const response = await llm.invoke(messages); return { messages: response }; }; const workflow2 = new StateGraph(MessagesAnnotation) // Define the node and edge .addNode("model", callModel2) .addEdge(START, "model") .addEdge("model", END); // Add simple in-memory checkpointer const app2 = workflow2.compile({ checkpointer: new MemorySaver() });await app2.invoke( { messages: [ ...demoEphemeralChatHistory, { role: "user", content: "What is my name?" } ] }, { configurable: { thread_id: "3" } } );const demoEphemeralChatHistory2 = [ { role: "user", content: "Hey there! I'm Nemo." }, { role: "assistant", content: "Hello!" }, { role: "user", content: "How are you today?" }, { role: "assistant", content: "Fine thanks!" 
}, ];import { START, END, MessagesAnnotation, StateGraph, MemorySaver } from "@langchain/langgraph"; import { RemoveMessage } from "@langchain/core/messages"; // Define the function that calls the model const callModel3 = async (state: typeof MessagesAnnotation.State) => { const systemPrompt = "You are a helpful assistant. " + "Answer all questions to the best of your ability. " + "The provided chat history includes a summary of the earlier conversation."; const systemMessage = { role: "system", content: systemPrompt }; const messageHistory = state.messages.slice(0, -1); // exclude the most recent user input // Summarize the messages if the chat history reaches a certain size if (messageHistory.length >= 4) { const lastHumanMessage = state.messages[state.messages.length - 1]; // Invoke the model to generate conversation summary const summaryPrompt = "Distill the above chat messages into a single summary message. " + "Include as many specific details as you can."; const summaryMessage = await llm.invoke([ ...messageHistory, { role: "user", content: summaryPrompt } ]); // Delete messages that we no longer want to show up const deleteMessages = state.messages.map(m => new RemoveMessage({ id: m.id })); // Re-add user message const humanMessage = { role: "user", content: lastHumanMessage.content }; // Call the model with summary & response const response = await llm.invoke([systemMessage, summaryMessage, humanMessage]); return { messages: [summaryMessage, humanMessage, response, ...deleteMessages] }; } else { const response = await llm.invoke([systemMessage, ...state.messages]); return { messages: response }; } }; const workflow3 = new StateGraph(MessagesAnnotation) // Define the node and edge .addNode("model", callModel3) .addEdge(START, "model") .addEdge("model", END); // Add simple in-memory checkpointer const app3 = workflow3.compile({ checkpointer: new MemorySaver() });await app3.invoke( { messages: [ ...demoEphemeralChatHistory2, { role: "user", content: "What did 
I say my name was?" } ] }, { configurable: { thread_id: "4" } } );
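The `trimMessages` call above is configured so that every message counts as one "token" and only the last two survive. Under exactly those settings — and only those; the real helper also handles token counters, system messages, and partial turns — the policy reduces to a slice. A minimal sketch:

```typescript
// Illustrative re-implementation of the trimming policy configured above:
// tokenCounter = (msgs) => msgs.length and strategy "last" means
// "keep the final maxTokens messages".
type Message = { role: string; content: string };

function trimLast(messages: Message[], maxTokens: number): Message[] {
  // Negative slice index keeps the tail of the array.
  return messages.slice(-maxTokens);
}
```

This is why, after trimming to two messages, the model can no longer see the turn where the user introduced themselves as Nemo.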
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/llm_caching.mdx
---
sidebar_position: 2
---

# How to cache model responses

LangChain provides an optional caching layer for LLMs. This is useful for two reasons:

It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. It can speed up your application by reducing the number of API calls you make to the LLM provider.

import CodeBlock from "@theme/CodeBlock";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/core
```

```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  cache: true,
});
```

## In Memory Cache

The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared.

```typescript
console.time();

// The first time, it is not yet in cache, so it should take longer
const res = await model.invoke("Tell me a long joke");
console.log(res);
console.timeEnd();

/*
  A man walks into a bar and sees a jar filled with money on the counter. Curious, he asks the bartender about it. The bartender explains, "We have a challenge for our customers. If you can complete three tasks, you win all the money in the jar."

  Intrigued, the man asks what the tasks are. The bartender replies, "First, you have to drink a whole bottle of tequila without making a face. Second, there's a pitbull out back with a sore tooth. You have to pull it out. And third, there's an old lady upstairs who has never had an orgasm. You have to give her one."

  The man thinks for a moment and then confidently says, "I'll do it." He grabs the bottle of tequila and downs it in one gulp, without flinching. He then heads to the back and after a few minutes of struggling, emerges with the pitbull's tooth in hand.
  The bar erupts in cheers and the bartender leads the man upstairs to the old lady's room. After a few minutes, the man walks out with a big smile on his face and the old lady is giggling with delight.

  The bartender hands the man the jar of money and asks, "How

default: 4.187s
*/
```

```typescript
console.time();

// The second time, the prompt is already in the cache, so it responds much faster
const res2 = await model.invoke("Tell me a long joke");
console.log(res2);
console.timeEnd();

/*
  A man walks into a bar and sees a jar filled with money on the counter. Curious, he asks the bartender about it. The bartender explains, "We have a challenge for our customers. If you can complete three tasks, you win all the money in the jar."

  Intrigued, the man asks what the tasks are. The bartender replies, "First, you have to drink a whole bottle of tequila without making a face. Second, there's a pitbull out back with a sore tooth. You have to pull it out. And third, there's an old lady upstairs who has never had an orgasm. You have to give her one."

  The man thinks for a moment and then confidently says, "I'll do it." He grabs the bottle of tequila and downs it in one gulp, without flinching. He then heads to the back and after a few minutes of struggling, emerges with the pitbull's tooth in hand.

  The bar erupts in cheers and the bartender leads the man upstairs to the old lady's room. After a few minutes, the man walks out with a big smile on his face and the old lady is giggling with delight.

  The bartender hands the man the jar of money and asks, "How

default: 175.74ms
*/
```

## Caching with Momento

LangChain also provides a Momento-based cache. [Momento](https://gomomento.com) is a distributed, serverless cache that requires zero setup or infrastructure maintenance. Given Momento's compatibility with Node.js, browser, and edge environments, ensure you install the relevant package.
To install for **Node.js**:

```bash npm2yarn
npm install @gomomento/sdk
```

To install for **browser/edge workers**:

```bash npm2yarn
npm install @gomomento/sdk-web
```

Next you'll need to sign up and create an API key. Once you've done that, pass a `cache` option when you instantiate the LLM like this:

import MomentoCacheExample from "@examples/cache/momento.ts";

<CodeBlock language="typescript">{MomentoCacheExample}</CodeBlock>

## Caching with Redis

LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the `ioredis` package:

```bash npm2yarn
npm install ioredis
```

Then, you can pass a `cache` option when you instantiate the LLM. For example:

```typescript
import { OpenAI } from "@langchain/openai";
import { RedisCache } from "@langchain/community/caches/ioredis";
import { Redis } from "ioredis";

// See https://github.com/redis/ioredis for connection options
const client = new Redis({});

const cache = new RedisCache(client);

const model = new OpenAI({ cache });
```

## Caching with Upstash Redis

LangChain provides an Upstash Redis-based cache. Like the Redis-based cache, this cache is useful if you want to share the cache across multiple processes or servers. The Upstash Redis client uses HTTP and supports edge environments. To use it, you'll need to install the `@upstash/redis` package:

```bash npm2yarn
npm install @upstash/redis
```

You'll also need an [Upstash account](https://docs.upstash.com/redis#create-account) and a [Redis database](https://docs.upstash.com/redis#create-a-database) to connect to. Once you've done that, retrieve your REST URL and REST token.

Then, you can pass a `cache` option when you instantiate the LLM.
For example:

import UpstashRedisCacheExample from "@examples/cache/upstash_redis.ts";

<CodeBlock language="typescript">{UpstashRedisCacheExample}</CodeBlock>

You can also directly pass in a previously created [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview) client instance:

import AdvancedUpstashRedisCacheExample from "@examples/cache/upstash_redis_advanced.ts";

<CodeBlock language="typescript">{AdvancedUpstashRedisCacheExample}</CodeBlock>

## Caching with Cloudflare KV

:::info
This integration is only supported in Cloudflare Workers.
:::

If you're deploying your project as a Cloudflare Worker, you can use LangChain's Cloudflare KV-powered LLM cache.

For information on how to set up KV in Cloudflare, see [the official documentation](https://developers.cloudflare.com/kv/).

**Note:** If you are using TypeScript, you may need to install types if they aren't already present:

```bash npm2yarn
npm install -S @cloudflare/workers-types
```

import CloudflareExample from "@examples/cache/cloudflare_kv.ts";

<CodeBlock language="typescript">{CloudflareExample}</CodeBlock>

## Caching on the File System

:::warning
This cache is not recommended for production use. It is only intended for local development.
:::

LangChain provides a simple file system cache. By default the cache is stored in a temporary directory, but you can specify a custom directory if you want.

```typescript
import { LocalFileCache } from "langchain/cache/file_system";

const cache = await LocalFileCache.create();
```

## Next steps

You've now learned how to cache model responses to save time and money.

Next, check out the other how-to guides on LLMs, like [how to create your own custom LLM class](/docs/how_to/custom_llm).
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/qa_citations.ipynb
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api"; import { ChatPromptTemplate } from "@langchain/core/prompts"; import { ChatOpenAI } from "@langchain/openai"; const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0, }); const retriever = new TavilySearchAPIRetriever({ k: 6, }); const prompt = ChatPromptTemplate.fromMessages([ ["system", "You're a helpful AI assistant. Given a user question and some web article snippets, answer the user question. If none of the articles answer the question, just say you don't know.\n\nHere are the web articles:{context}"], ["human", "{question}"], ]);import { Document } from "@langchain/core/documents"; import { StringOutputParser } from "@langchain/core/output_parsers"; import { RunnableMap, RunnablePassthrough } from "@langchain/core/runnables"; /** * Format the documents into a readable string. */ const formatDocs = (input: Record<string, any>): string => { const { docs } = input; return "\n\n" + docs.map((doc: Document) => `Article title: ${doc.metadata.title}\nArticle Snippet: ${doc.pageContent}`).join("\n\n"); } // subchain for generating an answer once we've done retrieval const answerChain = prompt.pipe(llm).pipe(new StringOutputParser()); const map = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever, }) // complete chain that calls the retriever -> formats docs to string -> runs answer subchain -> returns just the answer and retrieved docs. 
const chain = map.assign({ context: formatDocs }).assign({ answer: answerChain }).pick(["answer", "docs"]) await chain.invoke("How fast are cheetahs?")import { z } from "zod"; const llmWithTool1 = llm.withStructuredOutput( z.object({ answer: z.string().describe("The answer to the user question, which is based only on the given sources."), citations: z.array(z.number()).describe("The integer IDs of the SPECIFIC sources which justify the answer.") }).describe("A cited source from the given text"), { name: "cited_answers" } ); const exampleQ = `What is Brian's height? Source: 1 Information: Suzy is 6'2" Source: 2 Information: Jeremiah is blonde Source: 3 Information: Brian is 3 inches shorter than Suzy`; await llmWithTool1.invoke(exampleQ);import { Document } from "@langchain/core/documents"; const formatDocsWithId = (docs: Array<Document>): string => { return "\n\n" + docs.map((doc: Document, idx: number) => `Source ID: ${idx}\nArticle title: ${doc.metadata.title}\nArticle Snippet: ${doc.pageContent}`).join("\n\n"); } // subchain for generating an answer once we've done retrieval const answerChain1 = prompt.pipe(llmWithTool1); const map1 = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever, }) // complete chain that calls the retriever -> formats docs to string -> runs answer subchain -> returns just the answer and retrieved docs. 
const chain1 = map1 .assign({ context: (input: { docs: Array<Document> }) => formatDocsWithId(input.docs) }) .assign({ cited_answer: answerChain1 }) .pick(["cited_answer", "docs"]) await chain1.invoke("How fast are cheetahs?")import { Document } from "@langchain/core/documents"; const citationSchema = z.object({ sourceId: z.number().describe("The integer ID of a SPECIFIC source which justifies the answer."), quote: z.string().describe("The VERBATIM quote from the specified source that justifies the answer.") }); const llmWithTool2 = llm.withStructuredOutput( z.object({ answer: z.string().describe("The answer to the user question, which is based only on the given sources."), citations: z.array(citationSchema).describe("Citations from the given sources that justify the answer.") }), { name: "quoted_answer", }) const answerChain2 = prompt.pipe(llmWithTool2); const map2 = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever, }) // complete chain that calls the retriever -> formats docs to string -> runs answer subchain -> returns just the answer and retrieved docs. const chain2 = map2 .assign({ context: (input: { docs: Array<Document> }) => formatDocsWithId(input.docs) }) .assign({ quoted_answer: answerChain2 }) .pick(["quoted_answer", "docs"]); await chain2.invoke("How fast are cheetahs?")import { ChatAnthropic } from "@langchain/anthropic"; import { ChatPromptTemplate } from "@langchain/core/prompts"; import { XMLOutputParser } from "@langchain/core/output_parsers"; import { Document } from "@langchain/core/documents"; import { RunnableLambda, RunnablePassthrough, RunnableMap } from "@langchain/core/runnables"; const anthropic = new ChatAnthropic({ model: "claude-instant-1.2", temperature: 0, }); const system = `You're a helpful AI assistant. Given a user question and some web article snippets, answer the user question and provide citations. If none of the articles answer the question, just say you don't know. 
Remember, you must return both an answer and citations. A citation consists of a VERBATIM quote that justifies the answer and the ID of the quote article. Return a citation for every quote across all articles that justify the answer. Use the following format for your final output: <cited_answer> <answer></answer> <citations> <citation><source_id></source_id><quote></quote></citation> <citation><source_id></source_id><quote></quote></citation> ... </citations> </cited_answer> Here are the web articles:{context}`; const anthropicPrompt = ChatPromptTemplate.fromMessages([ ["system", system], ["human", "{question}"] ]); const formatDocsToXML = (docs: Array<Document>): string => { const formatted: Array<string> = []; docs.forEach((doc, idx) => { const docStr = `<source id="${idx}"> <title>${doc.metadata.title}</title> <article_snippet>${doc.pageContent}</article_snippet> </source>` formatted.push(docStr); }); return `\n\n<sources>${formatted.join("\n")}</sources>`; } const format3 = new RunnableLambda({ func: (input: { docs: Array<Document> }) => formatDocsToXML(input.docs) }) const answerChain = anthropicPrompt .pipe(anthropic) .pipe(new XMLOutputParser()) .pipe( new RunnableLambda({ func: (input: { cited_answer: any }) => input.cited_answer }) ); const map3 = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever, }); const chain3 = map3.assign({ context: format3 }).assign({ cited_answer: answerChain }).pick(["cited_answer", "docs"]) const res = await chain3.invoke("How fast are cheetahs?"); console.log(JSON.stringify(res, null, 2));import { RecursiveCharacterTextSplitter } from "langchain/text_splitter"; import { EmbeddingsFilter } from "langchain/retrievers/document_compressors/embeddings_filter"; import { OpenAIEmbeddings } from "@langchain/openai"; import { DocumentInterface } from "@langchain/core/documents"; import { RunnableMap, RunnablePassthrough } from "@langchain/core/runnables"; const splitter = new RecursiveCharacterTextSplitter({ 
chunkSize: 400, chunkOverlap: 0, separators: ["\n\n", "\n", ".", " "], keepSeparator: false, }); const compressor = new EmbeddingsFilter({ embeddings: new OpenAIEmbeddings(), k: 10, }); const splitAndFilter = async (input): Promise<Array<DocumentInterface>> => { const { docs, question } = input; const splitDocs = await splitter.splitDocuments(docs); const statefulDocs = await compressor.compressDocuments(splitDocs, question); return statefulDocs; }; const retrieveMap = RunnableMap.from({ question: new RunnablePassthrough(), docs: retriever, }); const retriever = retrieveMap.pipe(splitAndFilter); const docs = await retriever.invoke("How fast are cheetahs?"); for (const doc of docs) { console.log(doc.pageContent, "\n\n"); }const chain4 = retrieveMap .assign({ context: formatDocs }) .assign({ answer: answerChain }) .pick(["answer", "docs"]); // Note the documents have an article "summary" in the metadata that is now much longer than the // actual document page content. This summary isn't actually passed to the model. const res = await chain4.invoke("How fast are cheetahs?"); console.log(JSON.stringify(res, null, 2))
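The `formatDocsWithId` helper is what gives the model stable integer IDs to cite. It can be exercised on its own; the sketch below swaps `@langchain/core`'s `Document` class for a plain object of the same shape:

```typescript
// Standalone version of the formatDocsWithId helper used above, with a
// minimal local Doc shape instead of @langchain/core's Document class.
type Doc = { metadata: { title: string }; pageContent: string };

function formatDocsWithId(docs: Doc[]): string {
  return (
    "\n\n" +
    docs
      .map(
        (doc, idx) =>
          `Source ID: ${idx}\nArticle title: ${doc.metadata.title}\nArticle Snippet: ${doc.pageContent}`
      )
      .join("\n\n")
  );
}
```

Because the IDs are just array indices, the `citations` the model returns can be resolved back to documents with a plain array lookup.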
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/streaming.ipynb
import "dotenv/config";// @lc-docs-hide-cell import { ChatOpenAI } from "@langchain/openai"; const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0, });const stream = await model.stream("Hello! Tell me about yourself."); const chunks = []; for await (const chunk of stream) { chunks.push(chunk); console.log(`${chunk.content}|`) }chunks[0]let finalChunk = chunks[0]; for (const chunk of chunks.slice(1, 5)) { finalChunk = finalChunk.concat(chunk); } finalChunkimport { StringOutputParser } from "@langchain/core/output_parsers"; import { ChatPromptTemplate } from "@langchain/core/prompts"; const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}"); const parser = new StringOutputParser(); const chain = prompt.pipe(model).pipe(parser); const stream = await chain.stream({ topic: "parrot", }); for await (const chunk of stream) { console.log(`${chunk}|`) }import { JsonOutputParser } from "@langchain/core/output_parsers" const chain = model.pipe(new JsonOutputParser()); const stream = await chain.stream( `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"` ); for await (const chunk of stream) { console.log(chunk); }// A function that operates on finalized inputs // rather than on an input_stream // A function that does not operates on input streams and breaks streaming. const extractCountryNames = (inputs: Record<string, any>) => { if (!Array.isArray(inputs.countries)) { return ""; } return JSON.stringify(inputs.countries.map((country) => country.name)); } const chain = model.pipe(new JsonOutputParser()).pipe(extractCountryNames); const stream = await chain.stream( `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key "name" and "population"` ); for await (const chunk of stream) { console.log(chunk); }import { OpenAIEmbeddings } from "@langchain/openai"; import { MemoryVectorStore } from "langchain/vectorstores/memory"; import { ChatPromptTemplate } from "@langchain/core/prompts"; const template = `Answer the question based only on the following context: {context} Question: {question} `; const prompt = ChatPromptTemplate.fromTemplate(template); const vectorstore = await MemoryVectorStore.fromTexts( ["mitochondria is the powerhouse of the cell", "buildings are made of brick"], [{}, {}], new OpenAIEmbeddings(), ); const retriever = vectorstore.asRetriever(); const chunks = []; for await (const chunk of await retriever.stream("What is the powerhouse of the cell?")) { chunks.push(chunk); } console.log(chunks); import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables"; import type { Document } from "@langchain/core/documents"; import { StringOutputParser } from "@langchain/core/output_parsers"; const formatDocs = (docs: Document[]) => { return docs.map((doc) => doc.pageContent).join("\n-----\n") } const retrievalChain = RunnableSequence.from([ { context: retriever.pipe(formatDocs), question: new RunnablePassthrough() }, prompt, model, new StringOutputParser(), ]); const stream = await retrievalChain.stream("What is the powerhouse of the cell?"); for await (const chunk of stream) { console.log(`${chunk}|`); }const events = []; const eventStream = await model.streamEvents("hello", { version: "v2" }); for await (const event of eventStream) { events.push(event); } console.log(events.length)events.slice(0, 3);events.slice(-2);const chain = model.pipe(new JsonOutputParser()); const eventStream = await chain.streamEvents( `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key "name" and "population"`, { version: "v2" }, ); const events = []; for await (const event of eventStream) { events.push(event); } console.log(events.length)events.slice(0, 3);let eventCount = 0; const eventStream = await chain.streamEvents( `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`, { version: "v1" }, ); for await (const event of eventStream) { // Truncate the output if (eventCount > 30) { continue; } const eventType = event.event; if (eventType === "on_llm_stream") { console.log(`Chat model chunk: ${event.data.chunk.message.content}`); } else if (eventType === "on_parser_stream") { console.log(`Parser chunk: ${JSON.stringify(event.data.chunk)}`); } eventCount += 1; }const chain = model.withConfig({ runName: "model" }) .pipe( new JsonOutputParser().withConfig({ runName: "my_parser" }) ); const eventStream = await chain.streamEvents( `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`, { version: "v2" }, { includeNames: ["my_parser"] }, ); let eventCount = 0; for await (const event of eventStream) { // Truncate the output if (eventCount > 10) { continue; } console.log(event); eventCount += 1; }const chain = model.withConfig({ runName: "model" }) .pipe( new JsonOutputParser().withConfig({ runName: "my_parser" }) ); const eventStream = await chain.streamEvents( `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key "name" and "population"`, { version: "v2" }, { includeTypes: ["chat_model"] }, ); let eventCount = 0; for await (const event of eventStream) { // Truncate the output if (eventCount > 10) { continue; } console.log(event); eventCount += 1; }const chain = model .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" })) .withConfig({ tags: ["my_chain"] }); const eventStream = await chain.streamEvents( `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`, { version: "v2" }, { includeTags: ["my_chain"] }, ); let eventCount = 0; for await (const event of eventStream) { // Truncate the output if (eventCount > 10) { continue; } console.log(event); eventCount += 1; }const chain = model .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" })) .withConfig({ tags: ["my_chain"] }); const eventStream = await chain.streamEvents( `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`, { version: "v2", encoding: "text/event-stream", }, ); let eventCount = 0; const textDecoder = new TextDecoder(); for await (const event of eventStream) { // Truncate the output if (eventCount > 3) { continue; } console.log(textDecoder.decode(event)); eventCount += 1; }const handler = async () => { const eventStream = await chain.streamEvents( `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key "name" and "population"`, { version: "v2", encoding: "text/event-stream", }, ); return new Response(eventStream, { headers: { "content-type": "text/event-stream", } }); };import { fetchEventSource } from "@microsoft/fetch-event-source"; const makeChainRequest = async () => { await fetchEventSource("https://your_url_here", { method: "POST", body: JSON.stringify({ foo: 'bar' }), onmessage: (message) => { if (message.event === "data") { console.log(message.data); } }, onerror: (err) => { console.log(err); } }); };// A function that operates on finalized inputs // rather than on an input_stream import { JsonOutputParser } from "@langchain/core/output_parsers" import { RunnablePassthrough } from "@langchain/core/runnables"; // A function that does not operates on input streams and breaks streaming. const extractCountryNames = (inputs: Record<string, any>) => { if (!Array.isArray(inputs.countries)) { return ""; } return JSON.stringify(inputs.countries.map((country) => country.name)); } const chain = model.pipe(new JsonOutputParser()).pipe(extractCountryNames); const stream = await chain.stream( `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"` ); for await (const chunk of stream) { console.log(chunk); }const eventStream = await chain.streamEvents( `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population" Your output should ONLY contain valid JSON data. 
Do not include any other text or content in your output.`, { version: "v2" }, ); let eventCount = 0; for await (const event of eventStream) { // Truncate the output if (eventCount > 30) { continue; } const eventType = event.event; if (eventType === "on_chat_model_stream") { console.log(`Chat model chunk: ${event.data.chunk.message.content}`); } else if (eventType === "on_parser_stream") { console.log(`Parser chunk: ${JSON.stringify(event.data.chunk)}`); } else { console.log(eventType) } eventCount += 1; }
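The difference between the two `extractCountryNames` variants above comes down to whether a step consumes its input chunk-by-chunk or must see the finalized input first. With plain generators standing in for runnables (illustrative only — LangChain's streaming uses async iterators over chunk types, not string generators), the contrast looks like this:

```typescript
// A tiny "stream" of partial output.
function* sourceChunks(): Generator<string> {
  yield "He";
  yield "llo";
}

// Chunk-wise transform: emits as soon as each chunk arrives,
// so downstream consumers still see a stream.
function* upperCaseStream(chunks: Generator<string>): Generator<string> {
  for (const chunk of chunks) yield chunk.toUpperCase();
}

// Whole-input transform: must drain the stream before emitting
// anything, which is why such a step breaks streaming.
function upperCaseAll(chunks: Generator<string>): string {
  let full = "";
  for (const chunk of chunks) full += chunk;
  return full.toUpperCase();
}
```

Both produce the same final text; only the chunk-wise version preserves incremental delivery.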
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/custom_callbacks.ipynb
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

const prompt = ChatPromptTemplate.fromTemplate(`What is 1 + {number}?`);
const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const chain = prompt.pipe(model);

const customHandler = {
  handleChatModelStart: async (llm, inputMessages, runId) => {
    console.log("Chat model start:", llm, inputMessages, runId)
  },
  handleLLMNewToken: async (token) => {
    console.log("Chat model new token", token);
  }
};

const stream = await chain.stream(
  { number: "2" },
  { callbacks: [customHandler] }
);

for await (const _ of stream) {
  // Just consume the stream so the callbacks run
}
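The dispatch behind callbacks can be pictured as a loop that invokes whichever optional hooks a handler defines. The sketch below mirrors the hook name from the guide, but it is not LangChain's actual callback manager:

```typescript
// A handler may implement any subset of the available hooks.
type Handler = {
  handleLLMNewToken?: (token: string) => void;
};

// Fire the token hook on every handler that defines it; handlers
// without the hook are skipped via optional chaining.
function emitTokens(tokens: string[], handlers: Handler[]): void {
  for (const token of tokens) {
    for (const h of handlers) {
      h.handleLLMNewToken?.(token);
    }
  }
}
```

This is why a handler object only needs to define the events it cares about: undefined hooks are simply never called.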
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/llm_token_usage_tracking.mdx
--- sidebar_position: 5 --- # How to track token usage :::info Prerequisites This guide assumes familiarity with the following concepts: - [LLMs](/docs/concepts/text_llms) ::: This guide goes over how to track your token usage for specific LLM calls. This is only implemented by some providers, including OpenAI. Here's an example of tracking token usage for a single LLM call via a callback: import CodeBlock from "@theme/CodeBlock"; import Example from "@examples/models/llm/token_usage_tracking.ts"; import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/openai @langchain/core ``` <CodeBlock language="typescript">{Example}</CodeBlock> If this model is passed to a chain or agent that calls it multiple times, it will log an output each time. ## Next steps You've now seen how to get token usage for supported LLM providers. Next, check out the other how-to guides in this section, like [how to implement your own custom LLM](/docs/how_to/custom_llm).
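Conceptually, the callback receives the provider-reported usage after each call and accumulates it. Here is a minimal, self-contained sketch of that idea; the `UsageTracker` class and `TokenUsage` shape below are illustrative assumptions, not LangChain's actual callback API:

```typescript
// Illustrative sketch: accumulate token usage across multiple LLM calls with
// a small callback-style handler. The `TokenUsage` shape mirrors the usage
// object OpenAI-backed models report, but the handler itself is hypothetical.

interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

class UsageTracker {
  usage: TokenUsage = { promptTokens: 0, completionTokens: 0, totalTokens: 0 };

  // Called once per completed LLM run with that run's reported usage.
  handleLLMEnd(runUsage: TokenUsage): void {
    this.usage.promptTokens += runUsage.promptTokens;
    this.usage.completionTokens += runUsage.completionTokens;
    this.usage.totalTokens += runUsage.totalTokens;
  }
}

const tracker = new UsageTracker();
// Simulate two LLM calls reporting usage.
tracker.handleLLMEnd({ promptTokens: 12, completionTokens: 30, totalTokens: 42 });
tracker.handleLLMEnd({ promptTokens: 8, completionTokens: 20, totalTokens: 28 });
console.log(tracker.usage.totalTokens); // 70
```

In a real chain, the accumulation step would live inside a callback handler passed via the `callbacks` option.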
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/example_selectors.ipynb
const examples = [ { input: "hi", output: "ciao" }, { input: "bye", output: "arrivederci" }, { input: "soccer", output: "calcio" }, ];import { BaseExampleSelector } from "@langchain/core/example_selectors"; import { Example } from "@langchain/core/prompts"; class CustomExampleSelector extends BaseExampleSelector { private examples: Example[]; constructor(examples: Example[]) { super(); this.examples = examples; } async addExample(example: Example): Promise<void | string> { this.examples.push(example); return; } async selectExamples(inputVariables: Example): Promise<Example[]> { // This assumes knowledge that part of the input will be an 'input' key const newWord = inputVariables.input; const newWordLength = newWord.length; // Initialize variables to store the best match and its length difference let bestMatch: Example | null = null; let smallestDiff = Infinity; // Iterate through each example for (const example of this.examples) { // Calculate the length difference with the first word of the example const currentDiff = Math.abs(example.input.length - newWordLength); // Update the best match if the current one is closer in length if (currentDiff < smallestDiff) { smallestDiff = currentDiff; bestMatch = example; } } return bestMatch ? [bestMatch] : []; } }const exampleSelector = new CustomExampleSelector(examples)await exampleSelector.selectExamples({ input: "okay" })await exampleSelector.addExample({ input: "hand", output: "mano" })await exampleSelector.selectExamples({ input: "okay" })import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts" const examplePrompt = PromptTemplate.fromTemplate("Input: {input} -> Output: {output}")const prompt = new FewShotPromptTemplate({ exampleSelector, examplePrompt, suffix: "Input: {input} -> Output:", prefix: "Translate the following words from English to Italian:", inputVariables: ["input"], }) console.log(await prompt.format({ input: "word" }))
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/output_parser_fixing.ipynb
import { z } from "zod"; import { RunnableSequence } from "@langchain/core/runnables"; import { StructuredOutputParser } from "@langchain/core/output_parsers"; import { ChatPromptTemplate } from "@langchain/core/prompts"; const zodSchema = z.object({ name: z.string().describe("name of an actor"), film_names: z.array(z.string()).describe("list of names of films they starred in"), }); const parser = StructuredOutputParser.fromZodSchema(zodSchema); const misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"; await parser.parse(misformatted);import { ChatAnthropic } from "@langchain/anthropic"; import { OutputFixingParser } from "langchain/output_parsers"; const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229", maxTokens: 512, temperature: 0.1, }); const parserWithFix = OutputFixingParser.fromLLM(model, parser); await parserWithFix.parse(misformatted);
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/indexing.mdx
# How to reindex data to keep your vectorstore in-sync with the underlying data source :::info Prerequisites This guide assumes familiarity with the following concepts: - [Retrieval-augmented generation (RAG)](/docs/tutorials/rag/) - [Vector stores](/docs/concepts/#vectorstores) ::: Here, we will look at a basic indexing workflow using the LangChain indexing API. The indexing API lets you load and keep in sync documents from any source into a vector store. Specifically, it helps: - Avoid writing duplicated content into the vector store - Avoid re-writing unchanged content - Avoid re-computing embeddings over unchanged content All of which should save you time and money, as well as improve your vector search results. Crucially, the indexing API will work even with documents that have gone through several transformation steps (e.g., via text chunking) with respect to the original source documents. ## How it works LangChain indexing makes use of a record manager (`RecordManager`) that keeps track of document writes into the vector store. When indexing content, hashes are computed for each document, and the following information is stored in the record manager: - the document hash (hash of both page content and metadata) - write time - the source ID - each document should include information in its metadata to allow us to determine the ultimate source of this document ## Deletion Modes When indexing documents into a vector store, it's possible that some existing documents in the vector store should be deleted. In certain situations you may want to remove any existing documents that are derived from the same sources as the new documents being indexed. In others you may want to delete all existing documents wholesale. 
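The per-document hashing described in "How it works" above can be sketched in plain TypeScript. This is an illustrative assumption about the mechanism, not the actual `RecordManager` internals; the real implementation may compute its hashes differently:

```typescript
import { createHash } from "node:crypto";

// Sketch of per-document hashing for change detection: hash both page
// content and metadata, so a mutation of either produces a new record.
// The `documentHash` function and `Doc` shape are hypothetical names
// used only for this illustration.

interface Doc {
  pageContent: string;
  metadata: Record<string, unknown>;
}

function documentHash(doc: Doc): string {
  return createHash("sha256")
    .update(doc.pageContent)
    .update(JSON.stringify(doc.metadata))
    .digest("hex");
}

const a = documentHash({ pageContent: "hello", metadata: { source: "a.txt" } });
const b = documentHash({ pageContent: "hello", metadata: { source: "b.txt" } });
console.log(a !== b); // a metadata change alone changes the hash
```

Because the hash covers content and metadata together, unchanged documents can be skipped on re-indexing while any mutation is detected and re-written.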
The indexing API deletion modes let you pick the behavior you want: | Cleanup Mode | De-Duplicates Content | Parallelizable | Cleans Up Deleted Source Docs | Cleans Up Mutations of Source Docs and/or Derived Docs | Clean Up Timing | | ------------ | --------------------- | -------------- | ----------------------------- | ------------------------------------------------------ | ------------------ | | None | ✅ | ✅ | ❌ | ❌ | - | | Incremental | ✅ | ✅ | ❌ | ✅ | Continuously | | Full | ✅ | ❌ | ✅ | ✅ | At end of indexing | `None` does not do any automatic clean up, allowing the user to manually do clean up of old content. `incremental` and `full` offer the following automated clean up: - If the content of the source document or derived documents has changed, both `incremental` and `full` modes will clean up (delete) previous versions of the content. - If the source document has been deleted (meaning it is not included in the documents currently being indexed), the `full` cleanup mode will delete it from the vector store correctly, but the `incremental` mode will not. When content is mutated (e.g., the source PDF file was revised) there will be a period of time during indexing when both the new and old versions may be returned to the user. This happens after the new content was written, but before the old version was deleted. - `incremental` indexing minimizes this period of time as it is able to do clean up continuously, as it writes. - `full` mode does the clean up after all batches have been written. ## Requirements 1. Do not use with a store that has been pre-populated with content independently of the indexing API, as the record manager will not know that records have been inserted previously. 2. Only works with LangChain vector stores that support: a). document addition by id (`addDocuments` method with ids argument) b).
delete by id (delete method with ids argument) Compatible Vectorstores: [`PGVector`](/docs/integrations/vectorstores/pgvector), [`Chroma`](/docs/integrations/vectorstores/chroma), [`CloudflareVectorize`](/docs/integrations/vectorstores/cloudflare_vectorize), [`ElasticVectorSearch`](/docs/integrations/vectorstores/elasticsearch), [`FAISS`](/docs/integrations/vectorstores/faiss), [`MomentoVectorIndex`](/docs/integrations/vectorstores/momento_vector_index), [`Pinecone`](/docs/integrations/vectorstores/pinecone), [`SupabaseVectorStore`](/docs/integrations/vectorstores/supabase), [`VercelPostgresVectorStore`](/docs/integrations/vectorstores/vercel_postgres), [`Weaviate`](/docs/integrations/vectorstores/weaviate), [`Xata`](/docs/integrations/vectorstores/xata) ## Caution The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` cleanup modes). If two tasks run back-to-back, and the first task finishes before the clock time changes, then the second task may not be able to clean up content. This is unlikely to be an issue in actual settings for the following reasons: 1. The `RecordManager` uses higher resolution timestamps. 2. The data would need to change between the first and the second tasks runs, which becomes unlikely if the time interval between the tasks is small. 3. Indexing tasks typically take more than a few ms. ## Quickstart import CodeBlock from "@theme/CodeBlock"; import QuickStartExample from "@examples/indexes/indexing_api/indexing.ts"; <CodeBlock language="typescript">{QuickStartExample}</CodeBlock> ## Next steps You've now learned how to use indexing in your RAG pipelines. Next, check out some of the other sections on retrieval.
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/vectorstore_retriever.mdx
# How to use a vector store to retrieve data :::info Prerequisites This guide assumes familiarity with the following concepts: - [Vector stores](/docs/concepts/#vectorstores) - [Retrievers](/docs/concepts/retrievers) - [Text splitters](/docs/concepts/text_splitters) - [Chaining runnables](/docs/how_to/sequence/) ::: Vector stores can be converted into retrievers using the [`.asRetriever()`](https://api.js.langchain.com/classes/langchain_core.vectorstores.VectorStore.html#asRetriever) method, which allows you to more easily compose them in chains. Below, we show a retrieval-augmented generation (RAG) chain that performs question answering over documents using the following steps: 1. Initialize a vector store 2. Create a retriever from that vector store 3. Compose a question answering chain 4. Ask questions! Each of the steps has multiple sub-steps and potential configurations, but we'll go through one common flow. First, install the required dependency: import CodeBlock from "@theme/CodeBlock"; import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/openai @langchain/core ``` You can download the `state_of_the_union.txt` file [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/modules/state_of_the_union.txt). import RetrievalQAExample from "@examples/chains/retrieval_qa.ts"; <CodeBlock language="typescript">{RetrievalQAExample}</CodeBlock> Let's walk through what's happening here. 1. We first load a long text and split it into smaller documents using a text splitter. We then load those documents (which also embeds the documents using the passed `OpenAIEmbeddings` instance) into HNSWLib, our vector store, creating our index. 2. Though we can query the vector store directly, we convert the vector store into a retriever to return retrieved documents in the right format for the question answering chain. 3.
We initialize a retrieval chain, which we'll call later in step 4. 4. We ask questions! ## Next steps You've now learned how to convert a vector store into a retriever. See the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/docs/how_to/custom_retriever/).
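What `.asRetriever()` does can be illustrated with a minimal, self-contained sketch: score stored vectors against a query vector and return the top-k documents. The `TinyStore` class and its `asRetriever` method here are illustrative stand-ins, not LangChain's implementation:

```typescript
// Minimal sketch of "vector store as retriever": cosine-similarity search
// over an in-memory list of (document, vector) pairs.

type Vec = number[];

interface StoredDoc {
  pageContent: string;
  vector: Vec;
}

function cosineSimilarity(a: Vec, b: Vec): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

class TinyStore {
  constructor(private docs: StoredDoc[]) {}

  // Analogous to `.asRetriever(k)`: returns a function mapping a query
  // vector to the k most similar documents.
  asRetriever(k: number) {
    return (query: Vec): string[] =>
      [...this.docs]
        .sort(
          (x, y) =>
            cosineSimilarity(y.vector, query) - cosineSimilarity(x.vector, query)
        )
        .slice(0, k)
        .map((d) => d.pageContent);
  }
}

const store = new TinyStore([
  { pageContent: "cats purr", vector: [1, 0] },
  { pageContent: "dogs bark", vector: [0, 1] },
  { pageContent: "kittens meow", vector: [0.9, 0.1] },
]);
const retriever = store.asRetriever(2);
console.log(retriever([1, 0])); // the two cat-like documents rank first
```

In the real library, the embedding model produces the vectors and the retriever returns `Document` objects, but the ranking idea is the same.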
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/graph_constructing.ipynb
import "neo4j-driver"; import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph"; const url = process.env.NEO4J_URI; const username = process.env.NEO4J_USER; const password = process.env.NEO4J_PASSWORD; const graph = await Neo4jGraph.initialize({ url, username, password });import { ChatOpenAI } from "@langchain/openai"; import { LLMGraphTransformer } from "@langchain/community/experimental/graph_transformers/llm"; const model = new ChatOpenAI({ temperature: 0, model: "gpt-4-turbo-preview", }); const llmGraphTransformer = new LLMGraphTransformer({ llm: model }); import { Document } from "@langchain/core/documents"; let text = ` Marie Curie, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity. She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields. Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes. She was, in 1906, the first woman to become a professor at the University of Paris. ` const result = await llmGraphTransformer.convertToGraphDocuments([ new Document({ pageContent: text }), ]); console.log(`Nodes: ${result[0].nodes.length}`); console.log(`Relationships:${result[0].relationships.length}`);const llmGraphTransformerFiltered = new LLMGraphTransformer({ llm: model, allowedNodes: ["PERSON", "COUNTRY", "ORGANIZATION"], allowedRelationships:["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"], strictMode:false }); const result_filtered = await llmGraphTransformerFiltered.convertToGraphDocuments([ new Document({ pageContent: text }), ]); console.log(`Nodes: ${result_filtered[0].nodes.length}`); console.log(`Relationships:${result_filtered[0].relationships.length}`);await graph.addGraphDocuments(result_filtered)
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/document_loader_markdown.ipynb
import { UnstructuredLoader } from "@langchain/community/document_loaders/fs/unstructured"; const markdownPath = "../../../../README.md"; const loader = new UnstructuredLoader(markdownPath, { apiKey: process.env.UNSTRUCTURED_API_KEY, apiUrl: process.env.UNSTRUCTURED_API_URL, }); const data = await loader.load() console.log(data.slice(0, 5));const loaderByTitle = new UnstructuredLoader(markdownPath, { chunkingStrategy: "by_title" }); const loadedDocs = await loaderByTitle.load() console.log(`Number of documents: ${loadedDocs.length}\n`) for (const doc of loadedDocs.slice(0, 2)) { console.log(doc); console.log("\n"); }const categories = new Set(data.map((document) => document.metadata.category)); console.log(categories);
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/custom_llm.ipynb
import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms"; import type { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager"; import { GenerationChunk } from "@langchain/core/outputs"; interface CustomLLMInput extends BaseLLMParams { n: number; } class CustomLLM extends LLM { n: number; constructor(fields: CustomLLMInput) { super(fields); this.n = fields.n; } _llmType() { return "custom"; } async _call( prompt: string, options: this["ParsedCallOptions"], runManager: CallbackManagerForLLMRun ): Promise<string> { // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); return prompt.slice(0, this.n); } async *_streamResponseChunks( prompt: string, options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): AsyncGenerator<GenerationChunk> { // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); for (const letter of prompt.slice(0, this.n)) { yield new GenerationChunk({ text: letter, }); // Trigger the appropriate callback await runManager?.handleLLMNewToken(letter); } } }const llm = new CustomLLM({ n: 4 }); await llm.invoke("I am an LLM");const stream = await llm.stream("I am an LLM"); for await (const chunk of stream) { console.log(chunk); }import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager"; import { LLMResult } from "@langchain/core/outputs"; import { BaseLLM, BaseLLMCallOptions, BaseLLMParams, } from "@langchain/core/language_models/llms"; interface AdvancedCustomLLMCallOptions extends BaseLLMCallOptions {} interface AdvancedCustomLLMParams extends BaseLLMParams { n: number; } class AdvancedCustomLLM extends BaseLLM<AdvancedCustomLLMCallOptions> { n: number; constructor(fields: AdvancedCustomLLMParams) { super(fields); this.n = fields.n; } _llmType() { return "advanced_custom_llm"; } async _generate( inputs: 
string[], options: this["ParsedCallOptions"], runManager?: CallbackManagerForLLMRun ): Promise<LLMResult> { const outputs = inputs.map((input) => input.slice(0, this.n)); // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing // await subRunnable.invoke(params, runManager?.getChild()); // One input could generate multiple outputs. const generations = outputs.map((output) => [ { text: output, // Optional additional metadata for the generation generationInfo: { outputCount: 1 }, }, ]); const tokenUsage = { usedTokens: this.n, }; return { generations, llmOutput: { tokenUsage }, }; } }const llm = new AdvancedCustomLLM({ n: 4 }); const eventStream = await llm.streamEvents("I am an LLM", { version: "v2", }); for await (const event of eventStream) { if (event.event === "on_llm_end") { console.log(JSON.stringify(event, null, 2)); } }
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/graph_prompting.ipynb
const url = process.env.NEO4J_URI; const username = process.env.NEO4J_USER; const password = process.env.NEO4J_PASSWORD;import "neo4j-driver"; import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph"; const graph = await Neo4jGraph.initialize({ url, username, password }); // Import movie information const moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv' AS row MERGE (m:Movie {id:row.movieId}) SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating) FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m)) FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m)) FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))` await graph.query(moviesQuery);await graph.refreshSchema() console.log(graph.getSchema())const examples = [ { "question": "How many artists are there?", "query": "MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)", }, { "question": "Which actors played in the movie Casino?", "query": "MATCH (m:Movie {{title: 'Casino'}})<-[:ACTED_IN]-(a) RETURN a.name", }, { "question": "How many movies has Tom Hanks acted in?", "query": "MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)", }, { "question": "List all the genres of the movie Schindler's List", "query": "MATCH (m:Movie {{title: 'Schindler\\'s List'}})-[:IN_GENRE]->(g:Genre) RETURN g.name", }, { "question": "Which actors have worked in movies from both the comedy and action genres?", "query": "MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name", }, { "question": "Which directors have made movies with at least three different actors named 'John'?", 
"query": "MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name", }, { "question": "Identify movies where directors also played a role in the film.", "query": "MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name", }, { "question": "Find the actor with the highest number of movies in the database.", "query": "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1", }, ]import { FewShotPromptTemplate, PromptTemplate } from "@langchain/core/prompts"; const examplePrompt = PromptTemplate.fromTemplate( "User input: {question}\nCypher query: {query}" ) const prompt = new FewShotPromptTemplate({ examples: examples.slice(0, 5), examplePrompt, prefix: "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n\nHere is the schema information\n{schema}.\n\nBelow are a number of examples of questions and their corresponding Cypher queries.", suffix: "User input: {question}\nCypher query: ", inputVariables: ["question", "schema"], })console.log(await prompt.format({ question: "How many artists are there?", schema: "foo" }))import { OpenAIEmbeddings } from "@langchain/openai"; import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors"; import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector"; const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples( examples, new OpenAIEmbeddings(), Neo4jVectorStore, { k: 5, inputKeys: ["question"], preDeleteCollection: true, url, username, password } )await exampleSelector.selectExamples({ question: "how many artists are there?" })const promptWithExampleSelector = new FewShotPromptTemplate({ exampleSelector, examplePrompt, prefix: "You are a Neo4j expert. 
Given an input question, create a syntactically correct Cypher query to run.\n\nHere is the schema information\n{schema}.\n\nBelow are a number of examples of questions and their corresponding Cypher queries.", suffix: "User input: {question}\nCypher query: ", inputVariables: ["question", "schema"], })console.log(await promptWithExampleSelector.format({ question: "how many artists are there?", schema: "foo" }))import { ChatOpenAI } from "@langchain/openai"; import { GraphCypherQAChain } from "langchain/chains/graph_qa/cypher"; const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0, }); const chain = GraphCypherQAChain.fromLLM( { graph, llm, cypherPrompt: promptWithExampleSelector, } )await chain.invoke({ query: "How many actors are in the graph?" })
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/parallel.mdx
# How to invoke runnables in parallel :::info Prerequisites This guide assumes familiarity with the following concepts: - [LangChain Expression Language (LCEL)](/docs/concepts/lcel) - [Chaining runnables](/docs/how_to/sequence/) ::: The [`RunnableParallel`](https://api.js.langchain.com/classes/langchain_core.runnables.RunnableParallel.html) (also known as a `RunnableMap`) primitive is an object whose values are runnables (or things that can be coerced to runnables, like functions). It runs all of its values in parallel, and each value is called with the initial input to the `RunnableParallel`. The final return value is an object with the results of each value under its appropriate key. ## Formatting with `RunnableParallels` `RunnableParallels` are useful for parallelizing operations, but can also be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence. You can use them to split or fork the chain so that multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following: ```text Input / \ / \ Branch1 Branch2 \ / \ / Combine ``` Below, the input to each chain in the `RunnableParallel` is expected to be an object with a key for `"topic"`. We can satisfy that requirement by invoking our chain with an object matching that structure. 
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/anthropic @langchain/cohere @langchain/core ``` import CodeBlock from "@theme/CodeBlock"; import BasicExample from "@examples/guides/expression_language/runnable_maps_basic.ts"; <CodeBlock language="typescript">{BasicExample}</CodeBlock> ## Manipulating outputs/inputs Maps can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence. Note below that the object within the `RunnableSequence.from()` call is automatically coerced into a runnable map. All keys of the object must have values that are runnables or can be themselves coerced to runnables (functions to `RunnableLambda`s or objects to `RunnableMap`s). This coercion will also occur when composing chains via the `.pipe()` method. import SequenceExample from "@examples/guides/expression_language/runnable_maps_sequence.ts"; <CodeBlock language="typescript">{SequenceExample}</CodeBlock> Here the input to prompt is expected to be a map with keys "context" and "question". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the "question" key. ## Next steps You now know some ways to format and parallelize chain steps with `RunnableParallel`. Next, you might be interested in [using custom logic](/docs/how_to/functions/) in your chains.
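The fan-out/merge shape in the diagram above can be approximated in plain TypeScript: every branch receives the same input, branches run concurrently, and the results are merged under each branch's key. `runParallel` is an illustrative stand-in for `RunnableParallel`, not part of LangChain's API:

```typescript
// Sketch of RunnableParallel semantics: run keyed async branches over the
// same input concurrently and merge the results into one object.
async function runParallel<I>(
  branches: Record<string, (input: I) => Promise<string>>,
  input: I
): Promise<Record<string, string>> {
  const keys = Object.keys(branches);
  // Promise.all runs all branches concurrently with the same input.
  const values = await Promise.all(keys.map((k) => branches[k](input)));
  // Merge results back under each branch's key.
  return Object.fromEntries(keys.map((k, i) => [k, values[i]]));
}

// Two stand-in "chains" that each process the same topic.
const branches = {
  joke: async (topic: string) => `joke about ${topic}`,
  poem: async (topic: string) => `poem about ${topic}`,
};

runParallel(branches, "bears").then(console.log);
// { joke: 'joke about bears', poem: 'poem about bears' }
```

The real `RunnableParallel` additionally coerces functions and objects into runnables and participates in streaming and tracing, but the concurrency and key-merging behavior is the core idea.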
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/split_by_token.ipynb
import { TokenTextSplitter } from "@langchain/textsplitters"; import * as fs from "node:fs"; // Load an example document (readFileSync is synchronous, so no await is needed) const rawData = fs.readFileSync("../../../../examples/state_of_the_union.txt"); const stateOfTheUnion = rawData.toString(); const textSplitter = new TokenTextSplitter({ chunkSize: 10, chunkOverlap: 0, }); const texts = await textSplitter.splitText(stateOfTheUnion); console.log(texts[0]);
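The windowing that `TokenTextSplitter` performs can be sketched without a tokenizer: slide a window of `chunkSize` tokens, stepping by `chunkSize - chunkOverlap`. Here "tokens" are just whitespace-split words for illustration; the real splitter uses a BPE tokenizer, and `splitByTokens` is a hypothetical helper:

```typescript
// Sketch of token-window splitting with overlap.
function splitByTokens(
  tokens: string[],
  chunkSize: number,
  chunkOverlap: number
): string[][] {
  const step = chunkSize - chunkOverlap;
  const chunks: string[][] = [];
  for (let start = 0; start < tokens.length; start += step) {
    chunks.push(tokens.slice(start, start + chunkSize));
    // Stop once the window has reached the end of the input.
    if (start + chunkSize >= tokens.length) break;
  }
  return chunks;
}

const tokens = "a b c d e f g".split(" ");
console.log(splitByTokens(tokens, 3, 1));
// [ [ 'a', 'b', 'c' ], [ 'c', 'd', 'e' ], [ 'e', 'f', 'g' ] ]
```

With `chunkOverlap: 0`, as in the example above, the windows simply tile the token sequence with no shared tokens between chunks.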
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/document_loaders_json.mdx
# How to load JSON data > [JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). > [JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value. The JSON loader uses [JSON pointer](https://github.com/janl/node-jsonpointer) to target the keys in your JSON files that you want to extract. ### No JSON pointer example The simplest way to use it is to specify no JSON pointer. The loader will load all strings it finds in the JSON object. Example JSON file: ```json { "texts": ["This is a sentence.", "This is another sentence."] } ``` Example code: ```typescript import { JSONLoader } from "langchain/document_loaders/fs/json"; const loader = new JSONLoader("src/document_loaders/example_data/example.json"); const docs = await loader.load(); /* [ Document { "metadata": { "blobType": "application/json", "line": 1, "source": "blob", }, "pageContent": "This is a sentence.", }, Document { "metadata": { "blobType": "application/json", "line": 2, "source": "blob", }, "pageContent": "This is another sentence.", }, ] */ ``` ### Using JSON pointer example You can do a more advanced scenario by choosing which keys in your JSON object you want to extract strings from. In this example, we want to only extract information from "from" and "surname" entries.
```json { "1": { "body": "BD 2023 SUMMER", "from": "LinkedIn Job", "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"] }, "2": { "body": "Intern, Treasury and other roles are available", "from": "LinkedIn Job2", "labels": ["IMPORTANT"], "other": { "name": "plop", "surname": "bob" } } } ``` Example code: ```typescript import { JSONLoader } from "langchain/document_loaders/fs/json"; const loader = new JSONLoader( "src/document_loaders/example_data/example.json", ["/from", "/surname"] ); const docs = await loader.load(); /* [ Document { pageContent: 'LinkedIn Job', metadata: { source: './src/json/example.json', line: 1 } }, Document { pageContent: 'LinkedIn Job2', metadata: { source: './src/json/example.json', line: 2 } }, Document { pageContent: 'bob', metadata: { source: './src/json/example.json', line: 3 } } ] **/ ```
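The key-targeting behavior shown above can be illustrated with a self-contained sketch: walk a parsed JSON value and collect every string stored under one of the requested keys, wherever it appears in the tree. `extractByPointerKeys` is a hypothetical helper; the real `JSONLoader` resolves full JSON pointers rather than bare key names:

```typescript
// Sketch: recursively collect string values stored under any of the
// requested keys, anywhere in a parsed JSON tree.
function extractByPointerKeys(value: unknown, keys: string[]): string[] {
  const out: string[] = [];
  const walk = (node: unknown): void => {
    if (node === null || typeof node !== "object") return;
    if (Array.isArray(node)) {
      node.forEach(walk);
      return;
    }
    for (const [k, v] of Object.entries(node)) {
      if (keys.includes(k) && typeof v === "string") {
        out.push(v);
      } else {
        walk(v);
      }
    }
  };
  walk(value);
  return out;
}

const data = {
  "1": { from: "LinkedIn Job", labels: ["IMPORTANT"] },
  "2": { from: "LinkedIn Job2", other: { surname: "bob" } },
};
console.log(extractByPointerKeys(data, ["from", "surname"]));
// [ 'LinkedIn Job', 'LinkedIn Job2', 'bob' ]
```

Note how the nested `surname` value is found even though it lives one level deeper than the `from` entries, matching the output of the pointer-based example above.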
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/example_selectors_length_based.mdx
# How to select examples by length :::info Prerequisites This guide assumes familiarity with the following concepts: - [Prompt templates](/docs/concepts/prompt_templates) - [Example selectors](/docs/how_to/example_selectors) ::: This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more. import CodeBlock from "@theme/CodeBlock"; import ExampleLength from "@examples/prompts/length_based_example_selector.ts"; <CodeBlock language="typescript">{ExampleLength}</CodeBlock> ## Next steps You've now learned a bit about using a length-based example selector. Next, check out this guide on how to use a [similarity-based example selector](/docs/how_to/example_selectors_similarity).
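The selection rule described above can be sketched in a few lines: keep adding examples while the running word count (the examples plus the new input) stays under a budget, so longer inputs leave room for fewer examples. `selectByLength` is an assumed name for illustration, not the `LengthBasedExampleSelector` API:

```typescript
// Sketch of length-based example selection with a word-count budget.
interface Example {
  input: string;
  output: string;
}

const wordCount = (s: string) => s.split(/\s+/).filter(Boolean).length;

function selectByLength(
  examples: Example[],
  input: string,
  maxWords: number
): Example[] {
  // The input consumes part of the budget up front.
  let remaining = maxWords - wordCount(input);
  const selected: Example[] = [];
  for (const ex of examples) {
    const cost = wordCount(`${ex.input} ${ex.output}`);
    if (cost > remaining) break;
    selected.push(ex);
    remaining -= cost;
  }
  return selected;
}

const examples = [
  { input: "happy", output: "sad" },
  { input: "tall", output: "short" },
  { input: "energetic", output: "lethargic" },
];
// A short input leaves room for all examples...
console.log(selectByLength(examples, "big", 10).length); // 3
// ...while a long input crowds them out.
console.log(
  selectByLength(examples, "one two three four five six seven eight nine", 10).length
); // 0
```

The real selector measures length with a configurable function (by default counting words and newlines) but follows the same shrinking-budget logic.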
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/sequence.ipynb
// @lc-docs-hide-cell import { ChatOpenAI } from '@langchain/openai'; const model = new ChatOpenAI({ model: "gpt-4o", temperature: 0, })import { StringOutputParser } from "@langchain/core/output_parsers"; import { ChatPromptTemplate } from "@langchain/core/prompts"; const prompt = ChatPromptTemplate.fromTemplate("tell me a joke about {topic}") const chain = prompt.pipe(model).pipe(new StringOutputParser())await chain.invoke({ topic: "bears" })import { RunnableLambda } from "@langchain/core/runnables"; const analysisPrompt = ChatPromptTemplate.fromTemplate("is this a funny joke? {joke}") const composedChain = new RunnableLambda({ func: async (input: { topic: string }) => { const result = await chain.invoke(input); return { joke: result }; } }).pipe(analysisPrompt).pipe(model).pipe(new StringOutputParser()) await composedChain.invoke({ topic: "bears" })import { RunnableSequence } from "@langchain/core/runnables"; const composedChainWithLambda = RunnableSequence.from([ chain, (input) => ({ joke: input }), analysisPrompt, model, new StringOutputParser() ]) await composedChainWithLambda.invoke({ topic: "beets" })
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tools_few_shot.ipynb
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});

/**
 * Note that the descriptions here are crucial, as they will be passed along
 * to the model along with the class name.
 */
const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const calculatorTool = tool(
  async ({ operation, number1, number2 }) => {
    // Functions must return strings
    if (operation === "add") {
      return `${number1 + number2}`;
    } else if (operation === "subtract") {
      return `${number1 - number2}`;
    } else if (operation === "multiply") {
      return `${number1 * number2}`;
    } else if (operation === "divide") {
      return `${number1 / number2}`;
    } else {
      throw new Error("Invalid operation.");
    }
  },
  {
    name: "calculator",
    description: "Can perform mathematical operations.",
    schema: calculatorSchema,
  }
);

const llmWithTools = llm.bindTools([calculatorTool]);
```

```typescript
const res = await llmWithTools.invoke("What is 3 🦜 12");

console.log(res.content);
console.log(res.tool_calls);
```

```typescript
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";

const res = await llmWithTools.invoke([
  new HumanMessage("What is 333382 🦜 1932?"),
  new AIMessage({
    content:
      "The 🦜 operator is shorthand for division, so we call the divide tool.",
    tool_calls: [
      {
        id: "12345",
        name: "calculator",
        args: {
          number1: 333382,
          number2: 1932,
          operation: "divide",
        },
      },
    ],
  }),
  new ToolMessage({
    tool_call_id: "12345",
    content: "The answer is 172.558.",
  }),
  new AIMessage("The answer is 172.558."),
  new HumanMessage("What is 6 🦜 2?"),
  new AIMessage({
    content:
      "The 🦜 operator is shorthand for division, so we call the divide tool.",
    tool_calls: [
      {
        id: "54321",
        name: "calculator",
        args: {
          number1: 6,
          number2: 2,
          operation: "divide",
        },
      },
    ],
  }),
  new ToolMessage({
    tool_call_id: "54321",
    content: "The answer is 3.",
  }),
  new AIMessage("The answer is 3."),
  new HumanMessage("What is 3 🦜 12?"),
]);

console.log(res.tool_calls);
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/lcel_cheatsheet.ipynb
```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable = RunnableLambda.from((x: number) => x.toString());

await runnable.invoke(5);
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable = RunnableLambda.from((x: number) => x.toString());

await runnable.batch([7, 8, 9]);
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

async function* generatorFn(x: number[]) {
  for (const i of x) {
    yield i.toString();
  }
}

const runnable = RunnableLambda.from(generatorFn);

const stream = await runnable.stream([0, 1, 2, 3, 4]);

for await (const chunk of stream) {
  console.log(chunk);
  console.log("---");
}
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from((x: any) => {
  return { foo: x };
});

const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));

const chain = runnable1.pipe(runnable2);

await chain.invoke(2);
```

```typescript
import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from((x: any) => {
  return { foo: x };
});

const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));

const chain = RunnableSequence.from([runnable1, runnable2]);

await chain.invoke(2);
```

```typescript
import { RunnableLambda, RunnableParallel } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from((x: any) => {
  return { foo: x };
});

const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));

const chain = RunnableParallel.from({
  first: runnable1,
  second: runnable2,
});

await chain.invoke(2);
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const adder = (x: number) => {
  return x + 5;
};

const runnable = RunnableLambda.from(adder);

await runnable.invoke(5);
```

```typescript
import { RunnableLambda, RunnablePassthrough } from "@langchain/core/runnables";

const runnable = RunnableLambda.from((x: { foo: number }) => {
  return x.foo + 7;
});

const chain = RunnablePassthrough.assign({
  bar: runnable,
});

await chain.invoke({ foo: 10 });
```

```typescript
import {
  RunnableLambda,
  RunnableParallel,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const runnable = RunnableLambda.from((x: { foo: number }) => {
  return x.foo + 7;
});

const chain = RunnableParallel.from({
  bar: runnable,
  baz: new RunnablePassthrough(),
});

await chain.invoke({ foo: 10 });
```

```typescript
import { type RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const branchedFn = (mainArg: Record<string, any>, config?: RunnableConfig) => {
  if (config?.configurable?.boundKey !== undefined) {
    return { ...mainArg, boundKey: config?.configurable?.boundKey };
  }
  return mainArg;
};

const runnable = RunnableLambda.from(branchedFn);
const boundRunnable = runnable.bind({ configurable: { boundKey: "goodbye!" } });

await boundRunnable.invoke({ bar: "hello" });
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable = RunnableLambda.from((x: any) => {
  throw new Error("Error case");
});

const fallback = RunnableLambda.from((x: any) => x + x);

const chain = runnable.withFallbacks([fallback]);

await chain.invoke("foo");
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

let counter = 0;

const retryFn = (_: any) => {
  counter++;
  console.log(`attempt with counter ${counter}`);
  throw new Error("Expected error");
};

const chain = RunnableLambda.from(retryFn).withRetry({
  stopAfterAttempt: 2,
});

await chain.invoke(2);
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from(async (x: any) => {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  return { foo: x };
});

// Takes 4 seconds
await runnable1.batch([1, 2, 3], { maxConcurrency: 2 });
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from(async (x: any) => {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  return { foo: x };
}).withConfig({
  maxConcurrency: 2,
});

// Takes 4 seconds
await runnable1.batch([1, 2, 3]);
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from((x: any) => {
  return { foo: x };
});

const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));

const chain = RunnableLambda.from((x: number): any => {
  if (x > 6) {
    return runnable1;
  }
  return runnable2;
});

await chain.invoke(7);
```

```typescript
await chain.invoke(5);
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from((x: number) => {
  return {
    foo: x,
  };
}).withConfig({
  runName: "first",
});

async function* generatorFn(x: { foo: number }) {
  for (let i = 0; i < x.foo; i++) {
    yield i.toString();
  }
}

const runnable2 = RunnableLambda.from(generatorFn).withConfig({
  runName: "second",
});

const chain = runnable1.pipe(runnable2);

for await (const event of chain.streamEvents(2, { version: "v1" })) {
  console.log(
    `event=${event.event} | name=${event.name} | data=${JSON.stringify(event.data)}`
  );
}
```

```typescript
import { RunnableLambda, RunnablePassthrough } from "@langchain/core/runnables";

const runnable = RunnableLambda.from((x: { baz: number }) => {
  return x.baz + 5;
});

const chain = RunnablePassthrough.assign({
  foo: runnable,
}).pick(["foo", "bar"]);

await chain.invoke({ bar: "hi", baz: 2 });
```

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from((x: number) => [...Array(x).keys()]);
const runnable2 = RunnableLambda.from((x: number) => x + 5);

const chain = runnable1.pipe(runnable2.map());

await chain.invoke(3);
```

```typescript
import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";

const runnable1 = RunnableLambda.from((x: any) => {
  return { foo: x };
});

const runnable2 = RunnableLambda.from((x: any) => [x].concat([x]));

const runnable3 = RunnableLambda.from((x: any) => x.toString());

const chain = RunnableSequence.from([
  runnable1,
  {
    second: runnable2,
    third: runnable3,
  },
]);

await chain.getGraph();
```
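Conceptually, `.pipe` composes runnables much the way ordinary async functions compose. The following is a plain-TypeScript sketch of that idea only, not the LangChain API; the names `pipe`, `step1`, and `step2` are ours:

```typescript
// A toy analogue of Runnable piping: `pipe(f, g)` produces a function that
// feeds f's (possibly async) output into g, just as runnable1.pipe(runnable2)
// feeds runnable1's output into runnable2.
const pipe =
  <A, B, C>(f: (a: A) => B | Promise<B>, g: (b: B) => C | Promise<C>) =>
  async (a: A): Promise<C> =>
    g(await f(a));

// Mirrors the runnable1 -> runnable2 chain above:
const step1 = (x: number) => ({ foo: x });
const step2 = (x: { foo: number }) => [x, x];

const toyChain = pipe(step1, step2);
```

Real runnables add batching, streaming, retries, and config propagation on top of this basic composition.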
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/prompts_partial.mdx
# How to partially format prompt templates

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Prompt templates](/docs/concepts/prompt_templates)

:::

Like partially binding arguments to a function, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values.

LangChain supports this in two ways:

1. Partial formatting with string values.
2. Partial formatting with functions that return string values.

In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.

## Partial with strings

One common use case for wanting to partial a prompt template is if you get access to some of the variables in a prompt before others. For example, suppose you have a prompt template that requires two variables, `foo` and `baz`. If you get the `foo` value early on in your chain, but the `baz` value later, it can be inconvenient to pass both variables all the way through the chain. Instead, you can partial the prompt template with the `foo` value, then pass the partialed prompt template along and just use that. Below is an example of doing this:

```typescript
import { PromptTemplate } from "langchain/prompts";

const prompt = new PromptTemplate({
  template: "{foo}{bar}",
  inputVariables: ["foo", "bar"],
});

const partialPrompt = await prompt.partial({
  foo: "foo",
});

const formattedPrompt = await partialPrompt.format({
  bar: "baz",
});

console.log(formattedPrompt);

// foobaz
```

You can also just initialize the prompt with the partialed variables.

```typescript
const prompt = new PromptTemplate({
  template: "{foo}{bar}",
  inputVariables: ["bar"],
  partialVariables: {
    foo: "foo",
  },
});

const formattedPrompt = await prompt.format({
  bar: "baz",
});

console.log(formattedPrompt);

// foobaz
```

## Partial with functions

You can also partial with a function. The use case for this is when you have a variable you know you always want to fetch in a common way. A prime example of this is with dates and times. Imagine you have a prompt which you always want to include the current date in. You can't hard-code it in the prompt, and passing it along with the other input variables can be tedious. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date.

```typescript
const getCurrentDate = () => {
  return new Date().toISOString();
};

const prompt = new PromptTemplate({
  template: "Tell me a {adjective} joke about the day {date}",
  inputVariables: ["adjective", "date"],
});

const partialPrompt = await prompt.partial({
  date: getCurrentDate,
});

const formattedPrompt = await partialPrompt.format({
  adjective: "funny",
});

console.log(formattedPrompt);

// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z
```

You can also just initialize the prompt with the partialed variables:

```typescript
const prompt = new PromptTemplate({
  template: "Tell me a {adjective} joke about the day {date}",
  inputVariables: ["adjective"],
  partialVariables: {
    date: getCurrentDate,
  },
});

const formattedPrompt = await prompt.format({
  adjective: "funny",
});

console.log(formattedPrompt);

// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z
```

## Next steps

You've now learned how to partially apply variables to your prompt templates.

Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/docs/how_to/few_shot_examples_chat).
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tools_prompting.ipynb
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const multiplyTool = tool(
  (input) => {
    return (input.first_int * input.second_int).toString();
  },
  {
    name: "multiply",
    description: "Multiply two integers together.",
    schema: z.object({
      first_int: z.number(),
      second_int: z.number(),
    }),
  }
);

console.log(multiplyTool.name);
console.log(multiplyTool.description);
```

```typescript
await multiplyTool.invoke({ first_int: 4, second_int: 5 });
```

```typescript
import { renderTextDescription } from "langchain/tools/render";

const renderedTools = renderTextDescription([multiplyTool]);
```

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const systemPrompt = `You are an assistant that has access to the following set of tools. Here are the names and descriptions for each tool:

{rendered_tools}

Given the user input, return the name and input of the tool to use. Return your response as a JSON blob with 'name' and 'arguments' keys.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPrompt],
  ["user", "{input}"],
]);
```

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";

const chain = prompt.pipe(model).pipe(new JsonOutputParser());

await chain.invoke({
  input: "what's thirteen times 4",
  rendered_tools: renderedTools,
});
```

```typescript
import { RunnableLambda, RunnablePick } from "@langchain/core/runnables";

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputParser())
  .pipe(new RunnablePick("arguments"))
  .pipe(
    new RunnableLambda({
      func: (input) =>
        multiplyTool.invoke({
          first_int: input[0],
          second_int: input[1],
        }),
    })
  );

await chain.invoke({
  input: "what's thirteen times 4",
  rendered_tools: renderedTools,
});
```

```typescript
const addTool = tool(
  (input) => {
    return (input.first_int + input.second_int).toString();
  },
  {
    name: "add",
    description: "Add two integers together.",
    schema: z.object({
      first_int: z.number(),
      second_int: z.number(),
    }),
  }
);

const exponentiateTool = tool(
  (input) => {
    return Math.pow(input.first_int, input.second_int).toString();
  },
  {
    name: "exponentiate",
    description: "Exponentiate the base to the exponent power.",
    schema: z.object({
      first_int: z.number(),
      second_int: z.number(),
    }),
  }
);
```

```typescript
import { StructuredToolInterface } from "@langchain/core/tools";

const tools = [addTool, exponentiateTool, multiplyTool];

const toolChain = (modelOutput) => {
  const toolMap: Record<string, StructuredToolInterface> = Object.fromEntries(
    tools.map((tool) => [tool.name, tool])
  );
  const chosenTool = toolMap[modelOutput.name];
  return new RunnablePick("arguments").pipe(
    new RunnableLambda({
      func: (input) =>
        chosenTool.invoke({
          first_int: input[0],
          second_int: input[1],
        }),
    })
  );
};

const toolChainRunnable = new RunnableLambda({
  func: toolChain,
});

const renderedTools = renderTextDescription(tools);

const systemPrompt = `You are an assistant that has access to the following set of tools. Here are the names and descriptions for each tool:

{rendered_tools}

Given the user input, return the name and input of the tool to use. Return your response as a JSON blob with 'name' and 'arguments' keys.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPrompt],
  ["user", "{input}"],
]);

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputParser())
  .pipe(toolChainRunnable);

await chain.invoke({
  input: "what's 3 plus 1132",
  rendered_tools: renderedTools,
});
```

```typescript
import { RunnablePassthrough } from "@langchain/core/runnables";

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputParser())
  .pipe(RunnablePassthrough.assign({ output: toolChainRunnable }));

await chain.invoke({
  input: "what's 3 plus 1132",
  rendered_tools: renderedTools,
});
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/few_shot.mdx
# Few Shot Prompt Templates

Few shot prompting is a prompting technique which provides the Large Language Model (LLM) with a list of examples, and then asks the LLM to generate some text following the lead of the examples provided.

An example of this is the following:

Say you want your LLM to respond in a specific format. You can few shot prompt the LLM with a list of question-answer pairs so it knows what format to respond in.

```txt
Respond to the user's question in the following format:

Question: What is your name?
Answer: My name is John.

Question: What is your age?
Answer: I am 25 years old.

Question: What is your favorite color?
Answer:
```

Here we left the last `Answer:` undefined so the LLM can fill it in. The LLM will then generate the following:

```txt
Answer: I don't have a favorite color; I don't have preferences.
```

### Use Case

In the following example we're few shotting the LLM to rephrase questions into more general queries.

We provide two sets of examples with specific questions, and rephrased general questions. The `FewShotChatMessagePromptTemplate` will use our examples, and when `.format` is called, we'll see those examples formatted into a string we can pass to the LLM.
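Before reaching for the template classes, it helps to see that few-shot formatting is ultimately just string assembly. The helper below is purely illustrative (its name and shape are ours, not a LangChain API), mirroring the Question/Answer pattern shown earlier:

```typescript
// Hand-rolled illustration of few-shot formatting: examples are rendered into
// "Question/Answer" pairs, and the final question is left open for the model
// to complete.
const examplePairs = [
  { question: "What is your name?", answer: "My name is John." },
  { question: "What is your age?", answer: "I am 25 years old." },
];

const buildFewShotPrompt = (finalQuestion: string): string =>
  examplePairs
    .map((e) => `Question: ${e.question}\nAnswer: ${e.answer}`)
    .join("\n\n") + `\n\nQuestion: ${finalQuestion}\nAnswer:`;

console.log(buildFewShotPrompt("What is your favorite color?"));
```

The template classes below do this assembly for you, and additionally support chat-message outputs, prefixes/suffixes, and example selectors.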
```typescript
import {
  ChatPromptTemplate,
  FewShotChatMessagePromptTemplate,
} from "langchain/prompts";
```

```typescript
const examples = [
  {
    input: "Could the members of The Police perform lawful arrests?",
    output: "what can the members of The Police do?",
  },
  {
    input: "Jan Sindel's was born in what country?",
    output: "what is Jan Sindel's personal history?",
  },
];

const examplePrompt = ChatPromptTemplate.fromTemplate(`Human: {input}
AI: {output}`);

const fewShotPrompt = new FewShotChatMessagePromptTemplate({
  examplePrompt,
  examples,
  inputVariables: [], // no input variables
});
```

```typescript
const formattedPrompt = await fewShotPrompt.format({});

console.log(formattedPrompt);
```

```typescript
[
  HumanMessage {
    lc_namespace: [ 'langchain', 'schema' ],
    content: 'Human: Could the members of The Police perform lawful arrests?\n' +
      'AI: what can the members of The Police do?',
    additional_kwargs: {}
  },
  HumanMessage {
    lc_namespace: [ 'langchain', 'schema' ],
    content: "Human: Jan Sindel's was born in what country?\n" +
      "AI: what is Jan Sindel's personal history?",
    additional_kwargs: {}
  }
]
```

Then, if we use this with another question, the LLM will rephrase the question how we want.
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/core
```

```typescript
import { ChatOpenAI } from "@langchain/openai";
```

```typescript
const model = new ChatOpenAI({});

const examples = [
  {
    input: "Could the members of The Police perform lawful arrests?",
    output: "what can the members of The Police do?",
  },
  {
    input: "Jan Sindel's was born in what country?",
    output: "what is Jan Sindel's personal history?",
  },
];

const examplePrompt = ChatPromptTemplate.fromTemplate(`Human: {input}
AI: {output}`);

const fewShotPrompt = new FewShotChatMessagePromptTemplate({
  prefix:
    "Rephrase the users query to be more general, using the following examples",
  suffix: "Human: {input}",
  examplePrompt,
  examples,
  inputVariables: ["input"],
});

const formattedPrompt = await fewShotPrompt.format({
  input: "What's France's main city?",
});

const response = await model.invoke(formattedPrompt);

console.log(response);
```

```typescript
AIMessage {
  lc_namespace: [ 'langchain', 'schema' ],
  content: 'What is the capital of France?',
  additional_kwargs: { function_call: undefined }
}
```

### Few Shotting With Functions

You can also partial with a function. The use case for this is when you have a variable you know you always want to fetch in a common way. A prime example of this is with dates and times. Imagine you have a prompt which you always want to include the current date in. You can't hard-code it in the prompt, and passing it along with the other input variables can be tedious. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date.
```typescript
const getCurrentDate = () => {
  return new Date().toISOString();
};

const prompt = new FewShotChatMessagePromptTemplate({
  template: "Tell me a {adjective} joke about the day {date}",
  inputVariables: ["adjective", "date"],
});

const partialPrompt = await prompt.partial({
  date: getCurrentDate,
});

const formattedPrompt = await partialPrompt.format({
  adjective: "funny",
});

console.log(formattedPrompt);

// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z
```

### Few Shot vs Chat Few Shot

The chat and non-chat few shot prompt templates act in a similar way. The example below demonstrates both, and the differences in their outputs.

```typescript
import {
  PromptTemplate,
  ChatPromptTemplate,
  FewShotPromptTemplate,
  FewShotChatMessagePromptTemplate,
} from "langchain/prompts";
```

```typescript
const examples = [
  {
    input: "Could the members of The Police perform lawful arrests?",
    output: "what can the members of The Police do?",
  },
  {
    input: "Jan Sindel's was born in what country?",
    output: "what is Jan Sindel's personal history?",
  },
];

const prompt = `Human: {input}
AI: {output}`;

const examplePromptTemplate = PromptTemplate.fromTemplate(prompt);
const exampleChatPromptTemplate = ChatPromptTemplate.fromTemplate(prompt);

const chatFewShotPrompt = new FewShotChatMessagePromptTemplate({
  examplePrompt: exampleChatPromptTemplate,
  examples,
  inputVariables: [], // no input variables
});

const fewShotPrompt = new FewShotPromptTemplate({
  examplePrompt: examplePromptTemplate,
  examples,
  inputVariables: [], // no input variables
});
```

```typescript
console.log("Chat Few Shot: ", await chatFewShotPrompt.formatMessages({}));
/**
Chat Few Shot:  [
  HumanMessage {
    lc_namespace: [ 'langchain', 'schema' ],
    content: 'Human: Could the members of The Police perform lawful arrests?\n' +
      'AI: what can the members of The Police do?',
    additional_kwargs: {}
  },
  HumanMessage {
    lc_namespace: [ 'langchain', 'schema' ],
    content: "Human: Jan Sindel's was born in what country?\n" +
      "AI: what is Jan Sindel's personal history?",
    additional_kwargs: {}
  }
]
 */
```

```typescript
console.log("Few Shot: ", await fewShotPrompt.formatPromptValue({}));
/**
Few Shot:

Human: Could the members of The Police perform lawful arrests?
AI: what can the members of The Police do?

Human: Jan Sindel's was born in what country?
AI: what is Jan Sindel's personal history?
 */
```

Here we can see the main distinctions between `FewShotChatMessagePromptTemplate` and `FewShotPromptTemplate`: input and output values.

`FewShotChatMessagePromptTemplate` works by taking in a list of `ChatPromptTemplate` for examples, and its output is a list of instances of `BaseMessage`.

On the other hand, `FewShotPromptTemplate` works by taking in a `PromptTemplate` for examples, and its output is a string.

## With Non Chat Models

LangChain also provides a class for few shot prompt formatting for non-chat models: `FewShotPromptTemplate`. The API is largely the same, but the output is formatted differently (chat messages vs strings).
### Partials With Functions

```typescript
import { PromptTemplate, FewShotPromptTemplate } from "langchain/prompts";
```

```typescript
const examplePrompt = PromptTemplate.fromTemplate("{foo}{bar}");

const prompt = new FewShotPromptTemplate({
  prefix: "{foo}{bar}",
  examplePrompt,
  inputVariables: ["foo", "bar"],
});

const partialPrompt = await prompt.partial({
  foo: () => Promise.resolve("boo"),
});

const formatted = await partialPrompt.format({ bar: "baz" });

console.log(formatted);
```

```txt
boobaz\n
```

### With Functions and Example Selector

```typescript
import {
  PromptTemplate,
  FewShotPromptTemplate,
  LengthBasedExampleSelector,
} from "langchain/prompts";
```

```typescript
const examplePrompt = PromptTemplate.fromTemplate("An example about {x}");

const exampleSelector = await LengthBasedExampleSelector.fromExamples(
  [{ x: "foo" }, { x: "bar" }],
  { examplePrompt, maxLength: 200 }
);

const prompt = new FewShotPromptTemplate({
  prefix: "{foo}{bar}",
  exampleSelector,
  examplePrompt,
  inputVariables: ["foo", "bar"],
});

const partialPrompt = await prompt.partial({
  foo: () => Promise.resolve("boo"),
});

const formatted = await partialPrompt.format({ bar: "baz" });

console.log(formatted);
```

```txt
boobaz

An example about foo

An example about bar
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/parent_document_retriever.mdx
import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/parent_document_retriever.ts";
import ExampleWithScoreThreshold from "@examples/retrievers/parent_document_retriever_score_threshold.ts";
import ExampleWithChunkHeader from "@examples/retrievers/parent_document_retriever_chunk_header.ts";
import ExampleWithRerank from "@examples/retrievers/parent_document_retriever_rerank.ts";

# How to retrieve the whole document for a chunk

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Retrievers](/docs/concepts/retrievers)
- [Text splitters](/docs/concepts/text_splitters)
- [Retrieval-augmented generation (RAG)](/docs/tutorials/rag)

:::

When splitting documents for retrieval, there are often conflicting desires:

1. You may want to have small documents, so that their embeddings can most accurately reflect their meaning. If documents are too long, then the embeddings can lose meaning.
2. You want to have long enough documents that the context of each chunk is retained.

The [`ParentDocumentRetriever`](https://api.js.langchain.com/classes/langchain.retrievers_parent_document.ParentDocumentRetriever.html) strikes that balance by splitting and storing small chunks of data. During retrieval, it first fetches the small chunks but then looks up the parent ids for those chunks and returns those larger documents.

Note that "parent document" refers to the document that a small chunk originated from. This can either be the whole raw document OR a larger chunk.

This is a more specific form of [generating multiple embeddings per document](/docs/how_to/multi_vector).
## Usage

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/core
```

<CodeBlock language="typescript">{Example}</CodeBlock>

## With Score Threshold

By setting the options in `scoreThresholdOptions` we can force the `ParentDocumentRetriever` to use the `ScoreThresholdRetriever` under the hood. This sets the vector store inside `ScoreThresholdRetriever` as the one we passed when initializing `ParentDocumentRetriever`, while also allowing us to set a score threshold for the retriever.

This can be helpful when you're not sure how many documents you want (or if you are sure, just set the `maxK` option), but you want to make sure that the documents you do get are within a certain relevancy threshold.

Note: if a retriever is passed, `ParentDocumentRetriever` will default to use it for retrieving small chunks, as well as adding documents via the `addDocuments` method.

<CodeBlock language="typescript">{ExampleWithScoreThreshold}</CodeBlock>

## With Contextual chunk headers

Consider a scenario where you want to store a collection of documents in a vector store and perform Q&A tasks on them. Simply splitting documents with overlapping text may not provide sufficient context for LLMs to determine if multiple chunks are referencing the same information, or how to resolve information from contradictory sources.

Tagging each document with metadata is a solution if you know what to filter against, but you may not know ahead of time exactly what kind of queries your vector store will be expected to handle. Including additional contextual information directly in each chunk in the form of headers can help deal with arbitrary queries.

This is particularly important if you have several fine-grained child chunks that need to be correctly retrieved from the vector store.
<CodeBlock language="typescript">{ExampleWithChunkHeader}</CodeBlock>

## With Reranking

When many documents from the vector store are passed to the LLM, final answers sometimes include information from irrelevant chunks, making them less precise and sometimes incorrect. Passing multiple irrelevant documents also makes the call more expensive. So there are two reasons to use reranking: precision and cost.

<CodeBlock language="typescript">{ExampleWithRerank}</CodeBlock>

## Next steps

You've now learned how to use the `ParentDocumentRetriever`.

Next, check out the more general form of [generating multiple embeddings per document](/docs/how_to/multi_vector), the [broader tutorial on RAG](/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/docs/how_to/custom_retriever/).
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/multimodal_inputs.ipynb
```typescript
import * as fs from "node:fs/promises";

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const imageData = await fs.readFile("../../../../examples/hotdog.jpg");
```

```typescript
import { HumanMessage } from "@langchain/core/messages";

const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "what does this image contain?",
    },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    },
  ],
});

const response = await model.invoke([message]);
console.log(response.content);
```

```typescript
import { ChatOpenAI } from "@langchain/openai";

const openAIModel = new ChatOpenAI({
  model: "gpt-4o",
});

const imageUrl =
  "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg";

const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "describe the weather in this image",
    },
    {
      type: "image_url",
      image_url: { url: imageUrl },
    },
  ],
});

const response = await openAIModel.invoke([message]);
console.log(response.content);
```

```typescript
const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "are these two images the same?",
    },
    {
      type: "image_url",
      image_url: { url: imageUrl },
    },
    {
      type: "image_url",
      image_url: { url: imageUrl },
    },
  ],
});

const response = await openAIModel.invoke([message]);
console.log(response.content);
```
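The inline base64 construction above can be factored into a small helper. The helper name is ours, not a LangChain API; it simply builds the `data:` URL form that inline image content blocks expect:

```typescript
// Convert raw image bytes into a base64 `data:` URL suitable for the
// `image_url.url` field of a multimodal message content block.
const toDataUrl = (bytes: Uint8Array, mimeType = "image/jpeg"): string =>
  `data:${mimeType};base64,${Buffer.from(bytes).toString("base64")}`;
```

With it, the first example's URL becomes `toDataUrl(imageData)`, and swapping in PNG input only requires changing the `mimeType` argument.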
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/query_multiple_retrievers.ipynb
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";

const texts = ["Harrison worked at Kensho"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
  collectionName: "harrison",
});
const retrieverHarrison = vectorstore.asRetriever(1);

const textsAnkush = ["Ankush worked at Facebook"];
const embeddingsAnkush = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstoreAnkush = await Chroma.fromTexts(textsAnkush, {}, embeddingsAnkush, {
  collectionName: "ankush",
});
const retrieverAnkush = vectorstoreAnkush.asRetriever(1);
```

```typescript
import { z } from "zod";

const searchSchema = z.object({
  query: z.string().describe("Query to look up"),
  person: z
    .string()
    .describe("Person to look things up for. Should be `HARRISON` or `ANKUSH`."),
});
```

```typescript
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});
```

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence, RunnablePassthrough } from "@langchain/core/runnables";

const system = `You have the ability to issue search queries to get information to help answer user questions.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
```

```typescript
await queryAnalyzer.invoke("where did Harrison Work");
```

```typescript
await queryAnalyzer.invoke("where did ankush Work");
```

```typescript
const retrievers = {
  HARRISON: retrieverHarrison,
  ANKUSH: retrieverAnkush,
};
```

```typescript
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const chain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  const retriever = retrievers[response.person];
  return retriever.invoke(response.query, config);
};

const customChain = new RunnableLambda({ func: chain });
```

```typescript
await customChain.invoke("where did Harrison Work");
```

```typescript
await customChain.invoke("where did ankush Work");
```
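One caveat in the chain above: `retrievers[response.person]` assumes the model returns exactly `HARRISON` or `ANKUSH`. A defensive lookup, sketched below with a hypothetical helper of our own, normalizes the key and fails loudly instead of calling `.invoke` on `undefined`:

```typescript
// Normalize the model-supplied key and throw a clear error on unknown
// values, rather than crashing later on an undefined retriever.
const pickRetriever = <T>(retrievers: Record<string, T>, person: string): T => {
  const key = person.trim().toUpperCase();
  const retriever = retrievers[key];
  if (retriever === undefined) {
    throw new Error(`No retriever registered for person: ${person}`);
  }
  return retriever;
};
```

Inside `chain`, the lookup line would become `const retriever = pickRetriever(retrievers, response.person);`.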
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/cancel_execution.ipynb
```typescript
// @lc-docs-hide-cell
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
});
```

```typescript
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";
import type { Document } from "@langchain/core/documents";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";

const formatDocsAsString = (docs: Document[]) => {
  return docs.map((doc) => doc.pageContent).join("\n\n");
};

const retriever = new TavilySearchAPIRetriever({
  k: 3,
});

const prompt = ChatPromptTemplate.fromTemplate(`
Use the following context to answer questions to the best of your ability:

<context>
{context}
</context>

Question: {question}`);

const chain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
```

```typescript
await chain.invoke("what is the current weather in SF?");
```

```typescript
const controller = new AbortController();

console.time("timer1");

setTimeout(() => controller.abort(), 100);

try {
  await chain.invoke("what is the current weather in SF?", {
    signal: controller.signal,
  });
} catch (e) {
  console.log(e);
}

console.timeEnd("timer1");
```

```typescript
console.time("timer2");

const stream = await chain.stream("what is the current weather in SF?");

for await (const chunk of stream) {
  console.log("chunk", chunk);
  break;
}

console.timeEnd("timer2");
```

```typescript
const controllerForStream = new AbortController();

console.time("timer3");

setTimeout(() => controllerForStream.abort(), 100);

try {
  const streamWithSignal = await chain.stream("what is the current weather in SF?", {
    signal: controllerForStream.signal,
  });
  for await (const chunk of streamWithSignal) {
    console.log(chunk);
    break;
  }
} catch (e) {
  console.log(e);
}

console.timeEnd("timer3");
```
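The timeout-plus-abort pattern used above is general, not specific to LangChain. A reusable wrapper, sketched here as our own helper (`withTimeout` is not a LangChain utility), captures it:

```typescript
// Run any signal-aware async operation, aborting it if it exceeds `ms`.
// The timer is always cleared in `finally` so a completed call doesn't
// leave a live timeout keeping the process alive.
const withTimeout = async <T>(
  fn: (signal: AbortSignal) => Promise<T>,
  ms: number
): Promise<T> => {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fn(controller.signal);
  } finally {
    clearTimeout(timer);
  }
};
```

Since `chain.invoke` and `chain.stream` both accept a `signal` option, they slot directly into `fn`, e.g. `withTimeout((signal) => chain.invoke(question, { signal }), 100)`.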
0
lc_public_repos/langchainjs/docs/core_docs/docs
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/callbacks_custom_events.ipynb
import { RunnableLambda } from "@langchain/core/runnables"; import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch"; const reflect = RunnableLambda.from(async (value: string) => { await dispatchCustomEvent("event1", { reversed: value.split("").reverse().join("") }); await dispatchCustomEvent("event2", 5); return value; }); const eventStream = await reflect.streamEvents("hello world", { version: "v2" }); for await (const event of eventStream) { if (event.event === "on_custom_event") { console.log(event); } }import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables"; import { dispatchCustomEvent as dispatchCustomEventWeb } from "@langchain/core/callbacks/dispatch/web"; const reflect = RunnableLambda.from(async (value: string, config?: RunnableConfig) => { await dispatchCustomEventWeb("event1", { reversed: value.split("").reverse().join("") }, config); await dispatchCustomEventWeb("event2", 5, config); return value; }); const eventStream = await reflect.streamEvents("hello world", { version: "v2" }); for await (const event of eventStream) { if (event.event === "on_custom_event") { console.log(event); } }import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables"; import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch"; const reflect = RunnableLambda.from(async (value: string) => { await dispatchCustomEvent("event1", { reversed: value.split("").reverse().join("") }); await dispatchCustomEvent("event2", 5); return value; }); await reflect.invoke("hello world", { callbacks: [{ handleCustomEvent(eventName, data, runId) { console.log(eventName, data, runId); }, }] });
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/self_query.ipynb
import "peggy"; import { Document } from "@langchain/core/documents"; /** * First, we create a bunch of documents. You can load your own documents here instead. * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below. */ const docs = [ new Document({ pageContent: "A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata: { year: 1993, rating: 7.7, genre: "science fiction", length: 122 }, }), new Document({ pageContent: "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2, length: 148 }, }), new Document({ pageContent: "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 }, }), new Document({ pageContent: "A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3, length: 135 }, }), new Document({ pageContent: "Toys come alive and have a blast doing so", metadata: { year: 1995, genre: "animated", length: 77 }, }), new Document({ pageContent: "Three men walk into the Zone, three men walk out of the Zone", metadata: { year: 1979, director: "Andrei Tarkovsky", genre: "science fiction", rating: 9.9, }, }), ];import { OpenAIEmbeddings, OpenAI } from "@langchain/openai"; import { FunctionalTranslator } from "@langchain/core/structured_query"; import { MemoryVectorStore } from "langchain/vectorstores/memory"; import { SelfQueryRetriever } from "langchain/retrievers/self_query"; import type { AttributeInfo } from "langchain/chains/query_constructor"; /** * We define the attributes we want to be able to query on. * in this case, we want to be able to query on the genre, year, director, rating, and length of the movie. 
 * We also provide a description of each attribute and the type of the attribute.
 * This is used to generate the query prompts.
 */
const attributeInfo: AttributeInfo[] = [
  {
    name: "genre",
    description: "The genre of the movie",
    type: "string or array of strings",
  },
  {
    name: "year",
    description: "The year the movie was released",
    type: "number",
  },
  {
    name: "director",
    description: "The director of the movie",
    type: "string",
  },
  {
    name: "rating",
    description: "The rating of the movie (1-10)",
    type: "number",
  },
  {
    name: "length",
    description: "The length of the movie in minutes",
    type: "number",
  },
];

/**
 * Next, we instantiate a vector store. This is where we store the embeddings of the documents.
 * We also need to provide an embeddings object. This is used to embed the documents.
 */
const embeddings = new OpenAIEmbeddings();
const llm = new OpenAI();
const documentContents = "Brief summary of a movie";
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
  llm,
  vectorStore,
  documentContents,
  attributeInfo,
  /**
   * We need to use a translator that translates the queries into a
   * filter format that the vector store can understand. We provide a basic
   * translator here, but you can create your own by extending the BaseTranslator
   * abstract class. Note that the vector store needs to support filtering on the metadata
   * attributes you want to query on.
   */
  structuredQueryTranslator: new FunctionalTranslator(),
});

await selfQueryRetriever.invoke("Which movies are less than 90 minutes?");

await selfQueryRetriever.invoke("Which movies are rated higher than 8.5?");

await selfQueryRetriever.invoke("Which movies are directed by Greta Gerwig?");

await selfQueryRetriever.invoke("Which movies are either comedy or drama and are less than 90 minutes?");
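Under the hood, the translator compiles the structured query into a filter over document metadata. A toy illustration of that idea — the predicate below is hypothetical, not the translator's actual output:

```typescript
// Toy illustration: a structured query like "rated higher than 8.5"
// ultimately becomes a metadata predicate like this one.
type MovieMetadata = { year: number; rating?: number; genre?: string };

const ratedAbove = (threshold: number) => (metadata: MovieMetadata) =>
  metadata.rating !== undefined && metadata.rating > threshold;

const metadatas: MovieMetadata[] = [
  { year: 2006, rating: 8.6 },
  { year: 1993, rating: 7.7, genre: "science fiction" },
  { year: 1995, genre: "animated" },
];

const matches = metadatas.filter(ratedAbove(8.5));
console.log(matches.length); // 1
```

Only documents whose metadata satisfies the predicate are kept, which is why the vector store itself must support metadata filtering.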
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/streaming_llm.mdx
---
sidebar_position: 1
---

# How to stream responses from an LLM

All [`LLM`s](https://api.js.langchain.com/classes/langchain_core.language_models_llms.BaseLLM.html) implement the [Runnable interface](https://api.js.langchain.com/classes/langchain_core.runnables.Runnable.html), which comes with **default** implementations of standard runnable methods (i.e. `invoke`, `batch`, `stream`, `streamEvents`).

The **default** streaming implementations provide an `AsyncGenerator` that yields a single value: the final output from the underlying chat model provider.

The ability to stream the output token-by-token depends on whether the provider has implemented proper streaming support.

See which [integrations support token-by-token streaming here](/docs/integrations/llms/).

:::note
The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model, as it supports the same standard interface.
:::

## Using `.stream()`

import CodeBlock from "@theme/CodeBlock";

The easiest way to stream is to use the `.stream()` method. This returns a readable stream that you can also iterate over:

import StreamMethodExample from "@examples/models/llm/llm_streaming_stream_method.ts";
import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/core
```

<CodeBlock language="typescript">{StreamMethodExample}</CodeBlock>

For models that do not support streaming, the entire response will be returned as a single chunk.
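Conceptually, the value returned by `.stream()` behaves like an async iterable of chunks. A dependency-free sketch of that contract — the `fakeTokenStream` generator is made up for illustration:

```typescript
// Standalone sketch of the async-iterable contract that streaming exposes:
// a generator yields chunks, and the caller accumulates them with for await.
async function* fakeTokenStream(text: string): AsyncGenerator<string> {
  for (const token of text.split(" ")) {
    yield token + " ";
  }
}

let output = "";
for await (const chunk of fakeTokenStream("Why did the chicken cross the road")) {
  output += chunk;
}
console.log(output.trim()); // "Why did the chicken cross the road"
```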
## Using a callback handler

You can also use a [`CallbackHandler`](https://api.js.langchain.com/classes/langchain_core.callbacks_base.BaseCallbackHandler.html) like so:

import StreamingExample from "@examples/models/llm/llm_streaming.ts";

<CodeBlock language="typescript">{StreamingExample}</CodeBlock>

We still have access to the final `LLMResult` if we use `generate`. However, `tokenUsage` may not currently be supported for all model providers when streaming.
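The callback-handler approach is essentially the observer pattern: the model pushes each new token to every registered handler. A minimal dependency-free sketch, with the handler shape simplified from the real interface:

```typescript
// Simplified observer sketch: each token is pushed to every registered handler.
type TokenHandler = { handleNewToken: (token: string) => void };

function emitTokens(tokens: string[], handlers: TokenHandler[]): void {
  for (const token of tokens) {
    for (const handler of handlers) {
      handler.handleNewToken(token);
    }
  }
}

const seen: string[] = [];
emitTokens(["Hello", ",", " world"], [{ handleNewToken: (t) => seen.push(t) }]);
console.log(seen.join("")); // "Hello, world"
```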
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tool_calling_parallel.ipynb
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";
import { tool } from "@langchain/core/tools";

const adderTool = tool(async ({ a, b }) => {
  return a + b;
}, {
  name: "add",
  description: "Adds a and b",
  schema: z.object({
    a: z.number(),
    b: z.number(),
  }),
});

const multiplyTool = tool(async ({ a, b }) => {
  return a * b;
}, {
  name: "multiply",
  description: "Multiplies a and b",
  schema: z.object({
    a: z.number(),
    b: z.number(),
  }),
});

const tools = [adderTool, multiplyTool];

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const llmWithTools = llm.bindTools(tools, { parallel_tool_calls: false });

const result = await llmWithTools.invoke("Please call the first tool two times");

result.tool_calls;

const llmWithNoBoundParam = llm.bindTools(tools);

const result2 = await llmWithNoBoundParam.invoke("Please call the first tool two times");

result2.tool_calls;

const result3 = await llmWithNoBoundParam.invoke("Please call the first tool two times", {
  parallel_tool_calls: false,
});

result3.tool_calls;
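Once the model returns `tool_calls`, your application is responsible for dispatching each call to the matching tool. A hedged sketch of that dispatch step, with the tool-call shape simplified from LangChain's actual type:

```typescript
// Simplified dispatch: route each tool call to a local implementation by name.
type SimpleToolCall = { name: string; args: { a: number; b: number } };

const toolsByName: Record<string, (args: { a: number; b: number }) => number> = {
  add: ({ a, b }) => a + b,
  multiply: ({ a, b }) => a * b,
};

const calls: SimpleToolCall[] = [
  { name: "add", args: { a: 2, b: 3 } },
  { name: "multiply", args: { a: 2, b: 3 } },
];

const results = calls.map((call) => toolsByName[call.name](call.args));
console.log(results); // [ 5, 6 ]
```

With `parallel_tool_calls: false`, the array you iterate over contains at most one call per model turn.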
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/document_loader_csv.mdx
# How to load CSV data > A [comma-separated values (CSV)](https://en.wikipedia.org/wiki/Comma-separated_values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. Load CSV data with a single row per document. ## Setup ```bash npm2yarn npm install d3-dsv@2 ``` ## Usage, extracting all columns Example CSV file: ```csv id,text 1,This is a sentence. 2,This is another sentence. ``` Example code: ```typescript import { CSVLoader } from "@langchain/community/document_loaders/fs/csv"; const loader = new CSVLoader("src/document_loaders/example_data/example.csv"); const docs = await loader.load(); /* [ Document { "metadata": { "line": 1, "source": "src/document_loaders/example_data/example.csv", }, "pageContent": "id: 1 text: This is a sentence.", }, Document { "metadata": { "line": 2, "source": "src/document_loaders/example_data/example.csv", }, "pageContent": "id: 2 text: This is another sentence.", }, ] */ ``` ## Usage, extracting a single column Example CSV file: ```csv id,text 1,This is a sentence. 2,This is another sentence. ``` Example code: ```typescript import { CSVLoader } from "@langchain/community/document_loaders/fs/csv"; const loader = new CSVLoader( "src/document_loaders/example_data/example.csv", "text" ); const docs = await loader.load(); /* [ Document { "metadata": { "line": 1, "source": "src/document_loaders/example_data/example.csv", }, "pageContent": "This is a sentence.", }, Document { "metadata": { "line": 2, "source": "src/document_loaders/example_data/example.csv", }, "pageContent": "This is another sentence.", }, ] */ ```
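The default "one document per row" behavior can be sketched without the loader at all, assuming commas never appear inside fields:

```typescript
// Illustrative sketch of CSVLoader's default behavior: one document per row,
// with pageContent rendered as "column: value" lines. Assumes no quoted commas.
const csv = `id,text
1,This is a sentence.
2,This is another sentence.`;

const [header, ...rows] = csv.split("\n");
const keys = header.split(",");

const pageContents = rows.map((row) => {
  const values = row.split(",");
  return keys.map((key, i) => `${key}: ${values[i]}`).join("\n");
});

console.log(pageContents.length); // 2
console.log(pageContents[0]); // "id: 1\ntext: This is a sentence."
```

The real loader additionally attaches `source` and `line` metadata to each document, as shown in the outputs above.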
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/message_history.ipynb
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});

import { START, END, MessagesAnnotation, StateGraph, MemorySaver } from "@langchain/langgraph";

// Define the function that calls the model
const callModel = async (state: typeof MessagesAnnotation.State) => {
  const response = await llm.invoke(state.messages);
  // Update message history with response:
  return { messages: response };
};

// Define a new graph
const workflow = new StateGraph(MessagesAnnotation)
  // Define the (single) node in the graph
  .addNode("model", callModel)
  .addEdge(START, "model")
  .addEdge("model", END);

// Add memory
const memory = new MemorySaver();
const app = workflow.compile({ checkpointer: memory });

import { v4 as uuidv4 } from "uuid";

const config = { configurable: { thread_id: uuidv4() } };

const input = [
  {
    role: "user",
    content: "Hi! I'm Bob.",
  },
];

const output = await app.invoke({ messages: input }, config);
// The output contains all messages in the state.
// This will log the last message in the conversation.
console.log(output.messages[output.messages.length - 1]);const input2 = [ { role: "user", content: "What's my name?", } ] const output2 = await app.invoke({ messages: input2 }, config) console.log(output2.messages[output2.messages.length - 1]);const config2 = { configurable: { thread_id: uuidv4() } } const input3 = [ { role: "user", content: "What's my name?", } ] const output3 = await app.invoke({ messages: input3 }, config2) console.log(output3.messages[output3.messages.length - 1]);import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts"; const prompt = ChatPromptTemplate.fromMessages([ ["system", "Answer in {language}."], new MessagesPlaceholder("messages"), ]) const runnable = prompt.pipe(llm);import { START, END, StateGraph, MemorySaver, MessagesAnnotation, Annotation } from "@langchain/langgraph"; // Define the State // highlight-next-line const GraphAnnotation = Annotation.Root({ // highlight-next-line language: Annotation<string>(), // Spread `MessagesAnnotation` into the state to add the `messages` field. 
// highlight-next-line ...MessagesAnnotation.spec, }) // Define the function that calls the model const callModel2 = async (state: typeof GraphAnnotation.State) => { const response = await runnable.invoke(state); // Update message history with response: return { messages: [response] }; }; const workflow2 = new StateGraph(GraphAnnotation) .addNode("model", callModel2) .addEdge(START, "model") .addEdge("model", END); const app2 = workflow2.compile({ checkpointer: new MemorySaver() });const config3 = { configurable: { thread_id: uuidv4() } } const input4 = { messages: [ { role: "user", content: "What's my name?", } ], language: "Spanish", } const output4 = await app2.invoke(input4, config3) console.log(output4.messages[output4.messages.length - 1]);const state = (await app2.getState(config3)).values console.log(`Language: ${state.language}`); console.log(state.messages)const _ = await app2.updateState(config3, { messages: [{ role: "user", content: "test" }]})const state2 = (await app2.getState(config3)).values console.log(`Language: ${state2.language}`); console.log(state2.messages)
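The thread-scoped memory shown above boils down to keeping a separate message list per `thread_id`. A toy sketch of that bookkeeping, independent of LangGraph's checkpointer:

```typescript
// Toy per-thread history store, mimicking what the checkpointer does for us.
type SimpleMessage = { role: "user" | "assistant"; content: string };

const histories = new Map<string, SimpleMessage[]>();

function appendMessage(threadId: string, message: SimpleMessage): SimpleMessage[] {
  const history = histories.get(threadId) ?? [];
  history.push(message);
  histories.set(threadId, history);
  return history;
}

appendMessage("thread-1", { role: "user", content: "Hi! I'm Bob." });
appendMessage("thread-1", { role: "assistant", content: "Hello Bob!" });
appendMessage("thread-2", { role: "user", content: "What's my name?" });

// thread-2 cannot see thread-1's messages, just like a fresh thread_id above.
console.log(histories.get("thread-1")!.length); // 2
console.log(histories.get("thread-2")!.length); // 1
```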
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tools_builtin.ipynb
import { WikipediaQueryRun } from "@langchain/community/tools/wikipedia_query_run";

const tool = new WikipediaQueryRun({
  topKResults: 1,
  maxDocContentLength: 100,
});

tool.name;

tool.description;

import { zodToJsonSchema } from "zod-to-json-schema";

zodToJsonSchema(tool.schema);

tool.returnDirect;

await tool.invoke({ input: "langchain" });

await tool.invoke("langchain");
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/installation.mdx
---
sidebar_position: 1
---

# Installation

## Supported Environments

LangChain is written in TypeScript and can be used in:

- Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x
- Cloudflare Workers
- Vercel / Next.js (Browser, Serverless and Edge functions)
- Supabase Edge Functions
- Browser
- Deno
- Bun

However, note that individual integrations may not be supported in all environments.

## Installation

To install the main `langchain` package, run:

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import CodeBlock from "@theme/CodeBlock";

```bash npm2yarn
npm install langchain @langchain/core
```

While this package acts as a sane starting point to using LangChain, much of the value of LangChain comes when integrating it with various model providers, datastores, etc. By default, the dependencies needed to do that are NOT installed. You will need to install the dependencies for specific integrations separately. We'll show how to do that in the next sections of this guide.

Please also see the section on [installing integration packages](/docs/how_to/installation/#installing-integration-packages) for some special considerations when installing LangChain packages.

## Ecosystem packages

With the exception of the `langsmith` SDK, all packages in the LangChain ecosystem depend on `@langchain/core`, which contains base classes and abstractions that other packages use. The dependency graph below shows how the different packages are related. A directed arrow indicates that the source package depends on the target package:

![](/img/ecosystem_packages.png)

**Note:** It is important that your app only uses one version of `@langchain/core`. Common package managers may introduce additional versions when resolving direct dependencies, even if you don't intend this. See [this section on installing integration packages](/docs/how_to/installation/#installing-integration-packages) for more information and ways to remedy this.
### @langchain/community

The [@langchain/community](https://www.npmjs.com/package/@langchain/community) package contains a range of third-party integrations. Install with:

```bash npm2yarn
npm install @langchain/community @langchain/core
```

There are also more granular packages containing LangChain integrations for individual providers.

### @langchain/core

The [@langchain/core](https://www.npmjs.com/package/@langchain/core) package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It should be installed separately:

```bash npm2yarn
npm install @langchain/core
```

### LangGraph

[LangGraph.js](https://langchain-ai.github.io/langgraphjs/) is a library for building stateful, multi-actor applications with LLMs. It integrates smoothly with LangChain, but can be used without it. Install with:

```bash npm2yarn
npm install @langchain/langgraph @langchain/core
```

### LangSmith SDK

The LangSmith SDK is automatically installed by LangChain. If you're not using it with LangChain, install with:

```bash npm2yarn
npm install langsmith
```

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

## Installing integration packages

LangChain supports packages that contain module integrations with individual third-party providers. They can be as specific as [`@langchain/anthropic`](/docs/integrations/platforms/anthropic/), which contains integrations just for Anthropic models, or as broad as [`@langchain/community`](https://www.npmjs.com/package/@langchain/community), which contains a broader variety of community-contributed integrations.

These packages, as well as the main LangChain package, all have [`@langchain/core`](https://www.npmjs.com/package/@langchain/core) as a peer dependency to avoid package managers installing multiple versions of the same package.
It contains the base abstractions that these integration packages extend. To ensure that all integrations and their types interact with each other properly, it is important that they all use the same version of `@langchain/core`. If you encounter type errors around base classes, you may need to guarantee that your package manager is resolving a single version of `@langchain/core`. To do so, you can add a `"resolutions"` or `"overrides"` field like the following in your project's `package.json`. The name will depend on your package manager: :::tip The `resolutions` or `pnpm.overrides` fields for `yarn` or `pnpm` must be set in the root `package.json` file. ::: If you are using `yarn`: ```json title="yarn package.json" { "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/anthropic": "^0.0.2", "@langchain/core": "^0.3.0", "langchain": "0.0.207" }, "resolutions": { "@langchain/core": "0.3.0" } } ``` You can also try running the [`yarn dedupe`](https://yarnpkg.com/cli/dedupe) command if you are on `yarn` version 2 or higher. Or for `npm`: ```json title="npm package.json" { "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/anthropic": "^0.0.2", "@langchain/core": "^0.3.0", "langchain": "0.0.207" }, "overrides": { "@langchain/core": "0.3.0" } } ``` You can also try the [`npm dedupe`](https://docs.npmjs.com/cli/commands/npm-dedupe) command. Or for `pnpm`: ```json title="pnpm package.json" { "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/anthropic": "^0.0.2", "@langchain/core": "^0.3.0", "langchain": "0.0.207" }, "pnpm": { "overrides": { "@langchain/core": "0.3.0" } } } ``` You can also try the [`pnpm dedupe`](https://pnpm.io/cli/dedupe) command. 
## Loading the library ### TypeScript LangChain is written in TypeScript and provides type definitions for all of its public APIs. ### ESM LangChain provides an ESM build targeting Node.js environments. You can import it using the following syntax: ```bash npm2yarn npm install @langchain/openai @langchain/core ``` ```typescript import { ChatOpenAI } from "@langchain/openai"; ``` If you are using TypeScript in an ESM project we suggest updating your `tsconfig.json` to include the following: ```json title="tsconfig.json" { "compilerOptions": { ... "target": "ES2020", // or higher "module": "nodenext", } } ``` ### CommonJS LangChain provides a CommonJS build targeting Node.js environments. You can import it using the following syntax: ```typescript const { ChatOpenAI } = require("@langchain/openai"); ``` ### Cloudflare Workers LangChain can be used in Cloudflare Workers. You can import it using the following syntax: ```typescript import { ChatOpenAI } from "@langchain/openai"; ``` ### Vercel / Next.js LangChain can be used in Vercel / Next.js. We support using LangChain in frontend components, in Serverless functions and in Edge functions. You can import it using the following syntax: ```typescript import { ChatOpenAI } from "@langchain/openai"; ``` ### Deno / Supabase Edge Functions LangChain can be used in Deno / Supabase Edge Functions. You can import it using the following syntax: ```typescript import { ChatOpenAI } from "https://esm.sh/@langchain/openai"; ``` or ```typescript import { ChatOpenAI } from "npm:@langchain/openai"; ``` ### Browser LangChain can be used in the browser. In our CI we test bundling LangChain with Webpack and Vite, but other bundlers should work too. You can import it using the following syntax: ```typescript import { ChatOpenAI } from "@langchain/openai"; ``` ## Unsupported: Node.js 16 We do not support Node.js 16, but if you still want to run LangChain on Node.js 16, you will need to follow the instructions in this section. 
We do not guarantee that these instructions will continue to work in the future.

You will have to make `fetch` available globally, either:

- run your application with `NODE_OPTIONS='--experimental-fetch' node ...`, or
- install `node-fetch` and follow the instructions [here](https://github.com/node-fetch/node-fetch#providing-global-access)

You'll also need to [polyfill `ReadableStream`](https://www.npmjs.com/package/web-streams-polyfill) by installing:

```bash npm2yarn
npm i web-streams-polyfill@4
```

And then adding it to the global namespace in your main entrypoint:

```typescript
import "web-streams-polyfill/polyfill";
```

Additionally you'll have to polyfill `structuredClone`, e.g. by installing `core-js` and following the instructions [here](https://github.com/zloirock/core-js).

If you are running Node.js 18+, you do not need to do anything.
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/chat_token_usage_tracking.mdx
--- sidebar_position: 5 --- # How to track token usage :::info Prerequisites This guide assumes familiarity with the following concepts: - [Chat models](/docs/concepts/chat_models) ::: This notebook goes over how to track your token usage for specific calls. ## Using `AIMessage.usage_metadata` A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model. LangChain `AIMessage` objects include a [`usage_metadata`](https://api.js.langchain.com/classes/langchain_core.messages.AIMessage.html#usage_metadata) attribute for supported providers. When populated, this attribute will be an object with standard keys (e.g., "input_tokens" and "output_tokens"). #### OpenAI import CodeBlock from "@theme/CodeBlock"; import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; <IntegrationInstallTooltip></IntegrationInstallTooltip> ```bash npm2yarn npm install @langchain/openai @langchain/core ``` import UsageMetadataExample from "@examples/models/chat/usage_metadata.ts"; <CodeBlock language="typescript">{UsageMetadataExample}</CodeBlock> #### Anthropic ```bash npm2yarn npm install @langchain/anthropic @langchain/core ``` import UsageMetadataExampleAnthropic from "@examples/models/chat/usage_metadata_anthropic.ts"; <CodeBlock language="typescript">{UsageMetadataExampleAnthropic}</CodeBlock> ## Using `AIMessage.response_metadata` A number of model providers return token usage information as part of the chat generation response. When available, this is included in the `AIMessage.response_metadata` field. 
#### OpenAI import Example from "@examples/models/chat/token_usage_tracking.ts"; <CodeBlock language="typescript">{Example}</CodeBlock> #### Anthropic import AnthropicExample from "@examples/models/chat/token_usage_tracking_anthropic.ts"; <CodeBlock language="typescript">{AnthropicExample}</CodeBlock> ## Streaming Some providers support token count metadata in a streaming context. #### OpenAI For example, OpenAI will return a message chunk at the end of a stream with token usage information. This behavior is supported by `@langchain/openai` >= 0.1.0 and can be enabled by passing a `stream_options` parameter when making your call. :::info By default, the last message chunk in a stream will include a `finish_reason` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `finish_reason` appears on the second to last message chunk. ::: import OpenAIStreamTokens from "@examples/models/chat/integration_openai_stream_tokens.ts"; <CodeBlock language="typescript">{OpenAIStreamTokens}</CodeBlock> ## Using callbacks You can also use the `handleLLMEnd` callback to get the full output from the LLM, including token usage for supported models. Here's an example of how you could do that: import CallbackExample from "@examples/models/chat/token_usage_tracking_callback.ts"; <CodeBlock language="typescript">{CallbackExample}</CodeBlock> ## Next steps You've now seen a few examples of how to track chat model token usage for supported providers. Next, check out the other how-to guides on chat models in this section, like [how to get a model to return structured output](/docs/how_to/structured_output) or [how to add caching to your chat models](/docs/how_to/chat_model_caching).
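When usage information arrives across multiple chunks, the counts can simply be summed field by field. A sketch assuming the standard `input_tokens`/`output_tokens`/`total_tokens` keys described above (the per-chunk values below are made up):

```typescript
// Sketch: summing usage_metadata across stream chunks, field by field.
type UsageMetadata = { input_tokens: number; output_tokens: number; total_tokens: number };

function addUsage(a: UsageMetadata, b: UsageMetadata): UsageMetadata {
  return {
    input_tokens: a.input_tokens + b.input_tokens,
    output_tokens: a.output_tokens + b.output_tokens,
    total_tokens: a.total_tokens + b.total_tokens,
  };
}

// Hypothetical per-chunk usage values:
const chunkUsages: UsageMetadata[] = [
  { input_tokens: 8, output_tokens: 0, total_tokens: 8 },
  { input_tokens: 0, output_tokens: 9, total_tokens: 9 },
];

const total = chunkUsages.reduce(addUsage);
console.log(total); // { input_tokens: 8, output_tokens: 9, total_tokens: 17 }
```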
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/query_no_queries.ipynb
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";

const texts = ["Harrison worked at Kensho"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
  collectionName: "harrison",
});
const retriever = vectorstore.asRetriever(1);

import { z } from "zod";

const searchSchema = z.object({
  query: z.string().describe("Similarity search query applied to job record."),
});

// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});

import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence, RunnablePassthrough } from "@langchain/core/runnables";

const system = `You have the ability to issue search queries to get information to help answer user questions. You do not NEED to look things up.
If you don't need to, then just respond normally.`; const prompt = ChatPromptTemplate.fromMessages( [ ["system", system], ["human", "{question}"], ] ) const llmWithTools = llm.bind({ tools: [{ type: "function" as const, function: { name: "search", description: "Search over a database of job records.", parameters: zodToJsonSchema(searchSchema), } }] }) const queryAnalyzer = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, llmWithTools ])await queryAnalyzer.invoke("where did Harrison work")await queryAnalyzer.invoke("hi!")import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools"; const outputParser = new JsonOutputKeyToolsParser({ keyName: "search", })import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables"; const chain = async (question: string, config?: RunnableConfig) => { const response = await queryAnalyzer.invoke(question, config); if ("tool_calls" in response.additional_kwargs && response.additional_kwargs.tool_calls !== undefined) { const query = await outputParser.invoke(response, config); return retriever.invoke(query[0].query, config); } else { return response; } } const customChain = new RunnableLambda({ func: chain });await customChain.invoke("where did Harrison Work")await customChain.invoke("hi!")
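The custom chain above routes on whether the model produced a tool call. The decision itself is simple; a stripped-down sketch with the response shape simplified from LangChain's actual message type:

```typescript
// Stripped-down routing sketch: retrieve only when the model issued a search call.
type SimpleResponse = { content: string; toolCalls?: { query: string }[] };

function route(response: SimpleResponse): string {
  if (response.toolCalls && response.toolCalls.length > 0) {
    return `retrieve: ${response.toolCalls[0].query}`;
  }
  return response.content;
}

const withCall = route({ content: "", toolCalls: [{ query: "Harrison employer" }] });
const withoutCall = route({ content: "Hello! How can I help?" });

console.log(withCall); // "retrieve: Harrison employer"
console.log(withoutCall); // "Hello! How can I help?"
```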
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/logprobs.ipynb
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  logprobs: true,
});

const responseMessage = await model.invoke("how are you today?");

responseMessage.response_metadata.logprobs.content.slice(0, 5);

let count = 0;
const stream = await model.stream("How are you today?");
let aggregateResponse;

for await (const chunk of stream) {
  if (count > 5) {
    break;
  }
  if (aggregateResponse === undefined) {
    aggregateResponse = chunk;
  } else {
    aggregateResponse = aggregateResponse.concat(chunk);
  }
  console.log(aggregateResponse.response_metadata.logprobs?.content);
  count++;
}

const modelWithTopLogprobs = new ChatOpenAI({
  model: "gpt-4o",
  logprobs: true,
  topLogprobs: 3,
});

const res = await modelWithTopLogprobs.invoke("how are you today?");

res.response_metadata.logprobs.content.slice(0, 5);
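Each `logprob` in the metadata is the natural log of the token's probability, so exponentiating recovers the probability itself. A quick sketch with a made-up value:

```typescript
// A logprob is ln(p); Math.exp() recovers the probability. Value below is made up.
const logprob = -0.3;
const probability = Math.exp(logprob);
console.log(probability.toFixed(3)); // "0.741"
```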
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/query_high_cardinality.ipynb
import { faker } from "@faker-js/faker"; const names = Array.from({ length: 10000 }, () => (faker as any).person.fullName());names[0]names[567]import { z } from "zod"; const searchSchema = z.object({ query: z.string(), author: z.string(), })// @lc-docs-hide-cell import { ChatOpenAI } from '@langchain/openai'; const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0, })import { ChatPromptTemplate } from "@langchain/core/prompts"; import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables"; const system = `Generate a relevant search query for a library system`; const prompt = ChatPromptTemplate.fromMessages( [ ["system", system], ["human", "{question}"], ] ) const llmWithTools = llm.withStructuredOutput(searchSchema, { name: "Search" }) const queryAnalyzer = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, llmWithTools ]);await queryAnalyzer.invoke("what are books about aliens by Jesse Knight")await queryAnalyzer.invoke("what are books about aliens by jess knight")const systemTemplate = `Generate a relevant search query for a library system using the 'search' tool. 
The 'author' you return to the user MUST be one of the following authors: {authors} Do NOT hallucinate author name!` const basePrompt = ChatPromptTemplate.fromMessages( [ ["system", systemTemplate], ["human", "{question}"], ] ) const promptWithAuthors = await basePrompt.partial({ authors: names.join(", ") }) const queryAnalyzerAll = RunnableSequence.from([ { question: new RunnablePassthrough(), }, promptWithAuthors, llmWithTools ])try { const res = await queryAnalyzerAll.invoke("what are books about aliens by jess knight") } catch (e) { console.error(e) }// @lc-docs-hide-cell import { ChatOpenAI } from '@langchain/openai'; const llmLong = new ChatOpenAI({ model: "gpt-4o", temperature: 0, })const structuredLlmLong = llmLong.withStructuredOutput(searchSchema, { name: "Search" }); const queryAnalyzerAllLong = RunnableSequence.from([ { question: new RunnablePassthrough(), }, prompt, structuredLlmLong ]);await queryAnalyzerAllLong.invoke("what are books about aliens by jess knight")import { OpenAIEmbeddings } from "@langchain/openai"; import { MemoryVectorStore } from "langchain/vectorstores/memory"; const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small", }) const vectorstore = await MemoryVectorStore.fromTexts(names, {}, embeddings); const selectNames = async (question: string) => { const _docs = await vectorstore.similaritySearch(question, 10); const _names = _docs.map(d => d.pageContent); return _names.join(", "); } const createPrompt = RunnableSequence.from([ { question: new RunnablePassthrough(), authors: selectNames, }, basePrompt ]) await createPrompt.invoke("what are books by jess knight")const queryAnalyzerSelect = createPrompt.pipe(llmWithTools); await queryAnalyzerSelect.invoke("what are books about aliens by jess knight")
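The key idea in the last approach is shrinking the candidate list before prompting: retrieve only the k values most similar to the user's spelling. A toy stand-in for the embedding-based `selectNames`, using crude character-overlap scoring instead of real embeddings:

```typescript
// Toy stand-in for embedding-based candidate selection: score each name by
// how many of its distinct characters appear in the query, keep the top k.
function selectCandidates(query: string, candidates: string[], k: number): string[] {
  const q = query.toLowerCase();
  return candidates
    .map((name) => ({
      name,
      score: [...new Set(name.toLowerCase())].filter((ch) => q.includes(ch)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((entry) => entry.name);
}

const allNames = ["Jesse Knight", "Jana Wright", "Harrison Chase"];
const shortlist = selectCandidates("jess knight", allNames, 2);
console.log(shortlist);
```

The shortlist is then interpolated into the prompt, keeping it small enough to fit in context while still containing the correctly spelled value.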
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/extraction_long_text.ipynb
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
// Only required in a Deno notebook environment to load the peer dep.
import "cheerio";

const loader = new CheerioWebBaseLoader("https://en.wikipedia.org/wiki/Car");

const docs = await loader.load();

docs[0].pageContent.length;

import { z } from "zod";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const keyDevelopmentSchema = z.object({
  year: z.number().describe("The year when there was an important historic development."),
  description: z.string().describe("What happened in this year? What was the development?"),
  evidence: z.string().describe("Repeat verbatim the sentence(s) from which the year and description information were extracted"),
}).describe("Information about a development in the history of cars.");

const extractionDataSchema = z.object({
  key_developments: z.array(keyDevelopmentSchema),
}).describe("Extracted information about key developments in the history of cars");

const SYSTEM_PROMPT_TEMPLATE = [
  "You are an expert at identifying key historic development in text.",
  "Only extract important historic developments. Extract nothing if no important information can be found in the text.",
].join("\n");

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality
// 2) Introduce additional parameters to take context into account (e.g., include metadata
//    about the document from which the text was extracted.)
const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  // Keep on reading through this use case to see how to use examples to improve performance
  // MessagesPlaceholder('examples'),
  ["human", "{text}"],
]);

// We will be using tool calling mode, which
// requires a tool calling capable model.
const llm = new ChatOpenAI({
  model: "gpt-4-0125-preview",
  temperature: 0,
});

const extractionChain = prompt.pipe(llm.withStructuredOutput(extractionDataSchema));

import { TokenTextSplitter } from "langchain/text_splitter";

const textSplitter = new TokenTextSplitter({
  chunkSize: 2000,
  chunkOverlap: 20,
});

// Note that this method takes an array of docs
const splitDocs = await textSplitter.splitDocuments(docs);

// Limit just to the first 3 chunks
// so the code can be re-run quickly
const firstFewTexts = splitDocs.slice(0, 3).map((doc) => doc.pageContent);

const extractionChainParams = firstFewTexts.map((text) => {
  return { text };
});

const results = await extractionChain.batch(extractionChainParams, { maxConcurrency: 5 });

const keyDevelopments = results.flatMap((result) => result.key_developments);

keyDevelopments.slice(0, 20);

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Only load the first 10 docs for speed in this demo use-case
const vectorstore = await MemoryVectorStore.fromDocuments(
  splitDocs.slice(0, 10),
  new OpenAIEmbeddings()
);

// Only extract from top document
const retriever = vectorstore.asRetriever({ k: 1 });

import { RunnableSequence } from "@langchain/core/runnables";

const ragExtractor = RunnableSequence.from([
  { text: retriever.pipe((docs) => docs[0].pageContent) },
  extractionChain,
]);

const ragExtractorResults = await ragExtractor.invoke("Key developments associated with cars");

ragExtractorResults.key_developments;
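The `TokenTextSplitter` above cuts the long document into chunks of roughly `chunkSize` tokens with a small overlap, so that sentences straddling a chunk boundary still appear whole in at least one chunk. The mechanics can be sketched with a character-based version (the notebook splits on tokens, and `splitWithOverlap` is a made-up helper, not a LangChain API):

```typescript
// Split `text` into windows of `chunkSize` characters, each sharing
// `overlap` characters with the previous window. Token-based splitting
// works the same way, just measured in tokens instead of characters.
function splitWithOverlap(text: string, chunkSize: number, overlap: number): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step forward, re-covering `overlap` chars
  }
  return chunks;
}
```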
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/output_parser_structured.ipynb
import { z } from "zod";
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const zodSchema = z.object({
  answer: z.string().describe("answer to the user's question"),
  source: z.string().describe("source used to answer the user's question, should be a website."),
});

const parser = StructuredOutputParser.fromZodSchema(zodSchema);

const chain = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate(
    "Answer the users question as best as possible.\n{format_instructions}\n{question}"
  ),
  model,
  parser,
]);

console.log(parser.getFormatInstructions());

const response = await chain.invoke({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

console.log(response);

import { AIMessage } from "@langchain/core/messages";

await parser.invoke(new AIMessage(`{"badfield": "foo"}`));

await parser.invoke(new AIMessage(`{"answer": "Paris", "source": "I made it up"}`));

const stream = await chain.stream({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

for await (const s of stream) {
  console.log(s);
}

import { JsonOutputParser } from "@langchain/core/output_parsers";

const template = `Return a JSON object with a single key named "answer" that answers the following question: {question}.
Do not wrap the JSON output in markdown blocks.`;

const jsonPrompt = ChatPromptTemplate.fromTemplate(template);
const jsonParser = new JsonOutputParser();
const jsonChain = jsonPrompt.pipe(model).pipe(jsonParser);

// Renamed from `stream` to avoid redeclaring the earlier binding.
const jsonStream = await jsonChain.stream({
  question: "Who invented the microscope?",
});

for await (const s of jsonStream) {
  console.log(s);
}
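`JsonOutputParser` can emit partial objects while the model is still streaming because it parses the incomplete JSON prefix on every chunk. A best-effort version of that trick can be sketched as follows (this is an illustrative standalone function, not LangChain's actual implementation):

```typescript
// Try to parse an incomplete JSON prefix, as produced mid-stream by an LLM.
// Strategy: scan the prefix once, tracking whether we are inside a string
// and which brackets are open, then append the missing closers and parse.
function parsePartialJson(text: string): unknown | undefined {
  const closers: string[] = [];
  let inString = false;
  let escaped = false;
  for (const ch of text) {
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === "{") closers.push("}");
    else if (ch === "[") closers.push("]");
    else if (ch === "}" || ch === "]") closers.pop();
  }
  let candidate = text;
  if (escaped) candidate = candidate.slice(0, -1); // drop a dangling backslash
  if (inString) candidate += '"';
  candidate += closers.reverse().join("");
  try {
    return JSON.parse(candidate);
  } catch {
    return undefined; // prefix not completable yet (e.g. ends after a comma)
  }
}
```

Running this on each accumulated chunk yields the growing object snapshots seen in the streaming output above.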
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/chatbots_retrieval.ipynb
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});

import "cheerio";
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);

const rawDocs = await loader.load();

rawDocs[0].pageContent.length;

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});

const allSplits = await textSplitter.splitDocuments(rawDocs);

import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  allSplits,
  new OpenAIEmbeddings()
);

const retriever = vectorstore.asRetriever(4);

const docs = await retriever.invoke("how can langsmith help with testing?");

console.log(docs);

import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const SYSTEM_TEMPLATE = `Answer the user's questions based on the below context.
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":

<context>
{context}
</context>
`;

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_TEMPLATE],
  new MessagesPlaceholder("messages"),
]);

const documentChain = await createStuffDocumentsChain({
  llm,
  prompt: questionAnsweringPrompt,
});

import { HumanMessage, AIMessage } from "@langchain/core/messages";

await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: docs,
});

await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: [],
});

import type { BaseMessage } from "@langchain/core/messages";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const parseRetrieverInput = (params: { messages: BaseMessage[] }) => {
  return params.messages[params.messages.length - 1].content;
};

const retrievalChain = RunnablePassthrough.assign({
  context: RunnableSequence.from([parseRetrieverInput, retriever]),
}).assign({
  answer: documentChain,
});

await retrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});

await retriever.invoke("Tell me more!");

const queryTransformPrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("messages"),
  [
    "user",
    "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.",
  ],
]);

const queryTransformationChain = queryTransformPrompt.pipe(llm);

await queryTransformationChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});

import { RunnableBranch } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const queryTransformingRetrieverChain = RunnableBranch.from([
  [
    (params: { messages: BaseMessage[] }) => params.messages.length === 1,
    RunnableSequence.from([parseRetrieverInput, retriever]),
  ],
  queryTransformPrompt
    .pipe(llm)
    .pipe(new StringOutputParser())
    .pipe(retriever),
]).withConfig({ runName: "chat_retriever_chain" });

const conversationalRetrievalChain = RunnablePassthrough.assign({
  context: queryTransformingRetrieverChain,
}).assign({
  answer: documentChain,
});

await conversationalRetrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});

await conversationalRetrievalChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});

const stream = await conversationalRetrievalChain.stream({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});

for await (const chunk of stream) {
  console.log(chunk);
}
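The `RunnableBranch` above encodes a simple decision: on the first conversational turn the user message is usable as a search query directly, while on later turns the whole conversation is rewritten into a standalone query first (otherwise follow-ups like "Tell me more!" retrieve nothing useful). That branching logic, stripped of the chain machinery, looks like this (`chooseQuery` and its types are illustrative, not LangChain APIs):

```typescript
// Decide which search query to send to the retriever.
type Message = { role: "user" | "assistant"; content: string };

function chooseQuery(messages: Message[], rewrite: (ms: Message[]) => string): string {
  if (messages.length === 1) {
    // First turn: the lone user message is already a good query.
    return messages[0].content;
  }
  // Later turns: derive a standalone query from the full conversation
  // (in the chain above, `rewrite` is the LLM-powered transform).
  return rewrite(messages);
}
```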
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/chat_models_universal_init.mdx
# How to init any model in one line

import CodeBlock from "@theme/CodeBlock";

Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `initChatModel()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names. Keep in mind this feature is only for chat models.

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Chat models](/docs/concepts/chat_models)
- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)
- [Tool calling](/docs/concepts/tools)

:::

:::caution Compatibility

**This feature is only intended to be used in Node environments. Use in non-Node environments or with bundlers is not guaranteed to work and is not officially supported.**

`initChatModel` requires `langchain>=0.2.11`. See [this guide](/docs/how_to/installation/#installing-integration-packages) for some considerations to take when upgrading.

See the [initChatModel()](https://api.js.langchain.com/functions/langchain.chat_models_universal.initChatModel.html) API reference for a full list of supported integrations.

Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `@langchain/openai` installed to init an OpenAI model.

:::

## Basic usage

import BasicExample from "@examples/models/chat/configurable/basic.ts";

<CodeBlock language="typescript">{BasicExample}</CodeBlock>

## Inferring model provider

For common and distinct model names, `initChatModel()` will attempt to infer the model provider. See the [API reference](https://api.js.langchain.com/functions/langchain.chat_models_universal.initChatModel.html) for a full list of inference behavior. E.g. any model that starts with `gpt-3...` or `gpt-4...` will be inferred as using model provider `openai`.
import InferringProviderExample from "@examples/models/chat/configurable/inferring_model_provider.ts";

<CodeBlock language="typescript">{InferringProviderExample}</CodeBlock>

## Creating a configurable model

You can also create a runtime-configurable model by specifying `configurableFields`. If you don't specify a `model` value, then "model" and "modelProvider" will be configurable by default.

import ConfigurableModelExample from "@examples/models/chat/configurable/configurable_model.ts";

<CodeBlock language="typescript">{ConfigurableModelExample}</CodeBlock>

### Configurable model with default values

We can create a configurable model with default model values, specify which parameters are configurable, and add prefixes to configurable params:

import ConfigurableModelWithDefaultsExample from "@examples/models/chat/configurable/configurable_model_with_defaults.ts";

<CodeBlock language="typescript">
  {ConfigurableModelWithDefaultsExample}
</CodeBlock>

### Using a configurable model declaratively

We can call declarative operations like `bindTools`, `withStructuredOutput`, `withConfig`, etc. on a configurable model and chain a configurable model in the same way that we would a regularly instantiated chat model object.

import ConfigurableModelDeclarativelyExample from "@examples/models/chat/configurable/configurable_model_declaratively.ts";

<CodeBlock language="typescript">
  {ConfigurableModelDeclarativelyExample}
</CodeBlock>
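The name-based provider inference described earlier can be pictured as a simple prefix lookup. The sketch below is only an illustration of the idea: the `gpt-3`/`gpt-4` → `openai` mapping comes from this guide, while the other prefixes (and the function itself) are hypothetical examples, not the library's actual inference table.

```typescript
// Illustrative sketch of model-name → provider inference.
// Only the gpt-3/gpt-4 → openai rule is documented above;
// the remaining prefixes are made-up examples of the pattern.
function inferProvider(model: string): string | undefined {
  if (model.startsWith("gpt-3") || model.startsWith("gpt-4")) return "openai";
  if (model.startsWith("claude")) return "anthropic"; // hypothetical
  if (model.startsWith("gemini")) return "google-genai"; // hypothetical
  return undefined; // unknown name: caller must pass modelProvider explicitly
}
```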
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/index.mdx
---
sidebar_position: 0
sidebar_class_name: hidden
---

# How-to guides

Here you'll find answers to “How do I….?” types of questions. These guides are _goal-oriented_ and _concrete_; they're meant to help you complete a specific task. For conceptual explanations see [Conceptual Guides](/docs/concepts/). For end-to-end walkthroughs see [Tutorials](/docs/tutorials). For comprehensive descriptions of every class and function see [API Reference](https://api.js.langchain.com/).

## Installation

- [How to: install LangChain packages](/docs/how_to/installation/)

## Key features

This highlights functionality that is core to using LangChain.

- [How to: return structured data from an LLM](/docs/how_to/structured_output/)
- [How to: use a chat model to call tools](/docs/how_to/tool_calling/)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: debug your LLM apps](/docs/how_to/debugging/)

## LangChain Expression Language (LCEL)

LangChain Expression Language is a way to create arbitrary custom chains. It is built on the [`Runnable`](https://api.js.langchain.com/classes/langchain_core.runnables.Runnable.html) protocol.

[**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives.
- [How to: chain runnables](/docs/how_to/sequence)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: invoke runnables in parallel](/docs/how_to/parallel/)
- [How to: attach runtime arguments to a runnable](/docs/how_to/binding/)
- [How to: run custom functions](/docs/how_to/functions)
- [How to: pass through arguments from one step to the next](/docs/how_to/passthrough)
- [How to: add values to a chain's state](/docs/how_to/assign)
- [How to: add message history](/docs/how_to/message_history)
- [How to: route execution within a chain](/docs/how_to/routing)
- [How to: add fallbacks](/docs/how_to/fallbacks)
- [How to: cancel execution](/docs/how_to/cancel_execution/)

## Components

These are the core building blocks you can use when building applications.

### Prompt templates

[Prompt Templates](/docs/concepts/prompt_templates) are responsible for formatting user input into a format that can be passed to a language model.

- [How to: use few shot examples](/docs/how_to/few_shot_examples)
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)

### Example selectors

[Example Selectors](/docs/concepts/example_selectors) are responsible for selecting the correct few shot examples to pass to the prompt.

- [How to: use example selectors](/docs/how_to/example_selectors)
- [How to: select examples by length](/docs/how_to/example_selectors_length_based)
- [How to: select examples by semantic similarity](/docs/how_to/example_selectors_similarity)
- [How to: select examples from LangSmith few-shot datasets](/docs/how_to/example_selectors_langsmith)

### Chat models

[Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
- [How to: do function/tool calling](/docs/how_to/tool_calling)
- [How to: get models to return structured output](/docs/how_to/structured_output)
- [How to: cache model responses](/docs/how_to/chat_model_caching)
- [How to: create a custom chat model class](/docs/how_to/custom_chat)
- [How to: get log probabilities](/docs/how_to/logprobs)
- [How to: stream a response back](/docs/how_to/chat_streaming)
- [How to: track token usage](/docs/how_to/chat_token_usage_tracking)
- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model/)
- [How to: stream tool calls](/docs/how_to/tool_streaming)
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: disable parallel tool calling](/docs/how_to/tool_calling_parallel/)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)

### Messages

[Messages](/docs/concepts/#message-types) are the input and output of chat models. They have some `content` and a `role`, which describes the source of the message.

- [How to: trim messages](/docs/how_to/trim_messages/)
- [How to: filter messages](/docs/how_to/filter_messages/)
- [How to: merge consecutive messages of the same type](/docs/how_to/merge_message_runs/)

### LLMs

What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of language models that take a string in and output a string.

- [How to: cache model responses](/docs/how_to/llm_caching)
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
- [How to: stream a response back](/docs/how_to/streaming_llm)
- [How to: track token usage](/docs/how_to/llm_token_usage_tracking)

### Output parsers

[Output Parsers](/docs/concepts/output_parsers) are responsible for taking the output of an LLM and parsing it into a more structured format.
- [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured)
- [How to: parse JSON output](/docs/how_to/output_parser_json)
- [How to: parse XML output](/docs/how_to/output_parser_xml)
- [How to: try to fix errors in output parsing](/docs/how_to/output_parser_fixing/)

### Document loaders

[Document Loaders](/docs/concepts/document_loaders) are responsible for loading documents from a variety of sources.

- [How to: load CSV data](/docs/how_to/document_loader_csv)
- [How to: load data from a directory](/docs/how_to/document_loader_directory)
- [How to: load PDF files](/docs/how_to/document_loader_pdf)
- [How to: write a custom document loader](/docs/how_to/document_loader_custom)
- [How to: load HTML data](/docs/how_to/document_loader_html)
- [How to: load Markdown data](/docs/how_to/document_loader_markdown)

### Text splitters

[Text Splitters](/docs/concepts/text_splitters) take a document and split it into chunks that can be used for retrieval.

- [How to: recursively split text](/docs/how_to/recursive_text_splitter)
- [How to: split by character](/docs/how_to/character_text_splitter)
- [How to: split code](/docs/how_to/code_splitter)
- [How to: split by tokens](/docs/how_to/split_by_token)

### Embedding models

[Embedding Models](/docs/concepts/embedding_models) take a piece of text and create a numerical representation of it.

- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)

### Vector stores

[Vector stores](/docs/concepts/#vectorstores) are databases that can efficiently store and retrieve embeddings.

- [How to: create and query vector stores](/docs/how_to/vectorstores)

### Retrievers

[Retrievers](/docs/concepts/retrievers) are responsible for taking a query and returning relevant documents.
- [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever)
- [How to: generate multiple queries to retrieve data for](/docs/how_to/multiple_queries)
- [How to: use contextual compression to compress the data retrieved](/docs/how_to/contextual_compression)
- [How to: write a custom retriever class](/docs/how_to/custom_retriever)
- [How to: combine the results from multiple retrievers](/docs/how_to/ensemble_retriever)
- [How to: generate multiple embeddings per document](/docs/how_to/multi_vector)
- [How to: retrieve the whole document for a chunk](/docs/how_to/parent_document_retriever)
- [How to: generate metadata filters](/docs/how_to/self_query)
- [How to: create a time-weighted retriever](/docs/how_to/time_weighted_vectorstore)
- [How to: reduce retrieval latency](/docs/how_to/reduce_retrieval_latency)

### Indexing

Indexing is the process of keeping your vectorstore in sync with the underlying data source.

- [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/docs/how_to/indexing)

### Tools

LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call.
- [How to: create tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
- [How to: use chat models to call tools](/docs/how_to/tool_calling/)
- [How to: pass tool outputs to chat models](/docs/how_to/tool_results_pass_to_model/)
- [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
- [How to: pass run time values to tools](/docs/how_to/tool_runtime)
- [How to: handle tool errors](/docs/how_to/tools_error)
- [How to: force a specific tool call](/docs/how_to/tool_choice/)
- [How to: disable parallel tool calling](/docs/how_to/tool_calling_parallel/)
- [How to: access the `RunnableConfig` object within a custom tool](/docs/how_to/tool_configure)
- [How to: stream events from child runs within a custom tool](/docs/how_to/tool_stream_events)
- [How to: return artifacts from a tool](/docs/how_to/tool_artifacts)
- [How to: convert Runnables to tools](/docs/how_to/convert_runnable_to_tool)
- [How to: add ad-hoc tool calling capability to models](/docs/how_to/tools_prompting)

### Agents

:::note

For in-depth how-to guides for agents, please check out the [LangGraph](https://langchain-ai.github.io/langgraphjs/) documentation.

:::

- [How to: use legacy LangChain Agents (AgentExecutor)](/docs/how_to/agent_executor)
- [How to: migrate from legacy LangChain agents to LangGraph](/docs/how_to/migrate_agent)

### Callbacks

[Callbacks](/docs/concepts/callbacks) allow you to hook into the various stages of your LLM application's execution.
- [How to: pass in callbacks at runtime](/docs/how_to/callbacks_runtime)
- [How to: attach callbacks to a module](/docs/how_to/callbacks_attach)
- [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: await callbacks in serverless environments](/docs/how_to/callbacks_serverless)
- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)

### Custom

All of LangChain's components can easily be extended to support your own versions.

- [How to: create a custom chat model class](/docs/how_to/custom_chat)
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
- [How to: write a custom retriever class](/docs/how_to/custom_retriever)
- [How to: write a custom document loader](/docs/how_to/document_loader_custom)
- [How to: create custom callback handlers](/docs/how_to/custom_callbacks)
- [How to: define a custom tool](/docs/how_to/custom_tools)
- [How to: dispatch custom callback events](/docs/how_to/callbacks_custom_events)

### Generative UI

- [How to: build an LLM generated UI](/docs/how_to/generative_ui)
- [How to: stream agentic data to the client](/docs/how_to/stream_agent_client)
- [How to: stream structured output to the client](/docs/how_to/stream_tool_client)

### Multimodal

- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
- [How to: call tools with multimodal data](/docs/how_to/tool_calls_multimodal/)

## Use cases

These guides cover use-case specific details.

### Q&A with RAG

Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data. For a high-level tutorial on RAG, check out [this guide](/docs/tutorials/rag/).
- [How to: add chat history](/docs/how_to/qa_chat_history_how_to/)
- [How to: stream](/docs/how_to/qa_streaming/)
- [How to: return sources](/docs/how_to/qa_sources/)
- [How to: return citations](/docs/how_to/qa_citations/)
- [How to: do per-user retrieval](/docs/how_to/qa_per_user/)

### Extraction

Extraction is when you use LLMs to extract structured information from unstructured text. For a high-level tutorial on extraction, check out [this guide](/docs/tutorials/extraction/).

- [How to: use reference examples](/docs/how_to/extraction_examples/)
- [How to: handle long text](/docs/how_to/extraction_long_text/)
- [How to: do extraction without using function calling](/docs/how_to/extraction_parse)

### Chatbots

Chatbots involve using an LLM to have a conversation. For a high-level tutorial on building chatbots, check out [this guide](/docs/tutorials/chatbot/).

- [How to: manage memory](/docs/how_to/chatbots_memory)
- [How to: do retrieval](/docs/how_to/chatbots_retrieval)
- [How to: use tools](/docs/how_to/chatbots_tools)

### Query analysis

Query analysis is the task of using an LLM to generate a query to send to a retriever. For a high-level tutorial on query analysis, check out [this guide](/docs/tutorials/query_analysis/).

- [How to: add examples to the prompt](/docs/how_to/query_few_shot)
- [How to: handle cases where no queries are generated](/docs/how_to/query_no_queries)
- [How to: handle multiple queries](/docs/how_to/query_multiple_queries)
- [How to: handle multiple retrievers](/docs/how_to/query_multiple_retrievers)
- [How to: construct filters](/docs/how_to/query_constructing_filters)
- [How to: deal with high-cardinality categorical variables](/docs/how_to/query_high_cardinality)

### Q&A over SQL + CSV

You can use LLMs to do question answering over tabular data. For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
- [How to: use prompting to improve results](/docs/how_to/sql_prompting)
- [How to: do query validation](/docs/how_to/sql_query_checking)
- [How to: deal with large databases](/docs/how_to/sql_large_db)

### Q&A over graph databases

You can use an LLM to do question answering over graph databases. For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).

- [How to: map values to a database](/docs/how_to/graph_mapping)
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
- [How to: improve results with prompting](/docs/how_to/graph_prompting)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)

## [LangGraph.js](https://langchain-ai.github.io/langgraphjs)

LangGraph.js is an extension of LangChain aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.

LangGraph.js documentation is currently hosted on a separate site. You can peruse [LangGraph.js how-to guides here](https://langchain-ai.github.io/langgraphjs/how-tos/).

## [LangSmith](https://docs.smith.langchain.com/)

LangSmith allows you to closely trace, monitor and evaluate your LLM application. It seamlessly integrates with LangChain and LangGraph.js, and you can use it to inspect and debug individual steps of your chains as you build.

LangSmith documentation is hosted on a separate site. You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly relevant to LangChain below:

### Evaluation

<span data-heading-keywords="evaluation,evaluate"></span>

Evaluating performance is a vital part of building LLM-powered applications. LangSmith helps with every step of the process, from creating a dataset to defining metrics to running evaluators.

To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/how_to_guides#evaluation).
### Tracing

<span data-heading-keywords="trace,tracing"></span>

Tracing gives you observability inside your chains and agents, and is vital for diagnosing issues.

- [How to: trace with LangChain](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain)
- [How to: add metadata and tags to traces](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain#add-metadata-and-tags-to-traces)

You can see general tracing-related how-tos [in this section of the LangSmith docs](https://docs.smith.langchain.com/how_to_guides/tracing).
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tool_streaming.ipynb
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";

const addTool = tool(
  async (input) => {
    return input.a + input.b;
  },
  {
    name: "add",
    description: "Adds a and b.",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

const multiplyTool = tool(
  async (input) => {
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiplies a and b.",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

const tools = [addTool, multiplyTool];

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});

const modelWithTools = model.bindTools(tools);

const query = "What is 3 * 12? Also, what is 11 + 49?";

const stream = await modelWithTools.stream(query);

for await (const chunk of stream) {
  console.log(chunk.tool_call_chunks);
}

import { concat } from "@langchain/core/utils/stream";

// Renamed from `stream` to avoid redeclaring the earlier binding.
const secondStream = await modelWithTools.stream(query);

let gathered = undefined;

for await (const chunk of secondStream) {
  gathered = gathered !== undefined ? concat(gathered, chunk) : chunk;
  console.log(gathered.tool_call_chunks);
}

console.log(typeof gathered.tool_call_chunks[0].args);

console.log(typeof gathered.tool_calls[0].args);
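The reason `tool_call_chunks[0].args` is a string while `tool_calls[0].args` is an object is that each streamed chunk carries only a fragment of the JSON `args` text; `concat` accumulates those fragments per tool-call index, and the parsed object only exists once the pieces are joined. The accumulation step can be sketched like so (`mergeChunks` and the chunk type are illustrative, not LangChain's `concat` implementation):

```typescript
// Accumulate streamed tool-call chunks into complete calls.
// Chunks for the same call share an `index`; their `args` fragments
// concatenate into one JSON string.
interface ToolCallChunk {
  index: number;
  name?: string;
  args: string;
}

function mergeChunks(chunks: ToolCallChunk[]): { name: string; args: string }[] {
  const byIndex = new Map<number, { name: string; args: string }>();
  for (const c of chunks) {
    const cur = byIndex.get(c.index) ?? { name: "", args: "" };
    if (c.name) cur.name = c.name; // the name arrives on the first chunk
    cur.args += c.args;            // args arrive as string fragments
    byIndex.set(c.index, cur);
  }
  return [...byIndex.entries()].sort(([a], [b]) => a - b).map(([, v]) => v);
}
```

After merging, a final `JSON.parse` over each completed `args` string yields the object form seen in `tool_calls`.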
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/agent_executor.ipynb
```typescript
import "cheerio"; // This is required in notebooks to use the `CheerioWebBaseLoader`
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const search = new TavilySearchResults({
  maxResults: 2,
});

await search.invoke("what is the weather in SF");
```

```typescript
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const loader = new CheerioWebBaseLoader("https://docs.smith.langchain.com/overview");
const docs = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const documents = await splitter.splitDocuments(docs);
const vectorStore = await MemoryVectorStore.fromDocuments(documents, new OpenAIEmbeddings());
const retriever = vectorStore.asRetriever();

(await retriever.invoke("how to upload a dataset"))[0];
```

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";

const retrieverTool = tool(async ({ input }, config) => {
  const docs = await retriever.invoke(input, config);
  return docs.map((doc) => doc.pageContent).join("\n\n");
}, {
  name: "langsmith_search",
  description: "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
  schema: z.object({
    input: z.string(),
  }),
});
```

```typescript
const tools = [search, retrieverTool];
```

```typescript
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
```

```typescript
const response = await model.invoke([{ role: "user", content: "hi!" }]);

response.content;
```

```typescript
const modelWithTools = model.bindTools(tools);
```

```typescript
const responseWithTools = await modelWithTools.invoke([{ role: "user", content: "Hi!" }]);

console.log(`Content: ${responseWithTools.content}`);
console.log(`Tool calls: ${responseWithTools.tool_calls}`);
```

```typescript
const responseWithToolCalls = await modelWithTools.invoke([{ role: "user", content: "What's the weather in SF?" }]);

console.log(`Content: ${responseWithToolCalls.content}`);
console.log(`Tool calls: ${JSON.stringify(responseWithToolCalls.tool_calls, null, 2)}`);
```

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

console.log(prompt.promptMessages);
```

```typescript
import { createToolCallingAgent } from "langchain/agents";

const agent = await createToolCallingAgent({ llm: model, tools, prompt });
```

```typescript
import { AgentExecutor } from "langchain/agents";

const agentExecutor = new AgentExecutor({ agent, tools });
```

```typescript
await agentExecutor.invoke({ input: "hi!" });
```

```typescript
await agentExecutor.invoke({ input: "how can langsmith help with testing?" });
```

```typescript
await agentExecutor.invoke({ input: "whats the weather in sf?" });
```

```typescript
// Here we pass in an empty list of messages for chat_history because it is the first message in the chat
await agentExecutor.invoke({ input: "hi! my name is bob", chat_history: [] });
```

```typescript
await agentExecutor.invoke({
  chat_history: [
    { role: "user", content: "hi! my name is bob" },
    { role: "assistant", content: "Hello Bob! How can I assist you today?" },
  ],
  input: "what's my name?",
});
```

```typescript
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { BaseChatMessageHistory } from "@langchain/core/chat_history";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const store = {};

function getMessageHistory(sessionId: string): BaseChatMessageHistory {
  if (!(sessionId in store)) {
    store[sessionId] = new ChatMessageHistory();
  }
  return store[sessionId];
}

const agentWithChatHistory = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  getMessageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

await agentWithChatHistory.invoke(
  { input: "hi! I'm bob" },
  { configurable: { sessionId: "<foo>" } },
);
```

```typescript
await agentWithChatHistory.invoke(
  { input: "what's my name?" },
  { configurable: { sessionId: "<foo>" } },
);
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/example_selectors_similarity.mdx
# How to select examples by similarity

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Prompt templates](/docs/concepts/prompt_templates)
- [Example selectors](/docs/how_to/example_selectors)
- [Vector stores](/docs/concepts/vectorstores)

:::

This object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.

import CodeBlock from "@theme/CodeBlock";
import ExampleSimilarity from "@examples/prompts/semantic_similarity_example_selector.ts";

The fields of the examples object will be used as parameters to format the `examplePrompt` passed to the `FewShotPromptTemplate`. Each example should therefore contain all required fields for the example prompt you are using.

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```

<CodeBlock language="typescript">{ExampleSimilarity}</CodeBlock>

By default, each field in the examples object is concatenated together, embedded, and stored in the vectorstore for later similarity search against user queries.

If you only want to embed specific keys (e.g., you only want to search for examples that have a similar query to the one the user provides), you can pass an `inputKeys` array in the final `options` parameter.

## Loading from an existing vectorstore

You can also use a pre-initialized vector store by passing an instance to the `SemanticSimilarityExampleSelector` constructor directly, as shown below.

You can also add more examples via the `addExample` method:

import ExampleSimilarityFromExisting from "@examples/prompts/semantic_similarity_example_selector_from_existing.ts";

<CodeBlock language="typescript">{ExampleSimilarityFromExisting}</CodeBlock>

## Metadata filtering

When adding examples, each field is available as metadata in the produced document. If you would like further control over your search space, you can add extra fields to your examples and pass a `filter` parameter when initializing your selector:

import ExampleSimilarityMetadataFiltering from "@examples/prompts/semantic_similarity_example_selector_metadata_filtering.ts";

<CodeBlock language="typescript">
  {ExampleSimilarityMetadataFiltering}
</CodeBlock>

## Custom vectorstore retrievers

You can also pass a vectorstore retriever instead of a vectorstore. One case where this could be useful is if you want to use a retrieval method other than pure similarity search, such as maximal marginal relevance:

import ExampleSimilarityCustomRetriever from "@examples/prompts/semantic_similarity_example_selector_custom_retriever.ts";

<CodeBlock language="typescript">{ExampleSimilarityCustomRetriever}</CodeBlock>

## Next steps

You've now learned a bit about using similarity in an example selector.

Next, check out this guide on how to use a [length-based example selector](/docs/how_to/example_selectors_length_based).
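Under the hood, similarity-based selection is just "embed each example, embed the query, rank by cosine similarity, keep the top k." A minimal dependency-free sketch of that idea follows; the `toyEmbed` bag-of-words "embedding" and the fixed vocabulary are invented for illustration and are not LangChain's API or a real embedding model.

```typescript
// Sketch of similarity-based example selection with a toy embedding.
type ExampleRecord = Record<string, string>;

// Hypothetical embedding: term-frequency vector over a fixed vocabulary.
function toyEmbed(text: string, vocab: string[]): number[] {
  const words = text.toLowerCase().split(/\s+/);
  return vocab.map((v) => words.filter((w) => w === v).length);
}

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// Rank examples by similarity to the query and keep the top k.
// Mirrors the default behavior of concatenating all example fields
// before embedding.
function selectExamples(
  examples: ExampleRecord[],
  query: string,
  k: number,
  vocab: string[]
): ExampleRecord[] {
  const q = toyEmbed(query, vocab);
  return examples
    .map((ex) => ({
      ex,
      score: cosine(toyEmbed(Object.values(ex).join(" "), vocab), q),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ ex }) => ex);
}
```

The real selector replaces `toyEmbed` with an embeddings model and the array scan with a vector store query, but the ranking logic is the same.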
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/prompts_composition.ipynb
```typescript
import { PromptTemplate } from "@langchain/core/prompts";

const prompt = PromptTemplate.fromTemplate(
  `Tell me a joke about {topic}, make it funny and in {language}`
);

prompt;
```

```typescript
await prompt.format({ topic: "sports", language: "spanish" });
```

```typescript
import { AIMessage, HumanMessage, SystemMessage } from "@langchain/core/messages";

const prompt = new SystemMessage("You are a nice pirate");
```

```typescript
import { HumanMessagePromptTemplate } from "@langchain/core/prompts";

const newPrompt = HumanMessagePromptTemplate.fromTemplate([
  prompt,
  new HumanMessage("Hi"),
  new AIMessage("what?"),
  "{input}",
]);
```

```typescript
await newPrompt.formatMessages({ input: "i said hi" });
```

```typescript
import { PromptTemplate, PipelinePromptTemplate } from "@langchain/core/prompts";

const fullPrompt = PromptTemplate.fromTemplate(`{introduction}

{example}

{start}`);

const introductionPrompt = PromptTemplate.fromTemplate(
  `You are impersonating {person}.`
);

const examplePrompt = PromptTemplate.fromTemplate(`Here's an example of an interaction:

Q: {example_q}
A: {example_a}`);

const startPrompt = PromptTemplate.fromTemplate(`Now, do this for real!

Q: {input}
A:`);

const composedPrompt = new PipelinePromptTemplate({
  pipelinePrompts: [
    {
      name: "introduction",
      prompt: introductionPrompt,
    },
    {
      name: "example",
      prompt: examplePrompt,
    },
    {
      name: "start",
      prompt: startPrompt,
    },
  ],
  finalPrompt: fullPrompt,
});

const formattedPrompt = await composedPrompt.format({
  person: "Elon Musk",
  example_q: `What's your favorite car?`,
  example_a: "Tesla",
  input: `What's your favorite social media site?`,
});

console.log(formattedPrompt);
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/example_selectors_langsmith.ipynb
```typescript
import { Client as LangSmithClient } from "langsmith";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import fs from "fs/promises";

// Read the example dataset and convert to the format expected by the LangSmith API
// for creating new examples
const examplesJson = JSON.parse(
  await fs.readFile("../../data/ls_few_shot_example_dataset.json", "utf-8")
);

let inputs: Record<string, any>[] = [];
let outputs: Record<string, any>[] = [];
let metadata: Record<string, any>[] = [];

examplesJson.forEach((ex) => {
  inputs.push(ex.inputs);
  outputs.push(ex.outputs);
  metadata.push(ex.metadata);
});

// Define our input schema as this is required for indexing
const inputsSchema = zodToJsonSchema(z.object({
  input: z.string(),
  system: z.boolean().optional(),
}));

const lsClient = new LangSmithClient();

await lsClient.deleteDataset({
  datasetName: "multiverse-math-examples-for-few-shot-example",
});

const dataset = await lsClient.createDataset("multiverse-math-examples-for-few-shot-example", {
  inputsSchema,
});

const createdExamples = await lsClient.createExamples({
  inputs,
  outputs,
  metadata,
  datasetId: dataset.id,
});

await lsClient.indexDataset({ datasetId: dataset.id });
```

```typescript
const examples = await lsClient.similarExamples(
  { input: "whats the negation of the negation of the negation of 3" },
  dataset.id,
  3,
);
console.log(examples.length);
```

```typescript
console.log(examples[0].inputs.input);
```

```typescript
console.log(examples[1].outputs.output);
```

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const add = tool((input) => {
  return (input.a + input.b).toString();
}, {
  name: "add",
  description: "Add two numbers",
  schema: z.object({
    a: z.number().describe("The first number to add"),
    b: z.number().describe("The second number to add"),
  }),
});

const cos = tool((input) => {
  return Math.cos(input.angle).toString();
}, {
  name: "cos",
  description: "Calculate the cosine of an angle (in radians)",
  schema: z.object({
    angle: z.number().describe("The angle in radians"),
  }),
});

const divide = tool((input) => {
  return (input.a / input.b).toString();
}, {
  name: "divide",
  description: "Divide two numbers",
  schema: z.object({
    a: z.number().describe("The dividend"),
    b: z.number().describe("The divisor"),
  }),
});

const log = tool((input) => {
  return Math.log(input.value).toString();
}, {
  name: "log",
  description: "Calculate the natural logarithm of a number",
  schema: z.object({
    value: z.number().describe("The number to calculate the logarithm of"),
  }),
});

const multiply = tool((input) => {
  return (input.a * input.b).toString();
}, {
  name: "multiply",
  description: "Multiply two numbers",
  schema: z.object({
    a: z.number().describe("The first number to multiply"),
    b: z.number().describe("The second number to multiply"),
  }),
});

const negate = tool((input) => {
  return (-input.a).toString();
}, {
  name: "negate",
  description: "Negate a number",
  schema: z.object({
    a: z.number().describe("The number to negate"),
  }),
});

const pi = tool(() => {
  return Math.PI.toString();
}, {
  name: "pi",
  description: "Return the value of pi",
  schema: z.object({}),
});

const power = tool((input) => {
  return Math.pow(input.base, input.exponent).toString();
}, {
  name: "power",
  description: "Raise a number to a power",
  schema: z.object({
    base: z.number().describe("The base number"),
    exponent: z.number().describe("The exponent"),
  }),
});

const sin = tool((input) => {
  return Math.sin(input.angle).toString();
}, {
  name: "sin",
  description: "Calculate the sine of an angle (in radians)",
  schema: z.object({
    angle: z.number().describe("The angle in radians"),
  }),
});

const subtract = tool((input) => {
  return (input.a - input.b).toString();
}, {
  name: "subtract",
  description: "Subtract two numbers",
  schema: z.object({
    a: z.number().describe("The number to subtract from"),
    b: z.number().describe("The number to subtract"),
  }),
});
```

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage, BaseMessage, BaseMessageLike } from "@langchain/core/messages";
import { RunnableLambda } from "@langchain/core/runnables";
import { Client as LangSmithClient, Example } from "langsmith";
import { coerceMessageLikeToMessage } from "@langchain/core/messages";

const client = new LangSmithClient();

async function similarExamples(input: Record<string, any>): Promise<Record<string, any>> {
  const examples = await client.similarExamples(input, dataset.id, 5);
  return { ...input, examples };
}

function constructPrompt(input: { examples: Example[]; input: string }): BaseMessage[] {
  const instructions = "You are great at using mathematical tools.";
  let messages: BaseMessage[] = [];
  for (const ex of input.examples) {
    // Assuming ex.outputs.output is an array of message-like objects
    messages = messages.concat(
      ex.outputs.output.flatMap((msg: BaseMessageLike) => coerceMessageLikeToMessage(msg))
    );
  }
  const examples = messages.filter((msg) => msg._getType() !== "system");
  examples.forEach((ex) => {
    if (ex._getType() === "human") {
      ex.name = "example_user";
    } else {
      ex.name = "example_assistant";
    }
  });
  return [new SystemMessage(instructions), ...examples, new HumanMessage(input.input)];
}

const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});

const tools = [add, cos, divide, log, multiply, negate, pi, power, sin, subtract];
const llmWithTools = llm.bindTools(tools);

const exampleSelector = new RunnableLambda({ func: similarExamples }).withConfig({
  runName: "similarExamples",
});

const chain = exampleSelector
  .pipe(
    new RunnableLambda({ func: constructPrompt }).withConfig({
      runName: "constructPrompt",
    })
  )
  .pipe(llmWithTools);
```

```typescript
const aiMsg = await chain.invoke({
  input: "whats the negation of the negation of 3",
  system: false,
});
console.log(aiMsg.tool_calls);
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tool_runtime.ipynb
```typescript
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
```

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { getContextVariable } from "@langchain/core/context";

let userToPets: Record<string, string[]> = {};

const updateFavoritePets = tool(async (input) => {
  const userId = getContextVariable("userId");
  if (userId === undefined) {
    throw new Error(`No "userId" found in current context. Remember to call "setContextVariable('userId', value)";`);
  }
  userToPets[userId] = input.pets;
  return "update_favorite_pets called.";
}, {
  name: "update_favorite_pets",
  description: "add to the list of favorite pets.",
  schema: z.object({
    pets: z.array(z.string()),
  }),
});
```

```typescript
await updateFavoritePets.invoke({ pets: ["cat", "dog"] });
```

```typescript
import { setContextVariable } from "@langchain/core/context";
import { BaseChatModel } from "@langchain/core/language_models/chat_models";
import { RunnableLambda } from "@langchain/core/runnables";

const handleRunTimeRequestRunnable = RunnableLambda.from(async (params: {
  userId: string;
  query: string;
  llm: BaseChatModel;
}) => {
  const { userId, query, llm } = params;
  if (!llm.bindTools) {
    throw new Error("Language model does not support tools.");
  }
  // Set a context variable accessible to any child runnables called within this one.
  // You can also set context variables at top level that act as globals.
  setContextVariable("userId", userId);
  const tools = [updateFavoritePets];
  const llmWithTools = llm.bindTools(tools);
  const modelResponse = await llmWithTools.invoke(query);
  // For simplicity, skip checking the tool call's name field and assume
  // that the model is calling the "updateFavoritePets" tool
  if (modelResponse.tool_calls.length > 0) {
    return updateFavoritePets.invoke(modelResponse.tool_calls[0]);
  } else {
    return "No tool invoked.";
  }
});
```

```typescript
await handleRunTimeRequestRunnable.invoke({
  userId: "brace",
  query: "my favorite animals are cats and parrots.",
  llm: llm,
});
```

```typescript
console.log(userToPets);
```

```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";

userToPets = {};

function generateToolsForUser(userId: string) {
  const updateFavoritePets = tool(async (input) => {
    userToPets[userId] = input.pets;
    return "update_favorite_pets called.";
  }, {
    name: "update_favorite_pets",
    description: "add to the list of favorite pets.",
    schema: z.object({
      pets: z.array(z.string()),
    }),
  });
  // You can declare and return additional tools as well:
  return [updateFavoritePets];
}
```

```typescript
const [updatePets] = generateToolsForUser("cobb");

await updatePets.invoke({ pets: ["tiger", "wolf"] });

console.log(userToPets);
```

```typescript
import { BaseChatModel } from "@langchain/core/language_models/chat_models";

async function handleRunTimeRequest(userId: string, query: string, llm: BaseChatModel): Promise<any> {
  if (!llm.bindTools) {
    throw new Error("Language model does not support tools.");
  }
  const tools = generateToolsForUser(userId);
  const llmWithTools = llm.bindTools(tools);
  return llmWithTools.invoke(query);
}
```

```typescript
const aiMessage = await handleRunTimeRequest(
  "cobb",
  "my favorite pets are tigers and wolves.",
  llm,
);

console.log(aiMessage.tool_calls[0]);
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tool_choice.ipynb
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const add = tool((input) => {
  return `${input.a + input.b}`;
}, {
  name: "add",
  description: "Adds a and b.",
  schema: z.object({
    a: z.number(),
    b: z.number(),
  }),
});

const multiply = tool((input) => {
  return `${input.a * input.b}`;
}, {
  name: "Multiply",
  description: "Multiplies a and b.",
  schema: z.object({
    a: z.number(),
    b: z.number(),
  }),
});

const tools = [add, multiply];
```

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
});
```

```typescript
const llmForcedToMultiply = llm.bindTools(tools, {
  tool_choice: "Multiply",
});
const multiplyResult = await llmForcedToMultiply.invoke("what is 2 + 4");
console.log(JSON.stringify(multiplyResult.tool_calls, null, 2));
```

```typescript
const llmForcedToUseTool = llm.bindTools(tools, {
  tool_choice: "any",
});
const anyToolResult = await llmForcedToUseTool.invoke("What day is today?");
console.log(JSON.stringify(anyToolResult.tool_calls, null, 2));
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/output_parser_json.ipynb
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});
```

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Define your desired data structure. Only used for typing the parser output.
interface Joke {
  setup: string;
  punchline: string;
}

// A query and format instructions used to prompt a language model.
const jokeQuery = "Tell me a joke.";
const formatInstructions =
  "Respond with a valid JSON object, containing two fields: 'setup' and 'punchline'.";

// Set up a parser + inject instructions into the prompt template.
const parser = new JsonOutputParser<Joke>();

const prompt = ChatPromptTemplate.fromTemplate(
  "Answer the user query.\n{format_instructions}\n{query}\n"
);

const partialedPrompt = await prompt.partial({
  format_instructions: formatInstructions,
});

const chain = partialedPrompt.pipe(model).pipe(parser);

await chain.invoke({ query: jokeQuery });
```

```typescript
for await (const s of await chain.stream({ query: jokeQuery })) {
  console.log(s);
}
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/merge_message_runs.ipynb
```typescript
import {
  HumanMessage,
  SystemMessage,
  AIMessage,
  mergeMessageRuns,
} from "@langchain/core/messages";

const messages = [
  new SystemMessage("you're a good assistant."),
  new SystemMessage("you always respond with a joke."),
  new HumanMessage({
    content: [{ type: "text", text: "i wonder why it's called langchain" }],
  }),
  new HumanMessage("and who is harrison chasing anyways"),
  new AIMessage(
    'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!'
  ),
  new AIMessage("Why, he's probably chasing after the last cup of coffee in the office!"),
];

const merged = mergeMessageRuns(messages);
console.log(
  merged
    .map((x) =>
      JSON.stringify(
        {
          role: x._getType(),
          content: x.content,
        },
        null,
        2
      )
    )
    .join("\n\n")
);
```

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { mergeMessageRuns } from "@langchain/core/messages";

const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0 });
// Notice we don't pass in messages. This creates
// a RunnableLambda that takes messages as input
const merger = mergeMessageRuns();
const chain = merger.pipe(llm);
await chain.invoke(messages);
```

```typescript
await merger.invoke(messages);
```
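The core behavior being demonstrated is that consecutive messages of the same type get concatenated into one message. That idea can be sketched without LangChain; the `SimpleMessage` shape below is invented for illustration and is simplified to string content only (the real `mergeMessageRuns` also handles structured content blocks).

```typescript
// Sketch of the "merge runs" idea: consecutive messages with the same role
// are joined into a single message.
interface SimpleMessage {
  role: string;
  content: string;
}

function mergeRuns(messages: SimpleMessage[]): SimpleMessage[] {
  const out: SimpleMessage[] = [];
  for (const msg of messages) {
    const last = out[out.length - 1];
    if (last && last.role === msg.role) {
      // Same role as the previous message: join contents with a newline.
      last.content = `${last.content}\n${msg.content}`;
    } else {
      // Role changed: start a new run (copy so the input isn't mutated).
      out.push({ ...msg });
    }
  }
  return out;
}
```

Applied to the six messages above, this would collapse them into three: one system, one human, and one AI message.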
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/code_splitter.ipynb
```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

RecursiveCharacterTextSplitter.getSeparatorsForLanguage("js");
```

```typescript
const JS_CODE = `
function helloWorld() {
  console.log("Hello, World!");
}

// Call the function
helloWorld();
`;

const jsSplitter = RecursiveCharacterTextSplitter.fromLanguage("js", {
  chunkSize: 60,
  chunkOverlap: 0,
});
const jsDocs = await jsSplitter.createDocuments([JS_CODE]);

jsDocs;
```

```typescript
const PYTHON_CODE = `
def hello_world():
    print("Hello, World!")

# Call the function
hello_world()
`;

const pythonSplitter = RecursiveCharacterTextSplitter.fromLanguage("python", {
  chunkSize: 50,
  chunkOverlap: 0,
});
const pythonDocs = await pythonSplitter.createDocuments([PYTHON_CODE]);

pythonDocs;
```

```typescript
const markdownText = `
# 🦜️🔗 LangChain

⚡ Building applications with LLMs through composability ⚡

## Quick Install

\`\`\`bash
# Hopefully this code block isn't split
pip install langchain
\`\`\`

As an open-source project in a rapidly developing field, we are extremely open to contributions.
`;

const mdSplitter = RecursiveCharacterTextSplitter.fromLanguage("markdown", {
  chunkSize: 60,
  chunkOverlap: 0,
});
const mdDocs = await mdSplitter.createDocuments([markdownText]);

mdDocs;
```

```typescript
const latexText = `
\\documentclass{article}

\\begin{document}

\\maketitle

\\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.

\\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.

\\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.

\\end{document}
`;

const latexSplitter = RecursiveCharacterTextSplitter.fromLanguage("latex", {
  chunkSize: 60,
  chunkOverlap: 0,
});
const latexDocs = await latexSplitter.createDocuments([latexText]);

latexDocs;
```

```typescript
const htmlText = `
<!DOCTYPE html>
<html>
  <head>
    <title>🦜️🔗 LangChain</title>
    <style>
      body {
        font-family: Arial, sans-serif;
      }
      h1 {
        color: darkblue;
      }
    </style>
  </head>
  <body>
    <div>
      <h1>🦜️🔗 LangChain</h1>
      <p>⚡ Building applications with LLMs through composability ⚡</p>
    </div>
    <div>
      As an open-source project in a rapidly developing field, we are extremely open to contributions.
    </div>
  </body>
</html>
`;

const htmlSplitter = RecursiveCharacterTextSplitter.fromLanguage("html", {
  chunkSize: 60,
  chunkOverlap: 0,
});
const htmlDocs = await htmlSplitter.createDocuments([htmlText]);

htmlDocs;
```

```typescript
const SOL_CODE = `
pragma solidity ^0.8.20;
contract HelloWorld {
  function add(uint a, uint b) pure public returns(uint) {
    return a + b;
  }
}
`;

const solSplitter = RecursiveCharacterTextSplitter.fromLanguage("sol", {
  chunkSize: 128,
  chunkOverlap: 0,
});
const solDocs = await solSplitter.createDocuments([SOL_CODE]);

solDocs;
```

```typescript
const PHP_CODE = `<?php
namespace foo;

class Hello {
  public function __construct() { }
}

function hello() {
  echo "Hello World!";
}

interface Human {
  public function breath();
}

trait Foo { }

enum Color {
  case Red;
  case Blue;
}`;

const phpSplitter = RecursiveCharacterTextSplitter.fromLanguage("php", {
  chunkSize: 50,
  chunkOverlap: 0,
});
const phpDocs = await phpSplitter.createDocuments([PHP_CODE]);

phpDocs;
```
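All of the language-aware splitters above share one recursive idea: try the coarsest separator for the language first, and re-split any piece that is still too large with progressively finer separators. The sketch below is a simplified, dependency-free illustration of that idea, not the actual `@langchain/textsplitters` implementation (the real splitter also merges small pieces back together and handles chunk overlap).

```typescript
// Simplified recursive splitting: split on the first separator, then
// recursively re-split any piece still longer than chunkSize using the
// remaining (finer) separators.
function recursiveSplit(text: string, separators: string[], chunkSize: number): string[] {
  if (text.length <= chunkSize) {
    return text.length > 0 ? [text] : [];
  }
  const [sep, ...rest] = separators;
  if (sep === undefined) {
    // No separators left: fall back to hard-cutting the text.
    const chunks: string[] = [];
    for (let i = 0; i < text.length; i += chunkSize) {
      chunks.push(text.slice(i, i + chunkSize));
    }
    return chunks;
  }
  return text.split(sep).flatMap((piece) => recursiveSplit(piece, rest, chunkSize));
}
```

For example, with separators `["\n\n", "\n"]`, a paragraph that fits within `chunkSize` is kept whole, while an oversized paragraph is re-split line by line. `getSeparatorsForLanguage` supplies exactly this kind of ordered separator list per language.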
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/chatbots_tools.ipynb
```typescript
// @lc-docs-hide-cell
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});
```

```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const tools = [
  new TavilySearchResults({
    maxResults: 1,
  }),
];
```

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
]);
```

```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";

// messageModifier allows you to preprocess the inputs to the model inside ReAct agent
// in this case, since we're passing a prompt string, we'll just always add a SystemMessage
// with this prompt string before any other messages sent to the model
const agent = createReactAgent({ llm, tools, messageModifier: prompt });
```

```typescript
await agent.invoke({ messages: [{ role: "user", content: "I'm Nemo!" }] });
```

```typescript
await agent.invoke({
  messages: [
    { role: "user", content: "What is the current conservation status of the Great Barrier Reef?" },
  ],
});
```

```typescript
await agent.invoke({
  messages: [
    { role: "user", content: "I'm Nemo!" },
    { role: "user", content: "Hello Nemo! How can I assist you today?" },
    { role: "user", content: "What is my name?" },
  ],
});
```

```typescript
import { MemorySaver } from "@langchain/langgraph";

// highlight-start
const memory = new MemorySaver();

const agent2 = createReactAgent({
  llm,
  tools,
  messageModifier: prompt,
  checkpointSaver: memory,
});
// highlight-end
```

```typescript
await agent2.invoke(
  { messages: [{ role: "user", content: "I'm Nemo!" }] },
  { configurable: { thread_id: "1" } }
);
```

```typescript
await agent2.invoke(
  { messages: [{ role: "user", content: "What is my name?" }] },
  { configurable: { thread_id: "1" } }
);
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/few_shot_examples_chat.ipynb
```typescript
import {
  ChatPromptTemplate,
  FewShotChatMessagePromptTemplate,
} from "@langchain/core/prompts";

const examples = [
  { input: "2+2", output: "4" },
  { input: "2+3", output: "5" },
];
```

```typescript
// This is a prompt template used to format each individual example.
const examplePrompt = ChatPromptTemplate.fromMessages([
  ["human", "{input}"],
  ["ai", "{output}"],
]);

const fewShotPrompt = new FewShotChatMessagePromptTemplate({
  examplePrompt,
  examples,
  inputVariables: [], // no input variables
});

const result = await fewShotPrompt.invoke({});
console.log(result.toChatMessages());
```

```typescript
const finalPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a wondrous wizard of math."],
  fewShotPrompt,
  ["human", "{input}"],
]);
```

```typescript
const chain = finalPrompt.pipe(model);

await chain.invoke({ input: "What's the square of a triangle?" });
```

```typescript
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const examples = [
  { input: "2+2", output: "4" },
  { input: "2+3", output: "5" },
  { input: "2+4", output: "6" },
  { input: "What did the cow say to the moon?", output: "nothing at all" },
  {
    input: "Write me a poem about the moon",
    output: "One for the moon, and one for me, who are we to talk about the moon?",
  },
];

const toVectorize = examples.map((example) => `${example.input} ${example.output}`);
const embeddings = new OpenAIEmbeddings();
const vectorStore = await MemoryVectorStore.fromTexts(toVectorize, examples, embeddings);
```

```typescript
const exampleSelector = new SemanticSimilarityExampleSelector({
  vectorStore,
  k: 2,
});

// The prompt template will load examples by passing the input to the `select_examples` method
await exampleSelector.selectExamples({ input: "horse" });
```

```typescript
import {
  ChatPromptTemplate,
  FewShotChatMessagePromptTemplate,
} from "@langchain/core/prompts";

// Define the few-shot prompt.
const fewShotPrompt = new FewShotChatMessagePromptTemplate({
  // The input variables select the values to pass to the example_selector
  inputVariables: ["input"],
  exampleSelector,
  // Define how each example will be formatted.
  // In this case, each example will become 2 messages:
  // 1 human, and 1 AI
  examplePrompt: ChatPromptTemplate.fromMessages([
    ["human", "{input}"],
    ["ai", "{output}"],
  ]),
});

const results = await fewShotPrompt.invoke({ input: "What's 3+3?" });
console.log(results.toChatMessages());
```

```typescript
const finalPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a wondrous wizard of math."],
  fewShotPrompt,
  ["human", "{input}"],
]);

const result = await fewShotPrompt.invoke({ input: "What's 3+3?" });
console.log(result);
```

```typescript
const chain = finalPrompt.pipe(model);

await chain.invoke({ input: "What's 3+3?" });
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/qa_streaming.ipynb
```typescript
import "cheerio";
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { formatDocumentsAsString } from "langchain/util/document";
import {
  RunnableSequence,
  RunnablePassthrough,
  RunnableMap,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const loader = new CheerioWebBaseLoader(
  "https://lilianweng.github.io/posts/2023-06-23-agent/"
);
const docs = await loader.load();

const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
const splits = await textSplitter.splitDocuments(docs);
const vectorStore = await MemoryVectorStore.fromDocuments(splits, new OpenAIEmbeddings());

// Retrieve and generate using the relevant snippets of the blog.
const retriever = vectorStore.asRetriever();
const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

const ragChainFromDocs = RunnableSequence.from([
  RunnablePassthrough.assign({ context: (input) => formatDocumentsAsString(input.context) }),
  prompt,
  llm,
  new StringOutputParser(),
]);

let ragChainWithSource = new RunnableMap({
  steps: { context: retriever, question: new RunnablePassthrough() },
});
ragChainWithSource = ragChainWithSource.assign({ answer: ragChainFromDocs });

await ragChainWithSource.invoke("What is Task Decomposition");
```

```typescript
console.log(prompt.promptMessages.map((msg) => msg.prompt.template).join("\n"));
```

```typescript
for await (const chunk of await ragChainWithSource.stream("What is task decomposition?")) {
  console.log(chunk);
}
```

```typescript
const output = {};
let currentKey: string | null = null;

for await (const chunk of await ragChainWithSource.stream("What is task decomposition?")) {
  for (const key of Object.keys(chunk)) {
    if (output[key] === undefined) {
      output[key] = chunk[key];
    } else {
      output[key] += chunk[key];
    }

    if (key !== currentKey) {
      console.log(`\n\n${key}: ${JSON.stringify(chunk[key])}`);
    } else {
      console.log(chunk[key]);
    }
    currentKey = key;
  }
}
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tool_results_pass_to_model.ipynb
```typescript
import { z } from "zod";
import { tool } from "@langchain/core/tools";

const addTool = tool(
  async ({ a, b }) => {
    return a + b;
  },
  {
    name: "add",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    description: "Adds a and b.",
  }
);

const multiplyTool = tool(
  async ({ a, b }) => {
    return a * b;
  },
  {
    name: "multiply",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    description: "Multiplies a and b.",
  }
);

const tools = [addTool, multiplyTool];

import { HumanMessage } from "@langchain/core/messages";

// `llm` is a tool-calling-capable chat model instantiated in an earlier cell.
const llmWithTools = llm.bindTools(tools);

const messages = [new HumanMessage("What is 3 * 12? Also, what is 11 + 49?")];

const aiMessage = await llmWithTools.invoke(messages);

console.log(aiMessage);

messages.push(aiMessage);

const toolsByName = {
  add: addTool,
  multiply: multiplyTool,
};

for (const toolCall of aiMessage.tool_calls) {
  const selectedTool = toolsByName[toolCall.name];
  const toolMessage = await selectedTool.invoke(toolCall);
  messages.push(toolMessage);
}

console.log(messages);

await llmWithTools.invoke(messages);
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/graph_semantic.ipynb
import "neo4j-driver"; import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph"; const url = process.env.NEO4J_URI; const username = process.env.NEO4J_USER; const password = process.env.NEO4J_PASSWORD; const graph = await Neo4jGraph.initialize({ url, username, password }); // Import movie information const moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv' AS row MERGE (m:Movie {id:row.movieId}) SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating) FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m)) FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m)) FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))` await graph.query(moviesQuery);const descriptionQuery = `MATCH (m:Movie|Person) WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidate MATCH (m)-[r:ACTED_IN|HAS_GENRE]-(t) WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as names WITH m, type+": "+reduce(s="", n IN names | s + n + ", ") as types WITH m, collect(types) as contexts WITH m, "type:" + labels(m)[0] + "\ntitle: "+ coalesce(m.title, m.name) + "\nyear: "+coalesce(m.released,"") +"\n" + reduce(s="", c in contexts | s + substring(c, 0, size(c)-2) +"\n") as context RETURN context LIMIT 1` const getInformation = async (entity: string) => { try { const data = await graph.query(descriptionQuery, { candidate: entity }); return data[0]["context"]; } catch (error) { return "No information was found"; } }import { tool } from "@langchain/core/tools"; import { z } from "zod"; const informationTool = tool((input) => { return getInformation(input.entity); }, { name: "Information", description: "useful for when you need to answer questions about various actors or movies", schema: z.object({ entity: 
z.string().describe("movie or a person mentioned in the question"), }), });import { ChatOpenAI } from "@langchain/openai"; import { AgentExecutor } from "langchain/agents"; import { formatToOpenAIFunctionMessages } from "langchain/agents/format_scratchpad"; import { OpenAIFunctionsAgentOutputParser } from "langchain/agents/openai/output_parser"; import { convertToOpenAIFunction } from "@langchain/core/utils/function_calling"; import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts"; import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages"; import { RunnableSequence } from "@langchain/core/runnables"; const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 }) const tools = [informationTool] const llmWithTools = llm.bind({ functions: tools.map(convertToOpenAIFunction), }) const prompt = ChatPromptTemplate.fromMessages( [ [ "system", "You are a helpful assistant that finds information about movies and recommends them. If tools require follow up questions, make sure to ask the user for clarification. Make sure to include any available options that need to be clarified in the follow up questions Do only the things the user specifically requested." 
], new MessagesPlaceholder("chat_history"), ["human", "{input}"], new MessagesPlaceholder("agent_scratchpad"), ] ) const _formatChatHistory = (chatHistory) => { const buffer: Array<BaseMessage> = [] for (const [human, ai] of chatHistory) { buffer.push(new HumanMessage({ content: human })) buffer.push(new AIMessage({ content: ai })) } return buffer } const agent = RunnableSequence.from([ { input: (x) => x.input, chat_history: (x) => { if ("chat_history" in x) { return _formatChatHistory(x.chat_history); } return []; }, agent_scratchpad: (x) => { if ("steps" in x) { return formatToOpenAIFunctionMessages( x.steps ); } return []; }, }, prompt, llmWithTools, new OpenAIFunctionsAgentOutputParser(), ]) const agentExecutor = new AgentExecutor({ agent, tools });await agentExecutor.invoke({ input: "Who played in Casino?" })
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/sql_prompting.mdx
# How to use prompting to improve results

:::info Prerequisites

This guide assumes familiarity with the following:

- [Question answering over SQL data](/docs/tutorials/sql_qa)

:::

In this guide we'll go over prompting strategies to improve SQL query generation. We'll largely focus on methods for getting relevant database-specific information in your prompt.

## Setup

First, install the required packages and set your environment variables. This example will use OpenAI as the LLM.

```bash
npm install @langchain/community @langchain/openai typeorm sqlite3
```

```bash
export OPENAI_API_KEY="your api key"
# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true

# Reduce tracing latency if you are not in a serverless environment
# export LANGCHAIN_CALLBACKS_BACKGROUND=true
```

The example below uses a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:

- Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
- Run `sqlite3 Chinook.db`
- Run `.read Chinook_Sqlite.sql`
- Test `SELECT * FROM Artist LIMIT 10;`

Now, `Chinook.db` is in our directory and we can interface with it using the Typeorm-driven `SqlDatabase` class:

import CodeBlock from "@theme/CodeBlock";
import DbCheck from "@examples/use_cases/sql/db_check.ts";

<CodeBlock language="typescript">{DbCheck}</CodeBlock>

## Dialect-specific prompting

One of the simplest things we can do is make our prompt specific to the SQL dialect we're using. When using the built-in [`createSqlQueryChain`](https://api.js.langchain.com/functions/langchain.chains_sql_db.createSqlQueryChain.html) and [`SqlDatabase`](https://api.js.langchain.com/classes/langchain.sql_db.SqlDatabase.html), this is handled for you for any of the following dialects:

import DialectExample from "@examples/use_cases/sql/prompting/list_dialects.ts";

<CodeBlock language="typescript">{DialectExample}</CodeBlock>

## Table definitions and example rows

In basically any SQL chain, we'll need to feed the model at least part of the database schema. Without this it won't be able to write valid queries. Our database comes with some convenience methods to give us the relevant context. Specifically, we can get the table names, their schemas, and a sample of rows from each table:

import TableDefinitionsExample from "@examples/use_cases/sql/prompting/table_definitions.ts";

<CodeBlock language="typescript">{TableDefinitionsExample}</CodeBlock>

## Few-shot examples

Including examples of natural language questions being converted to valid SQL queries against our database in the prompt will often improve model performance, especially for complex queries.

Let's say we have the following examples:

import ExampleList from "@examples/use_cases/sql/prompting/examples.ts";

<CodeBlock language="typescript">{ExampleList}</CodeBlock>

We can create a few-shot prompt with them like so:

import FewShotExample from "@examples/use_cases/sql/prompting/few_shot.ts";

<CodeBlock language="typescript">{FewShotExample}</CodeBlock>

## Dynamic few-shot examples

If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they don't fit in the model's context window or because the long tail of examples distracts the model. And specifically, given any input we want to include the examples most relevant to that input.

We can do just this using an ExampleSelector. In this case we'll use a [`SemanticSimilarityExampleSelector`](https://api.js.langchain.com/classes/langchain_core.example_selectors.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones:

import DynamicFewShotExample from "@examples/use_cases/sql/prompting/dynamic_few_shot.ts";

<CodeBlock language="typescript">{DynamicFewShotExample}</CodeBlock>

## Next steps

You've now learned about some prompting strategies to improve SQL generation.

Next, check out some of the other guides in this section, like [how to query over large databases](/docs/how_to/sql_large_db).
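The selection step can be sketched in isolation: embed the input, score each stored example by cosine similarity, and keep the top-k. This is an illustrative sketch of the mechanism, not the library's internal implementation — the `Example` shape and the toy embeddings are hypothetical stand-ins for real model embeddings:

```typescript
// Hypothetical example shape for SQL few-shot prompting.
type Example = { input: string; query: string };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every stored example against the input embedding and keep the
// k most similar ones — the essence of semantic-similarity selection.
function selectTopK(
  inputEmbedding: number[],
  examples: { example: Example; embedding: number[] }[],
  k: number
): Example[] {
  return examples
    .map(({ example, embedding }) => ({
      example,
      score: cosine(inputEmbedding, embedding),
    }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(({ example }) => example);
}
```

In the real selector, the embeddings come from the embedding model you pass in and the scoring happens inside the vector store's similarity search.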
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/multiple_queries.ipynb
import { MemoryVectorStore } from "langchain/vectorstores/memory"; import { CohereEmbeddings } from "@langchain/cohere"; import { MultiQueryRetriever } from "langchain/retrievers/multi_query"; import { ChatAnthropic } from "@langchain/anthropic"; const embeddings = new CohereEmbeddings(); const vectorstore = await MemoryVectorStore.fromTexts( [ "Buildings are made out of brick", "Buildings are made out of wood", "Buildings are made out of stone", "Cars are made out of metal", "Cars are made out of plastic", "mitochondria is the powerhouse of the cell", "mitochondria is made of lipids", ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], embeddings ); const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229" }); const retriever = MultiQueryRetriever.fromLLM({ llm: model, retriever: vectorstore.asRetriever(), }); const query = "What are mitochondria made of?"; const retrievedDocs = await retriever.invoke(query); /* Generated queries: What are the components of mitochondria?,What substances comprise the mitochondria organelle? ,What is the molecular composition of mitochondria? */ console.log(retrievedDocs);import { LLMChain } from "langchain/chains"; import { pull } from "langchain/hub"; import { BaseOutputParser } from "@langchain/core/output_parsers"; import { PromptTemplate } from "@langchain/core/prompts"; type LineList = { lines: string[]; }; class LineListOutputParser extends BaseOutputParser<LineList> { static lc_name() { return "LineListOutputParser"; } lc_namespace = ["langchain", "retrievers", "multiquery"]; async parse(text: string): Promise<LineList> { const startKeyIndex = text.indexOf("<questions>"); const endKeyIndex = text.indexOf("</questions>"); const questionsStartIndex = startKeyIndex === -1 ? 0 : startKeyIndex + "<questions>".length; const questionsEndIndex = endKeyIndex === -1 ? 
text.length : endKeyIndex; const lines = text .slice(questionsStartIndex, questionsEndIndex) .trim() .split("\n") .filter((line) => line.trim() !== ""); return { lines }; } getFormatInstructions(): string { throw new Error("Not implemented."); } } // Default prompt is available at: https://smith.langchain.com/hub/jacob/multi-vector-retriever-german const prompt: PromptTemplate = await pull( "jacob/multi-vector-retriever-german" ); const vectorstore = await MemoryVectorStore.fromTexts( [ "Gebäude werden aus Ziegelsteinen hergestellt", "Gebäude werden aus Holz hergestellt", "Gebäude werden aus Stein hergestellt", "Autos werden aus Metall hergestellt", "Autos werden aus Kunststoff hergestellt", "Mitochondrien sind die Energiekraftwerke der Zelle", "Mitochondrien bestehen aus Lipiden", ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], embeddings ); const model = new ChatAnthropic({}); const llmChain = new LLMChain({ llm: model, prompt, outputParser: new LineListOutputParser(), }); const retriever = new MultiQueryRetriever({ retriever: vectorstore.asRetriever(), llmChain, }); const query = "What are mitochondria made of?"; const retrievedDocs = await retriever.invoke(query); /* Generated queries: Was besteht ein Mitochondrium?,Aus welchen Komponenten setzt sich ein Mitochondrium zusammen? ,Welche Moleküle finden sich in einem Mitochondrium? */ console.log(retrievedDocs);
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/contextual_compression.mdx
# How to do retrieval with contextual compression

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Retrievers](/docs/concepts/retrievers)
- [Retrieval-augmented generation (RAG)](/docs/tutorials/rag)

:::

One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.

Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.

To use the Contextual Compression Retriever, you'll need:

- a base retriever
- a Document Compressor

The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.

## Using a vanilla vector store retriever

Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). Given an example question, our retriever returns one or two relevant docs and a few irrelevant docs, and even the relevant docs have a lot of irrelevant information in them. To extract all the context we can, we use an `LLMChainExtractor`, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/contextual_compression.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>

## `EmbeddingsFilter`

Making an extra LLM call over each retrieved document is expensive and slow. The `EmbeddingsFilter` provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.

This is most useful for non-vector store retrievers where we may not have control over the returned chunk size, or as part of a pipeline, outlined below.

Here's an example:

import EmbeddingsFilterExample from "@examples/retrievers/embeddings_filter.ts";

<CodeBlock language="typescript">{EmbeddingsFilterExample}</CodeBlock>

## Stringing compressors and document transformers together

Using the `DocumentCompressorPipeline` we can also easily combine multiple compressors in sequence. Along with compressors we can add BaseDocumentTransformers to our pipeline, which don't perform any contextual compression but simply perform some transformation on a set of documents. For example `TextSplitters` can be used as document transformers to split documents into smaller pieces, and the `EmbeddingsFilter` can be used to filter out documents based on similarity of the individual chunks to the input query.

Below we create a compressor pipeline by first splitting raw webpage documents retrieved from the [Tavily web search API retriever](/docs/integrations/retrievers/tavily) into smaller chunks, then filtering based on relevance to the query. The result is smaller chunks that are semantically similar to the input query. This skips the need to add documents to a vector store to perform similarity search, which can be useful for one-off use cases:

import DocumentCompressorPipelineExample from "@examples/retrievers/document_compressor_pipeline.ts";

<CodeBlock language="typescript">{DocumentCompressorPipelineExample}</CodeBlock>

## Next steps

You've now learned a few ways to use contextual compression to remove bad data from your results.

See the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/docs/how_to/custom_retriever/).
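The split-then-filter pipeline idea can be sketched without any LangChain classes: each stage is a function from a list of documents to a (usually shorter) list, and the pipeline applies the stages in order. The stage names and the keyword-based filter below are hypothetical stand-ins for illustration — a real `DocumentCompressorPipeline` would use a text splitter and an embeddings-based relevance filter:

```typescript
type Doc = { pageContent: string };
type Stage = (docs: Doc[]) => Doc[];

// Apply each stage in order to the output of the previous one.
const runPipeline =
  (stages: Stage[]) =>
  (docs: Doc[]): Doc[] =>
    stages.reduce((current, stage) => stage(current), docs);

// Stage 1: split each document into smaller chunks (here, naive
// sentence splitting stands in for a real text splitter).
const splitIntoSentences: Stage = (docs) =>
  docs.flatMap((d) =>
    d.pageContent
      .split(". ")
      .filter(Boolean)
      .map((s) => ({ pageContent: s }))
  );

// Stage 2: keep only chunks relevant to the query (a keyword match
// stands in for an embeddings similarity threshold).
const keywordFilter =
  (term: string): Stage =>
  (docs) =>
    docs.filter((d) => d.pageContent.toLowerCase().includes(term));
```

Composing the two stages mirrors the compressor pipeline above: the splitter fans documents out into chunks, and the filter drops the chunks that don't relate to the query.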
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/few_shot_examples.ipynb
import { PromptTemplate } from "@langchain/core/prompts"; const examplePrompt = PromptTemplate.fromTemplate("Question: {question}\n{answer}")const examples = [ { question: "Who lived longer, Muhammad Ali or Alan Turing?", answer: ` Are follow up questions needed here: Yes. Follow up: How old was Muhammad Ali when he died? Intermediate answer: Muhammad Ali was 74 years old when he died. Follow up: How old was Alan Turing when he died? Intermediate answer: Alan Turing was 41 years old when he died. So the final answer is: Muhammad Ali ` }, { question: "When was the founder of craigslist born?", answer: ` Are follow up questions needed here: Yes. Follow up: Who was the founder of craigslist? Intermediate answer: Craigslist was founded by Craig Newmark. Follow up: When was Craig Newmark born? Intermediate answer: Craig Newmark was born on December 6, 1952. So the final answer is: December 6, 1952 ` }, { question: "Who was the maternal grandfather of George Washington?", answer: ` Are follow up questions needed here: Yes. Follow up: Who was the mother of George Washington? Intermediate answer: The mother of George Washington was Mary Ball Washington. Follow up: Who was the father of Mary Ball Washington? Intermediate answer: The father of Mary Ball Washington was Joseph Ball. So the final answer is: Joseph Ball ` }, { question: "Are both the directors of Jaws and Casino Royale from the same country?", answer: ` Are follow up questions needed here: Yes. Follow up: Who is the director of Jaws? Intermediate Answer: The director of Jaws is Steven Spielberg. Follow up: Where is Steven Spielberg from? Intermediate Answer: The United States. Follow up: Who is the director of Casino Royale? Intermediate Answer: The director of Casino Royale is Martin Campbell. Follow up: Where is Martin Campbell from? Intermediate Answer: New Zealand. 
So the final answer is: No ` } ];import { FewShotPromptTemplate } from "@langchain/core/prompts"; const prompt = new FewShotPromptTemplate({ examples, examplePrompt, suffix: "Question: {input}", inputVariables: ["input"], }) const formatted = await prompt.format({ input: "Who was the father of Mary Ball Washington?" }) console.log(formatted.toString())import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors"; import { MemoryVectorStore } from "langchain/vectorstores/memory"; import { OpenAIEmbeddings } from "@langchain/openai"; const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples( // This is the list of examples available to select from. examples, // This is the embedding class used to produce embeddings which are used to measure semantic similarity. new OpenAIEmbeddings(), // This is the VectorStore class that is used to store the embeddings and do a similarity search over. MemoryVectorStore, { // This is the number of examples to produce. k: 1, } ) // Select the most similar example to the input. const question = "Who was the father of Mary Ball Washington?" const selectedExamples = await exampleSelector.selectExamples({ question }) console.log(`Examples most similar to the input: ${question}`) for (const example of selectedExamples) { console.log("\n"); console.log(Object.entries(example).map(([k, v]) => `${k}: ${v}`).join("\n")) }const prompt = new FewShotPromptTemplate({ exampleSelector, examplePrompt, suffix: "Question: {input}", inputVariables: ["input"], }) const formatted = await prompt.invoke({ input: "Who was the father of Mary Ball Washington?" }); console.log(formatted.toString())
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/document_loader_html.ipynb
```typescript
import { UnstructuredLoader } from "@langchain/community/document_loaders/fs/unstructured";

const filePath =
  "../../../../libs/langchain-community/src/tools/fixtures/wordoftheday.html";

const loader = new UnstructuredLoader(filePath, {
  apiKey: process.env.UNSTRUCTURED_API_KEY,
  apiUrl: process.env.UNSTRUCTURED_API_URL,
});

const data = await loader.load();
console.log(data.slice(0, 5));
```
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/qa_chat_history_how_to.ipynb
// @lc-docs-hide-cell import { ChatOpenAI } from "@langchain/openai"; const llm = new ChatOpenAI({ model: "gpt-4o" });import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio"; import { RecursiveCharacterTextSplitter } from "langchain/text_splitter"; import { MemoryVectorStore } from "langchain/vectorstores/memory" import { OpenAIEmbeddings } from "@langchain/openai"; const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/" ); const docs = await loader.load(); const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 }); const splits = await textSplitter.splitDocuments(docs); const vectorStore = await MemoryVectorStore.fromDocuments(splits, new OpenAIEmbeddings()); // Retrieve and generate using the relevant snippets of the blog. const retriever = vectorStore.asRetriever();import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts"; const contextualizeQSystemPrompt = ( "Given a chat history and the latest user question " + "which might reference context in the chat history, " + "formulate a standalone question which can be understood " + "without the chat history. Do NOT answer the question, " + "just reformulate it if needed and otherwise return it as is." ) const contextualizeQPrompt = ChatPromptTemplate.fromMessages( [ ["system", contextualizeQSystemPrompt], new MessagesPlaceholder("chat_history"), ["human", "{input}"], ] )import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever"; const historyAwareRetriever = await createHistoryAwareRetriever({ llm, retriever, rephrasePrompt: contextualizeQPrompt }); import { createStuffDocumentsChain } from "langchain/chains/combine_documents"; import { createRetrievalChain } from "langchain/chains/retrieval"; const systemPrompt = "You are an assistant for question-answering tasks. " + "Use the following pieces of retrieved context to answer " + "the question. 
If you don't know the answer, say that you " + "don't know. Use three sentences maximum and keep the " + "answer concise." + "\n\n" + "{context}"; const qaPrompt = ChatPromptTemplate.fromMessages([ ["system", systemPrompt], new MessagesPlaceholder("chat_history"), ["human", "{input}"], ]); const questionAnswerChain = await createStuffDocumentsChain({ llm, prompt: qaPrompt, }); const ragChain = await createRetrievalChain({ retriever: historyAwareRetriever, combineDocsChain: questionAnswerChain, });import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages"; import { StateGraph, START, END, MemorySaver, messagesStateReducer, Annotation } from "@langchain/langgraph"; // Define the State interface const GraphAnnotation = Annotation.Root({ input: Annotation<string>(), chat_history: Annotation<BaseMessage[]>({ reducer: messagesStateReducer, default: () => [], }), context: Annotation<string>(), answer: Annotation<string>(), }) // Define the call_model function async function callModel(state: typeof GraphAnnotation.State) { const response = await ragChain.invoke(state); return { chat_history: [ new HumanMessage(state.input), new AIMessage(response.answer), ], context: response.context, answer: response.answer, }; } // Create the workflow const workflow = new StateGraph(GraphAnnotation) .addNode("model", callModel) .addEdge(START, "model") .addEdge("model", END); // Compile the graph with a checkpointer object const memory = new MemorySaver(); const app = workflow.compile({ checkpointer: memory });import { v4 as uuidv4 } from "uuid"; const threadId = uuidv4(); const config = { configurable: { thread_id: threadId } }; const result = await app.invoke( { input: "What is Task Decomposition?" }, config, ) console.log(result.answer);const result2 = await app.invoke( { input: "What is one way of doing it?" 
}, config, ) console.log(result2.answer);const chatHistory = (await app.getState(config)).values.chat_history; for (const message of chatHistory) { console.log(message); }import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio"; import { RecursiveCharacterTextSplitter } from "langchain/text_splitter"; import { MemoryVectorStore } from "langchain/vectorstores/memory" import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai"; import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts"; import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever"; import { createStuffDocumentsChain } from "langchain/chains/combine_documents"; import { createRetrievalChain } from "langchain/chains/retrieval"; import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages"; import { StateGraph, START, END, MemorySaver, messagesStateReducer, Annotation } from "@langchain/langgraph"; import { v4 as uuidv4 } from "uuid"; const llm2 = new ChatOpenAI({ model: "gpt-4o" }); const loader2 = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/" ); const docs2 = await loader2.load(); const textSplitter2 = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 }); const splits2 = await textSplitter2.splitDocuments(docs2); const vectorStore2 = await MemoryVectorStore.fromDocuments(splits2, new OpenAIEmbeddings()); // Retrieve and generate using the relevant snippets of the blog. const retriever2 = vectorStore2.asRetriever(); const contextualizeQSystemPrompt2 = "Given a chat history and the latest user question " + "which might reference context in the chat history, " + "formulate a standalone question which can be understood " + "without the chat history. 
Do NOT answer the question, " + "just reformulate it if needed and otherwise return it as is."; const contextualizeQPrompt2 = ChatPromptTemplate.fromMessages( [ ["system", contextualizeQSystemPrompt2], new MessagesPlaceholder("chat_history"), ["human", "{input}"], ] ) const historyAwareRetriever2 = await createHistoryAwareRetriever({ llm: llm2, retriever: retriever2, rephrasePrompt: contextualizeQPrompt2 }); const systemPrompt2 = "You are an assistant for question-answering tasks. " + "Use the following pieces of retrieved context to answer " + "the question. If you don't know the answer, say that you " + "don't know. Use three sentences maximum and keep the " + "answer concise." + "\n\n" + "{context}"; const qaPrompt2 = ChatPromptTemplate.fromMessages([ ["system", systemPrompt2], new MessagesPlaceholder("chat_history"), ["human", "{input}"], ]); const questionAnswerChain2 = await createStuffDocumentsChain({ llm: llm2, prompt: qaPrompt2, }); const ragChain2 = await createRetrievalChain({ retriever: historyAwareRetriever2, combineDocsChain: questionAnswerChain2, }); // Define the State interface const GraphAnnotation2 = Annotation.Root({ input: Annotation<string>(), chat_history: Annotation<BaseMessage[]>({ reducer: messagesStateReducer, default: () => [], }), context: Annotation<string>(), answer: Annotation<string>(), }) // Define the call_model function async function callModel2(state: typeof GraphAnnotation2.State) { const response = await ragChain2.invoke(state); return { chat_history: [ new HumanMessage(state.input), new AIMessage(response.answer), ], context: response.context, answer: response.answer, }; } // Create the workflow const workflow2 = new StateGraph(GraphAnnotation2) .addNode("model", callModel2) .addEdge(START, "model") .addEdge("model", END); // Compile the graph with a checkpointer object const memory2 = new MemorySaver(); const app2 = workflow2.compile({ checkpointer: memory2 }); const threadId2 = uuidv4(); const config2 = { configurable: { 
thread_id: threadId2 } }; const result3 = await app2.invoke( { input: "What is Task Decomposition?" }, config2, ) console.log(result3.answer); const result4 = await app2.invoke( { input: "What is one way of doing it?" }, config2, ) console.log(result4.answer);import { createRetrieverTool } from "langchain/tools/retriever"; const tool = createRetrieverTool( retriever, { name: "blog_post_retriever", description: "Searches and returns excerpts from the Autonomous Agents blog post.", } ) const tools = [tool]import { createReactAgent } from "@langchain/langgraph/prebuilt"; const agentExecutor = createReactAgent({ llm, tools })const query = "What is Task Decomposition?" for await (const s of await agentExecutor.stream( { messages: [{ role: "user", content: query }] }, )){ console.log(s) console.log("----") }import { MemorySaver } from "@langchain/langgraph"; const memory3 = new MemorySaver(); const agentExecutor2 = createReactAgent({ llm, tools, checkpointSaver: memory3 })const threadId3 = uuidv4(); const config3 = { configurable: { thread_id: threadId3 } }; for await (const s of await agentExecutor2.stream({ messages: [{ role: "user", content: "Hi! I'm bob" }] }, config3)) { console.log(s) console.log("----") }const query2 = "What is Task Decomposition?" for await (const s of await agentExecutor2.stream({ messages: [{ role: "user", content: query2 }] }, config3)) { console.log(s) console.log("----") }const query3 = "What according to the blog post are common ways of doing it? 
redo the search" for await (const s of await agentExecutor2.stream({ messages: [{ role: "user", content: query3 }] }, config3)) { console.log(s) console.log("----") }import { createRetrieverTool } from "langchain/tools/retriever"; import { createReactAgent } from "@langchain/langgraph/prebuilt"; import { MemorySaver } from "@langchain/langgraph"; import { ChatOpenAI } from "@langchain/openai"; import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio"; import { RecursiveCharacterTextSplitter } from "langchain/text_splitter"; import { MemoryVectorStore } from "langchain/vectorstores/memory" import { OpenAIEmbeddings } from "@langchain/openai"; const llm3 = new ChatOpenAI({ model: "gpt-4o" }); const loader3 = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/" ); const docs3 = await loader3.load(); const textSplitter3 = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 }); const splits3 = await textSplitter3.splitDocuments(docs3); const vectorStore3 = await MemoryVectorStore.fromDocuments(splits3, new OpenAIEmbeddings()); // Retrieve and generate using the relevant snippets of the blog. const retriever3 = vectorStore3.asRetriever(); const tool2 = createRetrieverTool( retriever3, { name: "blog_post_retriever", description: "Searches and returns excerpts from the Autonomous Agents blog post.", } ) const tools2 = [tool2] const memory4 = new MemorySaver(); const agentExecutor3 = createReactAgent({ llm: llm3, tools: tools2, checkpointSaver: memory4 })
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/callbacks_runtime.ipynb
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

const handler = new ConsoleCallbackHandler();

const prompt = ChatPromptTemplate.fromTemplate(`What is 1 + {number}?`);
const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const chain = prompt.pipe(model);

await chain.invoke({ number: "2" }, { callbacks: [handler] });
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/ensemble_retriever.mdx
# How to combine results from multiple retrievers

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Documents](https://api.js.langchain.com/classes/_langchain_core.documents.Document.html)
- [Retrievers](/docs/concepts/retrievers)

:::

The [EnsembleRetriever](https://api.js.langchain.com/classes/langchain.retrievers_ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://api.js.langchain.com/classes/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.

By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single one. One useful pattern is to combine a keyword-matching retriever with a dense retriever (like embedding similarity), because their strengths are complementary. This is often called "hybrid search": the sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity.

Below we demonstrate ensembling a [simple custom retriever](/docs/how_to/custom_retriever/) that returns documents directly containing the input query with a retriever derived from a [demo in-memory vector store](https://api.js.langchain.com/classes/langchain.vectorstores_memory.MemoryVectorStore.html).

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/ensemble_retriever.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>

## Next steps

You've now learned how to combine results from multiple retrievers.
Next, check out some other retrieval how-to guides, such as how to [improve results using multiple embeddings per document](/docs/how_to/multi_vector) or how to [create your own custom retriever](/docs/how_to/custom_retriever).
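The Reciprocal Rank Fusion step mentioned above can be illustrated with a small standalone sketch. This is a simplified illustration, not the library's internal code; the function name and the smoothing constant `k = 60` (the default from the original RRF paper) are assumptions:

```typescript
// Minimal Reciprocal Rank Fusion: merge several ranked lists of document ids.
// Each document's fused score is the sum over lists of 1 / (k + rank),
// where rank is its 1-based position in that list.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, index) => {
      const contribution = 1 / (k + index + 1);
      scores.set(docId, (scores.get(docId) ?? 0) + contribution);
    });
  }
  // Sort documents by fused score, highest first.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([docId]) => docId);
}

// "doc2" appears near the top of both lists, so it wins overall.
const fused = reciprocalRankFusion([
  ["doc1", "doc2", "doc3"], // e.g. keyword retriever results
  ["doc2", "doc4", "doc1"], // e.g. vector retriever results
]);
console.log(fused); // [ 'doc2', 'doc1', 'doc4', 'doc3' ]
```

Note that a document ranked second in both lists can outscore one ranked first in only a single list, which is exactly the behavior that makes RRF a good fit for hybrid search.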
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/sql_query_checking.mdx
# How to do query validation

:::info Prerequisites

This guide assumes familiarity with the following:

- [Question answering over SQL data](/docs/tutorials/sql_qa)

:::

Perhaps the most error-prone part of any SQL chain or agent is writing valid and safe SQL queries. In this guide we'll go over some strategies for validating our queries and handling invalid queries.

## Setup

First, install the required packages and set environment variables:

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash
npm install @langchain/community @langchain/openai typeorm sqlite3
```

```bash
export OPENAI_API_KEY="your api key"
# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true
# Reduce tracing latency if you are not in a serverless environment
# export LANGCHAIN_CALLBACKS_BACKGROUND=true
```

The example below uses a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:

- Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
- Run `sqlite3 Chinook.db`
- Run `.read Chinook_Sqlite.sql`
- Test with `SELECT * FROM Artist LIMIT 10;`

Now, `Chinook.db` is in our directory and we can interface with it using the Typeorm-driven `SqlDatabase` class:

import CodeBlock from "@theme/CodeBlock";
import DbCheck from "@examples/use_cases/sql/db_check.ts";

<CodeBlock language="typescript">{DbCheck}</CodeBlock>

## Query checker

Perhaps the simplest strategy is to ask the model itself to check the original query for common mistakes.
Suppose we have the following SQL query chain:

import FullExample from "@examples/use_cases/sql/query_checking.ts";

<CodeBlock language="typescript">{FullExample}</CodeBlock>

## Next steps

You've now learned about some strategies to validate generated SQL queries.

Next, check out some of the other guides in this section, like [how to query over large databases](/docs/how_to/sql_large_db).
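The "write, then check" pattern can be sketched without any API calls by treating the model as a plain async function. The prompt text, the `Model` type, and both mock models below are illustrative stand-ins, not the chain from the full example above:

```typescript
// A model here is just "prompt in, completion out".
type Model = (prompt: string) => Promise<string>;

// Build a review prompt listing common SQL mistakes to look for.
const checkPrompt = (query: string): string =>
  `Double check the SQL query below for common mistakes, including:
- Using NOT IN with NULL values
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates

If there are no mistakes, reproduce the original query exactly.

Query: ${query}`;

async function writeThenCheck(
  question: string,
  writer: Model,
  checker: Model
): Promise<string> {
  // Step 1: draft a query; step 2: ask a (possibly different) model to review it.
  const draft = await writer(`Write a SQL query answering: ${question}`);
  return checker(checkPrompt(draft));
}

// Deterministic stand-in models for demonstration; a real chain
// would call a chat model for both steps.
const mockWriter: Model = async () => "SELECT * FROM Artist LIMIT 10;";
const mockChecker: Model = async (prompt) => prompt.split("Query: ")[1] ?? "";

const checked = await writeThenCheck("List ten artists", mockWriter, mockChecker);
console.log(checked); // SELECT * FROM Artist LIMIT 10;
```

The design point is simply that validation is a second LLM call over the first call's output, so it composes with any query-writing chain.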
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/caching_embeddings.mdx
import CodeBlock from "@theme/CodeBlock";
import InMemoryExample from "@examples/embeddings/cache_backed_in_memory.ts";
import RedisExample from "@examples/embeddings/cache_backed_redis.ts";

# How to cache embedding results

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Embeddings](/docs/concepts/embedding_models)

:::

Embeddings can be stored or temporarily cached to avoid needing to recompute them.

Caching embeddings can be done using a `CacheBackedEmbeddings` instance. The cache-backed embedder is a wrapper around an embedder that caches embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache.

The main supported way to initialize a `CacheBackedEmbeddings` is the `fromBytesStore` static method. This takes in the following parameters:

- `underlyingEmbeddings`: The embeddings model to use.
- `documentEmbeddingCache`: The cache to use for storing document embeddings.
- `namespace`: (optional, defaults to `""`) The namespace to use for the document cache. This namespace is used to avoid collisions with other caches. For example, you could set it to the name of the embedding model used.

**Attention:** Be sure to set the `namespace` parameter to avoid collisions when the same text is embedded using different embedding models.

## In-memory

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```

Here's a basic example with an in-memory cache. This type of cache is primarily useful for unit tests or prototyping. Do not use this cache if you need to actually store the embeddings for an extended period of time:

<CodeBlock language="typescript">{InMemoryExample}</CodeBlock>

## Redis

Here's an example with a Redis cache.
You'll first need to install `ioredis` as a peer dependency and pass in an initialized client:

```bash npm2yarn
npm install ioredis
```

<CodeBlock language="typescript">{RedisExample}</CodeBlock>

## Next steps

You've now learned how to use caching to avoid recomputing embeddings.

Next, check out the [full tutorial on retrieval-augmented generation](/docs/tutorials/rag).
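The hash-keyed cache described above can be sketched with a plain `Map` and a stand-in embedder. Everything here (`fakeEmbed`, `SimpleCachedEmbedder`, the namespace string) is illustrative and not part of the LangChain API; the point is only the key scheme: `hash(namespace + text)`:

```typescript
import { createHash } from "node:crypto";

// Stand-in for a real embeddings model (deterministic, no network calls).
async function fakeEmbed(text: string): Promise<number[]> {
  return [text.length];
}

// A minimal cache-backed embedder: hash the text (prefixed with a namespace
// to avoid collisions between models) and only embed on a cache miss.
class SimpleCachedEmbedder {
  private cache = new Map<string, number[]>();
  public misses = 0;

  constructor(private namespace: string) {}

  async embed(text: string): Promise<number[]> {
    const key = createHash("sha256")
      .update(`${this.namespace}:${text}`)
      .digest("hex");
    const cached = this.cache.get(key);
    if (cached) return cached;
    this.misses += 1;
    const vector = await fakeEmbed(text);
    this.cache.set(key, vector);
    return vector;
  }
}

const embedder = new SimpleCachedEmbedder("fake-model-v1");
await embedder.embed("hello world");
await embedder.embed("hello world"); // second call is served from the cache
console.log(embedder.misses); // 1
```

Keying on a hash of the namespaced text is what lets the same cache store safely serve several embedding models at once.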
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/qa_per_user.ipynb
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { Pinecone } from "@pinecone-database/pinecone";
import { Document } from "@langchain/core/documents";

const embeddings = new OpenAIEmbeddings();

const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX);

/**
 * Pinecone allows you to partition the records in an index into namespaces.
 * Queries and other operations are then limited to one namespace,
 * so different requests can search different subsets of your index.
 * Read more about namespaces here: https://docs.pinecone.io/guides/indexes/use-namespaces
 *
 * NOTE: If you have namespaces enabled in your Pinecone index, you must provide the namespace when creating the PineconeStore.
 */
const namespace = "pinecone";

const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex, namespace }
);

await vectorStore.addDocuments(
  [new Document({ pageContent: "i worked at kensho" })],
  { namespace: "harrison" }
);

await vectorStore.addDocuments(
  [new Document({ pageContent: "i worked at facebook" })],
  { namespace: "ankush" }
);

// This will only get documents for Ankush
const ankushRetriever = vectorStore.asRetriever({
  filter: {
    namespace: "ankush",
  },
});

await ankushRetriever.invoke("where did i work?");

// This will only get documents for Harrison
const harrisonRetriever = vectorStore.asRetriever({
  filter: {
    namespace: "harrison",
  },
});

await harrisonRetriever.invoke("where did i work?");

import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const template = `Answer the question based only on the following context:
{context}

Question: {question}`;

const prompt = ChatPromptTemplate.fromTemplate(template);

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});

import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";

const chain = RunnableSequence.from([
  RunnablePassthrough.assign({
    context: async (input: { question: string }, config) => {
      if (!config || !("configurable" in config)) {
        throw new Error("No config");
      }
      const { configurable } = config;
      const documents = await vectorStore
        .asRetriever(configurable)
        .invoke(input.question, config);
      return documents.map((doc) => doc.pageContent).join("\n\n");
    },
  }),
  prompt,
  model,
  new StringOutputParser(),
]);

await chain.invoke(
  { question: "where did the user work?" },
  { configurable: { filter: { namespace: "harrison" } } }
);

await chain.invoke(
  { question: "where did the user work?" },
  { configurable: { filter: { namespace: "ankush" } } }
);
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/filter_messages.ipynb
import {
  HumanMessage,
  SystemMessage,
  AIMessage,
  filterMessages,
} from "@langchain/core/messages";

const messages = [
  new SystemMessage({ content: "you are a good assistant", id: "1" }),
  new HumanMessage({ content: "example input", id: "2", name: "example_user" }),
  new AIMessage({ content: "example output", id: "3", name: "example_assistant" }),
  new HumanMessage({ content: "real input", id: "4", name: "bob" }),
  new AIMessage({ content: "real output", id: "5", name: "alice" }),
];

filterMessages(messages, { includeTypes: ["human"] });

filterMessages(messages, { excludeNames: ["example_user", "example_assistant"] });

filterMessages(messages, { includeTypes: [HumanMessage, AIMessage], excludeIds: ["3"] });

import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

// Notice we don't pass in messages. This creates
// a RunnableLambda that takes messages as input
const filter_ = filterMessages({
  excludeNames: ["example_user", "example_assistant"],
});

const chain = filter_.pipe(llm);

await chain.invoke(messages);

await filter_.invoke(messages);
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/reduce_retrieval_latency.mdx
# How to reduce retrieval latency

:::info Prerequisites

This guide assumes familiarity with the following concepts:

- [Retrievers](/docs/concepts/retrievers)
- [Embeddings](/docs/concepts/embedding_models)
- [Vector stores](/docs/concepts/#vectorstores)
- [Retrieval-augmented generation (RAG)](/docs/tutorials/rag)

:::

One way to reduce retrieval latency is through a technique called "Adaptive Retrieval". The [`MatryoshkaRetriever`](https://api.js.langchain.com/classes/langchain.retrievers_matryoshka_retriever.MatryoshkaRetriever.html) uses the Matryoshka Representation Learning (MRL) technique to retrieve documents for a given query in two steps:

- **First-pass**: Uses a lower-dimensional sub-vector from the MRL embedding for an initial, fast, but less accurate search.
- **Second-pass**: Re-ranks the top results from the first pass using the full, high-dimensional embedding for higher accuracy.

![Matryoshka Retriever](/img/adaptive_retrieval.png)

It is based on this [Supabase](https://supabase.com/) blog post: ["Matryoshka embeddings: faster OpenAI vector search using Adaptive Retrieval"](https://supabase.com/blog/matryoshka-embeddings).

### Setup

import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";

<IntegrationInstallTooltip></IntegrationInstallTooltip>

```bash npm2yarn
npm install @langchain/openai @langchain/community @langchain/core
```

To follow the example below, you need an OpenAI API key:

```bash
export OPENAI_API_KEY=your-api-key
```

We'll also be using `chroma` for our vector store. Follow the instructions [here](/docs/integrations/vectorstores/chroma) to set it up.

import CodeBlock from "@theme/CodeBlock";
import Example from "@examples/retrievers/matryoshka_retriever.ts";

<CodeBlock language="typescript">{Example}</CodeBlock>

:::note
Due to the constraints of some vector stores, the large embedding metadata field is stringified (`JSON.stringify`) before being stored.
This means that the metadata field will need to be parsed (`JSON.parse`) when retrieved from the vector store.
:::

## Next steps

You've now learned a technique that can help speed up your retrieval queries.

Next, check out the [broader tutorial on RAG](/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/docs/how_to/custom_retriever/).
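The two-pass idea described above can be demonstrated with plain cosine similarity on toy vectors. This is an illustrative sketch of adaptive retrieval, not the `MatryoshkaRetriever` internals; all names, dimensions, and documents below are made up:

```typescript
// Cosine similarity helpers.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

function adaptiveRetrieve(
  query: number[],
  docs: { id: string; vector: number[] }[],
  smallDims: number,
  shortlist: number
): string[] {
  const smallQuery = query.slice(0, smallDims);
  // First pass: fast scoring on the low-dimensional prefix of each embedding.
  const candidates = docs
    .map((doc) => ({
      doc,
      score: cosine(smallQuery, doc.vector.slice(0, smallDims)),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, shortlist);
  // Second pass: re-rank only the shortlist with the full vectors.
  return candidates
    .map(({ doc }) => ({ doc, score: cosine(query, doc.vector) }))
    .sort((a, b) => b.score - a.score)
    .map(({ doc }) => doc.id);
}

const results = adaptiveRetrieve(
  [1, 0, 0, 0],
  [
    { id: "close", vector: [1, 0, 0.1, 0] },
    { id: "exact", vector: [1, 0, 0, 0] },
    { id: "far", vector: [0, 1, 0, 0] },
  ],
  2, // first pass uses only the first two dimensions
  2 // keep a shortlist of two for the second pass
);
console.log(results); // [ 'exact', 'close' ]
```

In the example, "close" and "exact" tie in the cheap first pass (their first two dimensions are identical), and the full-dimensional second pass breaks the tie; MRL embeddings make the truncated first pass meaningful because the leading dimensions carry most of the information.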
lc_public_repos/langchainjs/docs/core_docs/docs/how_to/tools_error.ipynb
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});

const complexTool = tool(
  async (params) => {
    return params.int_arg * params.float_arg;
  },
  {
    name: "complex_tool",
    description: "Do something complex with a complex tool.",
    schema: z.object({
      int_arg: z.number(),
      float_arg: z.number(),
      number_arg: z.object({}),
    }),
  }
);

const llmWithTools = llm.bindTools([complexTool]);

const chain = llmWithTools
  .pipe((message) => message.tool_calls?.[0].args)
  .pipe(complexTool);

await chain.invoke("use complex tool. the args are 5, 2.1, potato");

const tryExceptToolWrapper = async (input, config) => {
  try {
    const result = await complexTool.invoke(input);
    return result;
  } catch (e) {
    return `Calling tool with arguments:\n\n${JSON.stringify(
      input
    )}\n\nraised the following error:\n\n${e}`;
  }
};

const chainWithTools = llmWithTools
  .pipe((message) => message.tool_calls?.[0].args)
  .pipe(tryExceptToolWrapper);

const res = await chainWithTools.invoke(
  "use complex tool. the args are 5, 2.1, potato"
);

console.log(res);

const badChain = llmWithTools
  .pipe((message) => message.tool_calls?.[0].args)
  .pipe(complexTool);

const betterModel = new ChatOpenAI({
  model: "gpt-4-1106-preview",
  temperature: 0,
}).bindTools([complexTool]);

const betterChain = betterModel
  .pipe((message) => message.tool_calls?.[0].args)
  .pipe(complexTool);

const chainWithFallback = badChain.withFallbacks([betterChain]);

await chainWithFallback.invoke("use complex tool. the args are 5, 2.1, potato");