{"id": "d124c125df69-0", "text": "\ud83d\uddc3\ufe0f Document transformers\n8 items", "source": "https://python.langchain.com/docs/integrations/"} {"id": "ddfe37b68cf0-0", "text": "\ud83d\udcc4\ufe0f AzureML Chat Online Endpoint\nAzureML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.", "source": "https://python.langchain.com/docs/integrations/chat/"} {"id": "2a8f5d640c79-0", "text": "\ud83d\udcc4\ufe0f Label Studio\nLabel Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.", "source": "https://python.langchain.com/docs/integrations/callbacks/"} {"id": "c33a413482b6-0", "text": "Chat loaders\nLike document loaders, chat loaders are utilities designed to help load conversations from popular communication platforms such as Facebook, Slack, Discord, etc. These are loaded into memory as LangChain chat message objects. Such utilities facilitate tasks such as fine-tuning a language model to match your personal style or voice. \nThis brief guide will illustrate the process using OpenAI's fine-tuning API comprised of six steps:\nExport your Facebook Messenger chat data in a compatible format for your intended chat loader.\nLoad the chat data into memory as LangChain chat message objects. 
(this is what is covered in each integration notebook in this section of the documentation).\nAssign a person to the \"AI\" role and optionally filter, group, and merge messages.\nExport these acquired messages in a format expected by the fine-tuning API.\nUpload this data to OpenAI.\nFine-tune your model.\nImplement the fine-tuned model in LangChain.\nThis guide is not wholly comprehensive but is designed to take you through the fundamentals of going from raw data to fine-tuned model.\nWe will demonstrate the procedure through an example of fine-tuning a gpt-3.5-turbo model on Facebook Messenger data.\n1. Export your chat data\u200b\nTo export your Facebook Messenger data, you can follow the instructions here.\nJSON format\nYou must select \"JSON format\" (instead of HTML) when exporting your data to be compatible with the current loader.\nOpenAI requires at least 10 examples to fine-tune your model, but they recommend between 50 and 100 for more optimal results. You can use the example data stored at this Google Drive link to test the process.\n2. Load the chat\u200b\nOnce you've obtained your chat data, you can load it into memory as LangChain chat message objects. Here\u2019s an example of loading the data in Python:\nfrom langchain.chat_loaders.facebook_messenger import FolderFacebookMessengerChatLoader\n\nloader = FolderFacebookMessengerChatLoader(\n    path=\"./facebook_messenger_chats\",\n)", "source": "https://python.langchain.com/docs/integrations/chat_loaders/"} {"id": "c33a413482b6-1", "text": "loader = FolderFacebookMessengerChatLoader(\n    path=\"./facebook_messenger_chats\",\n)\n\nchat_sessions = loader.load()\nIn this snippet, we point the loader to a directory of Facebook chat dumps, which are then loaded as multiple \"sessions\" of messages, one session per conversation file.\nOnce you've loaded the messages, you should decide which person you want to fine-tune the model to emulate (usually yourself).
You can also decide to merge consecutive messages from the same sender into a single chat message. For both of these tasks, you can use the chat_loaders utilities to do so:\nfrom langchain.chat_loaders.utils import (\n    merge_chat_runs,\n    map_ai_messages,\n)\n\nmerged_sessions = merge_chat_runs(chat_sessions)\nalternating_sessions = list(map_ai_messages(merged_sessions, \"My Name\"))\n3. Export messages to OpenAI format\u200b\nConvert the chat messages to dictionaries using the convert_messages_for_finetuning function. Then, group the data into chunks for better context modeling and overlap management.\nfrom langchain.adapters.openai import convert_messages_for_finetuning\n\nopenai_messages = convert_messages_for_finetuning(alternating_sessions)\nAt this point, the data is ready for upload to OpenAI. You can choose to split up conversations into smaller chunks for training if you do not have enough conversations to train on. Feel free to play around with different chunk sizes or with adding system messages to the fine-tuning data.\nchunk_size = 8\noverlap = 2\n\nmessage_groups = [\n    conversation_messages[i : i + chunk_size]\n    for conversation_messages in openai_messages\n    for i in range(\n        0, len(conversation_messages) - chunk_size + 1, chunk_size - overlap\n    )\n]\n\nlen(message_groups)\n# 9\n4. Upload the data to OpenAI\u200b\nEnsure you have set your OpenAI API key by following these instructions, then upload the training file.
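The sliding-window grouping above can be checked on a toy conversation; each window advances by chunk_size - overlap messages, so consecutive windows share their boundary messages. A minimal sketch (the message contents below are made up):

```python
# Illustrative sketch of the chunking logic above on one toy conversation.
chunk_size = 8
overlap = 2

# Stand-in for one conversation's worth of OpenAI-format message dicts.
toy_conversation = [{"role": "user", "content": f"msg {i}"} for i in range(14)]

message_groups = [
    toy_conversation[i : i + chunk_size]
    for i in range(0, len(toy_conversation) - chunk_size + 1, chunk_size - overlap)
]

# Two windows: messages 0-7, then 6-13 (the last two of the first window repeat).
print(len(message_groups))              # 2
print(message_groups[1][0]["content"])  # msg 6
```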
An audit is performed to ensure data compliance, so you may have to wait a few minutes for the dataset to become ready for use.\nimport time\nimport json\nimport io\n\nimport openai\n\nmy_file = io.BytesIO()\nfor group in message_groups:\n    my_file.write((json.dumps({\"messages\": group}) + \"\\n\").encode('utf-8'))\n\nmy_file.seek(0)\ntraining_file = openai.File.create(\n    file=my_file,\n    purpose='fine-tune'\n)", "source": "https://python.langchain.com/docs/integrations/chat_loaders/"} {"id": "c33a413482b6-2", "text": "# Wait while the file is processed\nstatus = openai.File.retrieve(training_file.id).status\nstart_time = time.time()\nwhile status != \"processed\":\n    print(f\"Status=[{status}]... {time.time() - start_time:.2f}s\", end=\"\\r\", flush=True)\n    time.sleep(5)\n    status = openai.File.retrieve(training_file.id).status\nprint(f\"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.\")\nOnce this is done, you can proceed to the model training!\n5. Fine-tune the model\u200b\nStart the fine-tuning job with your chosen base model.\njob = openai.FineTuningJob.create(\n    training_file=training_file.id,\n    model=\"gpt-3.5-turbo\",\n)\nThis might take a while. Check the status with openai.FineTuningJob.retrieve(job.id).status and wait for it to report succeeded.\n# It may take 10-20+ minutes to complete training.\nstatus = openai.FineTuningJob.retrieve(job.id).status\nstart_time = time.time()\nwhile status != \"succeeded\":\n    print(f\"Status=[{status}]... {time.time() - start_time:.2f}s\", end=\"\\r\", flush=True)\n    time.sleep(5)\n    job = openai.FineTuningJob.retrieve(job.id)\n    status = job.status\n6. Use the model in LangChain\u200b\nYou're almost there!
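Each line written to my_file above is a single JSON object with a "messages" array, i.e. the JSONL layout OpenAI's fine-tuning endpoint expects. A minimal sketch of building and round-tripping one such line (the chat content is made up):

```python
import json

# One hypothetical training example: a short chat in OpenAI message format.
group = [
    {"role": "user", "content": "What classes are you taking?"},
    {"role": "assistant", "content": "Potions and Transfiguration."},
]

# Serialize exactly as the upload snippet above does: one JSON object per line.
line = json.dumps({"messages": group}) + "\n"

# The line parses back to the same structure, confirming the JSONL layout.
record = json.loads(line)
print(record["messages"][1]["role"])  # assistant
```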
Use the fine-tuned model in LangChain.\nfrom langchain import chat_models\n\nmodel_name = job.fine_tuned_model\n# Example: ft:gpt-3.5-turbo-0613:personal::5mty86jb\nmodel = chat_models.ChatOpenAI(model=model_name)\nfrom langchain.prompts import ChatPromptTemplate\nfrom langchain.schema.output_parser import StrOutputParser\n\nprompt = ChatPromptTemplate.from_messages(\n    [\n        (\"human\", \"{input}\"),\n    ]\n)\n\nchain = prompt | model | StrOutputParser()\n\nfor tok in chain.stream({\"input\": \"What classes are you taking?\"}):\n    print(tok, end=\"\", flush=True)", "source": "https://python.langchain.com/docs/integrations/chat_loaders/"} {"id": "c33a413482b6-3", "text": "# The usual - Potions, Transfiguration, Defense Against the Dark Arts. What about you?\nAnd that's it! You've successfully fine-tuned a model and used it in LangChain.\nSupported Chat Loaders\u200b\nLangChain currently supports the following chat loaders. Feel free to contribute more!\n\ud83d\udcc4\ufe0f Discord\nThis notebook shows how to create your own chat loader that converts copy-pasted messages (from DMs) into a list of LangChain messages.\n\ud83d\udcc4\ufe0f Facebook Messenger\nThis notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:\n\ud83d\udcc4\ufe0f GMail\nThis loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. It first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email and creates a training example of that email, followed by your email.\n\ud83d\udcc4\ufe0f iMessage\nThis notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.\n\ud83d\udcc4\ufe0f Slack\nThis notebook shows how to use the Slack chat loader.
This class helps map exported Slack conversations to LangChain chat messages.\n\ud83d\udcc4\ufe0f Telegram\nThis notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.\n\ud83d\udcc4\ufe0f Twitter (via Apify)\nThis notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify.\n\ud83d\udcc4\ufe0f WhatsApp\nThis notebook shows how to use the WhatsApp chat loader. This class helps map exported WhatsApp conversations to LangChain chat messages.
You can quickly automate document processing and act on the information extracted, whether you\u2019re automating loans processing or extracting information from invoices and receipts. Textract can extract the data in minutes instead of hours or days.", "source": "https://python.langchain.com/docs/integrations/document_loaders/"} {"id": "647c0d8883c5-0", "text": "\ud83d\udcc4\ufe0f Cassandra Chat Message History\nApache Cassandra\u00ae is a NoSQL, row-oriented, highly scalable and highly available database, well suited for storing large amounts of data.", "source": "https://python.langchain.com/docs/integrations/memory/"} {"id": "94bc17007655-0", "text": "\ud83d\udcc4\ufe0f NLP Cloud\nThe NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.", "source": "https://python.langchain.com/docs/integrations/llms/"} {"id": "85480a2dd581-0", "text": "Retrievers\n\ud83d\udcc4\ufe0f Amazon Kendra\nAmazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. 
Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.\n\ud83d\udcc4\ufe0f Arxiv\narXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.\n\ud83d\udcc4\ufe0f Azure Cognitive Search\nAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.\n\ud83d\udcc4\ufe0f BM25\nBM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.\n\ud83d\udcc4\ufe0f Chaindesk\nThe Chaindesk platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).\n\ud83d\udcc4\ufe0f ChatGPT Plugin\nOpenAI plugins connect ChatGPT to third-party applications. These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions.\n\ud83d\udcc4\ufe0f Cohere Reranker\nCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.\n\ud83d\udcc4\ufe0f DocArray Retriever\nDocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends.
Plus, it gets even better - you can utilize your DocArray document index to create a DocArrayRetriever, and build awesome LangChain apps!\n\ud83d\udcc4\ufe0f ElasticSearch BM25\nElasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.", "source": "https://python.langchain.com/docs/integrations/retrievers/"} {"id": "85480a2dd581-1", "text": "\ud83d\udcc4\ufe0f Google Cloud Enterprise Search\nEnterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud.\n\ud83d\udcc4\ufe0f Google Drive Retriever\nThis notebook covers how to retrieve documents from Google Drive.\n\ud83d\udcc4\ufe0f kNN\nIn statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.\n\ud83d\udcc4\ufe0f LOTR (Merger Retriever)\nLord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.\n\ud83d\udcc4\ufe0f Metal\nMetal is a managed service for ML Embeddings.\n\ud83d\udcc4\ufe0f Pinecone Hybrid Search\nPinecone is a vector database with broad functionality.\n\ud83d\udcc4\ufe0f PubMed\nPubMed\u00ae by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books.
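The BM25 ranking function mentioned above can be sketched in plain Python, independent of any retriever class. This is a minimal illustration, not LangChain's implementation: k1 and b are the usual Okapi defaults, and the tokenizer is a naive whitespace split.

```python
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each whitespace-tokenized doc against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n  # average document length
    scores = []
    for doc in tokenized:
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for d in tokenized if term in d)  # docs containing term
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = doc.count(term)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

docs = ["the cat sat on the mat", "dogs chase cats", "quantum field theory"]
scores = bm25_scores("cat mat", docs)
print(scores.index(max(scores)))  # 0 - the first doc matches both query terms
```

Dedicated implementations add inverted indexes and tuned tokenization, but the scoring formula is the same.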
Citations may include links to full text content from PubMed Central and publisher web sites.\n\ud83d\udcc4\ufe0f RePhraseQueryRetriever\nA simple retriever that applies an LLM between the user input and the query passed to the retriever.\n\ud83d\udcc4\ufe0f SVM\nSupport vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection.\n\ud83d\udcc4\ufe0f TF-IDF\nTF-IDF means term-frequency times inverse document-frequency.\n\ud83d\udcc4\ufe0f Vespa\nVespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.\n\ud83d\udcc4\ufe0f Weaviate Hybrid Search\nWeaviate is an open source vector database.\n\ud83d\udcc4\ufe0f Wikipedia\nWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.", "source": "https://python.langchain.com/docs/integrations/retrievers/"} {"id": "85480a2dd581-2", "text": "\ud83d\udcc4\ufe0f Zep\nRetriever Example for Zep - A long-term memory store for LLM applications.", "source": "https://python.langchain.com/docs/integrations/retrievers/"} {"id": "ca8d70a0ee57-0", "text": "Text embedding models\n\ud83d\udcc4\ufe0f AwaEmbedding\nThis notebook explains how to use AwaEmbedding, which is included in awadb, to embed texts in LangChain.\n\ud83d\udcc4\ufe0f Aleph Alpha\nThere are two possible ways to use Aleph Alpha's semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings.
Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach.\n\ud83d\udcc4\ufe0f AzureOpenAI\nLet's load the OpenAI Embedding class with environment variables set to use Azure endpoints.\n\ud83d\udcc4\ufe0f Bedrock Embeddings\n\ud83d\udcc4\ufe0f BGE Hugging Face Embeddings\nThis notebook shows how to use BGE Embeddings through Hugging Face.\n\ud83d\udcc4\ufe0f Clarifai\nClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.\n\ud83d\udcc4\ufe0f Cohere\nLet's load the Cohere Embedding class.\n\ud83d\udcc4\ufe0f DashScope\nLet's load the DashScope Embedding class.\n\ud83d\udcc4\ufe0f DeepInfra\nDeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.\n\ud83d\udcc4\ufe0f EDEN AI\nEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (https://edenai.co/)\n\ud83d\udcc4\ufe0f Elasticsearch\nWalkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch.\n\ud83d\udcc4\ufe0f Embaas\nembaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more.
You can choose a variety of pre-trained models.\n\ud83d\udcc4\ufe0f ERNIE Embedding-V1\nERNIE Embedding-V1 is a text representation model based on Baidu Wenxin's large-scale model technology,\n\ud83d\udcc4\ufe0f Fake Embeddings", "source": "https://python.langchain.com/docs/integrations/text_embedding/"} {"id": "ca8d70a0ee57-1", "text": "\ud83d\udcc4\ufe0f Fake Embeddings\nLangChain also provides a fake embedding class. You can use this to test your pipelines.\n\ud83d\udcc4\ufe0f Google Cloud Platform Vertex AI PaLM\nNote: This is separate from the Google PaLM integration; it exposes the Vertex AI PaLM API on Google Cloud.\n\ud83d\udcc4\ufe0f GPT4All\nGPT4All is a free-to-use, locally running, privacy-aware chatbot. There is no GPU or internet required. It features popular models and its own models such as GPT4All Falcon, Wizard, etc.\n\ud83d\udcc4\ufe0f Hugging Face Hub\nLet's load the Hugging Face Embedding class.\n\ud83d\udcc4\ufe0f InstructEmbeddings\nLet's load the HuggingFace instruct Embeddings class.\n\ud83d\udcc4\ufe0f Jina\nLet's load the Jina Embedding class.\n\ud83d\udcc4\ufe0f Llama-cpp\nThis notebook goes over how to use Llama-cpp embeddings within LangChain.\n\ud83d\udcc4\ufe0f LocalAI\nLet's load the LocalAI Embedding class. In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models. See the documentation at https://localai.io/features/embeddings/index.html.\n\ud83d\udcc4\ufe0f MiniMax\nMiniMax offers an embeddings service.\n\ud83d\udcc4\ufe0f ModelScope\nLet's load the ModelScope Embedding class.\n\ud83d\udcc4\ufe0f MosaicML embeddings\nMosaicML offers a managed inference service.
You can either use a variety of open source models, or deploy your own.\n\ud83d\udcc4\ufe0f NLP Cloud\nNLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data.\n\ud83d\udcc4\ufe0f OpenAI\nLet's load the OpenAI Embedding class.\n\ud83d\udcc4\ufe0f SageMaker Endpoint Embeddings\nLet's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker.\n\ud83d\udcc4\ufe0f Self Hosted Embeddings\nLet's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.\n\ud83d\udcc4\ufe0f Sentence Transformers Embeddings", "source": "https://python.langchain.com/docs/integrations/text_embedding/"} {"id": "ca8d70a0ee57-2", "text": "\ud83d\udcc4\ufe0f Sentence Transformers Embeddings\nSentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.\n\ud83d\udcc4\ufe0f Spacy Embedding\nLoading the Spacy embedding class to generate and query embeddings\n\ud83d\udcc4\ufe0f TensorflowHub\nLet's load the TensorflowHub Embedding class.\n\ud83d\udcc4\ufe0f Xorbits inference (Xinference)\nThis notebook goes over how to use Xinference embeddings within LangChain", "source": "https://python.langchain.com/docs/integrations/text_embedding/"} {"id": "cf7c7309ab45-0", "text": "Agents & Toolkits\nAgents and Toolkits are placed in the same directory because they are always used together.\n\ud83d\udcc4\ufe0f AINetwork\nAI Network is a layer 1 blockchain designed to accommodate large-scale AI models, utilizing a decentralized GPU network powered by the $AIN token, enriching AI-driven NFTs (AINFTs).\n\ud83d\udcc4\ufe0f Airbyte Question Answering\nThis notebook shows how to do question answering over structured data, in this case using the 
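All of the embedding classes above return plain lists of floats, and retrieval then typically ranks texts by cosine similarity between the query vector and each document vector. A minimal pure-Python sketch with made-up 3-d vectors (real embeddings come from embed_query / embed_documents and have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up "embeddings": stand-ins for the vectors an embedding model would return.
query_vec = [0.1, 0.9, 0.2]
doc_vecs = {
    "doc about cats": [0.1, 0.8, 0.3],
    "doc about finance": [0.9, 0.1, 0.0],
}

# Rank documents by similarity to the query vector.
best = max(doc_vecs, key=lambda name: cosine_similarity(query_vec, doc_vecs[name]))
print(best)  # doc about cats
```

Vector stores perform the same comparison at scale with approximate nearest-neighbor indexes rather than a linear scan.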
AirbyteStripeLoader.\n\ud83d\udcc4\ufe0f Amadeus\nThis notebook walks you through connecting LangChain to the Amadeus travel information API.\n\ud83d\udcc4\ufe0f Azure Cognitive Services\nThis toolkit is used to interact with the Azure Cognitive Services API to achieve some multimodal capabilities.\n\ud83d\udcc4\ufe0f CSV\nThis notebook shows how to use agents to interact with data in CSV format. It is mostly optimized for question answering.\n\ud83d\udcc4\ufe0f Document Comparison\nThis notebook shows how to use an agent to compare two documents.\n\ud83d\udcc4\ufe0f Github\nThe Github toolkit contains tools that enable an LLM agent to interact with a GitHub repository.\n\ud83d\udcc4\ufe0f Gmail\nThis notebook walks through connecting LangChain to the Gmail API.\n\ud83d\udcc4\ufe0f Google Drive tool\nThis notebook walks through connecting LangChain to the Google Drive API.\n\ud83d\udcc4\ufe0f Jira\nThis notebook goes over how to use the Jira toolkit.\n\ud83d\udcc4\ufe0f JSON\nThis notebook showcases an agent interacting with large JSON/dict objects.\n\ud83d\udcc4\ufe0f MultiOn\nThis notebook walks you through connecting LangChain to the MultiOn Client in your browser.\n\ud83d\udcc4\ufe0f Office365\nThis notebook walks through connecting LangChain to Office365 email and calendar.\n\ud83d\udcc4\ufe0f OpenAPI\nWe can construct agents to consume arbitrary APIs, here APIs conformant to the OpenAPI/Swagger specification.\n\ud83d\udcc4\ufe0f Natural Language APIs\nNatural Language API Toolkits (NLAToolkits) permit LangChain Agents to efficiently plan and combine calls across endpoints.\n\ud83d\udcc4\ufe0f Pandas Dataframe\nThis notebook shows how to use agents to interact with a Pandas DataFrame.
It is mostly optimized for question answering.\n\ud83d\udcc4\ufe0f PlayWright Browser", "source": "https://python.langchain.com/docs/integrations/toolkits/"} {"id": "cf7c7309ab45-1", "text": "\ud83d\udcc4\ufe0f PlayWright Browser\nThis toolkit is used to interact with the browser. While other tools (like the Requests tools) are fine for static sites, PlayWright Browser toolkits let your agent navigate the web and interact with dynamically rendered sites.\n\ud83d\udcc4\ufe0f PowerBI Dataset\nThis notebook showcases an agent interacting with a Power BI Dataset. The agent answers more general questions about a dataset, as well as recovering from errors.\n\ud83d\udcc4\ufe0f Python\nThis notebook showcases an agent designed to write and execute Python code to answer a question.\n\ud83d\udcc4\ufe0f Spark Dataframe\nThis notebook shows how to use agents to interact with a Spark DataFrame and Spark Connect. It is mostly optimized for question answering.\n\ud83d\udcc4\ufe0f Spark SQL\nThis notebook shows how to use agents to interact with Spark SQL. Similar to SQL Database Agent, it is designed to address general inquiries about Spark SQL and facilitate error recovery.\n\ud83d\udcc4\ufe0f SQL Database\nThis notebook showcases an agent designed to interact with a SQL database.\n\ud83d\udcc4\ufe0f Vectorstore\nThis notebook showcases an agent designed to retrieve information from one or more vectorstores, either with or without sources.\n\ud83d\udcc4\ufe0f Xorbits\nThis notebook shows how to use agents to interact with Xorbits Pandas dataframe and Xorbits Numpy ndarray.
It is mostly optimized for question answering.", "source": "https://python.langchain.com/docs/integrations/toolkits/"} {"id": "51a87f7229e0-0", "text": "Tools\n\ud83d\udcc4\ufe0f Alpha Vantage\nAlpha Vantage Alpha Vantage provides realtime and historical financial market data through a set of powerful and developer-friendly data APIs and spreadsheets.\n\ud83d\udcc4\ufe0f Apify\nThis notebook shows how to use the Apify integration for LangChain.\n\ud83d\udcc4\ufe0f ArXiv\nThis notebook goes over how to use the arxiv tool with an agent.\n\ud83d\udcc4\ufe0f AWS Lambda\nAmazon AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It helps developers to build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications.\n\ud83d\udcc4\ufe0f Shell (bash)\nGiving agents access to the shell is powerful (though risky outside a sandboxed environment).\n\ud83d\udcc4\ufe0f Bing Search\nThis notebook goes over how to use the bing search component.\n\ud83d\udcc4\ufe0f Brave Search\nThis notebook goes over how to use the Brave Search tool.\n\ud83d\udcc4\ufe0f ChatGPT Plugins\nThis example shows how to use ChatGPT Plugins within LangChain abstractions.\n\ud83d\udcc4\ufe0f Dall-E Image Generator\nThis notebook shows how you can generate images from a prompt synthesized using an OpenAI LLM. The images are generated using Dall-E, which uses the same OpenAI API key as the LLM.\n\ud83d\udcc4\ufe0f DataForSeo\nThis notebook demonstrates how to use the DataForSeo API to obtain search engine results. The DataForSeo API retrieves SERP from most popular search engines like Google, Bing, Yahoo. 
It also allows you to get SERPs from different search engine types like Maps, News, Events, etc.\n\ud83d\udcc4\ufe0f DuckDuckGo Search\nThis notebook goes over how to use the DuckDuckGo search component.\n\ud83d\udcc4\ufe0f Eden AI\nThis Jupyter Notebook demonstrates how to use Eden AI tools with an Agent.\n\ud83d\udcc4\ufe0f File System\nLangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them.\n\ud83d\udcc4\ufe0f Golden Query
GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.\n\ud83d\udcc4\ufe0f HuggingFace Hub Tools\nHugging Face tools that support text I/O can be\n\ud83d\udcc4\ufe0f Human as a tool\nHumans are AGI, so they can certainly be used as tools to help out an AI agent.\n\ud83d\udcc4\ufe0f IFTTT WebHooks\nThis notebook shows how to use IFTTT Webhooks.\n\ud83d\udcc4\ufe0f Lemon Agent\nLemon Agent helps you build powerful AI assistants in minutes and automate workflows by allowing for accurate and reliable read and write operations in tools like Airtable, Hubspot, Discord, Notion, Slack and Github.\n\ud83d\udcc4\ufe0f Metaphor Search\nMetaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page.\n\ud83d\udcc4\ufe0f Nuclia Understanding\nNuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.\n\ud83d\udcc4\ufe0f OpenWeatherMap
In order to easily let LLMs interact with that information, we provide a wrapper around the Python Requests module that takes in a URL and fetches data from that URL.\n\ud83d\udcc4\ufe0f SceneXplain\nSceneXplain is an ImageCaptioning service accessible through the SceneXplain Tool.\n\ud83d\udcc4\ufe0f Search Tools\nThis notebook shows off usage of various search tools.\n\ud83d\udcc4\ufe0f SearxNG Search\nThis notebook goes over how to use a self hosted SearxNG search API to search the web.\n\ud83d\udcc4\ufe0f SerpAPI\nThis notebook goes over how to use the SerpAPI component to search the web.\n\ud83d\udcc4\ufe0f SQL Database Chain\nThis example demonstrates the use of the SQLDatabaseChain for answering questions over a SQL database.\n\ud83d\udcc4\ufe0f Twilio\nThis notebook goes over how to use the Twilio API wrapper to send a message through SMS or Twilio Messaging Channels.\n\ud83d\udcc4\ufe0f Wikipedia\nWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. 
Wikipedia is the largest and most-read reference work in history.\n\ud83d\udcc4\ufe0f Wolfram Alpha\nThis notebook goes over how to use the Wolfram Alpha component.\n\ud83d\udcc4\ufe0f Yahoo Finance News\nThis notebook goes over how to use the yahoofinancenews tool with an agent.\n\ud83d\udcc4\ufe0f YouTube\nThe YouTube Search package searches YouTube videos while avoiding their heavily rate-limited API.\n\ud83d\udcc4\ufe0f Zapier Natural Language Actions\nZapier Natural Language Actions gives you access to the 5k+ apps and 20k+ actions on Zapier's platform through a natural language API interface.", "source": "https://python.langchain.com/docs/integrations/tools/"} {"id": "b21d7830c59f-0", "text": "Vector stores\n\ud83d\udcc4\ufe0f Activeloop Deep Lake\nActiveloop Deep Lake is a multi-modal vector store that stores embeddings and their metadata, including text, JSONs, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search including embeddings and their attributes.\n\ud83d\udcc4\ufe0f Alibaba Cloud OpenSearch\nAlibaba Cloud OpenSearch is a one-stop platform to develop intelligent search services. OpenSearch was built on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises.\n\ud83d\udcc4\ufe0f AnalyticDB\nAnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.\n\ud83d\udcc4\ufe0f Annoy\nAnnoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. 
It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.\n\ud83d\udcc4\ufe0f Atlas\nAtlas is a platform by Nomic made for interacting with both small and internet-scale unstructured datasets. It enables anyone to visualize, search, and share massive datasets in their browser.\n\ud83d\udcc4\ufe0f AwaDB\nAwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.\n\ud83d\udcc4\ufe0f Azure Cognitive Search\nAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.\n\ud83d\udcc4\ufe0f BagelDB\nBagelDB (Open Vector Database for AI) is like GitHub for AI data.\n\ud83d\udcc4\ufe0f Cassandra\nApache Cassandra\u00ae is a NoSQL, row-oriented, highly scalable and highly available database.\n\ud83d\udcc4\ufe0f Chroma\nChroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.\n\ud83d\udcc4\ufe0f ClickHouse", "source": "https://python.langchain.com/docs/integrations/vectorstores/"} {"id": "b21d7830c59f-1", "text": "\ud83d\udcc4\ufe0f ClickHouse\nClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance) as well as approximate nearest neighbor search indexes enable ClickHouse to be used as a high performance and scalable vector database to store and search vectors with SQL.\n\ud83d\udcc4\ufe0f DashVector\nDashVector is a fully-managed vectorDB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. 
It is built to scale automatically and can adapt to different application requirements.\n\ud83d\udcc4\ufe0f Dingo\nDingo is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data.\n\ud83d\udcc4\ufe0f DocArray HnswSearch\nDocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.\n\ud83d\udcc4\ufe0f DocArray InMemorySearch\nDocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.\n\ud83d\udcc4\ufe0f Elasticsearch\nElasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library.\n\ud83d\udcc4\ufe0f Epsilla\nEpsilla is an open-source vector database that leverages the advanced parallel graph traversal techniques for vector indexing. Epsilla is licensed under GPL-3.0.\n\ud83d\udcc4\ufe0f Faiss\nFacebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.\n\ud83d\udcc4\ufe0f Hologres", "source": "https://python.langchain.com/docs/integrations/vectorstores/"} {"id": "b21d7830c59f-2", "text": "\ud83d\udcc4\ufe0f Hologres\nHologres is a unified real-time data warehousing service developed by Alibaba Cloud. 
You can use Hologres to write, update, process, and analyze large amounts of data in real time.\n\ud83d\udcc4\ufe0f LanceDB\nLanceDB is an open-source database for vector search built with persistent storage, which greatly simplifies retrieval, filtering, and management of embeddings. Fully open source.\n\ud83d\udcc4\ufe0f Marqo\nThis notebook shows how to use functionality related to the Marqo vectorstore.\n\ud83d\udcc4\ufe0f Google Vertex AI MatchingEngine\nThis notebook shows how to use functionality related to the GCP Vertex AI MatchingEngine vector database.\n\ud83d\udcc4\ufe0f Meilisearch\nMeilisearch is an open-source, lightning-fast, and hyper-relevant search engine. It comes with great defaults to help developers build snappy search experiences.\n\ud83d\udcc4\ufe0f Milvus\nMilvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.\n\ud83d\udcc4\ufe0f MongoDB Atlas\nMongoDB Atlas is a fully-managed cloud database available in AWS, Azure, and GCP. It now has support for native Vector Search on your MongoDB document data.\n\ud83d\udcc4\ufe0f MyScale\nMyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse.\n\ud83d\udcc4\ufe0f Neo4j Vector Index\nNeo4j is an open-source graph database with integrated support for vector similarity search.\n\ud83d\udcc4\ufe0f OpenSearch\nOpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. 
OpenSearch is a distributed search and analytics engine based on Apache Lucene.\n\ud83d\udcc4\ufe0f Postgres Embedding\nPostgres Embedding is an open-source vector similarity search for Postgres that uses Hierarchical Navigable Small Worlds (HNSW) for approximate nearest neighbor search.\n\ud83d\udcc4\ufe0f PGVector\nPGVector is an open-source vector similarity search for Postgres.\n\ud83d\udcc4\ufe0f Pinecone\nPinecone is a vector database with broad functionality.\n\ud83d\udcc4\ufe0f Qdrant", "source": "https://python.langchain.com/docs/integrations/vectorstores/"} {"id": "b21d7830c59f-3", "text": "Pinecone is a vector database with broad functionality.\n\ud83d\udcc4\ufe0f Qdrant\nQdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support. This makes it useful for all sorts of neural network or semantic-based matching, faceted search, and other applications.\n\ud83d\udcc4\ufe0f Redis\nRedis vector database introduction and LangChain integration guide.\n\ud83d\udcc4\ufe0f Rockset\nRockset is a real-time search and analytics database built for the cloud. Rockset uses a Converged Index\u2122 with an efficient store for vector embeddings to serve low latency, high concurrency search queries at scale. Rockset has full support for metadata filtering and handles real-time ingestion for constantly updating, streaming data.\n\ud83d\udcc4\ufe0f ScaNN\nScaNN (Scalable Nearest Neighbors) is a method for efficient vector similarity search at scale.\n\ud83d\udcc4\ufe0f SingleStoreDB\nSingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. 
It provides vector storage, and vector functions including dotproduct and euclideandistance, thereby supporting AI applications that require text similarity matching.\n\ud83d\udcc4\ufe0f scikit-learn\nscikit-learn is an open source collection of machine learning algorithms, including some implementations of the k-nearest neighbors algorithm. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.\n\ud83d\udcc4\ufe0f StarRocks\nStarRocks is a High-Performance Analytical Database.\n\ud83d\udcc4\ufe0f Supabase (Postgres)\nSupabase is an open source Firebase alternative. Supabase is built on top of PostgreSQL, which offers strong SQL querying capabilities and enables a simple interface with already-existing tools and frameworks.\n\ud83d\udcc4\ufe0f Tair\nTair is a cloud native in-memory database service developed by Alibaba Cloud.\n\ud83d\udcc4\ufe0f Tencent Cloud VectorDB
Tencent Cloud Vector Database can not only provide an external knowledge base for large models to improve the accuracy of large model responses but can also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service.\n\ud83d\udcc4\ufe0f Tigris\nTigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.\n\ud83d\udcc4\ufe0f Typesense\nTypesense is an open source, in-memory search engine that you can either self-host or run on Typesense Cloud.\n\ud83d\udcc4\ufe0f USearch\nUSearch is a Smaller & Faster Single-File Vector Search Engine\n\ud83d\udcc4\ufe0f Vectara\nVectara is an API platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.\n\ud83d\udcc4\ufe0f Weaviate\nWeaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML-models, and scale seamlessly into billions of data objects.\n\ud83d\udcc4\ufe0f Xata\nXata is a serverless data platform, based on PostgreSQL. It provides a Python SDK for interacting with your database, and a UI for managing your data.\n\ud83d\udcc4\ufe0f Zep\nZep is an open source long-term memory store for LLM applications. Zep makes it easy to add relevant documents,\n\ud83d\udcc4\ufe0f Zilliz\nZilliz Cloud is a fully managed service on cloud for LF AI Milvus\u00ae,", "source": "https://python.langchain.com/docs/integrations/vectorstores/"} {"id": "77437ea20dd2-0", "text": "Grouped by provider\n\ud83d\udcc4\ufe0f Activeloop Deep Lake\nThis page covers how to use the Deep Lake ecosystem within LangChain.\n\ud83d\udcc4\ufe0f AI21 Labs\nThis page covers how to use the AI21 ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Aim\nAim makes it super easy to visualize and debug LangChain executions. 
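Under the hood, every vector store in the list above answers the same query: given an embedding, return the stored documents whose embeddings are closest. A toy brute-force sketch of that similarity search (pure Python and illustrative only; real stores use ANN indexes such as HNSW or ScaNN, and the `store` data here is made up):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A toy in-memory "vector store": (embedding, document) pairs.
store = [
    ([1.0, 0.0, 0.0], "doc about cats"),
    ([0.0, 1.0, 0.0], "doc about dogs"),
    ([0.9, 0.1, 0.0], "another doc about cats"),
]

def similarity_search(query, k=2):
    # Score every stored vector against the query and keep the top k.
    scored = sorted(store, key=lambda item: cosine_similarity(query, item[0]),
                    reverse=True)
    return [doc for _, doc in scored[:k]]

print(similarity_search([1.0, 0.0, 0.0]))  # the two cat docs rank highest
```

The dedicated stores add what this sketch lacks: sublinear-time approximate search, persistence, metadata filtering, and hybrid keyword-plus-vector queries.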
Aim tracks inputs and outputs of LLMs and tools, as well as actions of agents.\n\ud83d\udcc4\ufe0f AINetwork\nAI Network is a layer 1 blockchain designed to accommodate\n\ud83d\udcc4\ufe0f Airbyte\nAirbyte is a data integration platform for ELT pipelines from APIs,\n\ud83d\udcc4\ufe0f Airtable\nAirtable is a cloud collaboration service.\n\ud83d\udcc4\ufe0f Aleph Alpha\nAleph Alpha was founded in 2019 with the mission to research and build the foundational technology for an era of strong AI. The team of international scientists, engineers, and innovators researches, develops, and deploys transformative AI like large language and multimodal models and runs the fastest European commercial AI cluster.\n\ud83d\udcc4\ufe0f Alibaba Cloud Opensearch\nAlibaba Cloud OpenSearch is a one-stop platform to develop intelligent search services. OpenSearch was built on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises.\n\ud83d\udcc4\ufe0f Amazon API Gateway\nAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the \"front door\" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. 
API Gateway supports containerized and serverless workloads, as well as web applications.\n\ud83d\udcc4\ufe0f AnalyticDB\nThis page covers how to use the AnalyticDB ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Annoy", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-1", "text": "\ud83d\udcc4\ufe0f Annoy\nAnnoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.\n\ud83d\udcc4\ufe0f Anyscale\nThis page covers how to use the Anyscale ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Apify\nThis page covers how to use Apify within LangChain.\n\ud83d\udcc4\ufe0f ArangoDB\nArangoDB is a scalable graph database system to drive value from connected data, faster. It offers native graphs, an integrated search engine, and JSON support via a single query language. ArangoDB runs on-prem, in the cloud \u2013 anywhere.\n\ud83d\udcc4\ufe0f Argilla\nArgilla - Open-source data platform for LLMs\n\ud83d\udcc4\ufe0f Arthur\nArthur is a model monitoring and observability platform.\n\ud83d\udcc4\ufe0f Arxiv\narXiv is an open-access archive for 2 million scholarly articles in the fields of physics,\n\ud83d\udcc4\ufe0f Atlas\nNomic Atlas is a platform for interacting with both\n\ud83d\udcc4\ufe0f AwaDB\nAwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.\n\ud83d\udcc4\ufe0f AWS S3 Directory\nAmazon Simple Storage Service (Amazon S3) is an object storage service.\n\ud83d\udcc4\ufe0f AZLyrics\nAZLyrics is a large, legal, ever-growing collection of lyrics.\n\ud83d\udcc4\ufe0f Azure Blob Storage\nAzure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. 
Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.\n\ud83d\udcc4\ufe0f Azure Cognitive Search\nAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.\n\ud83d\udcc4\ufe0f Azure OpenAI", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-2", "text": "\ud83d\udcc4\ufe0f Azure OpenAI\nMicrosoft Azure, often referred to as Azure, is a cloud computing platform run by Microsoft, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Microsoft Azure supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.\n\ud83d\udcc4\ufe0f BagelDB\nBagelDB (Open Vector Database for AI) is like GitHub for AI data.\n\ud83d\udcc4\ufe0f Banana\nThis page covers how to use the Banana ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Baseten\nLearn how to use LangChain with models deployed on Baseten.\n\ud83d\udcc4\ufe0f Beam\nThis page covers how to use Beam within LangChain.\n\ud83d\udcc4\ufe0f Bedrock\nAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.\n\ud83d\udcc4\ufe0f BiliBili\nBilibili is one of the most beloved long-form video sites in China.\n\ud83d\udcc4\ufe0f NIBittensor\nThis page covers how to use the BittensorLLM inference runtime within LangChain.\n\ud83d\udcc4\ufe0f Blackboard\nBlackboard Learn (previously the Blackboard Learning Management 
System)\n\ud83d\udcc4\ufe0f Brave Search\nBrave Search is a search engine developed by Brave Software.\n\ud83d\udcc4\ufe0f Cassandra\nApache Cassandra\u00ae is a free and open-source, distributed, wide-column\n\ud83d\udcc4\ufe0f CerebriumAI\nThis page covers how to use the CerebriumAI ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Chaindesk\nChaindesk is an open source document retrieval platform that helps to connect your personal data with Large Language Models.\n\ud83d\udcc4\ufe0f Chroma\nChroma is a database for building AI applications with embeddings.\n\ud83d\udcc4\ufe0f Clarifai", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-3", "text": "\ud83d\udcc4\ufe0f Clarifai\nClarifai is one of the first deep learning platforms, having been founded in 2013. Clarifai provides an AI platform with the full AI lifecycle for data exploration, data labeling, model training, evaluation and inference around images, video, text and audio data. In the LangChain ecosystem, as far as we're aware, Clarifai is the only provider that supports LLMs, embeddings and a vector store in one production scale platform, making it an excellent choice to operationalize your LangChain implementations.\n\ud83d\udcc4\ufe0f ClearML\nClearML is an ML/DL development and production suite; it contains 5 main modules:\n\ud83d\udcc4\ufe0f ClickHouse\nClickHouse is a fast and resource-efficient open-source database for real-time\n\ud83d\udcc4\ufe0f CnosDB\nCnosDB is an open source distributed time series database with high performance, high compression rate and high ease of use.\n\ud83d\udcc4\ufe0f Cohere\nCohere is a Canadian startup that provides natural language processing models\n\ud83d\udcc4\ufe0f College Confidential\nCollege Confidential gives information on 3,800+ colleges and universities.\n\ud83d\udcc4\ufe0f Comet\nIn this guide we will demonstrate how to track your LangChain Experiments, Evaluation Metrics, and LLM Sessions with 
Comet.\n\ud83d\udcc4\ufe0f Confluence\nConfluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities.\n\ud83d\udcc4\ufe0f C Transformers\nThis page covers how to use the C Transformers library within LangChain.\n\ud83d\udcc4\ufe0f DashVector\nDashVector is a fully-managed vectorDB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements.\n\ud83d\udcc4\ufe0f Databricks\nThis notebook covers how to connect to the Databricks runtimes and Databricks SQL using the SQLDatabase wrapper of LangChain.\n\ud83d\udcc4\ufe0f Datadog Tracing\nddtrace is a Datadog application performance monitoring (APM) library which provides an integration to monitor your LangChain application.\n\ud83d\udcc4\ufe0f Datadog Logs\nDatadog is a monitoring and analytics platform for cloud-scale applications.\n\ud83d\udcc4\ufe0f DataForSEO", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-4", "text": "\ud83d\udcc4\ufe0f DataForSEO\nThis page provides instructions on how to use the DataForSEO search APIs within LangChain.\n\ud83d\udcc4\ufe0f DeepInfra\nThis page covers how to use the DeepInfra ecosystem within LangChain.\n\ud83d\udcc4\ufe0f DeepSparse\nThis page covers how to use the DeepSparse inference runtime within LangChain.\n\ud83d\udcc4\ufe0f Diffbot\nDiffbot is a service to read web pages. Unlike traditional web scraping tools,\n\ud83d\udcc4\ufe0f Dingo\nThis page covers how to use the Dingo ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Discord\nDiscord is a VoIP and instant messaging social platform. 
Users have the ability to communicate\n\ud83d\udcc4\ufe0f DocArray\nDocArray is a library for nested, unstructured, multimodal data in transit,\n\ud83d\udcc4\ufe0f Docugami\nDocugami converts business documents into a Document XML Knowledge Graph, generating forests\n\ud83d\udcc4\ufe0f DuckDB\nDuckDB is an in-process SQL OLAP database management system.\n\ud83d\udcc4\ufe0f Elasticsearch\nElasticsearch is a distributed, RESTful search and analytics engine.\n\ud83d\udcc4\ufe0f Epsilla\nThis page covers how to use Epsilla within LangChain.\n\ud83d\udcc4\ufe0f EverNote\nEverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual \"notebooks\" and can be tagged, annotated, edited, searched, and exported.\n\ud83d\udcc4\ufe0f Facebook Chat\nMessenger) is an American proprietary instant messaging app and\n\ud83d\udcc4\ufe0f Facebook Faiss\nFacebook AI Similarity Search (Faiss)\n\ud83d\udcc4\ufe0f Figma\nFigma is a collaborative web application for interface design.\n\ud83d\udcc4\ufe0f Fireworks\nThis page covers how to use the Fireworks models within Langchain.\n\ud83d\udcc4\ufe0f Flyte\nFlyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines.\n\ud83d\udcc4\ufe0f ForefrontAI\nThis page covers how to use the ForefrontAI ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Git\nGit is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.\n\ud83d\udcc4\ufe0f GitBook", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-5", "text": "\ud83d\udcc4\ufe0f GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\n\ud83d\udcc4\ufe0f Golden\nGolden provides a set of natural language APIs 
for querying and enrichment using the Golden Knowledge Graph, e.g. queries such as \"Products from OpenAI\", \"Generative AI companies with Series A funding\", and \"rappers who invest\" can be used to retrieve structured data about relevant entities.\n\ud83d\udcc4\ufe0f Google BigQuery\nGoogle BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.\n\ud83d\udcc4\ufe0f Google Cloud Storage\nGoogle Cloud Storage is a managed service for storing unstructured data.\n\ud83d\udcc4\ufe0f Google Drive\nGoogle Drive is a file storage and synchronization service developed by Google.\n\ud83d\udcc4\ufe0f Google Search\nThis page covers how to use the Google Search API within LangChain.\n\ud83d\udcc4\ufe0f Google Serper\nThis page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.\n\ud83d\udcc4\ufe0f Google Vertex AI MatchingEngine\nGoogle Vertex AI Matching Engine provides\n\ud83d\udcc4\ufe0f GooseAI\nThis page covers how to use the GooseAI ecosystem within LangChain.\n\ud83d\udcc4\ufe0f GPT4All\nThis page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.\n\ud83d\udcc4\ufe0f Graphsignal\nThis page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. 
It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more.\n\ud83d\udcc4\ufe0f Grobid\nGROBID is a machine learning library for extracting, parsing, and re-structuring raw documents.\n\ud83d\udcc4\ufe0f Gutenberg\nProject Gutenberg is an online library of free eBooks.\n\ud83d\udcc4\ufe0f Hacker News\nHacker News (sometimes abbreviated as HN) is a social news\n\ud83d\udcc4\ufe0f Hazy Research\nThis page covers how to use the Hazy Research ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Helicone", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-6", "text": "\ud83d\udcc4\ufe0f Helicone\nThis page covers how to use the Helicone ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Hologres\nHologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time.\n\ud83d\udcc4\ufe0f Hugging Face\nThis page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain.\n\ud83d\udcc4\ufe0f iFixit\niFixit is the largest, open repair community on the web. 
The site contains nearly 100k\n\ud83d\udcc4\ufe0f IMSDb\nIMSDb is the Internet Movie Script Database.\n\ud83d\udcc4\ufe0f Infino\nInfino is an open-source observability platform that stores both metrics and application logs together.\n\ud83d\udcc4\ufe0f Jina\nThis page covers how to use the Jina ecosystem within LangChain.\n\ud83d\udcc4\ufe0f LanceDB\nThis page covers how to use LanceDB within LangChain.\n\ud83d\udcc4\ufe0f LangChain Decorators \u2728\nLangChain Decorators is a layer on top of LangChain that provides syntactic sugar \ud83c\udf6d for writing custom langchain prompts and chains\n\ud83d\udcc4\ufe0f Llama.cpp\nThis page covers how to use llama.cpp within LangChain.\n\ud83d\udcc4\ufe0f Log10\nThis page covers how to use Log10 within LangChain.\n\ud83d\udcc4\ufe0f Marqo\nThis page covers how to use the Marqo ecosystem within LangChain.\n\ud83d\udcc4\ufe0f MediaWikiDump\nMediaWiki XML Dumps contain the content of a wiki\n\ud83d\udcc4\ufe0f Meilisearch\nMeilisearch is an open-source, lightning-fast, and hyper\n\ud83d\udcc4\ufe0f Metal\nThis page covers how to use Metal within LangChain.\n\ud83d\udcc4\ufe0f Microsoft OneDrive\nMicrosoft OneDrive (formerly SkyDrive) is a file-hosting service operated by Microsoft.\n\ud83d\udcc4\ufe0f Microsoft PowerPoint\nMicrosoft PowerPoint is a presentation program by Microsoft.\n\ud83d\udcc4\ufe0f Microsoft Word\nMicrosoft Word is a word processor developed by Microsoft.\n\ud83d\udcc4\ufe0f Milvus\nThis page covers how to use the Milvus ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Minimax\nMinimax is a Chinese startup that provides natural language processing models", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-7", "text": "\ud83d\udcc4\ufe0f Minimax\nMinimax is a Chinese startup that provides natural language processing models\n\ud83d\udcc4\ufe0f MLflow AI Gateway\nThe MLflow AI Gateway service is a powerful tool designed to streamline the usage and management 
of various large\n\ud83d\udcc4\ufe0f MLflow\nMLflow is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.\n\ud83d\udcc4\ufe0f Modal\nThis page covers how to use the Modal ecosystem to run LangChain custom LLMs.\n\ud83d\udcc4\ufe0f ModelScope\nThis page covers how to use the modelscope ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Modern Treasury\nModern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.\n\ud83d\udcc4\ufe0f Momento\nMomento Cache is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero\n\ud83d\udcc4\ufe0f MongoDB Atlas\nMongoDB Atlas is a fully-managed cloud\n\ud83d\udcc4\ufe0f Motherduck\nMotherduck is a managed DuckDB-in-the-cloud service.\n\ud83d\udcc4\ufe0f MyScale\nThis page covers how to use MyScale vector database within LangChain.\n\ud83d\udcc4\ufe0f Neo4j\nThis page covers how to use the Neo4j ecosystem within LangChain.\n\ud83d\udcc4\ufe0f NLPCloud\nThis page covers how to use the NLPCloud ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Notion DB\nNotion is a collaboration platform with modified Markdown support that integrates kanban\n\ud83d\udcc4\ufe0f Obsidian\nObsidian is a powerful and extensible knowledge base\n\ud83d\udcc4\ufe0f OpenAI\nOpenAI is an American artificial intelligence (AI) research laboratory\n\ud83d\udcc4\ufe0f OpenLLM\nThis page demonstrates how to use OpenLLM\n\ud83d\udcc4\ufe0f OpenSearch\nThis page covers how to use the OpenSearch ecosystem within LangChain.\n\ud83d\udcc4\ufe0f OpenWeatherMap\nOpenWeatherMap provides all essential weather data for a specific location:\n\ud83d\udcc4\ufe0f Petals\nThis page 
covers how to use the Petals ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Postgres Embedding", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-8", "text": "\ud83d\udcc4\ufe0f Postgres Embedding\npgembedding is an open-source package for\n\ud83d\udcc4\ufe0f PGVector\nThis page covers how to use the Postgres PGVector ecosystem within LangChain\n\ud83d\udcc4\ufe0f Pinecone\nThis page covers how to use the Pinecone ecosystem within LangChain.\n\ud83d\udcc4\ufe0f PipelineAI\nThis page covers how to use the PipelineAI ecosystem within LangChain.\n\ud83d\uddc3\ufe0f Portkey\n1 items\n\ud83d\udcc4\ufe0f Predibase\nLearn how to use LangChain with models on Predibase.\n\ud83d\udcc4\ufe0f Prediction Guard\nThis page covers how to use the Prediction Guard ecosystem within LangChain.\n\ud83d\udcc4\ufe0f PromptLayer\nThis page covers how to use PromptLayer within LangChain.\n\ud83d\udcc4\ufe0f Psychic\nPsychic is a platform for integrating with SaaS tools like Notion, Zendesk,\n\ud83d\udcc4\ufe0f PubMed\nPubMed\u00ae by The National Center for Biotechnology Information, National Library of Medicine\n\ud83d\udcc4\ufe0f Qdrant\nThis page covers how to use the Qdrant ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Ray Serve\nRay Serve is a scalable model serving library for building online inference APIs. 
Serve is particularly well suited for system composition, enabling you to build a complex inference service consisting of multiple chains and business logic all in Python code.\n\ud83d\udcc4\ufe0f Rebuff\nRebuff is a self-hardening prompt injection detector.\n\ud83d\udcc4\ufe0f Reddit\nReddit is an American social news aggregation, content rating, and discussion website.\n\ud83d\udcc4\ufe0f Redis\nThis page covers how to use the Redis ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Replicate\nThis page covers how to run models on Replicate within LangChain.\n\ud83d\udcc4\ufe0f Roam\nROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.\n\ud83d\udcc4\ufe0f Rockset\nRockset is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index\u2122 on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters.\n\ud83d\udcc4\ufe0f Runhouse\nThis page covers how to use the Runhouse ecosystem within LangChain.\n\ud83d\udcc4\ufe0f RWKV-4", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-9", "text": "\ud83d\udcc4\ufe0f RWKV-4\nThis page covers how to use the RWKV-4 wrapper within LangChain.\n\ud83d\udcc4\ufe0f SageMaker Endpoint\nAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows.\n\ud83d\udcc4\ufe0f SageMaker Tracking\nThis notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. 
Here, we use different scenarios to showcase the capability:\n\ud83d\udcc4\ufe0f ScaNN\nGoogle ScaNN\n\ud83d\udcc4\ufe0f SearxNG Search API\nThis page covers how to use the SearxNG search API within LangChain.\n\ud83d\udcc4\ufe0f SerpAPI\nThis page covers how to use the SerpAPI search APIs within LangChain.\n\ud83d\udcc4\ufe0f Shale Protocol\nShale Protocol provides production-ready inference APIs for open LLMs. It's a Plug & Play API as it's hosted on a highly scalable GPU cloud infrastructure.\n\ud83d\udcc4\ufe0f SingleStoreDB\nSingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage, and vector functions including dotproduct and euclideandistance, thereby supporting AI applications that require text similarity matching.\n\ud83d\udcc4\ufe0f scikit-learn\nscikit-learn is an open source collection of machine learning algorithms,\n\ud83d\udcc4\ufe0f Slack\nSlack is an instant messaging program.\n\ud83d\udcc4\ufe0f spaCy\nspaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.\n\ud83d\udcc4\ufe0f Spreedly\nSpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. 
Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.\n\ud83d\udcc4\ufe0f StarRocks\nStarRocks is a High-Performance Analytical Database.\n\ud83d\udcc4\ufe0f StochasticAI\nThis page covers how to use the StochasticAI ecosystem within LangChain.", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-10", "text": "This page covers how to use the StochasticAI ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Stripe\nStripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.\n\ud83d\udcc4\ufe0f Supabase (Postgres)\nSupabase is an open source Firebase alternative.\n\ud83d\udcc4\ufe0f Nebula\nThis page covers how to use Nebula, Symbl.ai's LLM, ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Tair\nThis page covers how to use the Tair ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Telegram\nTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. 
The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.\n\ud83d\udcc4\ufe0f TencentVectorDB\nThis page covers how to use the TencentVectorDB ecosystem within LangChain.\n\ud83d\udcc4\ufe0f TensorFlow Datasets\nTensorFlow Datasets is a collection of datasets ready to use,\n\ud83d\udcc4\ufe0f Tigris\nTigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.\n\ud83d\udcc4\ufe0f 2Markdown\n2markdown service transforms website content into structured markdown files.\n\ud83d\udcc4\ufe0f Trello\nTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a \"board\" where users can create lists and cards to represent their tasks and activities.\n\ud83d\udcc4\ufe0f TruLens\nThis page covers how to use TruLens to evaluate and track LLM apps built on langchain.\n\ud83d\udcc4\ufe0f Twitter\nTwitter is an online social media and social networking service.\n\ud83d\udcc4\ufe0f Typesense\nTypesense is an open source, in-memory search engine, that you can either\n\ud83d\udcc4\ufe0f Unstructured\nThe unstructured package from\n\ud83d\udcc4\ufe0f USearch\nUSearch is a Smaller & Faster Single-File Vector Search Engine.\n\ud83d\uddc3\ufe0f Vectara\n2 items\n\ud83d\udcc4\ufe0f Vespa\nVespa is a fully featured search engine and vector database.\n\ud83d\udcc4\ufe0f WandB Tracing", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "77437ea20dd2-11", "text": "\ud83d\udcc4\ufe0f WandB Tracing\nThere are two recommended ways to trace your LangChains:\n\ud83d\udcc4\ufe0f Weights & Biases\nThis notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. 
To learn more about prompt engineering and the callback please refer to this Report which explains both alongside the resultant dashboards you can expect to see.\n\ud83d\udcc4\ufe0f Weather\nOpenWeatherMap is an open source weather service provider.\n\ud83d\udcc4\ufe0f Weaviate\nThis page covers how to use the Weaviate ecosystem within LangChain.\n\ud83d\udcc4\ufe0f WhatsApp\nWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.\n\ud83d\udcc4\ufe0f WhyLabs\nWhyLabs is an observability platform designed to monitor data pipelines and ML applications for data quality regressions, data drift, and model performance degradation. Built on top of an open-source package called whylogs, the platform enables Data Scientists and Engineers to:\n\ud83d\udcc4\ufe0f Wikipedia\nWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. 
Wikipedia is the largest and most-read reference work in history.\n\ud83d\udcc4\ufe0f Wolfram Alpha\nWolframAlpha is an answer engine developed by Wolfram Research.\n\ud83d\udcc4\ufe0f Writer\nThis page covers how to use the Writer ecosystem within LangChain.\n\ud83d\udcc4\ufe0f Xata\nXata is a serverless data platform, based on PostgreSQL.\n\ud83d\udcc4\ufe0f Xorbits Inference (Xinference)\nThis page demonstrates how to use Xinference\n\ud83d\udcc4\ufe0f Yeager.ai\nThis page covers how to use Yeager.ai to generate LangChain tools and agents.\n\ud83d\udcc4\ufe0f YouTube\nYouTube is an online video sharing and social media platform by Google.\n\ud83d\udcc4\ufe0f Zep\nZep - A long-term memory store for LLM applications.\n\ud83d\udcc4\ufe0f Zilliz\nZilliz Cloud is a fully managed service on cloud for LF AI Milvus\u00ae,", "source": "https://python.langchain.com/docs/integrations/providers/"} {"id": "99ca47c635e2-0", "text": "Anthropic\nThis notebook covers how to get started with Anthropic chat models.\nfrom langchain.chat_models import ChatAnthropic\nfrom langchain.prompts.chat import (\nChatPromptTemplate,\nSystemMessagePromptTemplate,\nAIMessagePromptTemplate,\nHumanMessagePromptTemplate,\n)\nfrom langchain.schema import AIMessage, HumanMessage, SystemMessage\nAPI Reference:\nChatAnthropic\nChatPromptTemplate\nSystemMessagePromptTemplate\nAIMessagePromptTemplate\nHumanMessagePromptTemplate\nAIMessage\nHumanMessage\nSystemMessage\nmessages = [\nHumanMessage(\ncontent=\"Translate this sentence from English to French. 
I love programming.\"\n)\n]\nchat(messages)\nAIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)\nChatAnthropic also supports async and streaming functionality:\u200b\nfrom langchain.callbacks.manager import CallbackManager\nfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\nawait chat.agenerate([messages])\nLLMResult(generations=[[ChatGeneration(text=\" J'aime programmer.\", generation_info=None, message=AIMessage(content=\" J'aime programmer.\", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])\nchat = ChatAnthropic(\nstreaming=True,\nverbose=True,\ncallback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n)\nchat(messages)\nJ'aime la programmation.\n\n\n\n\nAIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)", "source": "https://python.langchain.com/docs/integrations/chat/anthropic"} {"id": "0386480638c1-0", "text": "This notebook demonstrates the use of langchain.chat_models.ChatAnyscale for Anyscale Endpoints.\nThis way, the three requests will only take as long as the longest individual request.\nmeta-llama/Llama-2-70b-chat-hf\n\nGreetings! I'm just an AI, I don't have a personal identity like humans do, but I'm here to help you with any questions you have.\n\nI'm a large language model, which means I'm trained on a large corpus of text data to generate language outputs that are coherent and natural-sounding. My architecture is based on a transformer model, which is a type of neural network that's particularly well-suited for natural language processing tasks.\n\nAs for my parameters, I have a few billion parameters, but I don't have access to the exact number as it's not relevant to my functioning. 
My training data includes a vast amount of text from various sources, including books, articles, and websites, which I use to learn patterns and relationships in language.\n\nI'm designed to be a helpful tool for a variety of tasks, such as answering questions, providing information, and generating text. I'm constantly learning and improving my abilities through machine learning algorithms and feedback from users like you.\n\nI hope this helps! Is there anything else you'd like to know about me or my capabilities?\n\n---\n\nmeta-llama/Llama-2-7b-chat-hf", "source": "https://python.langchain.com/docs/integrations/chat/anyscale"} {"id": "0386480638c1-1", "text": "---\n\nmeta-llama/Llama-2-7b-chat-hf\n\nAh, a fellow tech enthusiast! *adjusts glasses* I'm glad to share some technical details about myself. \ud83e\udd13\nIndeed, I'm a transformer model, specifically a BERT-like language model trained on a large corpus of text data. My architecture is based on the transformer framework, which is a type of neural network designed for natural language processing tasks. \ud83c\udfe0\nAs for the number of parameters, I have approximately 340 million. *winks* That's a pretty hefty number, if I do say so myself! These parameters allow me to learn and represent complex patterns in language, such as syntax, semantics, and more. \ud83e\udd14\nBut don't ask me to do math in my head \u2013 I'm a language model, not a calculating machine! \ud83d\ude05 My strengths lie in understanding and generating human-like text, so feel free to chat with me anytime you'd like. \ud83d\udcac\nNow, do you have any more technical questions for me? Or would you like to engage in a nice chat? \ud83d\ude0a\n\n---\n\nmeta-llama/Llama-2-13b-chat-hf\n\nHello! As a friendly and helpful AI, I'd be happy to share some technical facts about myself.\n\nI am a transformer-based language model, specifically a variant of the BERT (Bidirectional Encoder Representations from Transformers) architecture. 
BERT was developed by Google in 2018 and has since become one of the most popular and widely-used AI language models.\n\nHere are some technical details about my capabilities:", "source": "https://python.langchain.com/docs/integrations/chat/anyscale"} {"id": "0386480638c1-2", "text": "Here are some technical details about my capabilities:\n\n1. Parameters: I have approximately 340 million parameters, which are the numbers that I use to learn and represent language. This is a relatively large number of parameters compared to some other languages models, but it allows me to learn and understand complex language patterns and relationships.\n2. Training: I was trained on a large corpus of text data, including books, articles, and other sources of written content. This training allows me to learn about the structure and conventions of language, as well as the relationships between words and phrases.\n3. Architectures: My architecture is based on the transformer model, which is a type of neural network that is particularly well-suited for natural language processing tasks. The transformer model uses self-attention mechanisms to allow the model to \"attend\" to different parts of the input text, allowing it to capture long-range dependencies and contextual relationships.\n4. Precision: I am capable of generating text with high precision and accuracy, meaning that I can produce text that is close to human-level quality in terms of grammar, syntax, and coherence.\n5. Generative capabilities: In addition to being able to generate text based on prompts and questions, I am also capable of generating text based on a given topic or theme. This allows me to create longer, more coherent pieces of text that are organized around a specific idea or concept.\n\nOverall, I am a powerful and versatile language model that is capable of a wide range of natural language processing tasks. 
I am constantly learning and improving, and I am here to help answer any questions you may have!\n\n---\n\nCPU times: user 371 ms, sys: 15.5 ms, total: 387 ms\nWall time: 12 s", "source": "https://python.langchain.com/docs/integrations/chat/anyscale"} {"id": "0fadfc171372-0", "text": "Anthropic Functions\nThis notebook shows how to use an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions.\nfrom langchain_experimental.llms.anthropic_functions import AnthropicFunctions\n/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.14) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\nwarnings.warn(\nInitialize Model\u200b\nYou can initialize this wrapper the same way you'd initialize ChatAnthropic\nmodel = AnthropicFunctions(model='claude-2')\nPassing in functions\u200b\nYou can now pass in functions in a similar way\nfunctions=[\n{\n\"name\": \"get_current_weather\",\n\"description\": \"Get the current weather in a given location\",\n\"parameters\": {\n\"type\": \"object\",\n\"properties\": {\n\"location\": {\n\"type\": \"string\",\n\"description\": \"The city and state, e.g. 
San Francisco, CA\"\n},\n\"unit\": {\n\"type\": \"string\",\n\"enum\": [\"celsius\", \"fahrenheit\"]\n}\n},\n\"required\": [\"location\"]\n}\n}\n]\nfrom langchain.schema import HumanMessage\nresponse = model.predict_messages(\n[HumanMessage(content=\"whats the weater in boston?\")], \nfunctions=functions\n)\nAIMessage(content=' ', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{\"location\": \"Boston, MA\", \"unit\": \"fahrenheit\"}'}}, example=False)\nYou can now use this for extraction.\nfrom langchain.chains import create_extraction_chain\nschema = {\n\"properties\": {\n\"name\": {\"type\": \"string\"},\n\"height\": {\"type\": \"integer\"},\n\"hair_color\": {\"type\": \"string\"},\n},\n\"required\": [\"name\", \"height\"],\n}\ninp = \"\"\"\nAlex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n\"\"\"\nchain = create_extraction_chain(schema, model)", "source": "https://python.langchain.com/docs/integrations/chat/anthropic_functions"} {"id": "0fadfc171372-1", "text": "\"\"\"\nchain = create_extraction_chain(schema, model)\n[{'name': 'Alex', 'height': '5', 'hair_color': 'blonde'},\n{'name': 'Claudia', 'height': '6', 'hair_color': 'brunette'}]\nUsing for tagging\u200b\nYou can now use this for tagging\nfrom langchain.chains import create_tagging_chain\nschema = {\n\"properties\": {\n\"sentiment\": {\"type\": \"string\"},\n\"aggressiveness\": {\"type\": \"integer\"},\n\"language\": {\"type\": \"string\"},\n}\n}\nchain = create_tagging_chain(schema, model)\nchain.run(\"this is really cool\")\n{'sentiment': 'positive', 'aggressiveness': '0', 'language': 'english'}", "source": "https://python.langchain.com/docs/integrations/chat/anthropic_functions"} {"id": "d0892c63c05e-0", "text": "Azure\nThis notebook goes over how to connect to an Azure hosted OpenAI endpoint\nfrom langchain.chat_models import AzureChatOpenAI\nfrom langchain.schema import HumanMessage\nBASE_URL = 
"https://${TODO}.openai.azure.com\"\nAPI_KEY = \"...\"\nDEPLOYMENT_NAME = \"chat\"\nmodel = AzureChatOpenAI(\nopenai_api_base=BASE_URL,\nopenai_api_version=\"2023-05-15\",\ndeployment_name=DEPLOYMENT_NAME,\nopenai_api_key=API_KEY,\nopenai_api_type=\"azure\",\n)\nmodel(\n[\nHumanMessage(\ncontent=\"Translate this sentence from English to French. I love programming.\"\n)\n]\n)\nAIMessage(content=\"\\n\\nJ'aime programmer.\", additional_kwargs={})\nModel Version\u200b\nAzure OpenAI responses contain a model property, which is the name of the model used to generate the response. However, unlike native OpenAI responses, it does not contain the version of the model, which is set on the deployment in Azure. This makes it tricky to know which version of the model was used to generate the response, which as a result can lead to e.g. an incorrect total cost calculation with OpenAICallbackHandler.\nTo solve this problem, you can pass the model_version parameter to the AzureChatOpenAI class, which will be added to the model name in the llm output. This way you can easily distinguish between different versions of the model.\nfrom langchain.callbacks import get_openai_callback\nBASE_URL = \"https://{endpoint}.openai.azure.com\"\nAPI_KEY = \"...\"\nDEPLOYMENT_NAME = \"gpt-35-turbo\" # in Azure, this deployment has version 0613 - input and output tokens are counted separately\nmodel = AzureChatOpenAI(\nopenai_api_base=BASE_URL,\nopenai_api_version=\"2023-05-15\",\ndeployment_name=DEPLOYMENT_NAME,\nopenai_api_key=API_KEY,\nopenai_api_type=\"azure\",\n)\nwith get_openai_callback() as cb:\nmodel(\n[\nHumanMessage(\ncontent=\"Translate this sentence from English to French. 
I love programming.\"\n)\n]\n)", "source": "https://python.langchain.com/docs/integrations/chat/azure_chat_openai"} {"id": "d0892c63c05e-1", "text": ")\n]\n)\nprint(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\") # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used\nTotal Cost (USD): $0.000054\nWe can provide the model version to the AzureChatOpenAI constructor. It will get appended to the model name returned by Azure OpenAI, and the cost will be counted correctly.\nmodel0613 = AzureChatOpenAI(\nopenai_api_base=BASE_URL,\nopenai_api_version=\"2023-05-15\",\ndeployment_name=DEPLOYMENT_NAME,\nopenai_api_key=API_KEY,\nopenai_api_type=\"azure\",\nmodel_version=\"0613\"\n)\nwith get_openai_callback() as cb:\nmodel0613(\n[\nHumanMessage(\ncontent=\"Translate this sentence from English to French. I love programming.\"\n)\n]\n)\nprint(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\")\nTotal Cost (USD): $0.000044", "source": "https://python.langchain.com/docs/integrations/chat/azure_chat_openai"} {"id": "1b46970f824f-0", "text": "AzureML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\nThis notebook goes over how to use a chat model hosted on an AzureML online endpoint.\nThe content_formatter parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently from one another, a ContentFormatterBase class is provided to allow users to transform data to their liking. 
The following content formatters are provided:\nAIMessage(content=' The Collatz Conjecture is one of the most famous unsolved problems in mathematics, and it has been the subject of much study and research for many years. While it is impossible to predict with certainty whether the conjecture will ever be solved, there are several reasons why it is considered a challenging and important problem:\\n\\n1. Simple yet elusive: The Collatz Conjecture is a deceptively simple statement that has proven to be extraordinarily difficult to prove or disprove. Despite its simplicity, the conjecture has eluded some of the brightest minds in mathematics, and it remains one of the most famous open problems in the field.\\n2. Wide-ranging implications: The Collatz Conjecture has far-reaching implications for many areas of mathematics, including number theory, algebra, and analysis. A solution to the conjecture could have significant impacts on these fields and potentially lead to new insights and discoveries.\\n3. Computational evidence: While the conjecture remains unproven, extensive computational evidence supports its validity. In fact, no counterexample to the conjecture has been found for any starting value up to 2^64 (a number', additional_kwargs={}, example=False)", "source": "https://python.langchain.com/docs/integrations/chat/azureml_chat_endpoint"} {"id": "e428ee2f2025-0", "text": "Bedrock Chat\nAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case\nfrom langchain.chat_models import BedrockChat\nfrom langchain.schema import HumanMessage\nchat = BedrockChat(model_id=\"anthropic.claude-v2\", model_kwargs={\"temperature\":0.1})\nmessages = [\nHumanMessage(\ncontent=\"Translate this sentence from English to French. 
I love programming.\"\n)\n]\nchat(messages)\nAIMessage(content=\" Voici la traduction en fran\u00e7ais : J'adore programmer.\", additional_kwargs={}, example=False)", "source": "https://python.langchain.com/docs/integrations/chat/bedrock"} {"id": "6f14eab6a5de-0", "text": "ERNIE-Bot Chat\nERNIE-Bot is a large language model developed by Baidu, covering a huge amount of Chinese data. This notebook covers how to get started with ErnieBot chat models.\nfrom langchain.chat_models import ErnieBotChat\nfrom langchain.schema import HumanMessage\nchat = ErnieBotChat(ernie_client_id='YOUR_CLIENT_ID', ernie_client_secret='YOUR_CLIENT_SECRET')\nOr you can set client_id and client_secret in your environment variables:\nexport ERNIE_CLIENT_ID=YOUR_CLIENT_ID\nexport ERNIE_CLIENT_SECRET=YOUR_CLIENT_SECRET\nchat([\nHumanMessage(content='hello there, who are you?')\n])\nAIMessage(content='Hello, I am an artificial intelligence language model. My purpose is to help users answer questions or provide information. What can I do for you?', additional_kwargs={}, example=False)", "source": "https://python.langchain.com/docs/integrations/chat/ernie"} {"id": "85ce067e5905-0", "text": "Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available there. \nBy default, Google Cloud does not use Customer Data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. 
More details about how Google processes data can also be found in Google's Customer Data Processing Addendum (CDPA).\nTo use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:\nHave credentials configured for your environment (gcloud, workload identity, etc...)\nStore the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable\nThis codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.\nFor more information, see: \nhttps://cloud.google.com/docs/authentication/application-default-credentials#GAC\nhttps://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth\n#!pip install google-cloud-aiplatform\nfrom langchain.chat_models import ChatVertexAI\nfrom langchain.prompts.chat import (\nChatPromptTemplate,\nSystemMessagePromptTemplate,\nHumanMessagePromptTemplate,\n)\nfrom langchain.schema import HumanMessage, SystemMessage\nmessages = [\nSystemMessage(\ncontent=\"You are a helpful assistant that translates English to French.\"\n),\nHumanMessage(\ncontent=\"Translate this sentence from English to French. I love programming.\"\n),\n]\nchat(messages)\nAIMessage(content='Sure, here is the translation of the sentence \"I love programming\" from English to French:\\n\\nJ\\'aime programmer.', additional_kwargs={}, example=False)\nYou can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.\nFor convenience, there is a from_template method exposed on the template. 
If you were to use this template, this is what it would look like:\ntemplate = (\n\"You are a helpful assistant that translates {input_language} to {output_language}.\"\n)", "source": "https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm"} {"id": "85ce067e5905-1", "text": "\"You are a helpful assistant that translates {input_language} to {output_language}.\"\n)\nsystem_message_prompt = SystemMessagePromptTemplate.from_template(template)\nhuman_template = \"{text}\"\nhuman_message_prompt = HumanMessagePromptTemplate.from_template(human_template)\nchat_prompt = ChatPromptTemplate.from_messages(\n[system_message_prompt, human_message_prompt]\n)", "source": "https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm"} {"id": "85ce067e5905-2", "text": "# get a chat completion from the formatted messages\nchat(\nchat_prompt.format_prompt(\ninput_language=\"English\", output_language=\"French\", text=\"I love programming.\"\n).to_messages()\n)\nAIMessage(content='Sure, here is the translation of \"I love programming\" in French:\\n\\nJ\\'aime programmer.', additional_kwargs={}, example=False)\nYou can now leverage the Codey API for code chat within Vertex AI. 
The model name is:\ncodechat-bison: for code assistance\nchat = ChatVertexAI(model_name=\"codechat-bison\")\nmessages = [\nHumanMessage(\ncontent=\"How do I create a python function to identify all prime numbers?\"\n)\n]\nchat(messages)\nAIMessage(content='The following Python function can be used to identify all prime numbers up to a given integer:\\n\\n```\\ndef is_prime(n):\\n \"\"\"\\n Determines whether the given integer is prime.\\n\\n Args:\\n n: The integer to be tested for primality.\\n\\n Returns:\\n True if n is prime, False otherwise.\\n \"\"\"\\n\\n # Check if n is divisible by 2.\\n if n % 2 == 0:\\n return False\\n\\n # Check if n is divisible by any integer from 3 to the square root', additional_kwargs={}, example=False)", "source": "https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm"} {"id": "d13b51108c87-0", "text": "JinaChat\nThis notebook covers how to get started with JinaChat chat models.\nfrom langchain.chat_models import JinaChat\nfrom langchain.prompts.chat import (\nChatPromptTemplate,\nSystemMessagePromptTemplate,\nAIMessagePromptTemplate,\nHumanMessagePromptTemplate,\n)\nfrom langchain.schema import AIMessage, HumanMessage, SystemMessage\nAPI Reference:\nJinaChat\nChatPromptTemplate\nSystemMessagePromptTemplate\nAIMessagePromptTemplate\nHumanMessagePromptTemplate\nAIMessage\nHumanMessage\nSystemMessage\nchat = JinaChat(temperature=0)\nmessages = [\nSystemMessage(\ncontent=\"You are a helpful assistant that translates English to French.\"\n),\nHumanMessage(\ncontent=\"Translate this sentence from English to French. I love programming.\"\n),\n]\nchat(messages)\nAIMessage(content=\"J'aime programmer.\", additional_kwargs={}, example=False)\nYou can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. 
You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.\nFor convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:\ntemplate = (\n\"You are a helpful assistant that translates {input_language} to {output_language}.\"\n)\nsystem_message_prompt = SystemMessagePromptTemplate.from_template(template)\nhuman_template = \"{text}\"\nhuman_message_prompt = HumanMessagePromptTemplate.from_template(human_template)\nchat_prompt = ChatPromptTemplate.from_messages(\n[system_message_prompt, human_message_prompt]\n)\n\n# get a chat completion from the formatted messages\nchat(\nchat_prompt.format_prompt(\ninput_language=\"English\", output_language=\"French\", text=\"I love programming.\"\n).to_messages()\n)\nAIMessage(content=\"J'aime programmer.\", additional_kwargs={}, example=False)", "source": "https://python.langchain.com/docs/integrations/chat/jinachat"} {"id": "e668e0eaaa3a-0", "text": "\ud83d\ude85 LiteLLM\nLiteLLM is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \nThis notebook covers how to get started with using Langchain + the LiteLLM I/O library. \nfrom langchain.chat_models import ChatLiteLLM\nfrom langchain.prompts.chat import (\nChatPromptTemplate,\nSystemMessagePromptTemplate,\nAIMessagePromptTemplate,\nHumanMessagePromptTemplate,\n)\nfrom langchain.schema import AIMessage, HumanMessage, SystemMessage\nAPI Reference:\nChatLiteLLM\nChatPromptTemplate\nSystemMessagePromptTemplate\nAIMessagePromptTemplate\nHumanMessagePromptTemplate\nAIMessage\nHumanMessage\nSystemMessage\nchat = ChatLiteLLM(model=\"gpt-3.5-turbo\")\nmessages = [\nHumanMessage(\ncontent=\"Translate this sentence from English to French. 
I love programming.\"\n)\n]\nchat(messages)\nAIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)\nChatLiteLLM also supports async and streaming functionality:\u200b\nfrom langchain.callbacks.manager import CallbackManager\nfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\nawait chat.agenerate([messages])\nLLMResult(generations=[[ChatGeneration(text=\" J'aime programmer.\", generation_info=None, message=AIMessage(content=\" J'aime programmer.\", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])\nchat = ChatLiteLLM(\nstreaming=True,\nverbose=True,\ncallback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n)\nchat(messages)\nJ'aime la programmation.\n\n\n\n\nAIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)", "source": "https://python.langchain.com/docs/integrations/chat/litellm"} {"id": "21eaafb4c735-0", "text": "Llama API\nThis notebook shows how to use LangChain with LlamaAPI - a hosted version of Llama2 that adds in support for function calling.\n!pip install -U llamaapi\nfrom llamaapi import LlamaAPI\n\n# Replace 'Your_API_Token' with your actual API token\nllama = LlamaAPI('Your_API_Token')\nfrom langchain_experimental.llms import ChatLlamaAPI\n/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.12) is available. 
It's recommended that you update to the latest version using `pip install -U deeplake`.\nwarnings.warn(\nmodel = ChatLlamaAPI(client=llama)\nfrom langchain.chains import create_tagging_chain\n\nschema = {\n\"properties\": {\n\"sentiment\": {\"type\": \"string\", 'description': 'the sentiment encountered in the passage'},\n\"aggressiveness\": {\"type\": \"integer\", 'description': 'a 0-10 score of how aggressive the passage is'},\n\"language\": {\"type\": \"string\", 'description': 'the language of the passage'},\n}\n}\n\nchain = create_tagging_chain(schema, model)\nchain.run(\"give me your money\")\n{'sentiment': 'aggressive', 'aggressiveness': 8}", "source": "https://python.langchain.com/docs/integrations/chat/llama_api"} {"id": "e10e28c29d51-0", "text": "Ollama allows you to run open-source large language models, such as LLaMA2, locally.\nOllama bundles model weights, configuration, and data into a single package, defined by a Modelfile. \nIt optimizes setup and configuration details, including GPU usage.\nFor a complete list of supported models and model variants, see the Ollama model library.\nIf you are using a LLaMA chat model (e.g., ollama pull llama2:7b-chat) then you can use the ChatOllama interface.\nWith StreamingStdOutCallbackHandler, you will see tokens streamed.\nLet's also use local embeddings from GPT4AllEmbeddings and Chroma.\nYou can also get logging for tokens.\nBased on the given context, here is the answer to the question \"What are the approaches to Task Decomposition?\"\n\nThere are three approaches to task decomposition:", "source": "https://python.langchain.com/docs/integrations/chat/ollama"} {"id": "e10e28c29d51-1", "text": "1. LLM with simple prompting, such as \"Steps for XYZ.\" or \"What are the subgoals for achieving XYZ?\"\n2. Using task-specific instructions, like \"Write a story outline\" for writing a novel.", "source": "https://python.langchain.com/docs/integrations/chat/ollama"} {"id": "e10e28c29d51-2", "text": "3. 
With human inputs.{'model': 'llama2:13b-chat', 'created_at': '2023-08-23T15:37:51.469127Z', 'done': True, 'context': [...token ids elided...], 'total_duration': 9514823750, 'load_duration': 795542, 'sample_count': 99, 'sample_duration': 68732000, 'prompt_eval_count': 146, 'prompt_eval_duration': 6206275000, 'eval_count': 98, 'eval_duration': 3229641000}", "source": "https://python.langchain.com/docs/integrations/chat/ollama"} {"id": "6ff56ff206cf-0", "text": "PromptLayer ChatOpenAI\nThis example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.\nInstall PromptLayer\u200b\nThe promptlayer package is required to use PromptLayer with OpenAI. 
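The duration fields in the Ollama response metadata above are reported in nanoseconds, so throughput can be derived directly. A small helper to do the conversion (hypothetical; not part of the Ollama or LangChain APIs):

```python
def ollama_throughput(resp: dict) -> dict:
    """Convert Ollama's nanosecond duration counters into tokens per second."""
    ns = 1e9
    return {
        "prompt_tps": resp["prompt_eval_count"] / (resp["prompt_eval_duration"] / ns),
        "eval_tps": resp["eval_count"] / (resp["eval_duration"] / ns),
    }

# Figures taken from the response metadata shown above.
stats = ollama_throughput({
    "prompt_eval_count": 146,
    "prompt_eval_duration": 6206275000,
    "eval_count": 98,
    "eval_duration": 3229641000,
})
# roughly 23.5 prompt tokens/s and 30.3 generated tokens/s
```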
Install promptlayer using pip.\nImports\u200b\nimport os\nimport promptlayer\nfrom langchain.chat_models import PromptLayerChatOpenAI\nfrom langchain.schema import HumanMessage\nSet the Environment API Key\u200b\nYou can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.\nSet it as an environment variable called PROMPTLAYER_API_KEY.\nos.environ[\"PROMPTLAYER_API_KEY\"] = \"**********\"\nUse the PromptLayerChatOpenAI chat model like normal\u200b\nYou can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature.\nchat = PromptLayerChatOpenAI(pl_tags=[\"langchain\"])\nchat([HumanMessage(content=\"I am a cat and I want\")])\nAIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})\nThe above request should now appear on your PromptLayer dashboard.\nUsing PromptLayer Track\u200b\nIf you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id. \nchat = PromptLayerChatOpenAI(return_pl_id=True)\nchat_results = chat.generate([[HumanMessage(content=\"I am a cat and I want\")]])\n\nfor res in chat_results.generations:\npl_request_id = res[0].generation_info[\"pl_request_id\"]\npromptlayer.track.score(request_id=pl_request_id, score=100)\nUsing this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. 
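The loop above pulls pl_request_id out of each generation's generation_info. The same extraction can be written as a standalone helper (a sketch; plain dicts stand in for LangChain's Generation objects):

```python
def extract_pl_request_ids(generations):
    """Collect PromptLayer request ids from a generations list
    (one inner list per prompt, as returned by chat.generate)."""
    ids = []
    for gen_list in generations:
        info = gen_list[0].get("generation_info") or {}
        if "pl_request_id" in info:
            ids.append(info["pl_request_id"])
    return ids

# Hypothetical stand-in for chat_results.generations.
fake = [
    [{"generation_info": {"pl_request_id": 42}}],
    [{"generation_info": None}],
]
# extract_pl_request_ids(fake) == [42]
```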
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.", "source": "https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai"} {"id": "5ed972b2355d-0", "text": "OpenAI\nThis notebook covers how to get started with OpenAI chat models.\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.prompts.chat import (\nChatPromptTemplate,\nSystemMessagePromptTemplate,\nAIMessagePromptTemplate,\nHumanMessagePromptTemplate,\n)\nfrom langchain.schema import AIMessage, HumanMessage, SystemMessage\nAPI Reference:\nChatOpenAI\nChatPromptTemplate\nSystemMessagePromptTemplate\nAIMessagePromptTemplate\nHumanMessagePromptTemplate\nAIMessage\nHumanMessage\nSystemMessage\nchat = ChatOpenAI(temperature=0)\nThe above cell assumes that your OpenAI API key is set in your environment variables. If you would rather manually specify your API key and/or organization ID, use the following code:\nchat = ChatOpenAI(temperature=0, openai_api_key=\"YOUR_API_KEY\", openai_organization=\"YOUR_ORGANIZATION_ID\")\nRemove the openai_organization parameter should it not apply to you.\nmessages = [\nSystemMessage(\ncontent=\"You are a helpful assistant that translates English to French.\"\n),\nHumanMessage(\ncontent=\"Translate this sentence from English to French. I love programming.\"\n),\n]\nchat(messages)\nAIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, example=False)\nYou can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.\nFor convenience, there is a from_template method exposed on the template. 
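Conceptually, format_prompt just interpolates the same variables into every message template in order; stripped of the LangChain classes, the mechanism looks like this (plain-Python sketch, not the actual API):

```python
def format_messages(message_templates, **variables):
    """Interpolate variables into each (role, template) pair, in order."""
    return [(role, template.format(**variables)) for role, template in message_templates]

msgs = format_messages(
    [
        ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
        ("human", "{text}"),
    ],
    input_language="English",
    output_language="French",
    text="I love programming.",
)
```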
If you were to use this template, this is what it would look like:\ntemplate = (\n\"You are a helpful assistant that translates {input_language} to {output_language}.\"\n)\nsystem_message_prompt = SystemMessagePromptTemplate.from_template(template)\nhuman_template = \"{text}\"\nhuman_message_prompt = HumanMessagePromptTemplate.from_template(human_template)\nchat_prompt = ChatPromptTemplate.from_messages(\n[system_message_prompt, human_message_prompt]\n)", "source": "https://python.langchain.com/docs/integrations/chat/openai"} {"id": "5ed972b2355d-1", "text": "# get a chat completion from the formatted messages\nchat(\nchat_prompt.format_prompt(\ninput_language=\"English\", output_language=\"French\", text=\"I love programming.\"\n).to_messages()\n)\nAIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, example=False)\nFine-tuning\u200b\nYou can call fine-tuned OpenAI models by passing in your corresponding modelName parameter.\nThis generally takes the form of ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID}. For example:\nfine_tuned_model = ChatOpenAI(temperature=0, model_name=\"ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR\")\n\nfine_tuned_model(messages)\nAIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, example=False)", "source": "https://python.langchain.com/docs/integrations/chat/openai"} {"id": "d2ee9e74e94d-0", "text": "Streamlit\nStreamlit is a faster way to build and share data apps. Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front\u2011end experience required. See more examples at streamlit.io/generative-ai.\nIn this guide we will demonstrate how to use StreamlitCallbackHandler to display the thoughts and actions of an agent in an interactive Streamlit app. Try it out with the running app below using the MRKL agent:\nInstallation and Setup\u200b\npip install langchain streamlit\nYou can run streamlit hello to load a sample app and validate your install succeeded. 
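The fine-tuned model name format described in the OpenAI section above, ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID}, can be assembled with a small helper (hypothetical; shown only to make the double colon before the model id explicit):

```python
def ft_model_name(base_model: str, org: str, model_id: str) -> str:
    """Build an OpenAI fine-tuned model identifier of the form
    ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID} (note the double colon)."""
    return f"ft:{base_model}:{org}::{model_id}"

name = ft_model_name("gpt-3.5-turbo-0613", "langchain", "7qTVM5AR")
# name == "ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR"
```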
See full instructions in Streamlit's Getting started documentation.\nDisplay thoughts and actions\u200b\nTo create a StreamlitCallbackHandler, you just need to provide a parent container to render the output.\nfrom langchain.callbacks import StreamlitCallbackHandler\nimport streamlit as st\n\nst_callback = StreamlitCallbackHandler(st.container())\nAdditional keyword arguments to customize the display behavior are described in the API reference.\nScenario 1: Using an Agent with Tools\u200b\nThe primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). You can create an agent in your Streamlit app and simply pass the StreamlitCallbackHandler to agent.run() in order to visualize the thoughts and actions live in your app.\nfrom langchain.llms import OpenAI\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain.callbacks import StreamlitCallbackHandler\nimport streamlit as st\n\nllm = OpenAI(temperature=0, streaming=True)\ntools = load_tools([\"ddg-search\"])\nagent = initialize_agent(\ntools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n)", "source": "https://python.langchain.com/docs/integrations/callbacks/streamlit"} {"id": "d2ee9e74e94d-1", "text": "if prompt := st.chat_input():\nst.chat_message(\"user\").write(prompt)\nwith st.chat_message(\"assistant\"):\nst_callback = StreamlitCallbackHandler(st.container())\nresponse = agent.run(prompt, callbacks=[st_callback])\nst.write(response)\nNote: You will need to set OPENAI_API_KEY for the above app code to run successfully. The easiest way to do this is via Streamlit secrets.toml, or any other local ENV management tool.\nAdditional scenarios\u200b\nCurrently StreamlitCallbackHandler is geared towards use with a LangChain Agent Executor. 
Support for additional agent types, use directly with Chains, etc. will be added in the future.\nYou may also be interested in using StreamlitChatMessageHistory for LangChain.", "source": "https://python.langchain.com/docs/integrations/callbacks/streamlit"} {"id": "0fe459504a73-0", "text": "Argilla\nArgilla is an open-source data curation platform for LLMs. Using Argilla, everyone can build robust language models through faster data curation using both human and machine feedback. We provide support for each step in the MLOps cycle, from data labeling to model monitoring.\nIn this guide we will demonstrate how to track the inputs and responses of your LLM to generate a dataset in Argilla, using the ArgillaCallbackHandler.\nIt's useful to keep track of the inputs and outputs of your LLMs to generate datasets for future fine-tuning. This is especially useful when you're using an LLM to generate data for a specific task, such as question answering, summarization, or translation.\nInstallation and Setup\u200b\npip install argilla --upgrade\npip install openai\nGetting API Credentials\u200b\nTo get the Argilla API credentials, follow these steps:\nGo to your Argilla UI.\nClick on your profile picture and go to \"My settings\".\nThen copy the API Key.\nIn Argilla, the API URL will be the same as the URL of your Argilla UI.\nTo get the OpenAI API credentials, please visit https://platform.openai.com/account/api-keys\nimport os\n\nos.environ[\"ARGILLA_API_URL\"] = \"...\"\nos.environ[\"ARGILLA_API_KEY\"] = \"...\"\n\nos.environ[\"OPENAI_API_KEY\"] = \"...\"\nSetup Argilla\u200b\nTo use the ArgillaCallbackHandler we will need to create a new FeedbackDataset in Argilla to keep track of your LLM experiments. 
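The dataset tracks one record per prompt–response pair. As plain data, the records the callback ends up storing have this shape (a dict sketch; the real Argilla client wraps these in record objects):

```python
def to_feedback_records(pairs):
    """Shape (prompt, response) pairs into the prompt/response
    fields layout used by the guide's FeedbackDataset."""
    return [{"fields": {"prompt": p, "response": r}} for p, r in pairs]

records = to_feedback_records([("Tell me a joke", "A pun.")])
# records[0]["fields"] == {"prompt": "Tell me a joke", "response": "A pun."}
```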
To do so, please use the following code:\nimport argilla as rg\nfrom packaging.version import parse as parse_version", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "0fe459504a73-1", "text": "if parse_version(rg.__version__) < parse_version(\"1.8.0\"):\nraise RuntimeError(\n\"`FeedbackDataset` is only available in Argilla v1.8.0 or higher, please \"\n\"upgrade `argilla` with `pip install argilla --upgrade`.\"\n)\ndataset = rg.FeedbackDataset(\nfields=[\nrg.TextField(name=\"prompt\"),\nrg.TextField(name=\"response\"),\n],\nquestions=[\nrg.RatingQuestion(\nname=\"response-rating\",\ndescription=\"How would you rate the quality of the response?\",\nvalues=[1, 2, 3, 4, 5],\nrequired=True,\n),\nrg.TextQuestion(\nname=\"response-feedback\",\ndescription=\"What feedback do you have for the response?\",\nrequired=False,\n),\n],\nguidelines=\"You're asked to rate the quality of the response and provide feedback.\",\n)\n\nrg.init(\napi_url=os.environ[\"ARGILLA_API_URL\"],\napi_key=os.environ[\"ARGILLA_API_KEY\"],\n)\n\ndataset.push_to_argilla(\"langchain-dataset\");\n\ud83d\udccc NOTE: at the moment, just the prompt-response pairs are supported as FeedbackDataset.fields, so the ArgillaCallbackHandler will just track the prompt i.e. the LLM input, and the response i.e. 
the LLM output.\nTracking\u200b\nTo use the ArgillaCallbackHandler you can either use the following code, or just reproduce one of the examples presented in the following sections.\nfrom langchain.callbacks import ArgillaCallbackHandler\n\nargilla_callback = ArgillaCallbackHandler(\ndataset_name=\"langchain-dataset\",\napi_url=os.environ[\"ARGILLA_API_URL\"],\napi_key=os.environ[\"ARGILLA_API_KEY\"],\n)\nScenario 1: Tracking an LLM\u200b\nFirst, let's just run a single LLM a few times and capture the resulting prompt-response pairs in Argilla.\nfrom langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler\nfrom langchain.llms import OpenAI\n\nargilla_callback = ArgillaCallbackHandler(\ndataset_name=\"langchain-dataset\",\napi_url=os.environ[\"ARGILLA_API_URL\"],\napi_key=os.environ[\"ARGILLA_API_KEY\"],\n)\ncallbacks = [StdOutCallbackHandler(), argilla_callback]", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "0fe459504a73-2", "text": "llm = OpenAI(temperature=0.9, callbacks=callbacks)\nllm.generate([\"Tell me a joke\", \"Tell me a poem\"] * 3)", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "0fe459504a73-3", "text": "LLMResult(generations=[[Generation(text='\\n\\nQ: What did the fish say when he hit the wall? \\nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nThe Moon \\n\\nThe moon is high in the midnight sky,\\nSparkling like a star above.\\nThe night so peaceful, so serene,\\nFilling up the air with love.\\n\\nEver changing and renewing,\\nA never-ending light of grace.\\nThe moon remains a constant view,\\nA reminder of life\u2019s gentle pace.\\n\\nThrough time and space it guides us on,\\nA never-fading beacon of hope.\\nThe moon shines down on us all,\\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ. 
What did one magnet say to the other magnet?\\nA. \"I find you very attractive!\"', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nThe world is charged with the grandeur of God.\\nIt will flame out, like shining from shook foil;\\nIt gathers to a greatness, like the ooze of oil\\nCrushed. Why do men then now not reck his rod?\\n\\nGenerations have trod, have trod, have trod;\\nAnd all is seared with trade; bleared, smeared with toil;\\nAnd wears man's smudge and shares man's smell: the soil\\nIs bare now, nor can foot feel, being shod.\\n\\nAnd for all this, nature is never spent;\\nThere lives the dearest freshness deep down things;\\nAnd though the last lights off the black West went\\nOh, morning, at the brown brink eastward, springs \u2014\\n\\nBecause the Holy Ghost over the bent\\nWorld broods with warm breast and with ah! bright wings.\\n\\n~Gerard Manley Hopkins\", generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ: What did one ocean say to the other ocean?\\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nA poem for", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "0fe459504a73-4", "text": "'stop', 'logprobs': None})], [Generation(text=\"\\n\\nA poem for you\\n\\nOn a field of green\\n\\nThe sky so blue\\n\\nA gentle breeze, the sun above\\n\\nA beautiful world, for us to love\\n\\nLife is a journey, full of surprise\\n\\nFull of joy and full of surprise\\n\\nBe brave and take small steps\\n\\nThe future will be revealed with depth\\n\\nIn the morning, when dawn arrives\\n\\nA fresh start, no reason to hide\\n\\nSomewhere down the road, there's a heart that beats\\n\\nBelieve in yourself, you'll always succeed.\", generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 
24}, 'model_name': 'text-davinci-003'})", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "0fe459504a73-5", "text": "Scenario 2: Tracking an LLM in a chain\u200b\nThen we can create a chain using a prompt template, and then track the initial prompt and the final response in Argilla.\nfrom langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler\nfrom langchain.llms import OpenAI\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "0fe459504a73-6", "text": "argilla_callback = ArgillaCallbackHandler(\ndataset_name=\"langchain-dataset\",\napi_url=os.environ[\"ARGILLA_API_URL\"],\napi_key=os.environ[\"ARGILLA_API_KEY\"],\n)\ncallbacks = [StdOutCallbackHandler(), argilla_callback]\nllm = OpenAI(temperature=0.9, callbacks=callbacks)\n\ntemplate = \"\"\"You are a playwright. Given the title of play, it is your job to write a synopsis for that title.\nTitle: {title}\nPlaywright: This is a synopsis for the above play:\"\"\"\nprompt_template = PromptTemplate(input_variables=[\"title\"], template=template)\nsynopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)\n\ntest_prompts = [{\"title\": \"Documentary about Bigfoot in Paris\"}]\nsynopsis_chain.apply(test_prompts)\n\n\n> Entering new LLMChain chain...\nPrompt after formatting:\nYou are a playwright. Given the title of play, it is your job to write a synopsis for that title.\nTitle: Documentary about Bigfoot in Paris\nPlaywright: This is a synopsis for the above play:\n\n> Finished chain.", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "0fe459504a73-7", "text": "> Finished chain.\n\n\n\n\n\n[{'text': \"\\n\\nDocumentary about Bigfoot in Paris focuses on the story of a documentary filmmaker and their search for evidence of the legendary Bigfoot creature in the city of Paris. 
The play follows the filmmaker as they explore the city, meeting people from all walks of life who have had encounters with the mysterious creature. Through their conversations, the filmmaker unravels the story of Bigfoot and finds out the truth about the creature's presence in Paris. As the story progresses, the filmmaker learns more and more about the mysterious creature, as well as the different perspectives of the people living in the city, and what they think of the creature. In the end, the filmmaker's findings lead them to some surprising and heartwarming conclusions about the creature's existence and the importance it holds in the lives of the people in Paris.\"}]\nScenario 3: Using an Agent with Tools\u200b\nFinally, as a more advanced workflow, you can create an agent that uses some tools. The ArgillaCallbackHandler will keep track of the input and the final output, but not of the intermediate steps/thoughts: given a prompt, we log the original prompt and the final response to it.\nNote that for this scenario we'll be using the Google Search API (Serp API), so you will need both to install google-search-results (pip install google-search-results) and to set the Serp API key as os.environ[\"SERPAPI_API_KEY\"] = \"...\" (you can find it at https://serpapi.com/dashboard); otherwise the example below won't work.\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler\nfrom langchain.llms import OpenAI\n\nargilla_callback = ArgillaCallbackHandler(\ndataset_name=\"langchain-dataset\",\napi_url=os.environ[\"ARGILLA_API_URL\"],\napi_key=os.environ[\"ARGILLA_API_KEY\"],\n)\ncallbacks = [StdOutCallbackHandler(), argilla_callback]\nllm = OpenAI(temperature=0.9, callbacks=callbacks)\n\ntools = load_tools([\"serpapi\"], llm=llm, callbacks=callbacks)\nagent = 
initialize_agent(\ntools,\nllm,\nagent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\ncallbacks=callbacks,\n)\nagent.run(\"Who was the first president of the United States of America?\")", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "0fe459504a73-8", "text": "> Entering new AgentExecutor chain...\nI need to answer a historical question\nAction: Search\nAction Input: \"who was the first president of the United States of America\" \nObservation: George Washington\nThought: George Washington was the first president\nFinal Answer: George Washington was the first president of the United States of America.\n\n> Finished chain.\n\n\n\n\n\n'George Washington was the first president of the United States of America.'", "source": "https://python.langchain.com/docs/integrations/callbacks/argilla"} {"id": "e1c3eadc67ed-0", "text": "Context\nContext provides user analytics for LLM powered products and features.\nWith Context, you can start understanding your users and improving their experiences in less than 30 minutes.\nIn this guide we will show you how to integrate with Context.\nInstallation and Setup\u200b\n$ pip install context-python --upgrade\nGetting API Credentials\u200b\nTo get your Context API token:\nGo to the settings page within your Context account (https://with.context.ai/settings).\nGenerate a new API Token.\nStore this token somewhere secure.\nSetup Context\u200b\nTo use the ContextCallbackHandler, import the handler from Langchain and instantiate it with your Context API token.\nEnsure you have installed the context-python package before using the handler.\nimport os\n\nfrom langchain.callbacks import ContextCallbackHandler\n\ntoken = os.environ[\"CONTEXT_API_TOKEN\"]\n\ncontext_callback = ContextCallbackHandler(token)\nUsage\u200b\nUsing the Context callback within a chat model\u200b\nThe Context callback handler can be used to directly record transcripts between users and AI assistants.\nExample\u200b\nimport os\n\nfrom 
langchain.chat_models import ChatOpenAI\nfrom langchain.schema import (\nSystemMessage,\nHumanMessage,\n)\nfrom langchain.callbacks import ContextCallbackHandler\n\ntoken = os.environ[\"CONTEXT_API_TOKEN\"]\n\nchat = ChatOpenAI(\nheaders={\"user_id\": \"123\"}, temperature=0, callbacks=[ContextCallbackHandler(token)]\n)\n\nmessages = [\nSystemMessage(\ncontent=\"You are a helpful assistant that translates English to French.\"\n),\nHumanMessage(content=\"I love programming.\"),\n]\n\nprint(chat(messages))\nUsing the Context callback within Chains\u200b\nThe Context callback handler can also be used to record the inputs and outputs of chains. Note that intermediate steps of the chain are not recorded - only the starting inputs and final outputs.\nNote: Ensure that you pass the same callback handler object to the chat model and the chain.\nWrong:\nchat = ChatOpenAI(temperature=0.9, callbacks=[ContextCallbackHandler(token)])\nchain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[ContextCallbackHandler(token)])\nCorrect:\nhandler = ContextCallbackHandler(token)\nchat = ChatOpenAI(temperature=0.9, callbacks=[handler])\nchain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[handler])\nExample\u200b\nimport os", "source": "https://python.langchain.com/docs/integrations/callbacks/context"} {"id": "e1c3eadc67ed-1", "text": "from langchain.chat_models import ChatOpenAI\nfrom langchain import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.prompts.chat import (\nChatPromptTemplate,\nHumanMessagePromptTemplate,\n)\nfrom langchain.callbacks import ContextCallbackHandler\n\ntoken = os.environ[\"CONTEXT_API_TOKEN\"]\n\nhuman_message_prompt = HumanMessagePromptTemplate(\nprompt=PromptTemplate(\ntemplate=\"What is a good name for a company that makes {product}?\",\ninput_variables=[\"product\"],\n)\n)\nchat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])\ncallback = ContextCallbackHandler(token)\nchat = 
ChatOpenAI(temperature=0.9, callbacks=[callback])\nchain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])\nprint(chain.run(\"colorful socks\"))", "source": "https://python.langchain.com/docs/integrations/callbacks/context"} {"id": "427bff4d5b51-0", "text": "This example shows how one can track the following while calling OpenAI models via LangChain and Infino:\n# Set your key here.\n# os.environ[\"OPENAI_API_KEY\"] = \"YOUR_API_KEY\"\n\n# Create callback handler. This logs latency, errors, token usage, prompts as well as prompt responses to Infino.\nhandler = InfinoCallbackHandler(\nmodel_id=\"test_openai\", model_version=\"0.1\", verbose=False\n)\n\n# Create LLM.\nllm = OpenAI(temperature=0.1)\n\n# Number of questions to ask the OpenAI model. We limit to a short number here to save $$ while running this demo.\nnum_questions = 10\n\nquestions = questions[0:num_questions]\nfor question in questions:\nprint(question)", "source": "https://python.langchain.com/docs/integrations/callbacks/infino"} {"id": "427bff4d5b51-1", "text": "# We send the question to OpenAI API, with Infino callback.\nllm_result = llm.generate([question], callbacks=[handler])\nprint(llm_result)\nIn what country is Normandy located?\ngenerations=[[Generation(text='\\n\\nNormandy is located in France.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 16, 'completion_tokens': 9, 'prompt_tokens': 7}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('8de21639-acec-4bd1-a12d-8124de1e20da'))\nWhen were the Normans in Normandy?\ngenerations=[[Generation(text='\\n\\nThe Normans first settled in Normandy in the late 9th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 24, 'completion_tokens': 16, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('cf81fc86-250b-4e6e-9d92-2df3bebb019a'))\nFrom which countries did the Norse 
originate?\ngenerations=[[Generation(text='\\n\\nThe Norse originated from Scandinavia, which includes modern-day Norway, Sweden, and Denmark.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 29, 'completion_tokens': 21, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('50f42f5e-b4a4-411a-a049-f92cb573a74f'))\nWho was the Norse leader?", "source": "https://python.langchain.com/docs/integrations/callbacks/infino"} {"id": "427bff4d5b51-2", "text": "Who was the Norse leader?\ngenerations=[[Generation(text='\\n\\nThe most famous Norse leader was the legendary Viking king Ragnar Lodbrok. He is believed to have lived in the 9th century and is renowned for his exploits in England and France.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 45, 'completion_tokens': 39, 'prompt_tokens': 6}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('e32f31cb-ddc9-4863-8e6e-cb7a281a0ada'))\nWhat century did the Normans first gain their separate identity?\ngenerations=[[Generation(text='\\n\\nThe Normans first gained their separate identity in the 11th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 28, 'completion_tokens': 16, 'prompt_tokens': 12}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('da9d8f73-b3b3-4bc5-8495-da8b11462a51'))\nWho gave their name to Normandy in the 1000's and 1100's\ngenerations=[[Generation(text='\\n\\nThe Normans, a people from northern France, gave their name to Normandy in the 1000s and 1100s. 
The Normans were descended from Viking settlers who had come to the region in the late 800s.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 58, 'completion_tokens': 45, 'prompt_tokens': 13}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('bb5829bf-b6a6-4429-adfa-414ac5be46e5'))\nWhat is France a region of?", "source": "https://python.langchain.com/docs/integrations/callbacks/infino"} {"id": "427bff4d5b51-3", "text": "What is France a region of?\ngenerations=[[Generation(text='\\n\\nFrance is a region of Europe.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 16, 'completion_tokens': 9, 'prompt_tokens': 7}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('6943880b-b4e4-4c74-9ca1-8c03c10f7e9c'))\nWho did King Charles III swear fealty to?\ngenerations=[[Generation(text='\\n\\nKing Charles III swore fealty to Pope Innocent III.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 23, 'completion_tokens': 13, 'prompt_tokens': 10}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('c91fd663-09e6-4d00-b746-4c7fd96f9ceb'))\nWhen did the Frankish identity emerge?\ngenerations=[[Generation(text='\\n\\nThe Frankish identity began to emerge in the late 5th century, when the Franks began to expand their power and influence in the region. The Franks were a Germanic tribe that had migrated to the area from the east and had established a kingdom in what is now modern-day France. 
The Franks were eventually able to establish a powerful kingdom that lasted until the 10th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 86, 'completion_tokens': 78, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('23f86775-e592-4cb8-baa3-46ebe74305b2'))\nWho was the duke in the battle of Hastings?", "source": "https://python.langchain.com/docs/integrations/callbacks/infino"} {"id": "427bff4d5b51-4", "text": "Who was the duke in the battle of Hastings?\ngenerations=[[Generation(text='\\n\\nThe Duke of Normandy, William the Conqueror, was the leader of the Norman forces at the Battle of Hastings in 1066.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 39, 'completion_tokens': 28, 'prompt_tokens': 11}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('ad5b7984-8758-4d95-a5eb-ee56e0218f6b'))\nWe now use matplotlib to create graphs of latency, errors, and tokens consumed.\n# Helper function to create a graph using matplotlib.\ndef plot(data, title):\n    data = json.loads(data)", "source": "https://python.langchain.com/docs/integrations/callbacks/infino"} {"id": "427bff4d5b51-5", "text": "    # Extract x and y values from the data\n    timestamps = [item[\"time\"] for item in data]\n    dates = [dt.datetime.fromtimestamp(ts) for ts in timestamps]\n    y = [item[\"value\"] for item in data]\n\n    plt.rcParams[\"figure.figsize\"] = [6, 4]\n    plt.subplots_adjust(bottom=0.2)\n    plt.xticks(rotation=25)\n    ax = plt.gca()\n    xfmt = md.DateFormatter(\"%Y-%m-%d %H:%M:%S\")\n    ax.xaxis.set_major_formatter(xfmt)\n\n    # Create the plot\n    plt.plot(dates, y)\n\n    # Set labels and title\n    plt.xlabel(\"Time\")\n    plt.ylabel(\"Value\")\n    plt.title(title)\n\n    plt.show()\n\n\nresponse = client.search_ts(\"__name__\", \"latency\", 0, int(time.time()))\nplot(response.text, \"Latency\")\n\nresponse = client.search_ts(\"__name__\", \"error\", 0, int(time.time()))\nplot(response.text, \"Errors\")\n\nresponse = client.search_ts(\"__name__\", \"prompt_tokens\", 0, int(time.time()))\nplot(response.text, \"Prompt Tokens\")\n\nresponse = client.search_ts(\"__name__\", \"completion_tokens\", 0, int(time.time()))\nplot(response.text, \"Completion Tokens\")\n\nresponse = client.search_ts(\"__name__\", \"total_tokens\", 0, int(time.time()))\nplot(response.text, \"Total Tokens\")\n\n# Search for a particular prompt text.\nquery = \"normandy\"\nresponse = client.search_log(query, 0, int(time.time()))\nprint(\"Results for\", query, \":\", response.text)\n\nprint(\"===\")\n\nquery = \"king charles III\"\nresponse = client.search_log(query, 0, int(time.time()))\nprint(\"Results for\", query, \":\", response.text)", "source": "https://python.langchain.com/docs/integrations/callbacks/infino"} {"id": "f0b758da0083-0", "text": "Label Studio\nLabel Studio is an open-source data labeling platform that gives LangChain flexibility when labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\nIn this guide, you will learn how to connect a LangChain pipeline to Label Studio to:\nAggregate all input prompts, conversations, and responses in a single Label Studio project. This consolidates all the data in one place for easier labeling and analysis.\nRefine prompts and responses to create a dataset for supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) scenarios. The labeled data can be used to further train the LLM and improve its performance.\nEvaluate model responses through human feedback. 
Label Studio provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration.\nInstallation and setup\nFirst, install the latest versions of Label Studio and the Label Studio API client:\npip install -U label-studio label-studio-sdk openai\nNext, run label-studio on the command line to start the local Label Studio instance at http://localhost:8080. See the Label Studio installation guide for more options.\nYou'll need a token to make API calls.\nOpen your Label Studio instance in your browser, go to Account & Settings > Access Token, and copy the key.\nSet environment variables with your Label Studio URL, API key, and OpenAI API key:\nimport os", "source": "https://python.langchain.com/docs/integrations/callbacks/labelstudio"} {"id": "f0b758da0083-1", "text": "os.environ['LABEL_STUDIO_URL'] = '' # e.g. http://localhost:8080\nos.environ['LABEL_STUDIO_API_KEY'] = ''\nos.environ['OPENAI_API_KEY'] = ''\nCollecting LLM prompts and responses\nThe data used for labeling is stored in projects within Label Studio. Every project is identified by an XML configuration that details the specifications for input and output data.\nCreate a project that takes human input in text format and outputs an editable LLM response in a text area:
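The XML configuration the sentence above introduces did not survive extraction. As a hedged sketch only: the tag and attribute names below follow Label Studio's standard labeling-config vocabulary (`Text`, `TextArea`), and `start_project` is the label-studio-sdk call for creating a project from a config; the exact configuration used by the original guide may differ.

```python
# Hypothetical sketch of a labeling config of the kind the guide describes:
# a read-only text prompt paired with an editable text area for the LLM
# response. The original guide's exact config was lost in extraction.
LABEL_CONFIG = """
<View>
  <Text name="prompt" value="$prompt"/>
  <TextArea name="response" toName="prompt"
            editable="true" maxSubmissions="1"/>
</View>
"""


def create_project(url: str, api_key: str, title: str = "LLM responses"):
    """Create a Label Studio project from the config (needs a running instance)."""
    # Imported lazily so this module can be inspected without the SDK installed.
    from label_studio_sdk import Client

    ls = Client(url=url, api_key=api_key)
    # start_project() creates a new project with the given labeling configuration.
    return ls.start_project(title=title, label_config=LABEL_CONFIG)
```

With a local instance running, calling `create_project(os.environ['LABEL_STUDIO_URL'], os.environ['LABEL_STUDIO_API_KEY'])` would create the project into which prompts and responses are collected.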