GPT4All#
Source: https://python.langchain.com/en/latest/integrations/gpt4all.html
Contents: Installation and Setup; Usage; GPT4All; Model File
MLflow#
Source: https://python.langchain.com/en/latest/integrations/mlflow_tracking.html
This notebook shows how to track your LangChain experiments in your MLflow server.

```
!pip install azureml-mlflow
!pip install pandas
!pip install textstat
!pip install spacy
!pip install openai
!pip install google-search-results
!python -m spacy download en_core_web_sm
```

```python
import os
os.environ...
```
```python
test_prompts = [
    {
        "title": "documentary about good video games that push the boundary of game design"
    },
]
synopsis_chain.apply(test_prompts)
mlflow_callback.flush_tracker(synopsis_chain)
```

```python
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType

# SCENARIO 3 - Age...
```
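The cells that construct `mlflow_callback` and `synopsis_chain` are elided from this fragment. A minimal sketch of how they are typically wired up, assuming the `MlflowCallbackHandler` from `langchain.callbacks` and an OpenAI LLM; the prompt text and model settings here are illustrative, not the notebook's exact values:

```python
from langchain.callbacks import MlflowCallbackHandler
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Callback handler that logs prompts, completions, and metrics to MLflow.
mlflow_callback = MlflowCallbackHandler()

# Attach the callback to both the LLM and the chain so every call is tracked.
llm = OpenAI(temperature=0, callbacks=[mlflow_callback])
template = """Given the title of a film, write a synopsis for it.
Title: {title}
Synopsis:"""
prompt = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt, callbacks=[mlflow_callback])
```

After a run, `mlflow_callback.flush_tracker(synopsis_chain)` (shown above) pushes the buffered records to the tracking server.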
ForefrontAI#
Source: https://python.langchain.com/en/latest/integrations/forefrontai.html
Contents: Installation and Setup; Wrappers; LLM
This page covers how to use the ForefrontAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific ForefrontAI wrappers.
Installation and Setup#
Get a ForefrontAI API key and set i...
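The setup instructions are cut off above. For illustration, a minimal sketch of using the wrapper, assuming the key is read from a `FOREFRONTAI_API_KEY` environment variable and that the endpoint URL for your model comes from the ForefrontAI dashboard (both details are assumptions, not confirmed by the fragment):

```python
import os
from langchain.llms import ForefrontAI

# Assumed environment variable name for the API key.
os.environ["FOREFRONTAI_API_KEY"] = "<your-api-key>"

# endpoint_url is a placeholder; use your model's endpoint from the dashboard.
llm = ForefrontAI(endpoint_url="<your-model-endpoint-url>")
print(llm("Tell me a joke about open-source software."))
```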
Runhouse#
Source: https://python.langchain.com/en/latest/integrations/runhouse.html
Contents: Installation and Setup; Self-hosted LLMs; Self-hosted Embeddings
This page covers how to use the Runhouse ecosystem within LangChain.
It is broken into three parts: installation and setup, LLMs, and Embeddings.
Installation and Setup#
Install the Python SDK with `pip install runhouse`...
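The page is truncated here. For illustration, a minimal sketch of the self-hosted LLM path, assuming the `SelfHostedHuggingFaceLLM` wrapper from `langchain.llms` and a GPU cluster allocated through Runhouse (the cluster name, instance type, and package list are placeholders):

```python
import runhouse as rh
from langchain.llms import SelfHostedHuggingFaceLLM

# Allocate (or reuse) a GPU box via Runhouse; values are placeholders.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# Run a Hugging Face model on that hardware; call it like any LangChain LLM.
llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2",
    hardware=gpu,
    model_reqs=["pip:./", "transformers", "torch"],
)
print(llm("What is the capital of France?"))
```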
Beam#
Source: https://python.langchain.com/en/latest/integrations/beam.html
Contents: Installation and Setup; Wrappers; LLM; Define your Beam app; Deploy your Beam app; Call your Beam app
This page covers how to use Beam within LangChain.
It is broken into two parts: installation and setup, and then references to specific Beam wrappers.
Installation and Setup#
Create an accoun...
This returns the GPT-2 text response to your prompt.

```python
response = llm._call("Running machine learning on a remote GPU")
```

An example script that deploys the model and calls it would be:

```python
from langchain.llms.beam import Beam
import time

llm = Beam(model_name="gpt2",
           name="langchain-gpt2-test",
           cpu=8,
           ...
```
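The script above is cut off. A fuller sketch of what it might look like, assuming the remaining constructor parameters (memory, gpu, python_version, python_packages, max_length) and that the wrapper exposes a `_deploy()` step before `_call()`, as the fragment's call suggests; every value below is illustrative:

```python
from langchain.llms.beam import Beam

# All parameter values are illustrative assumptions, not canonical settings.
llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2-test",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=[
        "transformers",
        "torch",
    ],
    max_length="50",
    verbose=False,
)

llm._deploy()  # build and deploy the Beam app
response = llm._call("Running machine learning on a remote GPU")
print(response)
```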
Chat Over Documents with Vectara#
Source: https://python.langchain.com/en/latest/integrations/vectara/vectara_chat.html
Contents: Pass in chat history; Return Source Documents; ConversationalRetrievalChain with search_distance; ConversationalRetrievalChain with map_reduce; ConversationalRetrievalChain with Question Answering with sources; ConversationalRetrievalChain with streaming to stdout; get_chat_history Function
```python
qa = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory)
```

```
<class 'langchain.vectorstores.vectara.Vectara'>
```
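The cells that create `llm`, `retriever`, and `memory` are elided from this fragment. A minimal sketch of the usual wiring, assuming an OpenAI LLM and a Vectara vector store built earlier in the notebook (`vectorstore` is an assumed name):

```python
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Buffer memory keyed the way ConversationalRetrievalChain expects.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = OpenAI(temperature=0)
retriever = vectorstore.as_retriever()  # vectorstore: a Vectara instance (assumed)
qa = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory)
```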
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
result["answer"]
" The president said that Ketanji Brown Jackson is one of the nation's top legal m... | https://python.langchain.com/en/latest/integrations/vectara/vectara_chat.html |
```python
result['answer']
```

```
' Justice Stephen Breyer.'
```

Return Source Documents#
You can also easily return source documents from the ConversationalRetrievalChain. This is useful when you want to inspect which documents were returned.

```python
qa = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), return_source_docu...
```
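The call above is truncated; with `return_source_documents=True`, the retrieved documents come back alongside the answer. A short sketch of completing the call and reading them back:

```python
qa = ConversationalRetrievalChain.from_llm(
    llm, vectorstore.as_retriever(), return_source_documents=True
)
result = qa({"question": query, "chat_history": []})
result["source_documents"][0]  # the first Document used for this answer
```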
ConversationalRetrievalChain with map_reduce#
We can also use different types of combine-documents chains with ConversationalRetrievalChain.

```python
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE...
```
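The import above is cut off (it points at the condense-question prompt). A sketch of how the pieces are typically assembled, assuming `CONDENSE_QUESTION_PROMPT` is the intended import and reusing `llm` and `vectorstore` from earlier:

```python
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT

# Rewrites a follow-up question into a standalone question.
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
# Combines the retrieved documents with a map_reduce strategy.
doc_chain = load_qa_chain(llm, chain_type="map_reduce")

chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)
```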
```python
result = chain({"question": query, "chat_history": chat_history})
result['answer']
```

```
' The president did not mention Ketanji Brown Jackson.\nSOURCES: ../../modules/state_of_the_union.txt'
```

ConversationalRetrievalChain with streaming to stdout#
Output from the chain will be streamed to stdout token by token in this example...
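The streaming example's setup is elided. A minimal sketch of how token streaming to stdout is usually enabled, assuming an OpenAI answering LLM while the condense-question step keeps a non-streaming `llm` (`llm` and `vectorstore` are names carried over from earlier):

```python
from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT

# The answering LLM streams each generated token straight to stdout.
streaming_llm = OpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff")
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)

qa = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
)
```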
```python
chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = qa({"question": query, "chat_history": chat_history})
```

```
Justice Stephen Breyer.
```

get_chat_history Function#
You can also specify a get_chat_history function, which can be used to format the chat_history string.

```python
def get_chat_hist...
```
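The definition above is cut off. A sketch of what such a formatter might look like, assuming the chat history arrives as `(human, ai)` tuples (the label format is an illustrative choice):

```python
def get_chat_history(inputs) -> str:
    # Render each (human, ai) exchange as two labeled lines.
    res = []
    for human, ai in inputs:
        res.append(f"Human: {human}\nAI: {ai}")
    return "\n".join(res)

qa = ConversationalRetrievalChain.from_llm(
    llm, vectorstore.as_retriever(), get_chat_history=get_chat_history
)
```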
Vectara Text Generation#
Source: https://python.langchain.com/en/latest/integrations/vectara/vectara_text_generation.html
Contents: Prepare Data; Set Up Vector DB; Set Up LLM Chain with Custom Prompt; Generate Text
This notebook is based on chat_vector_db and adapted to Vectara.
Prepare Data#
First, we prepare the data. For this example, we fetch a documentation site that consists...
```python
from langchain.text_splitter import CharacterTextSplitter

# Split each fetched document into 1024-character chunks.
source_chunks = []
splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)
for source in sources:
    for chunk in splitter.split_text(source.page_content):
        source_chunks.append(chunk)
```

```
Cloning into '.'...
```

Set Up Vector DB#
Now that we have the documentation content in chunks, let’s put...
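The cell is truncated here. A minimal sketch of loading the chunks into Vectara, assuming credentials are supplied through environment variables (e.g. VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID, VECTARA_API_KEY) and that the index variable is named `search_index` (both assumptions):

```python
from langchain.vectorstores import Vectara

# Vectara hosts its own embeddings, so raw text chunks can be indexed directly.
search_index = Vectara.from_texts(source_chunks)
```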
```python
print(chain.apply(inputs))
generate_blog_post("environment variables")
```
```
[{'text': '\n\nEnvironment variables are an essential part of any development workflow. They provide a way to store and access information that is specific to the environment in which the code is running. This can be especially useful when working with different versions of a language or framework, or when running code...

...and any environment variables.\n\nUsing environment variables with the Deno CLI tasks extension is a great way to ensure that your code is running in the correct environment. For example, if you are running a test suite,'}, {'text': '\n\nEnvironment variables are an important part of any programming language, and they ...

...&& echo $VAR && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR\'))"\n```\n\nThis would output the following:\n\n```\nhello\nDeno: undefined\n```\n\nAs you can see, the value stored in the shell variable is not available in the spawned process.\n\n'}, {'text': '\n\nWhen it comes to developing applications, envir...

...is `DENO_DIR`. This environment variable is used to store the directory where Deno will store its files. This includes the Deno executable, the Deno cache, and the Deno configuration files. By setting this environment variable, you can ensure that Deno will always be able to find the files it needs.\n\nFinally, there i...

...`Deno.env` has getter and setter methods. Here is example usage:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_'}]
```