Dataset columns: id (string, length 14–16), text (string, length 36–2.73k), source (string, length 49–117).
82d3899ee82b-0
.ipynb .pdf Self Hosted Embeddings Self Hosted Embeddings# Let’s load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes. from langchain.embeddings import ( SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, SelfHostedHuggingFaceInstructEmbeddi...
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html
82d3899ee82b-1
tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) return pipeline("feature-extraction", model=model, tokenizer=tokenizer) def inference_fn(pipeline, prompt): # Return last hidden state of the model if isinstance(prompt, list): return [emb[...
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/self-hosted.html
8e2d5a87acb0-0
.ipynb .pdf Sentence Transformers Embeddings Sentence Transformers Embeddings# SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package. SentenceTransformers is a...
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sentence_transformers.html
1789f617e7ac-0
.ipynb .pdf SageMaker Endpoint Embeddings SageMaker Endpoint Embeddings# Let’s load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. For instructions on how to do this, please see here. Note: In order to handle batched requests, you will need to...
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sagemaker-endpoint.html
1789f617e7ac-1
query_result = embeddings.embed_query("foo") doc_results = embeddings.embed_documents(["foo"]) doc_results previous OpenAI next Self Hosted Embeddings By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 28, 2023.
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/sagemaker-endpoint.html
b02cf6bbbc66-0
.ipynb .pdf Getting Started Contents PromptTemplates LLMChain Streaming Getting Started# This notebook covers how to get started with chat models. The interface is based around messages rather than raw text. from langchain.chat_models import ChatOpenAI from langchain import PromptTemplate, LLMChain from langchain.pro...
https://python.langchain.com/en/latest/modules/models/chat/getting_started.html
b02cf6bbbc66-1
[ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="I love programming.") ], [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="I love artificial intelligenc...
https://python.langchain.com/en/latest/modules/models/chat/getting_started.html
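The message-based batch shown in the chunk above can be sketched with plain dataclasses (hypothetical stand-ins for langchain's `SystemMessage`/`HumanMessage`; the real classes live in `langchain.schema`):

```python
from dataclasses import dataclass

# Minimal stand-ins for langchain.schema message types (sketch only).
@dataclass
class SystemMessage:
    content: str

@dataclass
class HumanMessage:
    content: str

# A batch of two conversations, as would be passed to chat.generate([...]):
# each conversation is a list of messages, not a raw string.
batch = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming."),
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence."),
    ],
]
```

The point of the structure is that each generation request carries an ordered list of role-tagged messages rather than one concatenated prompt string.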
b02cf6bbbc66-2
human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt]) # get a chat completion from the formatted messages chat(chat_prompt.format_prompt(input_language="English", output_langua...
https://python.langchain.com/en/latest/modules/models/chat/getting_started.html
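The `ChatPromptTemplate.format_prompt` call above can be approximated with plain `str.format` (a simplification of langchain's prompt classes; `format_chat_prompt` and the `(role, content)` tuples are illustrative, not the real API):

```python
# Templates matching the chunk above: a system template with two variables
# and a human template that passes the text through.
system_template = ("You are a helpful assistant that translates "
                   "{input_language} to {output_language}.")
human_template = "{text}"

def format_chat_prompt(input_language: str, output_language: str, text: str) -> list:
    """Render both templates into role-tagged (role, content) pairs."""
    return [
        ("system", system_template.format(input_language=input_language,
                                          output_language=output_language)),
        ("human", human_template.format(text=text)),
    ]

messages = format_chat_prompt("English", "French", "I love programming.")
```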
b02cf6bbbc66-3
Sparkling water, you're my vibe Verse 2: No sugar, no calories, just pure bliss A drink that's hard to resist It's the perfect way to quench my thirst A drink that always comes first Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Bridge:...
https://python.langchain.com/en/latest/modules/models/chat/getting_started.html
02db748172a2-0
.rst .pdf How-To Guides How-To Guides# The examples here all address certain “how-to” guides for working with chat models. How to use few shot examples How to stream responses
https://python.langchain.com/en/latest/modules/models/chat/how_to_guides.html
26e9f4d4f71a-0
.rst .pdf Integrations Integrations# The examples here all highlight how to integrate with different chat models. Anthropic Azure Google Cloud Platform Vertex AI PaLM OpenAI PromptLayer ChatOpenAI
https://python.langchain.com/en/latest/modules/models/chat/integrations.html
934937a1db44-0
.ipynb .pdf Anthropic Contents ChatAnthropic also supports async and streaming functionality: Anthropic# This notebook covers how to get started with Anthropic chat models. from langchain.chat_models import ChatAnthropic from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, ...
https://python.langchain.com/en/latest/modules/models/chat/integrations/anthropic.html
966b0ac8d325-0
.ipynb .pdf Azure Azure# This notebook goes over how to connect to an Azure hosted OpenAI endpoint from langchain.chat_models import AzureChatOpenAI from langchain.schema import HumanMessage BASE_URL = "https://${TODO}.openai.azure.com" API_KEY = "..." DEPLOYMENT_NAME = "chat" model = AzureChatOpenAI( openai_api_ba...
https://python.langchain.com/en/latest/modules/models/chat/integrations/azure_chat_openai.html
d7210ab26bd4-0
.ipynb .pdf Google Cloud Platform Vertex AI PaLM Google Cloud Platform Vertex AI PaLM# Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. PaLM API on Vertex AI is a Preview offering, su...
https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html
d7210ab26bd4-1
HumanMessage, SystemMessage ) chat = ChatVertexAI() messages = [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="Translate this sentence from English to French. I love programming.") ] chat(messages) AIMessage(content='Sure, here is the translat...
https://python.langchain.com/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html
e38ab95d7f4d-0
.ipynb .pdf OpenAI OpenAI# This notebook covers how to get started with OpenAI chat models. from langchain.chat_models import ChatOpenAI from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema impo...
https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html
e38ab95d7f4d-1
AIMessage(content="J'adore la programmation.", additional_kwargs={})
https://python.langchain.com/en/latest/modules/models/chat/integrations/openai.html
15b715e69d53-0
.ipynb .pdf PromptLayer ChatOpenAI Contents Install PromptLayer Imports Set the Environment API Key Use the PromptLayerOpenAI LLM like normal Using PromptLayer Track PromptLayer ChatOpenAI# This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests. Install PromptLayer# The promp...
https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html
15b715e69d53-1
chat = PromptLayerChatOpenAI(return_pl_id=True) chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]]) for res in chat_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100) Using this allows you to track...
https://python.langchain.com/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html
50550b76cfcc-0
.ipynb .pdf How to use few shot examples Contents Alternating Human/AI messages System Messages How to use few shot examples# This notebook covers how to use few shot examples in chat models. There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abst...
https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html
50550b76cfcc-1
template="You are a helpful assistant that translates English to pirate." system_message_prompt = SystemMessagePromptTemplate.from_template(template) example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"}) example_ai = SystemMessagePromptTemplate.from_template("Argh m...
https://python.langchain.com/en/latest/modules/models/chat/examples/few_shot_examples.html
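The alternating human/AI pattern above can be sketched without langchain: inject example turns before the real input. `build_few_shot_messages` and the `(role, content)` tuples are illustrative names, not the library's API:

```python
# Sketch: few-shot prompting for chat models by prepending example
# human/AI exchanges to the message list.
def build_few_shot_messages(examples, user_input):
    """examples: list of (human_text, ai_text) pairs demonstrating the task."""
    messages = [("system", "You are a helpful assistant that translates English to pirate.")]
    for human_text, ai_text in examples:
        messages.append(("human", human_text))
        messages.append(("ai", ai_text))
    # The real query goes last, so the model continues the demonstrated pattern.
    messages.append(("human", user_input))
    return messages

msgs = build_few_shot_messages([("Hi", "Argh me mateys")], "I love programming.")
```

As the chunk notes, there is no firm consensus on few-shot chat prompting; the alternative it describes is tagging examples as system messages with a `name` field instead of human/AI turns.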
f911f27be603-0
.ipynb .pdf How to stream responses How to stream responses# This notebook goes over how to use streaming with a chat model. from langchain.chat_models import ChatOpenAI from langchain.schema import ( HumanMessage, ) from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler chat = ChatOpenAI(s...
https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html
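The streaming setup above hinges on a callback handler whose `on_llm_new_token` is invoked per token. A minimal sketch of that pattern, with a collecting handler in place of `StreamingStdOutCallbackHandler` (all names here are illustrative):

```python
# Sketch of the streaming-callback pattern used by chat models with
# streaming=True: each generated token is forwarded to the handler.
class CollectingCallbackHandler:
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str) -> None:
        # StreamingStdOutCallbackHandler would print here instead.
        self.tokens.append(token)

def stream_completion(token_iter, handler):
    """Drive the callback for each token, then return the full text."""
    for token in token_iter:
        handler.on_llm_new_token(token)
    return "".join(handler.tokens)

handler = CollectingCallbackHandler()
text = stream_completion(iter(["Spark", "ling", " water"]), handler)
```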
dfac0854e5ce-0
Source code for langchain.text_splitter """Functionality for splitting text.""" from __future__ import annotations import copy import logging from abc import ABC, abstractmethod from typing import ( AbstractSet, Any, Callable, Collection, Iterable, List, Literal, Optional, Sequence, ...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
dfac0854e5ce-1
documents = [] for i, text in enumerate(texts): for chunk in self.split_text(text): new_doc = Document( page_content=chunk, metadata=copy.deepcopy(_metadatas[i]) ) documents.append(new_doc) return documents [docs] def spl...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
dfac0854e5ce-2
doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) # Keep on popping if: # - we have a larger chunk than in the chunk overlap # - or if we still have any chunks and the length is long ...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
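The merge-and-overlap logic hinted at above can be reduced to a fixed-size sketch: consecutive chunks advance by `chunk_size - chunk_overlap` so each chunk repeats the tail of the previous one. This is a simplification (the real splitter merges on separators and uses a pluggable length function), and `split_with_overlap` is a hypothetical helper:

```python
# Sketch of character chunking with overlap, the core idea behind
# CharacterTextSplitter's chunk_size / chunk_overlap parameters.
def split_with_overlap(text: str, chunk_size: int, chunk_overlap: int):
    step = chunk_size - chunk_overlap  # how far each new chunk advances
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_with_overlap("abcdefghij", chunk_size=4, chunk_overlap=2)
# Each chunk shares its first two characters with the previous chunk's tail.
```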
dfac0854e5ce-3
) return cls(length_function=_huggingface_tokenizer_length, **kwargs) [docs] @classmethod def from_tiktoken_encoder( cls: Type[TS], encoding_name: str = "gpt2", model_name: Optional[str] = None, allowed_special: Union[Literal["all"], AbstractSet[str]] = set(), disa...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
dfac0854e5ce-4
) -> Sequence[Document]: """Transform sequence of documents by splitting them.""" return self.split_documents(list(documents)) [docs] async def atransform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Asynchronously transform a sequence ...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
dfac0854e5ce-5
raise ImportError( "Could not import tiktoken python package. " "This is needed in order for TokenTextSplitter to work. " "Please install it with `pip install tiktoken`." ) if model_name is not None: enc = tiktoken.encoding_for_model(model_name)...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
dfac0854e5ce-6
[docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" final_chunks = [] # Get appropriate separator to use separator = self._separators[-1] for _s in self._separators: if _s == "": separator = _s ...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
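The separator-selection step shown in that `split_text` chunk can be sketched standalone: walk the ordered separator list and use the first one actually present in the text, with `""` (split into characters) as the final fallback. `pick_separator` is an illustrative name for the inline loop:

```python
# Sketch of RecursiveCharacterTextSplitter's separator selection:
# separators are tried from coarsest ("\n\n") to finest ("").
def pick_separator(text: str, separators):
    separator = separators[-1]  # default to the last (finest) separator
    for s in separators:
        if s == "":
            return s  # nothing coarser matched; split character-by-character
        if s in text:
            return s
    return separator

sep = pick_separator("para one\n\npara two", ["\n\n", "\n", " ", ""])
```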
dfac0854e5ce-7
"NLTK is not installed, please install it with `pip install nltk`." ) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" # First we naively split the large input into a bunch of smaller ones. splits...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
dfac0854e5ce-8
"\n## ", "\n### ", "\n#### ", "\n##### ", "\n###### ", # Note the alternative syntax for headings (below) is not handled here # Heading level 2 # --------------- # End of code block "```\n\n", # Horiz...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
dfac0854e5ce-9
"\n\\begin{align}", "$$", "$", # Now split by the normal type of lines " ", "", ] super().__init__(separators=separators, **kwargs) [docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Pyth...
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
fca60faea55b-0
Source code for langchain.document_transformers """Transform documents""" from typing import Any, Callable, List, Sequence import numpy as np from pydantic import BaseModel, Field from langchain.embeddings.base import Embeddings from langchain.math_utils import cosine_similarity from langchain.schema import BaseDocumen...
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
fca60faea55b-1
for first_idx, second_idx in redundant_stacked[redundant_sorted]: if first_idx in included_idxs and second_idx in included_idxs: # Default to dropping the second document of any highly similar pair. included_idxs.remove(second_idx) return list(sorted(included_idxs)) def _get_embeddin...
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
fca60faea55b-2
"""Filter down documents.""" stateful_documents = get_stateful_documents(documents) embedded_documents = _get_embeddings_from_stateful_docs( self.embeddings, stateful_documents ) included_idxs = _filter_similar_embeddings( embedded_documents, self.similarity_fn, s...
https://python.langchain.com/en/latest/_modules/langchain/document_transformers.html
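The redundancy filter sketched in those chunks pairs every two embeddings, and when their cosine similarity exceeds a threshold it drops the second of the pair. A pure-Python stand-in for the numpy-based implementation (function names here are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def filter_redundant(embeddings, threshold=0.95):
    """Return indices of embeddings to keep, dropping the second of any
    highly similar pair (mirrors the default-to-dropping-second behavior)."""
    included = set(range(len(embeddings)))
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if i in included and j in included:
                if cosine_similarity(embeddings[i], embeddings[j]) > threshold:
                    included.discard(j)
    return sorted(included)

kept = filter_redundant([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]])
```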
7ae4bd41fa38-0
Source code for langchain.requests """Lightweight wrapper around requests library, with async support.""" from contextlib import asynccontextmanager from typing import Any, AsyncGenerator, Dict, Optional import aiohttp import requests from pydantic import BaseModel, Extra class Requests(BaseModel): """Wrapper aroun...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
7ae4bd41fa38-1
def delete(self, url: str, **kwargs: Any) -> requests.Response: """DELETE the URL and return the text.""" return requests.delete(url, headers=self.headers, **kwargs) @asynccontextmanager async def _arequest( self, method: str, url: str, **kwargs: Any ) -> AsyncGenerator[aiohttp.Clien...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
7ae4bd41fa38-2
"""PATCH the URL and return the text asynchronously.""" async with self._arequest("PATCH", url, **kwargs) as response: yield response @asynccontextmanager async def aput( self, url: str, data: Dict[str, Any], **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: ...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
7ae4bd41fa38-3
"""POST to the URL and return the text.""" return self.requests.post(url, data, **kwargs).text [docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PATCH the URL and return the text.""" return self.requests.patch(url, data, **kwargs).text [docs] def put(self, ur...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
7ae4bd41fa38-4
"""PUT the URL and return the text asynchronously.""" async with self.requests.aput(url, **kwargs) as response: return await response.text() [docs] async def adelete(self, url: str, **kwargs: Any) -> str: """DELETE the URL and return the text asynchronously.""" async with self.req...
https://python.langchain.com/en/latest/_modules/langchain/requests.html
d4d2fd8942b9-0
Source code for langchain.llms.modal """Wrapper around Modal API.""" import logging from typing import Any, Dict, List, Mapping, Optional import requests from pydantic import Extra, Field, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain....
https://python.langchain.com/en/latest/_modules/langchain/llms/modal.html
d4d2fd8942b9-1
logger.warning( f"""{field_name} was transferred to model_kwargs. Please confirm that {field_name} is what you intended.""" ) extra[field_name] = values.pop(field_name) values["model_kwargs"] = extra return values @property d...
https://python.langchain.com/en/latest/_modules/langchain/llms/modal.html
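The validator pattern in that chunk, moving any unrecognized field into `model_kwargs` with a warning, can be sketched without pydantic. `build_extra` and `required_fields` are simplified stand-ins for the root validator and the model's declared fields:

```python
import warnings

def build_extra(values: dict, required_fields: set) -> dict:
    """Move any field not declared on the model into model_kwargs,
    warning so the caller can confirm the name was intended."""
    extra = dict(values.get("model_kwargs", {}))
    for field_name in list(values):
        if field_name not in required_fields and field_name != "model_kwargs":
            warnings.warn(f"{field_name} was transferred to model_kwargs. "
                          f"Please confirm that {field_name} is what you intended.")
            extra[field_name] = values.pop(field_name)
    values["model_kwargs"] = extra
    return values

vals = build_extra({"endpoint_url": "https://example", "top_p": 0.9},
                   {"endpoint_url"})
```

This lets users pass arbitrary sampling parameters through the wrapper while keeping the pydantic model itself strict (`Extra.forbid`).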
6b499c8a5f66-0
Source code for langchain.llms.fake """Fake LLM wrapper for testing purposes.""" from typing import Any, List, Mapping, Optional from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM [docs]class FakeListLLM(LLM): """Fake LLM wrapper for testing purposes.""" respons...
https://python.langchain.com/en/latest/_modules/langchain/llms/fake.html
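The idea behind `FakeListLLM` is a test double that ignores the prompt and returns canned responses in order. A minimal sketch (this stand-in is not subclassed from langchain's `LLM` base and cycles instead of exhausting, purely for illustration):

```python
# Sketch of a fake LLM for tests: canned responses, returned in sequence.
class FakeListLLM:
    def __init__(self, responses):
        self.responses = responses
        self.i = 0

    def __call__(self, prompt: str) -> str:
        # The prompt is ignored; only call order matters.
        response = self.responses[self.i % len(self.responses)]
        self.i += 1
        return response

llm = FakeListLLM(["foo", "bar"])
first, second, third = llm("a"), llm("b"), llm("c")
```

Such a double lets chain logic be tested deterministically, with no network calls or API keys.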
3e10f91c5189-0
Source code for langchain.llms.llamacpp """Wrapper around llama.cpp.""" import logging from typing import Any, Dict, Generator, List, Optional from pydantic import Field, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM logger = logging.getLogger(__name...
https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html
3e10f91c5189-1
f16_kv: bool = Field(True, alias="f16_kv") """Use half-precision for key/value cache.""" logits_all: bool = Field(False, alias="logits_all") """Return logits for all tokens, not just the last token.""" vocab_only: bool = Field(False, alias="vocab_only") """Only load the vocabulary, no weights.""" ...
https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html
3e10f91c5189-2
"""Whether to echo the prompt.""" stop: Optional[List[str]] = [] """A list of strings to stop generation when encountered.""" repeat_penalty: Optional[float] = 1.1 """The penalty to apply to repeated tokens.""" top_k: Optional[int] = 40 """The top-k value to use for sampling.""" last_n_token...
https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html
3e10f91c5189-3
except ImportError: raise ModuleNotFoundError( "Could not import llama-cpp-python library. " "Please install the llama-cpp-python library to " "use this embedding model: pip install llama-cpp-python" ) except Exception as e: rai...
https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html
3e10f91c5189-4
Returns: Dictionary containing the combined parameters. """ # Raise error if stop sequences are in both input and default params if self.stop and stop is not None: raise ValueError("`stop` found in both the input and default params.") params = self._default_params...
https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html
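The parameter-combining rule in that chunk, that stop sequences may come from either the instance defaults or the call but not both, can be sketched as a standalone function (`combine_params` is an illustrative name for the inline logic):

```python
# Sketch of merging default params with call-time stop sequences,
# raising when `stop` is supplied in both places.
def combine_params(default_params: dict, default_stop, call_stop):
    if default_stop and call_stop is not None:
        raise ValueError("`stop` found in both the input and default params.")
    params = dict(default_params)
    params["stop_sequences"] = call_stop if call_stop is not None else (default_stop or [])
    return params

params = combine_params({"temperature": 0.8}, default_stop=None, call_stop=["\n"])

# Supplying stop in both places is rejected:
conflict = False
try:
    combine_params({}, default_stop=["a"], call_stop=["b"])
except ValueError:
    conflict = True
```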
3e10f91c5189-5
result = self.client(prompt=prompt, **params) return result["choices"][0]["text"] [docs] def stream( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, ) -> Generator[Dict, None, None]: """Yields results...
https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html
3e10f91c5189-6
for chunk in result: token = chunk["choices"][0]["text"] log_probs = chunk["choices"][0].get("logprobs", None) if run_manager: run_manager.on_llm_new_token( token=token, verbose=self.verbose, log_probs=log_probs ) yield ...
https://python.langchain.com/en/latest/_modules/langchain/llms/llamacpp.html
5063b10a5eeb-0
Source code for langchain.llms.huggingface_text_gen_inference """Wrapper around Huggingface text generation inference API.""" from functools import partial from typing import Any, Dict, List, Optional from pydantic import Extra, Field, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html
5063b10a5eeb-1
inference_server_url = "http://localhost:8010/", max_new_tokens = 512, top_k = 10, top_p = 0.95, typical_p = 0.95, temperature = 0.01, repetition_penalty = 1.03, ) print(llm("What is Deep Learning?"))...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html
5063b10a5eeb-2
@root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that python package exists in environment.""" try: import text_generation values["client"] = text_generation.Client( values["inference_server_url"], timeout=values["timeout"] ...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html
5063b10a5eeb-3
text_callback = None if run_manager: text_callback = partial( run_manager.on_llm_new_token, verbose=self.verbose ) params = { "stop_sequences": stop, "max_new_tokens": self.max_new_tokens, "top_k"...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_text_gen_inference.html
0b3d4b6b1b91-0
Source code for langchain.llms.cohere """Wrapper around Cohere APIs.""" import logging from typing import Any, Dict, List, Optional from pydantic import Extra, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.utils import enforce_sto...
https://python.langchain.com/en/latest/_modules/langchain/llms/cohere.html
0b3d4b6b1b91-1
"""Penalizes repeated tokens. Between 0 and 1.""" truncate: Optional[str] = None """Specify how the client handles inputs longer than the maximum token length: Truncate from START, END or NONE""" cohere_api_key: Optional[str] = None stop: Optional[List[str]] = None class Config: """Confi...
https://python.langchain.com/en/latest/_modules/langchain/llms/cohere.html
0b3d4b6b1b91-2
def _llm_type(self) -> str: """Return type of llm.""" return "cohere" def _call( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, ) -> str: """Call out to Cohere's generate endpoint. Args:...
https://python.langchain.com/en/latest/_modules/langchain/llms/cohere.html
3450c1e17d4b-0
Source code for langchain.llms.openlm from typing import Any, Dict from pydantic import root_validator from langchain.llms.openai import BaseOpenAI [docs]class OpenLM(BaseOpenAI): @property def _invocation_params(self) -> Dict[str, Any]: return {**{"model": self.model_name}, **super()._invocation_params...
https://python.langchain.com/en/latest/_modules/langchain/llms/openlm.html
a89d9f86a69b-0
Source code for langchain.llms.self_hosted """Run model inference on self-hosted remote hardware.""" import importlib.util import logging import pickle from typing import Any, Callable, List, Mapping, Optional from pydantic import Extra from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llm...
https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html
a89d9f86a69b-1
) if device < 0 and cuda_device_count > 0: logger.warning( "Device has %d GPUs available. " "Provide device={deviceId} to `from_model_id` to use available " "GPUs for execution. deviceId is -1 for CPU and " "can be a positive integer ass...
https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html
a89d9f86a69b-2
llm = SelfHostedPipeline( model_load_fn=load_pipeline, hardware=gpu, model_reqs=model_reqs, inference_fn=inference_fn ) Example for <2GB model (can be serialized and sent directly to the server): .. code-block:: python from langchain.ll...
https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html
a89d9f86a69b-3
load_fn_kwargs: Optional[dict] = None """Key word arguments to pass to the model load function.""" model_reqs: List[str] = ["./", "torch"] """Requirements to install on hardware to inference the model.""" class Config: """Configuration for this pydantic object.""" extra = Extra.forbid ...
https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html
a89d9f86a69b-4
if not isinstance(pipeline, str): logger.warning( "Serializing pipeline to send to remote hardware. " "Note, it can be quite slow " "to serialize and send large models with each execution. " "Consider sending the pipeline" "to th...
https://python.langchain.com/en/latest/_modules/langchain/llms/self_hosted.html
6b9911e26da6-0
Source code for langchain.llms.huggingface_pipeline """Wrapper around HuggingFace Pipeline APIs.""" import importlib.util import logging from typing import Any, List, Mapping, Optional from pydantic import Extra from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from la...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html
6b9911e26da6-1
""" pipeline: Any #: :meta private: model_id: str = DEFAULT_MODEL_ID """Model name to use.""" model_kwargs: Optional[dict] = None """Key word arguments passed to the model.""" pipeline_kwargs: Optional[dict] = None """Key word arguments passed to the pipeline.""" class Config: "...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html
6b9911e26da6-2
else: raise ValueError( f"Got invalid task {task}, " f"currently only {VALID_TASKS} are supported" ) except ImportError as e: raise ValueError( f"Could not load the {task} model due to missing dependencies." ...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html
6b9911e26da6-3
) return cls( pipeline=pipeline, model_id=model_id, model_kwargs=_model_kwargs, pipeline_kwargs=_pipeline_kwargs, **kwargs, ) @property def _identifying_params(self) -> Mapping[str, Any]: """Get the identifying parameters.""" ...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_pipeline.html
8a6de997a515-0
Source code for langchain.llms.forefrontai """Wrapper around ForefrontAI APIs.""" from typing import Any, Dict, List, Mapping, Optional import requests from pydantic import Extra, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.util...
https://python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html
8a6de997a515-1
@root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key exists in environment.""" forefrontai_api_key = get_from_dict_or_env( values, "forefrontai_api_key", "FOREFRONTAI_API_KEY" ) values["forefrontai_api_key"] = forefrontai_api_key...
https://python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html
8a6de997a515-2
""" response = requests.post( url=self.endpoint_url, headers={ "Authorization": f"Bearer {self.forefrontai_api_key}", "Content-Type": "application/json", }, json={"text": prompt, **self._default_params}, ) response_j...
https://python.langchain.com/en/latest/_modules/langchain/llms/forefrontai.html
cb28ace1ac4b-0
Source code for langchain.llms.aleph_alpha """Wrapper around Aleph Alpha APIs.""" from typing import Any, Dict, List, Optional, Sequence from pydantic import Extra, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.utils import enforc...
https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html
cb28ace1ac4b-1
"""Total probability mass of tokens to consider at each step.""" presence_penalty: float = 0.0 """Penalizes repeated tokens.""" frequency_penalty: float = 0.0 """Penalizes repeated tokens according to frequency.""" repetition_penalties_include_prompt: Optional[bool] = False """Flag deciding whet...
https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html
cb28ace1ac4b-2
"""Echo the prompt in the completion.""" use_multiplicative_frequency_penalty: bool = False sequence_penalty: float = 0.0 sequence_penalty_min_length: int = 2 use_multiplicative_sequence_penalty: bool = False completion_bias_inclusion: Optional[Sequence[str]] = None completion_bias_inclusion_fir...
https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html
cb28ace1ac4b-3
"""Validate that api key and python package exists in environment.""" aleph_alpha_api_key = get_from_dict_or_env( values, "aleph_alpha_api_key", "ALEPH_ALPHA_API_KEY" ) try: import aleph_alpha_client values["client"] = aleph_alpha_client.Client(token=aleph_alp...
https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html
cb28ace1ac4b-4
"minimum_tokens": self.minimum_tokens, "echo": self.echo, "use_multiplicative_frequency_penalty": self.use_multiplicative_frequency_penalty, # noqa: E501 "sequence_penalty": self.sequence_penalty, "sequence_penalty_min_length": self.sequence_penalty_min_length, ...
https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html
cb28ace1ac4b-5
Args: prompt: The prompt to pass into the model. stop: Optional list of stop words to use when generating. Returns: The string generated by the model. Example: .. code-block:: python response = aleph_alpha("Tell me a joke.") """ ...
https://python.langchain.com/en/latest/_modules/langchain/llms/aleph_alpha.html
ac7d3a8d9209-0
Source code for langchain.llms.huggingface_endpoint """Wrapper around HuggingFace APIs.""" from typing import Any, Dict, List, Mapping, Optional import requests from pydantic import Extra, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain....
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html
ac7d3a8d9209-1
huggingfacehub_api_token: Optional[str] = None class Config: """Configuration for this pydantic object.""" extra = Extra.forbid @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key and python package exists in environment.""" hugging...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html
ac7d3a8d9209-2
return "huggingface_endpoint" def _call( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, ) -> str: """Call out to HuggingFace Hub's inference endpoint. Args: prompt: The prompt to pass into t...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html
ac7d3a8d9209-3
elif self.task == "summarization": text = generated_text[0]["summary_text"] else: raise ValueError( f"Got invalid task {self.task}, " f"currently only {VALID_TASKS} are supported" ) if stop is not None: # This is a bit hacky...
https://python.langchain.com/en/latest/_modules/langchain/llms/huggingface_endpoint.html
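The "bit hacky" stop handling referred to above is post-hoc truncation: since the hosted endpoint may not honor stop sequences server-side, the wrapper cuts the returned text at the first occurrence of any stop string. A sketch of that helper (langchain ships a similar `enforce_stop_tokens` in `langchain.llms.utils`; this version adds `re.escape` for safety and is an approximation):

```python
import re

def enforce_stop_tokens(text: str, stop) -> str:
    """Truncate generated text at the first occurrence of any stop sequence."""
    pattern = "|".join(re.escape(s) for s in stop)
    return re.split(pattern, text)[0]

cut = enforce_stop_tokens("Hello world\nObservation: done", ["\nObservation:"])
```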
ec34ca0ae46e-0
Source code for langchain.llms.petals """Wrapper around Petals API.""" import logging from typing import Any, Dict, List, Mapping, Optional from pydantic import Extra, Field, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.utils imp...
https://python.langchain.com/en/latest/_modules/langchain/llms/petals.html
ec34ca0ae46e-1
"""Whether or not to use sampling; use greedy decoding otherwise.""" max_length: Optional[int] = None """The maximum length of the sequence to be generated.""" model_kwargs: Dict[str, Any] = Field(default_factory=dict) """Holds any model parameters valid for `create` call not explicitly specified.""...
https://python.langchain.com/en/latest/_modules/langchain/llms/petals.html
ec34ca0ae46e-2
from petals import DistributedBloomForCausalLM from transformers import BloomTokenizerFast model_name = values["model_name"] values["tokenizer"] = BloomTokenizerFast.from_pretrained(model_name) values["client"] = DistributedBloomForCausalLM.from_pretrained(model_name) ...
https://python.langchain.com/en/latest/_modules/langchain/llms/petals.html
ec34ca0ae46e-3
        """Call the Petals API."""
        params = self._default_params
        inputs = self.tokenizer(prompt, return_tensors="pt")["input_ids"]
        outputs = self.client.generate(inputs, **params)
        text = self.tokenizer.decode(outputs[0])
        if stop is not None:
            # I believe this is required since...
https://python.langchain.com/en/latest/_modules/langchain/llms/petals.html
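The `_call` chunk above follows a common tokenize → generate → decode shape. A self-contained sketch with toy stand-ins for the Bloom tokenizer and the Petals client (both classes here are invented for illustration and do not exist in any library):

```python
from typing import List


class ToyTokenizer:
    """Stand-in for a real tokenizer: maps words to ids and back."""

    def encode(self, text: str) -> List[int]:
        return [hash(w) % 1000 for w in text.split()]

    def decode(self, ids: List[int]) -> str:
        return " ".join(f"<{i}>" for i in ids)


class ToyClient:
    """Stand-in for the model: echoes the input ids plus one new id."""

    def generate(self, inputs: List[int], **params) -> List[List[int]]:
        return [inputs + [42]]


def call_model(prompt: str, tokenizer, client, **params) -> str:
    # Same three-step shape as the Petals _call above.
    inputs = tokenizer.encode(prompt)
    outputs = client.generate(inputs, **params)
    return tokenizer.decode(outputs[0])
```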
84a35223cb7b-0
Source code for langchain.llms.cerebriumai

"""Wrapper around CerebriumAI API."""
import logging
from typing import Any, Dict, List, Mapping, Optional

from pydantic import Extra, Field, root_validator

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms...
https://python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html
84a35223cb7b-1
        all_required_field_names = {field.alias for field in cls.__fields__.values()}
        extra = values.get("model_kwargs", {})
        for field_name in list(values):
            if field_name not in all_required_field_names:
                if field_name in extra:
                    raise ValueError(f"Found {field_name...
https://python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html
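The validator chunk above routes any field not declared on the model into `model_kwargs`, raising if the same name appears in both places. A standalone sketch of that routing logic (the function name and `required` parameter are invented for illustration):

```python
from typing import Any, Dict, Set


def build_model_kwargs(values: Dict[str, Any], required: Set[str]) -> Dict[str, Any]:
    """Move any field not in `required` into model_kwargs; duplicates raise."""
    extra = dict(values.get("model_kwargs", {}))
    for name in list(values):
        if name not in required and name != "model_kwargs":
            if name in extra:
                raise ValueError(f"Found {name} supplied twice.")
            # Pop the unknown field off the top level into the kwargs dict.
            extra[name] = values.pop(name)
    values["model_kwargs"] = extra
    return values
```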
84a35223cb7b-2
        try:
            from cerebrium import model_api_request
        except ImportError:
            raise ValueError(
                "Could not import cerebrium python package. "
                "Please install it with `pip install cerebrium`."
            )
        params = self.model_kwargs or {}
        response = mod...
https://python.langchain.com/en/latest/_modules/langchain/llms/cerebriumai.html
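The `try`/`except ImportError` above is the standard optional-dependency guard: defer the import until call time and fail with an install hint. Generalized as a helper (this function is a sketch, not part of langchain):

```python
import importlib


def load_optional_dependency(module_name: str, pip_name: str):
    """Import an optional dependency, raising with an install hint on failure."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        raise ValueError(
            f"Could not import {module_name} python package. "
            f"Please install it with `pip install {pip_name}`."
        )
```

Deferring the import keeps the wrapper importable even when the backend package is absent.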
0fc5d829b002-0
Source code for langchain.llms.deepinfra

"""Wrapper around DeepInfra APIs."""
from typing import Any, Dict, List, Mapping, Optional

import requests
from pydantic import Extra, root_validator

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils im...
https://python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html
0fc5d829b002-1
        return values

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {
            **{"model_id": self.model_id},
            **{"model_kwargs": self.model_kwargs},
        }

    @property
    def _llm_type(self) -> str:
        """Return type ...
https://python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html
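The `_identifying_params` property above merges sub-dicts with `**` unpacking so the result is one flat mapping, which langchain uses for caching and serialization keys. The same merge as a free function (the function signature is illustrative):

```python
from typing import Any, Dict, Mapping


def identifying_params(model_id: str, model_kwargs: Dict[str, Any]) -> Mapping[str, Any]:
    """Flatten the model id and kwargs into a single mapping via ** unpacking."""
    return {**{"model_id": model_id}, **{"model_kwargs": model_kwargs}}
```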
0fc5d829b002-2
            text = enforce_stop_tokens(text, stop)
        return text
https://python.langchain.com/en/latest/_modules/langchain/llms/deepinfra.html
0afc048b109c-0
Source code for langchain.llms.anthropic

"""Wrapper around Anthropic APIs."""
import re
import warnings
from typing import Any, Callable, Dict, Generator, List, Mapping, Optional, Tuple, Union

from pydantic import BaseModel, Extra, root_validator

from langchain.callbacks.manager import (
    AsyncCallbackManagerForLLMR...
https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html
0afc048b109c-1
        anthropic_api_key = get_from_dict_or_env(
            values, "anthropic_api_key", "ANTHROPIC_API_KEY"
        )
        try:
            import anthropic

            values["client"] = anthropic.Client(
                api_key=anthropic_api_key,
                default_request_timeout=values["default_request_timeout"]...
https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html
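The `get_from_dict_or_env` call above resolves a credential from the values dict first, then the environment. A sketch of that helper under the assumption of that resolution order (the optional `default` parameter is an addition for illustration):

```python
import os
from typing import Dict, Optional


def get_from_dict_or_env(
    values: Dict, key: str, env_key: str, default: Optional[str] = None
) -> str:
    """Look up a value in the dict first, then the environment, then a default."""
    if values.get(key):
        return values[key]
    if os.environ.get(env_key):
        return os.environ[env_key]
    if default is not None:
        return default
    raise ValueError(
        f"Did not find {key}, please add an environment variable `{env_key}` "
        f"which contains it, or pass `{key}` as a named parameter."
    )
```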
0afc048b109c-2
        if stop is None:
            stop = []

        # Never want model to invent new turns of Human / Assistant dialog.
        stop.extend([self.HUMAN_PROMPT])

        return stop

[docs]class Anthropic(LLM, _AnthropicCommon):
    r"""Wrapper around Anthropic's large language models.

    To use, you should have the ``anthro...
https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html
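The stop-handling chunk above always appends `HUMAN_PROMPT` so Claude's text-completion API never invents a new human turn. Combined with the `\n\nHuman:` / `\n\nAssistant:` framing Anthropic documents for that API, a sketch might look like this (the exact branch taken when a prompt already starts with the human marker is an assumption):

```python
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"


def wrap_prompt(prompt: str) -> str:
    """Frame a bare prompt as a Human turn followed by an Assistant cue."""
    if prompt.startswith(HUMAN_PROMPT):
        return prompt  # assume the caller already framed it
    return f"{HUMAN_PROMPT} {prompt}{AI_PROMPT}"


def get_default_stop(stop=None):
    """Always stop on a new Human turn, as the chunk above does."""
    stop = list(stop or [])
    stop.append(HUMAN_PROMPT)
    return stop
```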
0afc048b109c-3
        extra = Extra.forbid

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "anthropic-llm"

    def _wrap_prompt(self, prompt: str) -> str:
        if not self.HUMAN_PROMPT or not self.AI_PROMPT:
            raise NameError("Please ensure the anthropic package is loaded")
        ...
https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html
0afc048b109c-4
        if self.streaming:
            stream_resp = self.client.completion_stream(
                prompt=self._wrap_prompt(prompt),
                stop_sequences=stop,
                **self._default_params,
            )
            current_completion = ""
            for data in stream_resp:
                delta = data["...
https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html
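The streaming branch above tracks `current_completion` and a per-event `delta`. Assuming each stream event carries the full completion so far (as early Anthropic SDKs did), the delta is the suffix past what was already seen. A self-contained sketch with a fake stream (both functions here are illustrative):

```python
from typing import Dict, Iterator, List


def fake_stream(chunks: List[str]) -> Iterator[Dict[str, str]]:
    """Stand-in for completion_stream: yields events with the text so far."""
    text = ""
    for c in chunks:
        text += c
        yield {"completion": text}


def consume_stream(stream, callback=None) -> str:
    """Accumulate events; the delta is everything past the text seen so far."""
    current_completion = ""
    for data in stream:
        delta = data["completion"][len(current_completion):]
        current_completion = data["completion"]
        if callback:
            callback(delta)  # e.g. forward each token to a UI
    return current_completion
```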
0afc048b109c-5
                **self._default_params,
            )
            return response["completion"]

[docs]    def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:
        r"""Call Anthropic completion_stream and return the resulting generator.

        BETA: this is a beta feature while we figure out the right abstraction....
https://python.langchain.com/en/latest/_modules/langchain/llms/anthropic.html
044657510b9b-0
Source code for langchain.llms.google_palm

"""Wrapper around Google's PaLM Text APIs."""
from __future__ import annotations

import logging
from typing import Any, Callable, Dict, List, Optional

from pydantic import BaseModel, root_validator
from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception...
https://python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html
044657510b9b-1
    ),
    before_sleep=before_sleep_log(logger, logging.WARNING),
)

def generate_with_retry(llm: GooglePalm, **kwargs: Any) -> Any:
    """Use tenacity to retry the completion call."""
    retry_decorator = _create_retry_decorator()

    @retry_decorator
    def _generate_with_retry(**kwargs: Any) -> Any:
        r...
https://python.langchain.com/en/latest/_modules/langchain/llms/google_palm.html
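The `generate_with_retry` chunk above wraps the API call in a tenacity retry decorator with exponential backoff. A dependency-free sketch of the same idea (this helper is an illustration, not the tenacity API; the delay values are arbitrary):

```python
import time
from typing import Any, Callable


def retry_with_backoff(
    fn: Callable[[], Any], max_attempts: int = 5, base_delay: float = 0.01
) -> Any:
    """Retry fn with exponential backoff, re-raising after the last attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            # Double the wait on each failure: base, 2*base, 4*base, ...
            time.sleep(base_delay * (2 ** (attempt - 1)))
```

Tenacity adds niceties on top of this shape, such as retrying only specific exception types and logging before each sleep.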