Dataset columns: id (string, length 14-15), text (string, length 49-2.47k), source (string, length 61-166).
d1a9bcf3f58a-4
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrenc...
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
d1a9bcf3f58a-5
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu...
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
d1a9bcf3f58a-6
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of pr...
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
d1a9bcf3f58a-7
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → L...
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
d1a9bcf3f58a-8
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to t...
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
d1a9bcf3f58a-9
.. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Union[Promp...
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
d1a9bcf3f58a-10
property lc_serializable: bool¶ Return whether or not the class is serializable. Examples using LlamaCpp¶ Llama.cpp Llama-cpp Running LLMs locally Use local LLMs WebResearchRetriever
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
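To ground the LlamaCpp reference above, here is a minimal usage sketch; the model path and generation parameters are illustrative assumptions, not values from the documentation.

.. code-block:: python

    from langchain.llms import LlamaCpp

    # Path to a local GGUF/GGML model file -- an assumed example path.
    llm = LlamaCpp(
        model_path="./models/llama-2-7b.Q4_K_M.gguf",
        temperature=0.7,
        max_tokens=256,
        n_ctx=2048,  # context window size
    )

    # __call__ returns the top candidate generation as a string; stop words
    # cut the output at their first occurrence.
    print(llm("Q: Name the planets in the solar system. A:", stop=["Q:"]))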
135ca0f4bc84-0
langchain.llms.bedrock.Bedrock¶ class langchain.llms.bedrock.Bedrock[source]¶ Bases: LLM Bedrock models. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-1
Keyword arguments to pass to the model. param region_name: Optional[str] = None¶ The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable or the region specified in ~/.aws/config in case it is not provided here. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-2
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-3
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrenc...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-4
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-5
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of pr...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-6
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → L...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-7
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to t...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-8
.. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Union[Promp...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
135ca0f4bc84-9
property lc_serializable: bool¶ Return whether or not the class is serializable. Examples using Bedrock¶ Bedrock
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
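A minimal Bedrock sketch, assuming the standard boto3 credential resolution described above; the model_id, region, and profile name are illustrative.

.. code-block:: python

    from langchain.llms import Bedrock

    llm = Bedrock(
        model_id="amazon.titan-text-express-v1",   # assumed example model id
        region_name="us-west-2",                   # else AWS_DEFAULT_REGION applies
        credentials_profile_name="bedrock-admin",  # optional named profile
        model_kwargs={"temperature": 0.5},
    )

    print(llm("Explain what Amazon Bedrock is in one sentence."))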
bbe039271308-0
langchain.llms.ai21.AI21PenaltyData¶ class langchain.llms.ai21.AI21PenaltyData[source]¶ Bases: BaseModel Parameters for AI21 penalty data. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param applyToEmojis:...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21PenaltyData.html
bbe039271308-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, ex...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21PenaltyData.html
bbe039271308-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21PenaltyData.html
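A sketch of how AI21PenaltyData is typically constructed and handed to the AI21 LLM; the field values are illustrative, and the presencePenalty wiring is an assumption based on AI21's parameter conventions rather than the truncated text above.

.. code-block:: python

    from langchain.llms import AI21
    from langchain.llms.ai21 import AI21PenaltyData

    # applyToEmojis is the field visible in the truncated doc; scale controls
    # the penalty strength.
    penalty = AI21PenaltyData(scale=2, applyToEmojis=False)

    # Assumed wiring: AI21 penalty parameters accept AI21PenaltyData instances.
    llm = AI21(ai21_api_key="...", presencePenalty=penalty)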
fe2c9d929317-0
langchain.llms.huggingface_hub.HuggingFaceHub¶ class langchain.llms.huggingface_hub.HuggingFaceHub[source]¶ Bases: LLM HuggingFaceHub models. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parame...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
fe2c9d929317-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
fe2c9d929317-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
fe2c9d929317-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
fe2c9d929317-4
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
fe2c9d929317-5
Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these subst...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
fe2c9d929317-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
fe2c9d929317-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
fe2c9d929317-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html
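A minimal HuggingFaceHub sketch; the repo_id and model_kwargs are illustrative, and the token can come from the HUGGINGFACEHUB_API_TOKEN environment variable as the class description notes.

.. code-block:: python

    from langchain.llms import HuggingFaceHub

    llm = HuggingFaceHub(
        repo_id="google/flan-t5-xl",  # assumed example repository
        model_kwargs={"temperature": 0.5, "max_length": 64},
    )

    print(llm("Translate to German: How are you?"))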
ee1d58e28f84-0
langchain_experimental.llms.anthropic_functions.AnthropicFunctions¶ class langchain_experimental.llms.anthropic_functions.AnthropicFunctions[source]¶ Bases: BaseChatModel Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a v...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
ee1d58e28f84-1
Top Level call async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batche...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
ee1d58e28f84-2
Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword argu...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
ee1d58e28f84-3
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
ee1d58e28f84-4
This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion ...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
ee1d58e28f84-5
Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → BaseMessageChunk¶ json(*...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
ee1d58e28f84-6
Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string....
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
ee1d58e28f84-7
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.AnthropicFunctions.html
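A hedged sketch of the experimental AnthropicFunctions wrapper, which emulates OpenAI-style function calling on top of a ChatAnthropic model; the llm constructor argument and the function schema below are assumptions drawn from the experimental package's conventions, not from the truncated text above.

.. code-block:: python

    from langchain.chat_models import ChatAnthropic
    from langchain.schema import HumanMessage
    from langchain_experimental.llms.anthropic_functions import AnthropicFunctions

    # Assumed wiring: wrap an Anthropic chat model.
    model = AnthropicFunctions(llm=ChatAnthropic(model="claude-2"))

    functions = [{
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }]

    # bind() attaches the schema; a function call, when chosen, appears in the
    # response message's additional_kwargs.
    response = model.bind(functions=functions).invoke(
        [HumanMessage(content="What is the weather in Boston?")]
    )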
cee0c5c0a317-0
langchain.llms.bedrock.LLMInputOutputAdapter¶ class langchain.llms.bedrock.LLMInputOutputAdapter[source]¶ Adapter class to prepare inputs from LangChain into the format that the LLM model expects. It also provides a helper function to extract the generated text from the model response. Methods __init__() prepare_input(provid...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.LLMInputOutputAdapter.html
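A sketch of the adapter's role, assuming prepare_input(provider, prompt, model_kwargs) is the classmethod whose name the truncated method list begins to show; the provider and kwargs are illustrative.

.. code-block:: python

    from langchain.llms.bedrock import LLMInputOutputAdapter

    # Build the provider-specific request body for a Bedrock invoke_model call.
    body = LLMInputOutputAdapter.prepare_input(
        provider="anthropic",
        prompt="Hello, Claude.",
        model_kwargs={"max_tokens_to_sample": 256},
    )
    # body is a plain dict, ready to be JSON-serialized into the request.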
f99c31591771-0
langchain.llms.base.LLM¶ class langchain.llms.base.LLM[source]¶ Bases: BaseLLM Base LLM abstract class. The purpose of this class is to expose a simpler interface for working with LLMs, rather than expect the user to implement the full _generate method. Create a new model by parsing and validating input data from keywo...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
f99c31591771-1
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
f99c31591771-2
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
f99c31591771-3
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod cons...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
f99c31591771-4
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
f99c31591771-5
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Pa...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
f99c31591771-6
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
f99c31591771-7
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
f99c31591771-8
property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is s...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.LLM.html
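Since the stated purpose of langchain.llms.base.LLM is to let users implement only _call rather than the full _generate method, here is a toy subclass illustrating that contract; the EchoLLM class is hypothetical.

.. code-block:: python

    from typing import Any, List, Optional

    from langchain.callbacks.manager import CallbackManagerForLLMRun
    from langchain.llms.base import LLM

    class EchoLLM(LLM):
        """Toy LLM that echoes the prompt instead of calling a model."""

        @property
        def _llm_type(self) -> str:
            return "echo"

        def _call(
            self,
            prompt: str,
            stop: Optional[List[str]] = None,
            run_manager: Optional[CallbackManagerForLLMRun] = None,
            **kwargs: Any,
        ) -> str:
            # A real subclass would call a model API here.
            return prompt

    # The inherited __call__/generate/predict machinery works unchanged.
    print(EchoLLM()("Hello"))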
ceca5fb9e20c-0
langchain.llms.google_palm.GooglePalm¶ class langchain.llms.google_palm.GooglePalm[source]¶ Bases: BaseLLM, BaseModel Google PaLM models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
ceca5fb9e20c-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
ceca5fb9e20c-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
ceca5fb9e20c-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
ceca5fb9e20c-4
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
ceca5fb9e20c-5
Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these subst...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
ceca5fb9e20c-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
ceca5fb9e20c-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
ceca5fb9e20c-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.GooglePalm.html
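A minimal GooglePalm sketch; the key placeholder and sampling values are illustrative (the key can also be supplied via environment configuration).

.. code-block:: python

    from langchain.llms import GooglePalm

    llm = GooglePalm(
        google_api_key="...",  # or configure via environment
        temperature=0.2,
        max_output_tokens=128,
    )

    print(llm("Write a haiku about the ocean."))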
d194c6a2cc45-0
langchain.llms.mosaicml.MosaicML¶ class langchain.llms.mosaicml.MosaicML[source]¶ Bases: LLM MosaicML LLM service. To use, you should have the environment variable MOSAICML_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.llms import MosaicML endpoint_url = (...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
d194c6a2cc45-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
d194c6a2cc45-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
d194c6a2cc45-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
d194c6a2cc45-4
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
d194c6a2cc45-5
Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these subst...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
d194c6a2cc45-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
d194c6a2cc45-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
d194c6a2cc45-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
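A minimal MosaicML sketch; the endpoint URL follows the pattern the truncated example above begins with and should be treated as an assumption to be replaced with your own deployment.

.. code-block:: python

    from langchain.llms import MosaicML

    llm = MosaicML(
        endpoint_url="https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict",
        model_kwargs={"do_sample": True, "temperature": 0.8},
    )

    print(llm("Explain MosaicML inference in one sentence."))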
0409f808ee4e-0
langchain.llms.human.HumanInputLLM¶ class langchain.llms.human.HumanInputLLM[source]¶ Bases: LLM It returns user input as the response. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[b...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0409f808ee4e-1
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0409f808ee4e-2
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0409f808ee4e-3
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod cons...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0409f808ee4e-4
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0409f808ee4e-5
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Pa...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0409f808ee4e-6
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0409f808ee4e-7
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
0409f808ee4e-8
property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is s...
https://api.python.langchain.com/en/latest/llms/langchain.llms.human.HumanInputLLM.html
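A sketch of HumanInputLLM, which turns a person at the keyboard into the "model"; the prompt_func field name is an assumption about this class's configuration, on the understanding that the defaults print the prompt and read a reply from stdin.

.. code-block:: python

    from langchain.llms.human import HumanInputLLM

    llm = HumanInputLLM(
        prompt_func=lambda prompt: print(f"\n=== PROMPT ===\n{prompt}\n=== END ==="),
    )

    # Blocks until a human types the "model" response on stdin.
    # response = llm("What would a helpful assistant reply here?")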
099af18a7cf4-0
langchain.llms.base.update_cache¶ langchain.llms.base.update_cache(existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → Optional[dict][source]¶ Update the cache and get the LLM output.
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.update_cache.html
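update_cache is internal plumbing: after a cache miss, it writes fresh generations for the missing prompt indexes back into the global LLM cache. Here is a sketch of the user-facing setup that exercises it; the InMemoryCache choice and OpenAI model are illustrative.

.. code-block:: python

    import langchain
    from langchain.cache import InMemoryCache
    from langchain.llms import OpenAI

    # Enable the global cache that the base LLM code consults and updates.
    langchain.llm_cache = InMemoryCache()

    llm = OpenAI(model_name="text-davinci-003")
    llm("Tell me a joke")  # first call hits the API and populates the cache
    llm("Tell me a joke")  # identical call is now served from the cache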
ce4987a62fe6-0
langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint¶ class langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint[source]¶ Bases: LLM, BaseModel Azure ML Online Endpoint models. Example azure_llm = AzureMLOnlineEndpoint( endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score", endpoin...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
ce4987a62fe6-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
ce4987a62fe6-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
ce4987a62fe6-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
ce4987a62fe6-4
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
ce4987a62fe6-5
Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these subst...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
ce4987a62fe6-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
ce4987a62fe6-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
ce4987a62fe6-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
c6f27e621a68-0
langchain.llms.azureml_endpoint.ContentFormatterBase¶ class langchain.llms.azureml_endpoint.ContentFormatterBase[source]¶ Transform request and response of AzureML endpoint to match with required schema. Attributes accepts The MIME type of the response data returned from the endpoint content_type The MIME type of the i...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.ContentFormatterBase.html
29c4bb3e3142-0
langchain.llms.azureml_endpoint.DollyContentFormatter¶ class langchain.llms.azureml_endpoint.DollyContentFormatter[source]¶ Content handler for the Dolly-v2-12b model Attributes accepts The MIME type of the response data returned from the endpoint content_type The MIME type of the input data passed to the endpoint Meth...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.DollyContentFormatter.html
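Tying the three AzureML entries above together, a sketch that completes the truncated endpoint example using the Dolly content formatter; the URL and key are placeholders.

.. code-block:: python

    from langchain.llms.azureml_endpoint import (
        AzureMLOnlineEndpoint,
        DollyContentFormatter,
    )

    llm = AzureMLOnlineEndpoint(
        endpoint_url="https://<your-endpoint>.<your-region>.inference.ml.azure.com/score",
        endpoint_api_key="my-api-key",
        # The formatter maps requests and responses to the schema the deployed
        # Dolly-v2-12b model expects, per ContentFormatterBase.
        content_formatter=DollyContentFormatter(),
    )

    print(llm("Why is the sky blue?"))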
fb48fdbb5666-0
langchain.llms.replicate.Replicate¶ class langchain.llms.replicate.Replicate[source]¶ Bases: LLM Replicate models. To use, you should have the replicate python package installed, and the environment variable REPLICATE_API_TOKEN set with your API token. You can find your token here: https://replicate.com/account The mod...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
fb48fdbb5666-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
fb48fdbb5666-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
fb48fdbb5666-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
fb48fdbb5666-4
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
fb48fdbb5666-5
Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these subst...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
fb48fdbb5666-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
fb48fdbb5666-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
fb48fdbb5666-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.replicate.Replicate.html
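A minimal Replicate sketch; the model slug is a placeholder in the "<owner>/<name>:<version>" format, and REPLICATE_API_TOKEN must be set in the environment as noted above.

.. code-block:: python

    from langchain.llms import Replicate

    llm = Replicate(model="<model-owner>/<model-name>:<version-hash>")
    print(llm("What is the capital of France?"))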
0ed94a45ce68-0
langchain.llms.aviary.AviaryBackend¶ class langchain.llms.aviary.AviaryBackend(backend_url: str, bearer: str)[source]¶ Attributes backend_url bearer Methods __init__(backend_url, bearer) from_env() __init__(backend_url: str, bearer: str) → None¶ classmethod from_env() → AviaryBackend[source]¶
https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.AviaryBackend.html
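A brief AviaryBackend sketch; the URL and token are placeholders, and the idea that from_env() reads equivalent settings from environment variables is inferred only from the method name.

.. code-block:: python

    from langchain.llms.aviary import AviaryBackend

    # Explicit construction:
    backend = AviaryBackend(
        backend_url="http://aviary.example.com",
        bearer="Bearer <token>",
    )

    # Or, presumably, from environment configuration:
    # backend = AviaryBackend.from_env()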
ba1cb208b9b2-0
langchain.llms.cohere.Cohere¶ class langchain.llms.cohere.Cohere[source]¶ Bases: LLM Cohere large language models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example from langchain.ll...
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
ba1cb208b9b2-1
param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.75¶ A non-negative float that tunes the degree of randomness in generation. param truncate: Optional[str] = None¶ Specify how the client handles inputs longer than the maximum token length: Truncate from START, END or NON...
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
ba1cb208b9b2-2
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously...
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
ba1cb208b9b2-3
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrenc...
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.Cohere.html
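A minimal Cohere sketch; the key placeholder and parameter values are illustrative, with temperature=0.75 mirroring the default listed above (the key can also come from the COHERE_API_KEY environment variable).

.. code-block:: python

    from langchain.llms import Cohere

    llm = Cohere(
        cohere_api_key="...",  # or set COHERE_API_KEY in the environment
        temperature=0.75,
        max_tokens=256,
    )

    print(llm("Summarize: LangChain provides a standard interface to many LLMs."))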