Dataset columns: id (string, 14–15 chars), text (string, 49–2.47k chars), source (string, 61–166 chars)
7bb8c64566f1-3
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
7bb8c64566f1-4
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
7bb8c64566f1-5
Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agno...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
7bb8c64566f1-6
Get the token IDs using the tiktoken package. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetInt...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
7bb8c64566f1-7
stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, sto...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
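The stop-word semantics repeated throughout these entries ("Model output is cut off at the first occurrence of any of these substrings") can be sketched with a small, dependency-free helper. `truncate_at_stop` is a hypothetical name for illustration, not a LangChain API:

```python
def truncate_at_stop(text, stop=None):
    """Cut `text` at the earliest occurrence of any stop substring.

    Sketch of the documented stop-word behavior: the output is truncated
    at the first position where any stop sequence begins.
    """
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("Hello\nObservation: done", ["\nObservation:"]))  # Hello
```

Note that when several stop sequences match, the earliest occurrence wins, regardless of the order they were passed in.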
7bb8c64566f1-8
to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
54b92f32ff71-0
langchain.llms.fake.FakeListLLM¶ class langchain.llms.fake.FakeListLLM[source]¶ Bases: LLM Fake LLM for testing purposes. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ p...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
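The FakeListLLM pattern above — a test double that ignores the prompt and replays canned responses — can be sketched without the langchain dependency. This is a simplified stand-in (the wrap-around behavior here is a sketch choice, not necessarily what `langchain.llms.fake.FakeListLLM` does):

```python
class FakeListLLM:
    """Minimal sketch of a fake LLM for testing: replays canned responses."""

    def __init__(self, responses):
        self.responses = responses
        self.i = 0

    def __call__(self, prompt, stop=None):
        # Ignore the prompt entirely; return the next canned response,
        # wrapping around when the list is exhausted.
        response = self.responses[self.i % len(self.responses)]
        self.i += 1
        return response

llm = FakeListLLM(responses=["first", "second"])
print(llm("anything"))  # first
```

A fake like this lets chain logic be unit-tested deterministically, with no network calls or API keys.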
54b92f32ff71-1
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
54b92f32ff71-2
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
54b92f32ff71-3
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod cons...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
54b92f32ff71-4
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
54b92f32ff71-5
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Pa...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
54b92f32ff71-6
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
54b92f32ff71-7
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
54b92f32ff71-8
property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is s...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
b3895c751cef-0
langchain.llms.google_palm.generate_with_retry¶ langchain.llms.google_palm.generate_with_retry(llm: GooglePalm, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.google_palm.generate_with_retry.html
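`generate_with_retry` wraps the completion call with tenacity's retry machinery. The same idea can be sketched with a plain-Python exponential-backoff decorator (no tenacity dependency; `with_retry` and its parameters are hypothetical names for illustration):

```python
import time

def with_retry(fn, *, max_attempts=3, base_delay=0.01):
    """Return a wrapper that retries `fn` with exponential backoff.

    Simplified stand-in for a tenacity-style retry: re-raise only after
    the final attempt fails.
    """
    def wrapped(*args, **kwargs):
        for attempt in range(1, max_attempts + 1):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == max_attempts:
                    raise
                # Back off: base_delay, 2*base_delay, 4*base_delay, ...
                time.sleep(base_delay * 2 ** (attempt - 1))
    return wrapped

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retry(flaky)())  # ok (succeeds on the third attempt)
```

Production retry logic (as in tenacity) would typically also filter on exception type and cap the total wait time.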
7ba7dec8a00a-0
langchain.llms.fireworks.update_token_usage¶ langchain.llms.fireworks.update_token_usage(keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]) → None[source]¶ Update token usage.
https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.update_token_usage.html
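Given the signature above, a plausible sketch of `update_token_usage` accumulates per-key token counts from a response's "usage" dict into a running total. The exact behavior of the real function is an assumption here; this only illustrates the shape implied by the signature:

```python
from typing import Any, Dict, Set

def update_token_usage(keys: Set[str], response: Dict[str, Any],
                       token_usage: Dict[str, Any]) -> None:
    """Accumulate token counts for `keys` from response["usage"] (sketch)."""
    usage = response.get("usage", {})
    # Only tally the keys the caller asked for that the response provides.
    for key in keys & set(usage):
        token_usage[key] = token_usage.get(key, 0) + usage[key]

totals: Dict[str, Any] = {}
update_token_usage({"prompt_tokens", "completion_tokens"},
                   {"usage": {"prompt_tokens": 5, "completion_tokens": 7}}, totals)
update_token_usage({"prompt_tokens", "completion_tokens"},
                   {"usage": {"prompt_tokens": 3, "completion_tokens": 2}}, totals)
print(totals["prompt_tokens"], totals["completion_tokens"])  # 8 9
```

Mutating the caller's `token_usage` dict in place (and returning None) matches the documented `→ None` return type.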
714d629d841e-0
langchain.llms.bananadev.Banana¶ class langchain.llms.bananadev.Banana[source]¶ Bases: LLM Banana large language models. To use, you should have the banana-dev python package installed, and the environment variable BANANA_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passe...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
714d629d841e-1
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[Li...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
714d629d841e-2
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwarg...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
714d629d841e-3
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
714d629d841e-4
the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
714d629d841e-5
first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which co...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
714d629d841e-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
714d629d841e-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
714d629d841e-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
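The `with_fallbacks` signature above (a sequence of fallback runnables plus a tuple of exception types to handle) suggests a try-in-order pattern. A dependency-free sketch of that pattern, with hypothetical minimal naming:

```python
class RunnableWithFallbacks:
    """Sketch: try a primary callable, then each fallback on failure."""

    def __init__(self, runnable, fallbacks, exceptions_to_handle=(Exception,)):
        self.runnables = [runnable, *fallbacks]
        self.exceptions_to_handle = exceptions_to_handle

    def invoke(self, input):
        last_error = None
        for r in self.runnables:
            try:
                return r(input)
            except self.exceptions_to_handle as e:
                last_error = e  # remember the failure, try the next runnable
        raise last_error

def primary(_):
    raise ConnectionError("rate limited")

chain = RunnableWithFallbacks(primary, [lambda x: f"fallback: {x}"])
print(chain.invoke("hi"))  # fallback: hi
```

Restricting `exceptions_to_handle` matters in practice: a fallback should fire on transient provider errors, not on programming errors that would fail identically everywhere.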
4249d3f3a120-0
langchain.llms.openlm.OpenLM¶ class langchain.llms.openlm.OpenLM[source]¶ Bases: BaseOpenAI OpenLM models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], Abstrac...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-1
Model name to use. param n: int = 1¶ How many completions to generate for each prompt. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_organization: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param presence_penalty: float = 0¶ Penalizes repeated...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-2
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-3
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-4
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-5
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-6
API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptVal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-7
get_token_ids(text: str) → List[int]¶ Get the token IDs using the tiktoken package. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-8
max_tokens = openai.modelname_to_contextsize("text-davinci-003") classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
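The `modelname_to_contextsize("text-davinci-003")` example above maps a model name to its context window so callers can size `max_tokens`. A sketch of that lookup plus the derived "room left for the completion" calculation; the table values and `max_tokens_for_prompt` helper are illustrative assumptions, not authoritative limits:

```python
# Hypothetical mapping for illustration; real limits come from the provider.
CONTEXT_SIZES = {
    "text-davinci-003": 4097,
    "gpt-3.5-turbo-instruct": 4096,
}

def modelname_to_contextsize(modelname):
    """Look up the context window (in tokens) for a model name (sketch)."""
    try:
        return CONTEXT_SIZES[modelname]
    except KeyError:
        raise ValueError(f"Unknown model: {modelname!r}")

def max_tokens_for_prompt(modelname, prompt_tokens):
    """Tokens left for the completion once the prompt is accounted for."""
    return modelname_to_contextsize(modelname) - prompt_tokens

print(max_tokens_for_prompt("text-davinci-003", 97))  # 4000
```

This is the check `get_num_tokens` is documented as "useful for": tokenize the prompt, subtract from the context size, and cap the requested completion length accordingly.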
4249d3f3a120-9
to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode =...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
4249d3f3a120-10
e.g. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. property max_context_size: int¶ Get max context size for this model...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openlm.OpenLM.html
2c5fc75f3c7e-0
langchain_experimental.llms.jsonformer_decoder.import_jsonformer¶ langchain_experimental.llms.jsonformer_decoder.import_jsonformer() → jsonformer[source]¶ Lazily import jsonformer.
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.import_jsonformer.html
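Lazy imports like `import_jsonformer` defer loading an optional dependency until it is actually needed, and convert a bare ImportError into an actionable message. The general pattern can be sketched with the standard library (`lazy_import` is a hypothetical helper name; the demo uses the always-available stdlib `json` module):

```python
import importlib

def lazy_import(module_name):
    """Import a module only when first requested, with a helpful error."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        # Tell the user how to get the optional dependency instead of
        # failing at package import time.
        raise ImportError(
            f"Could not import {module_name}; install it with "
            f"`pip install {module_name}`."
        ) from e

json_module = lazy_import("json")  # stdlib stand-in for an optional package
print(json_module.dumps({"ok": True}))  # {"ok": true}
```

Deferring the import this way keeps the optional package out of the top-level import graph, so users who never touch that feature never need it installed.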
7c76a3bccd22-0
langchain.llms.anthropic.Anthropic¶ class langchain.llms.anthropic.Anthropic[source]¶ Bases: LLM, _AnthropicCommon Anthropic large language models. To use, you should have the anthropic python package installed, and the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-1
Timeout for requests to Anthropic Completion API. Default is 600 seconds. param max_tokens_to_sample: int = 256¶ Denotes the number of tokens to predict per generation. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model: str = 'claude-2'¶ Model name to use. param streaming: b...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-2
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-3
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-4
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod cons...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-5
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-6
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int[source]¶ Calculate number of tokens. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-7
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-8
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/...
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
7c76a3bccd22-9
Return a map of constructor argument names to secret ids. e.g. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable.
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
3683cfffc57a-0
langchain_experimental.llms.anthropic_functions.TagParser¶ class langchain_experimental.llms.anthropic_functions.TagParser[source]¶ A heavy-handed solution, but it’s fast for prototyping. Might be re-implemented later to restrict its scope to the limited grammar and to be more efficient. Uses an HTML parser to parse a limited ...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.TagParser.html
3683cfffc57a-1
parse_declaration(i) parse_endtag(i) parse_html_declaration(i) parse_marked_section(i[, report]) parse_pi(i) parse_starttag(i) reset() Reset this instance. set_cdata_mode(elem) unknown_decl(data) updatepos(i, j) __init__() → None[source]¶ A heavy-handed solution, but it’s fast for prototyping. Might be re-implemented l...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.TagParser.html
3683cfffc57a-2
Hook when a tag is closed. handle_entityref(name)¶ handle_pi(data)¶ handle_startendtag(tag, attrs)¶ handle_starttag(tag: str, attrs: Any) → None[source]¶ Hook when a new tag is encountered. parse_bogus_comment(i, report=1)¶ parse_comment(i, report=1)¶ parse_declaration(i)¶ parse_endtag(i)¶ parse_html_declaration(i)¶ pa...
https://api.python.langchain.com/en/latest/llms/langchain_experimental.llms.anthropic_functions.TagParser.html
dad2962b0d75-0
langchain.llms.cohere.acompletion_with_retry¶ langchain.llms.cohere.acompletion_with_retry(llm: Cohere, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.cohere.acompletion_with_retry.html
bc42c6a2bb85-0
langchain.llms.azureml_endpoint.OSSContentFormatter¶ class langchain.llms.azureml_endpoint.OSSContentFormatter[source]¶ Deprecated: Kept for backwards compatibility. Content handler for LLMs from the OSS catalog. Attributes accepts The MIME type of the response data returned from the endpoint content_formatter content_t...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.OSSContentFormatter.html
e274631c9935-0
langchain.llms.vertexai.VertexAI¶ class langchain.llms.vertexai.VertexAI[source]¶ Bases: _VertexAICommon, LLM Google Vertex AI large language models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cac...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
e274631c9935-1
param top_p: float = 0.95¶ Tokens are selected from most probable to least until the sum of their param tuned_model_name: Optional[str] = None¶ The name of a tuned model. If provided, model_name is ignored. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
e274631c9935-2
Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are ag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
e274631c9935-3
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
e274631c9935-4
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
e274631c9935-5
Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agno...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
e274631c9935-6
Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] =...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
e274631c9935-7
Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the fir...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
e274631c9935-8
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_ref...
https://api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.VertexAI.html
628aef77ed2a-0
langchain.llms.predictionguard.PredictionGuard¶ class langchain.llms.predictionguard.PredictionGuard[source]¶ Bases: LLM Prediction Guard large language models. To use, you should have the predictionguard python package installed, and the environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass it...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
628aef77ed2a-1
param token: Optional[str] = None¶ Your Prediction Guard access token. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metad...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
628aef77ed2a-2
API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptVal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
628aef77ed2a-3
to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and on...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
628aef77ed2a-4
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
628aef77ed2a-5
Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agno...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
628aef77ed2a-6
Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] =...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
628aef77ed2a-7
Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the fir...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
628aef77ed2a-8
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_ref...
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
20ebbc4d8106-0
langchain.llms.databricks.get_default_api_token¶ langchain.llms.databricks.get_default_api_token() → str[source]¶ Gets the default Databricks personal access token. Raises an error if the token cannot be automatically determined.
https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_default_api_token.html
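`get_default_api_token` is documented as returning the default Databricks personal access token or raising when it cannot be determined automatically. A sketch of that contract using an environment-variable lookup; the `DATABRICKS_TOKEN` variable name and the `env` parameter are assumptions for illustration (the real function may consult workspace context instead):

```python
import os

def get_default_api_token(env=None):
    """Return the access token from the environment, or raise (sketch)."""
    env = os.environ if env is None else env
    token = env.get("DATABRICKS_TOKEN")
    if not token:
        # Mirror the documented contract: raise when the token cannot be
        # determined automatically.
        raise ValueError(
            "Could not determine a Databricks token automatically; "
            "set the DATABRICKS_TOKEN environment variable."
        )
    return token

print(get_default_api_token({"DATABRICKS_TOKEN": "dapi-example"}))  # dapi-example
```

Accepting an explicit `env` mapping keeps the lookup testable without mutating the process environment.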
2fbe595783c0-0
langchain.llms.nlpcloud.NLPCloud¶ class langchain.llms.nlpcloud.NLPCloud[source]¶ Bases: LLM NLPCloud large language models. To use, you should have the nlpcloud python package installed, and the environment variable NLPCLOUD_API_KEY set with your API key. Example from langchain.llms import NLPCloud nlpcloud = NLPCloud...
https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html
2fbe595783c0-1
The minimum number of tokens to generate in the completion. param model_name: str = 'finetuned-gpt-neox-20b'¶ Model name to use. param nlpcloud_api_key: Optional[str] = None¶ param num_beams: int = 1¶ Number of beams for beam search. param num_return_sequences: int = 1¶ How many completions to generate for each prompt....
https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html
2fbe595783c0-2
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]...
https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html
2fbe595783c0-3
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs...
https://api.python.langchain.com/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html
2fbe595783c0-4
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod cons...
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada...
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Pa...
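A minimal sketch of the context-window check that get_num_tokens enables. The whitespace tokenizer below is a stand-in (the real wrapper delegates to a model-specific tokenizer), and the function names here are illustrative:

```python
# Sketch: checking whether an input fits a model's context window.
# Counting whitespace-separated words is a crude stand-in for a real
# model tokenizer.

def get_num_tokens(text: str) -> int:
    """Crude token count: one token per whitespace-separated word."""
    return len(text.split())

def fits_context(text: str, context_size: int) -> bool:
    """True if the text leaves room in a context window of context_size tokens."""
    return get_num_tokens(text) <= context_size

print(get_num_tokens("hello world from langchain"))  # 4
print(fits_context("hello world", context_size=3))   # True
```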
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol...
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/...
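One way to picture what save() does is dumping the model's constructor parameters to a file chosen by extension. This is a sketch of that pattern, not the wrapper's actual serializer; the parameter dict and handling of only .json files are assumptions:

```python
import json
import os
import tempfile

# Sketch of the save pattern: write the LLM's parameters to a file,
# dispatching on the file extension. The real method also supports
# other formats; this sketch handles .json only.

def save_llm(params: dict, file_path: str) -> None:
    if file_path.endswith(".json"):
        with open(file_path, "w") as f:
            json.dump(params, f, indent=2)
    else:
        raise ValueError("this sketch only handles .json files")

params = {"model_name": "finetuned-gpt-neox-20b", "temperature": 0.7}
path = os.path.join(tempfile.mkdtemp(), "llm.json")
save_llm(params, path)
with open(path) as f:
    print(json.load(f)["model_name"])  # finetuned-gpt-neox-20b
```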
property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is s...
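The lc_secrets map ({constructor_arg: SECRET_ID}) is what lets the framework pull credentials from the environment rather than serializing them. A sketch of how such a mapping might be resolved, with illustrative names:

```python
import os

# Sketch: resolving an lc_secrets-style mapping of constructor argument
# names to environment-variable secret ids. How the framework actually
# consumes this map is not shown here; this only demonstrates the lookup.

lc_secrets = {"nlpcloud_api_key": "NLPCLOUD_API_KEY"}

def resolve_secrets(secrets: dict) -> dict:
    resolved = {}
    for arg, env_var in secrets.items():
        value = os.environ.get(env_var)
        if value is None:
            raise ValueError(f"missing environment variable {env_var}")
        resolved[arg] = value
    return resolved

os.environ["NLPCLOUD_API_KEY"] = "demo-key"   # demo value only
print(resolve_secrets(lc_secrets))            # {'nlpcloud_api_key': 'demo-key'}
```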
langchain.llms.openai.update_token_usage¶ langchain.llms.openai.update_token_usage(keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]) → None[source]¶ Update token usage.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.update_token_usage.html
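update_token_usage(keys, response, token_usage) accumulates per-key token counts from a provider response into a running total. A sketch of that accumulation logic, assuming the OpenAI-style response shape with a "usage" dict (the exact shape is an assumption):

```python
# Sketch of update_token_usage's accumulation: for every tracked key
# present in the response's usage dict, add its count to the running total.

def update_token_usage(keys: set, response: dict, token_usage: dict) -> None:
    usage = response.get("usage", {})
    for key in keys.intersection(usage):
        token_usage[key] = token_usage.get(key, 0) + usage[key]

totals = {}
tracked = {"prompt_tokens", "completion_tokens"}
update_token_usage(tracked, {"usage": {"prompt_tokens": 10, "completion_tokens": 5}}, totals)
update_token_usage(tracked, {"usage": {"prompt_tokens": 3, "completion_tokens": 2}}, totals)
print(totals["prompt_tokens"])      # 13
print(totals["completion_tokens"])  # 7
```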
e6bd29f58aeb-0
langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway¶ class langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway[source]¶ Adapter to prepare the inputs from Langchain to a format that the LLM model expects. It also provides a helper function to extract the generated text from the model response. Metho...
https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway.html
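The adapter pattern this content handler implements can be sketched as a pair of transforms: one that turns a prompt plus model kwargs into the JSON payload the gateway-fronted model expects, and one that extracts the generated text from the response. The exact payload and response shapes below are assumptions, not the class's guaranteed format:

```python
import json

# Sketch of a content-handler adapter: serialize the LangChain-side
# prompt into a request body, and extract the generated text from the
# raw response. Payload shape ({"inputs": ..., "parameters": ...}) and
# response shape ([{"generated_text": ...}]) are illustrative assumptions.

class ContentHandler:
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, response: bytes) -> str:
        return json.loads(response)[0]["generated_text"]

handler = ContentHandler()
payload = handler.transform_input("Hello", {"max_new_tokens": 5})
fake_response = json.dumps([{"generated_text": "Hello there"}]).encode("utf-8")
print(handler.transform_output(fake_response))  # Hello there
```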
langchain.llms.openai.completion_with_retry¶ langchain.llms.openai.completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.completion_with_retry.html
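The real helper wraps the completion call with tenacity; the core idea is retrying transient failures with exponential backoff. A self-contained sketch of that pattern without the tenacity dependency (function names and the retried exception type are illustrative):

```python
import time

# Sketch of the retry idea behind completion_with_retry. The real
# implementation uses tenacity; this hand-rolled loop just shows
# retry-with-exponential-backoff on a transient error.

def call_with_retry(fn, max_retries: int = 3, base_delay: float = 0.01):
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise               # out of retries: re-raise
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

calls = {"n": 0}
def flaky_completion():
    """Fails twice, then succeeds -- stands in for a provider API call."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_retry(flaky_completion))  # ok
```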
langchain.llms.databricks.get_default_host¶ langchain.llms.databricks.get_default_host() → str[source]¶ Gets the default Databricks workspace hostname. Raises an error if the hostname cannot be automatically determined.
https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_default_host.html
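One plausible resolution path, sketched below, is reading the host from the environment and raising when it cannot be determined. Treating the DATABRICKS_HOST environment variable as the sole source is an assumption; the real function may also consult runtime context:

```python
import os

# Sketch (assumption): resolve the default Databricks workspace hostname
# from the DATABRICKS_HOST environment variable, raising if unset.

def get_default_host() -> str:
    host = os.environ.get("DATABRICKS_HOST", "").rstrip("/")
    if not host:
        raise ValueError("default Databricks host could not be determined")
    return host

os.environ["DATABRICKS_HOST"] = "https://example.cloud.databricks.com/"
print(get_default_host())  # https://example.cloud.databricks.com
```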
langchain.llms.promptlayer_openai.PromptLayerOpenAI¶ class langchain.llms.promptlayer_openai.PromptLayerOpenAI[source]¶ Bases: OpenAI PromptLayer OpenAI large language models. To use, you should have the openai and promptlayer python package installed, and the environment variable OPENAI_API_KEY and PROMPTLAYER_API_KEY...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAI.html
param frequency_penalty: float = 0¶ Penalizes repeated tokens according to frequency. param logit_bias: Optional[Dict[str, float]] [Optional]¶ Adjust the probability of specific tokens being generated. param max_retries: int = 6¶ Maximum number of retries to make when generating. param max_tokens: int = 256¶ The maximu...
param temperature: float = 0.7¶ What sampling temperature to use. param tiktoken_model_name: Optional[str] = None¶ The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will ...
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]...
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs...
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod cons...
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada...
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Pa...
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) → int¶ Calculate the maximum number of tokens possible to generate for a prompt. Paramet...
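max_tokens_for_prompt follows directly from the context-window arithmetic: the room left for generation is the model's context size minus the tokens the prompt already occupies. A sketch of that computation, with a whitespace count standing in for the real tokenizer:

```python
# Sketch of max_tokens_for_prompt: generation budget = context size
# minus tokens already used by the prompt. The whitespace token count
# is a stand-in for the model's real tokenizer.

def max_tokens_for_prompt(prompt: str, context_size: int) -> int:
    used = len(prompt.split())  # hypothetical token count
    return context_size - used

print(max_tokens_for_prompt("Tell me a joke", context_size=4097))  # 4093
```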
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a m...
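The stop-word behavior described above (output cut at the first occurrence of any stop substring) can be sketched as a small post-processing step; the function name is illustrative:

```python
# Sketch of stop-sequence handling: truncate the generated text at the
# earliest occurrence of any stop substring.

def enforce_stop(text: str, stop: list) -> str:
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)  # keep the earliest stop position
    return text[:cut]

print(enforce_stop("Answer: 42\nQuestion: next", ["\nQuestion:"]))  # Answer: 42
```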
classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Out...
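The with_fallbacks idea is: try each runnable in order and return the first successful result. A sketch using plain callables as stand-ins for LangChain Runnables (the error-handling policy shown, catching every exception, is an assumption):

```python
# Sketch of with_fallbacks: wrap a primary callable plus an ordered list
# of fallbacks; on failure, move to the next one. Plain functions stand
# in for Runnables.

def with_fallbacks(primary, fallbacks):
    def run(value):
        for fn in [primary, *fallbacks]:
            try:
                return fn(value)
            except Exception:
                continue  # this candidate failed; try the next
        raise RuntimeError("all runnables failed")
    return run

def broken(_):
    raise ConnectionError("model down")

chain = with_fallbacks(broken, [lambda v: v.upper()])
print(chain("hello"))  # HELLO
```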
langchain.llms.fireworks.Fireworks¶ class langchain.llms.fireworks.Fireworks[source]¶ Bases: BaseFireworks Wrapper around Fireworks large language models. To use, you should have the fireworks python package installed, and the environment variable FIREWORKS_API_KEY set with your API key. Any parameters that are valid t...
https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.Fireworks.html
Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use. param top_p: float = 1¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Option...
Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are ag...
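The async batching described above can be sketched with asyncio.gather, which fans the prompts out concurrently. The fake model call below is purely illustrative; a real wrapper would await the provider's API instead:

```python
import asyncio

# Sketch of batched async generation: run one coroutine per prompt
# concurrently via asyncio.gather, the way agenerate batches calls.

async def fake_model_call(prompt: str) -> str:
    await asyncio.sleep(0)   # stands in for a network round trip
    return prompt[::-1]      # pretend "generation": reverse the prompt

async def agenerate(prompts):
    return await asyncio.gather(*(fake_model_call(p) for p in prompts))

results = asyncio.run(agenerate(["abc", "hello"]))
print(results)  # ['cba', 'olleh']
```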