id: string, lengths 14–15
text: string, lengths 49–2.47k
source: string, lengths 61–166
cffb9fdd9dcf-1
minimum: 0 param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: Optional[float] = 0.6¶ Temperature value. exclusiveMinimum: 0 param tfs: Optional[float] = 0.9¶ Tail free sampling value. maximum: 1 minimum: 0 param top_a: Optional[float] = 0.9¶ Top-a sampling value. minimum: 0 param t...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
cffb9fdd9dcf-2
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[Li...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
cffb9fdd9dcf-3
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwarg...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
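The stop-word behavior described above (model output is cut off at the first occurrence of any stop substring) can be sketched in plain Python. `truncate_at_stop` is a hypothetical helper for illustration, not LangChain's actual implementation:

```python
from typing import List, Optional

def truncate_at_stop(text: str, stop: Optional[List[str]] = None) -> str:
    """Cut the text at the first occurrence of any stop substring."""
    if not stop:
        return text
    # Find the earliest index at which any stop sequence begins.
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("Answer: 42\nObservation: done", ["\nObservation:"]))
```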
cffb9fdd9dcf-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
cffb9fdd9dcf-5
the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
cffb9fdd9dcf-6
first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which co...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
cffb9fdd9dcf-7
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
cffb9fdd9dcf-8
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
cffb9fdd9dcf-9
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.koboldai.KoboldApiLLM.html
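The `with_fallbacks` semantics above (try a runnable, and on one of the handled exception types move to the next fallback) can be sketched as a small pure-Python helper; `run_with_fallbacks` is an assumption-laden stand-in, not the `RunnableWithFallbacks` class itself:

```python
from typing import Callable, Sequence, Tuple, Type

def run_with_fallbacks(
    runnables: Sequence[Callable[[str], str]],
    exceptions_to_handle: Tuple[Type[BaseException], ...] = (Exception,),
) -> Callable[[str], str]:
    """Return a callable that tries each runnable in order, moving to the
    next one only when a handled exception type is raised."""
    def invoke(value: str) -> str:
        last_error: BaseException = RuntimeError("no runnables given")
        for runnable in runnables:
            try:
                return runnable(value)
            except exceptions_to_handle as err:
                last_error = err
        raise last_error
    return invoke

def flaky(_: str) -> str:
    # Stand-in for a primary model that is currently failing.
    raise ConnectionError("primary model unavailable")

chain = run_with_fallbacks([flaky, lambda text: text.upper()])
print(chain("hello"))  # falls through to the second runnable
```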
c6df5242049b-0
langchain.llms.octoai_endpoint.OctoAIEndpoint¶ class langchain.llms.octoai_endpoint.OctoAIEndpoint[source]¶ Bases: LLM OctoAI LLM Endpoints. OctoAIEndpoint is a class to interact with OctoAI Compute Service large language model endpoints. To use, you should have the octoai python package installed, and the environment v...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
c6df5242049b-1
OCTOAI API Token param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
c6df5242049b-2
This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
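The batched-call pattern above, with the `max_concurrency` cap that `abatch` accepts, can be sketched with `asyncio` and a semaphore. This is an illustrative sketch (with a `fake_model` stand-in), not LangChain's `abatch` implementation:

```python
import asyncio
from typing import Awaitable, Callable, List, Optional

async def abatch_sketch(
    inputs: List[str],
    call: Callable[[str], Awaitable[str]],
    max_concurrency: Optional[int] = None,
) -> List[str]:
    """Run one call per input concurrently, capped by a semaphore when
    max_concurrency is set, and return outputs in input order."""
    sem = asyncio.Semaphore(max_concurrency or len(inputs))
    async def one(text: str) -> str:
        async with sem:
            return await call(text)
    # gather preserves the order of its arguments.
    return list(await asyncio.gather(*(one(i) for i in inputs)))

async def fake_model(prompt: str) -> str:
    await asyncio.sleep(0)  # stand-in for a network round trip
    return prompt[::-1]

results = asyncio.run(abatch_sketch(["ab", "cd"], fake_model, max_concurrency=2))
print(results)
```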
c6df5242049b-3
to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and on...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
c6df5242049b-4
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
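The `copy()` semantics described above (filter fields by include/exclude, overlay unvalidated `update` values, optionally deep-copy) can be sketched over a plain field dict. `copy_model` is a hypothetical helper mirroring the documented behavior, not pydantic's method:

```python
import copy as _copy
from typing import Any, Dict, Optional, Set

def copy_model(
    data: Dict[str, Any],
    *,
    include: Optional[Set[str]] = None,
    exclude: Optional[Set[str]] = None,
    update: Optional[Dict[str, Any]] = None,
    deep: bool = False,
) -> Dict[str, Any]:
    """Duplicate a field dict: keep fields passing the include/exclude
    filter, overlay update values (no validation, as documented), and
    deep-copy when deep=True."""
    fields = {
        k: v for k, v in data.items()
        if (include is None or k in include) and (exclude is None or k not in exclude)
    }
    if update:
        fields.update(update)  # note: the data is not validated
    return _copy.deepcopy(fields) if deep else dict(fields)

base = {"model": "some-model", "temperature": 0.6, "tags": ["prod"]}
clone = copy_model(base, exclude={"tags"}, update={"temperature": 0.1})
print(clone)
```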
c6df5242049b-5
Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agno...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
c6df5242049b-6
Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] =...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
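The `get_token_ids` contract above (ordered token ids for a text) can be illustrated with a toy whitespace tokenizer and an explicit vocabulary; a real model uses its own tokenizer, so both the tokenizer and the `vocab` mapping here are assumptions:

```python
from typing import Dict, List

def get_token_ids(text: str, vocab: Dict[str, int]) -> List[int]:
    """Return the ordered ids of the tokens in a text, using a toy
    whitespace tokenizer in place of a real model tokenizer."""
    return [vocab[token] for token in text.split()]

vocab = {"the": 0, "cat": 1, "sat": 2}
print(get_token_ids("the cat sat", vocab))
```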
c6df5242049b-7
Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the fir...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
c6df5242049b-8
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_ref...
https://api.python.langchain.com/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html
a5a2eda802fb-0
langchain.llms.azureml_endpoint.AzureMLEndpointClient¶ class langchain.llms.azureml_endpoint.AzureMLEndpointClient(endpoint_url: str, endpoint_api_key: str, deployment_name: str = '')[source]¶ AzureML Managed Endpoint client. Initialize the class. Methods __init__(endpoint_url, endpoint_api_key[, ...]) Initialize the c...
https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLEndpointClient.html
f81a969caba4-0
langchain.llms.base.get_prompts¶ langchain.llms.base.get_prompts(params: Dict[str, Any], prompts: List[str]) → Tuple[Dict[int, List], str, List[int], List[str]][source]¶ Get prompts that are already cached.
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.get_prompts.html
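The cache-partition step above can be sketched as follows. This loosely mirrors the shape of `langchain.llms.base.get_prompts` (it omits the llm_string element of the real return tuple), and the dict-based cache is an assumption:

```python
from typing import Dict, List, Tuple

def get_prompts_sketch(
    cache: Dict[str, List[str]], prompts: List[str]
) -> Tuple[Dict[int, List[str]], List[int], List[str]]:
    """Split prompts into cache hits (keyed by position) and misses that
    still need a model call."""
    existing: Dict[int, List[str]] = {}
    missing_idxs: List[int] = []
    missing_prompts: List[str] = []
    for i, prompt in enumerate(prompts):
        if prompt in cache:
            existing[i] = cache[prompt]
        else:
            missing_idxs.append(i)
            missing_prompts.append(prompt)
    return existing, missing_idxs, missing_prompts

hits, idxs, misses = get_prompts_sketch({"hi": ["hello!"]}, ["hi", "new prompt"])
print(hits, idxs, misses)
```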
6372edc8f252-0
langchain.llms.clarifai.Clarifai¶ class langchain.llms.clarifai.Clarifai[source]¶ Bases: LLM Clarifai large language models. To use, you should have an account on the Clarifai platform, the clarifai python package installed, and the environment variable CLARIFAI_PAT set with your PAT key, or pass it as a named paramete...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
6372edc8f252-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
6372edc8f252-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
6372edc8f252-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
6372edc8f252-4
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
6372edc8f252-5
Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these subst...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
6372edc8f252-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
6372edc8f252-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
6372edc8f252-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.clarifai.Clarifai.html
0a93c780e045-0
langchain.llms.writer.Writer¶ class langchain.llms.writer.Writer[source]¶ Bases: LLM Writer large language models. To use, you should have the environment variable WRITER_API_KEY and WRITER_ORG_ID set with your API key and organization ID respectively. Example from langchain import Writer writer = Writer(model_id="palm...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
0a93c780e045-1
param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: Optional[float] = None¶ What sampling temperature to use. param top_p: Optional[float] = None¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. param wri...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
0a93c780e045-2
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
0a93c780e045-3
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrenc...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
0a93c780e045-4
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
0a93c780e045-5
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of pr...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
0a93c780e045-6
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → L...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
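The `get_num_tokens_from_messages` behavior above (sum of per-message token counts) can be sketched with a toy word-count tokenizer; the `Message` dataclass and the one-token-per-word count are stand-ins for `BaseMessage` and a real tokenizer:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:  # stand-in for BaseMessage
    content: str

def get_num_tokens(text: str) -> int:
    # Toy count: one token per whitespace-separated word.
    return len(text.split())

def get_num_tokens_from_messages(messages: List[Message]) -> int:
    """Sum per-message token counts, as the documented method does."""
    return sum(get_num_tokens(m.content) for m in messages)

msgs = [Message("you are helpful"), Message("hello there")]
print(get_num_tokens_from_messages(msgs))
```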
0a93c780e045-7
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to t...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
0a93c780e045-8
.. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Union[Promp...
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
0a93c780e045-9
property lc_serializable: bool¶ Return whether or not the class is serializable. Examples using Writer¶ Writer
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
da1cf760a791-0
langchain.llms.sagemaker_endpoint.LLMContentHandler¶ class langchain.llms.sagemaker_endpoint.LLMContentHandler[source]¶ Content handler for LLM class. Attributes accepts The MIME type of the response data returned from endpoint content_type The MIME type of the input data passed to endpoint Methods __init__() transform...
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.LLMContentHandler.html
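A content handler of the kind described above pairs a `content_type`/`accepts` MIME declaration with input/output transforms. The sketch below shows one plausible JSON round trip; the payload shape (`inputs`/`parameters`/`generated_text`) is an assumption, since the real format depends on the deployed model:

```python
import json
from typing import Any, Dict

class SimpleContentHandler:
    """Sketch of a SageMaker-style content handler: serialize the prompt
    and model kwargs to JSON bytes, and parse the endpoint's JSON reply."""
    content_type = "application/json"   # MIME type of input sent to the endpoint
    accepts = "application/json"        # MIME type expected back

    def transform_input(self, prompt: str, model_kwargs: Dict[str, Any]) -> bytes:
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        return json.loads(output.decode("utf-8"))[0]["generated_text"]

handler = SimpleContentHandler()
body = handler.transform_input("Hello", {"temperature": 0.2})
reply = handler.transform_output(b'[{"generated_text": "Hi there"}]')
print(reply)
```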
14ad1dfa45d2-0
langchain.llms.baseten.Baseten¶ class langchain.llms.baseten.Baseten[source]¶ Bases: LLM Baseten models. To use, you should have the baseten python package installed, and run baseten.login() with your Baseten API key. The required model param can be either a model id or model version id. Using a model version ID will r...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
14ad1dfa45d2-1
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[Li...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
14ad1dfa45d2-2
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwarg...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
14ad1dfa45d2-3
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
14ad1dfa45d2-4
the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
14ad1dfa45d2-5
first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which co...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
14ad1dfa45d2-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
14ad1dfa45d2-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
14ad1dfa45d2-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.baseten.Baseten.html
49616fcf6ba0-0
langchain.llms.pipelineai.PipelineAI¶ class langchain.llms.pipelineai.PipelineAI[source]¶ Bases: LLM, BaseModel PipelineAI large language models. To use, you should have the pipeline-ai python package installed, and the environment variable PIPELINE_API_KEY set with your API key. Any parameters that are valid to be pas...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
49616fcf6ba0-1
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ async agenerate(prompts: List[str], stop: Optional[Li...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
49616fcf6ba0-2
text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwarg...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
49616fcf6ba0-3
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
49616fcf6ba0-4
the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
49616fcf6ba0-5
first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which co...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
49616fcf6ba0-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
49616fcf6ba0-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
49616fcf6ba0-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
a2a1c0b8973f-0
langchain.llms.forefrontai.ForefrontAI¶ class langchain.llms.forefrontai.ForefrontAI[source]¶ Bases: LLM ForefrontAI large language models. To use, you should have the environment variable FOREFRONTAI_API_KEY set with your API key. Example from langchain.llms import ForefrontAI forefrontai = ForefrontAI(endpoint_url=""...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
a2a1c0b8973f-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
a2a1c0b8973f-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
a2a1c0b8973f-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
a2a1c0b8973f-4
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
a2a1c0b8973f-5
Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these subst...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
a2a1c0b8973f-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
a2a1c0b8973f-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
a2a1c0b8973f-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html
60030d3df3ac-0
langchain.llms.base.create_base_retry_decorator¶ langchain.llms.base.create_base_retry_decorator(error_types: List[Type[BaseException]], max_retries: int = 1, run_manager: Optional[Union[AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun]] = None) → Callable[[Any], Any][source]¶ Create a retry decorator for a give...
https://api.python.langchain.com/en/latest/llms/langchain.llms.base.create_base_retry_decorator.html
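A retry decorator over given error types, as named above, can be sketched in plain Python. This is a simplified illustration (no backoff or run-manager callbacks), not the actual `create_base_retry_decorator`:

```python
import functools
from typing import Any, Callable, List, Type

def create_retry_decorator_sketch(
    error_types: List[Type[BaseException]], max_retries: int = 1
) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    """Build a decorator that retries a call up to max_retries extra times
    when one of the given error types is raised."""
    def decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except tuple(error_types):
                    if attempt == max_retries:
                        raise  # out of retries: re-raise the last error
        return wrapper
    return decorator

calls = {"n": 0}

@create_retry_decorator_sketch([ConnectionError], max_retries=2)
def flaky_call() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(flaky_call())
```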
963f0a2e269d-0
langchain.llms.promptlayer_openai.PromptLayerOpenAIChat¶ class langchain.llms.promptlayer_openai.PromptLayerOpenAIChat[source]¶ Bases: OpenAIChat Wrapper around OpenAI large language models. To use, you should have the openai and promptlayer python package installed, and the environment variable OPENAI_API_KEY and PROM...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
963f0a2e269d-1
param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'gpt-3.5-turbo'¶ Model name to use. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param p...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
963f0a2e269d-2
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
963f0a2e269d-3
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
963f0a2e269d-4
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶ bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. classmethod cons...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
963f0a2e269d-5
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
963f0a2e269d-6
to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Pa...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
963f0a2e269d-7
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
963f0a2e269d-8
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/...
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
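The `save` method above persists an LLM's configuration to a file path. A hedged sketch of the underlying idea, using JSON to stay within the standard library (LangChain's `save` chooses YAML or JSON from the file extension; the parameter dict here is an assumed example):

```python
import json
import tempfile
from pathlib import Path

def save_llm_params(params: dict, file_path: str) -> None:
    # Sketch of persisting an LLM's configuration to disk.
    Path(file_path).write_text(json.dumps(params, indent=2))

path = Path(tempfile.mkdtemp()) / "llm.json"
save_llm_params({"model": "llama2", "temperature": 0.6}, str(path))
restored = json.loads(path.read_text())
```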
963f0a2e269d-9
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool¶ Return whether or not the class is serializable.
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
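The secrets map above pairs constructor argument names with environment variable names. A minimal sketch of how such a map can be resolved against the environment (the `sk-example` value is an assumed placeholder, not a real key):

```python
import os

def resolve_secrets(secret_map: dict) -> dict:
    # secret_map maps constructor argument names to environment variable
    # names, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
    return {arg: os.environ.get(env_var) for arg, env_var in secret_map.items()}

os.environ["OPENAI_API_KEY"] = "sk-example"  # assumed placeholder value
resolved = resolve_secrets({"openai_api_key": "OPENAI_API_KEY"})
```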
89a9e8405ff2-0
langchain.llms.ollama.Ollama¶ class langchain.llms.ollama.Ollama[source]¶ Bases: BaseLLM, _OllamaCommon Ollama locally runs large language models. To use, follow the instructions at https://ollama.ai/. Example from langchain.llms import Ollama ollama = Ollama(model="llama2") Create a new model by parsing and validating ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
89a9e8405ff2-1
Model name to use. param num_ctx: Optional[int] = None¶ Sets the size of the context window used to generate the next token. (Default: 2048) param num_gpu: Optional[int] = None¶ The number of GPUs to use. On macOS it defaults to 1 to enable metal support, 0 to disable. param num_thread: Optional[int] = None¶ Sets the n...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
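`num_ctx` above bounds how many tokens the model can condition on. This is a loose sketch of that effect, assuming token IDs are already available as a list: a fixed window keeps only the most recent `num_ctx` tokens.

```python
def clip_to_context(tokens: list, num_ctx: int = 2048) -> list:
    # Keep only the most recent num_ctx tokens, mirroring how a fixed
    # context window bounds what the model can attend to.
    return tokens[-num_ctx:]

clipped = clip_to_context(list(range(10)), num_ctx=4)
```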
89a9e8405ff2-2
param top_k: Optional[int] = None¶ Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40) param top_p: Optional[int] = None¶ Works together with top-k. A higher value (e.g., 0.95) will lead to more ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
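The interplay of `top_k` and `top_p` described above can be sketched over a toy probability table: keep the `top_k` most probable tokens, then keep the smallest prefix of those whose cumulative probability reaches `top_p` (nucleus sampling). This is an illustrative filter, not Ollama's sampler.

```python
def top_k_top_p_filter(probs: dict, top_k: int = 40, top_p: float = 0.9) -> dict:
    # Rank tokens by probability and truncate to the top_k candidates.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        # Stop once the nucleus covers top_p of the probability mass.
        if cumulative >= top_p:
            break
    return kept

kept = top_k_top_p_filter({"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05},
                          top_k=3, top_p=0.8)
```

Lower `top_k` or `top_p` shrinks the candidate set, which is why the docs describe low values as more conservative.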
89a9e8405ff2-3
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
89a9e8405ff2-4
Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrenc...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
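The `stop` parameter above is documented to cut model output at the first occurrence of any stop substring. A minimal sketch of that truncation rule (the prompt-style stop string is an assumed example):

```python
def truncate_at_stop(text: str, stop: list) -> str:
    # Cut the output at the earliest occurrence of any stop substring,
    # as the stop parameter is documented to do.
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

out = truncate_at_stop("Answer: 42\nQuestion: next", stop=["\nQuestion:"])
```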
89a9e8405ff2-5
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
89a9e8405ff2-6
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of pr...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
89a9e8405ff2-7
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_token_ids(text: str) → L...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
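`get_num_tokens_from_messages` above returns "the sum of the number of tokens across the messages". A hedged sketch of that aggregation, assuming messages are plain content strings and using a crude whitespace split in place of a real tokenizer:

```python
def num_tokens_from_messages(messages: list) -> int:
    # Sum a per-message token count across all messages. Whitespace
    # splitting is a crude proxy for the model's tokenizer.
    return sum(len(content.split()) for content in messages)

total = num_tokens_from_messages(["hello there", "how are you"])
```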
89a9e8405ff2-8
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to t...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
89a9e8405ff2-9
.. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Union[Promp...
https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html
038feff78cf3-0
langchain.llms.huggingface_endpoint.HuggingFaceEndpoint¶ class langchain.llms.huggingface_endpoint.HuggingFaceEndpoint[source]¶ Bases: LLM HuggingFace Endpoint models. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
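The HuggingFaceEndpoint docs above say the token can come from the HUGGINGFACEHUB_API_TOKEN environment variable or be passed directly. A sketch of that lookup order (the `hf_example` value is an assumed placeholder, and `get_hf_token` is a hypothetical helper, not a LangChain API):

```python
import os

def get_hf_token(explicit_token=None):
    # Prefer an explicitly passed token, else fall back to the
    # HUGGINGFACEHUB_API_TOKEN environment variable.
    token = explicit_token or os.environ.get("HUGGINGFACEHUB_API_TOKEN")
    if token is None:
        raise ValueError("Set HUGGINGFACEHUB_API_TOKEN or pass a token.")
    return token

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_example"  # assumed placeholder
token = get_hf_token()
```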
038feff78cf3-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
038feff78cf3-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any languag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
038feff78cf3-3
Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off a...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
038feff78cf3-4
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creat...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
038feff78cf3-5
Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these subst...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
038feff78cf3-6
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
038feff78cf3-7
to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want ...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
038feff78cf3-8
classmethod validate(value: Any) → Model¶ with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out...
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_endpoint.HuggingFaceEndpoint.html
7bb8c64566f1-0
langchain.llms.openai.OpenAIChat¶ class langchain.llms.openai.OpenAIChat[source]¶ Bases: BaseLLM OpenAI Chat large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.cre...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
7bb8c64566f1-1
param prefix_messages: List [Optional]¶ Series of messages for Chat input. param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[L...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
7bb8c64566f1-2
Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are ag...
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html