| id | text | source |
|---|---|---|
9de7165eddac-3 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.Fireworks.html |
9de7165eddac-4 | Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.Fireworks.html |
9de7165eddac-5 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of pr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.Fireworks.html |
9de7165eddac-6 | Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.Fireworks.html |
9de7165eddac-7 | classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.Fireworks.html |
9de7165eddac-8 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.Fireworks.html |
9de7165eddac-9 | Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable. | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.Fireworks.html |
b9ca8d7571d1-0 | langchain.llms.openllm.IdentifyingParams¶
class langchain.llms.openllm.IdentifyingParams[source]¶
Parameters for identifying a model as a typed dict.
model_name: str¶
model_id: Optional[str]¶
server_url: Optional[str]¶
server_type: Optional[Literal['http', 'grpc']]¶
embedded: bool¶
llm_kwargs: Dict[str, Any]¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.openllm.IdentifyingParams.html |
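The record above lists the fields of IdentifyingParams; as a minimal sketch, the typed dict can be mirrored with stdlib typing. The class below is a local illustration, not an import from langchain, and the example values are made up — only the keys and types follow the documented schema.

```python
from typing import Any, Dict, Literal, Optional, TypedDict

# Local illustration of the shape of langchain.llms.openllm.IdentifyingParams.
class IdentifyingParams(TypedDict):
    model_name: str
    model_id: Optional[str]
    server_url: Optional[str]
    server_type: Optional[Literal["http", "grpc"]]
    embedded: bool
    llm_kwargs: Dict[str, Any]

# Example values are invented; a real dict would describe a deployed model.
params: IdentifyingParams = {
    "model_name": "dolly-v2",
    "model_id": None,
    "server_url": None,
    "server_type": "http",
    "embedded": True,
    "llm_kwargs": {"temperature": 0.7},
}
```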
2bcd204150b9-0 | langchain.llms.gooseai.GooseAI¶
class langchain.llms.gooseai.GooseAI[source]¶
Bases: LLM
GooseAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call ca... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-1 | Model name to use
param n: int = 1¶
How many completions to generate for each prompt.
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use
param top_p: float = 1¶
Total probabi... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-3 | Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrenc... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-4 | Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-5 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of pr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-6 | get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → L... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-7 | classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-8 | .. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[Promp... | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
2bcd204150b9-9 | property lc_serializable: bool¶
Return whether or not the class is serializable.
Examples using GooseAI¶
GooseAI | https://api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
7214dd6502d0-0 | langchain.llms.beam.Beam¶
class langchain.llms.beam.Beam[source]¶
Bases: LLM
Beam API for the GPT-2 large language model.
To use, you should have the beam-sdk python package installed,
and the environment variable BEAM_CLIENT_ID set with your client id
and BEAM_CLIENT_SECRET set with your client secret. Information on how
t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-1 | param gpu: str = ''¶
param max_length: str = ''¶
param memory: str = ''¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not
explicitly specified.
param model_name: str = ''¶
param name: st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-2 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-3 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-4 | to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], c... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-5 | Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallback... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-6 | callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for ea... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-7 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-8 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
7214dd6502d0-9 | classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
f9c876865b02-0 | langchain.llms.fireworks.FireworksChat¶
class langchain.llms.fireworks.FireworksChat[source]¶
Bases: BaseLLM
Wrapper around Fireworks Chat large language models.
To use, you should have the fireworksai python package installed, and the
environment variable FIREWORKS_API_KEY set with your API key.
Any parameters that ar... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
f9c876865b02-1 | What sampling temperature to use.
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = No... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
f9c876865b02-2 | This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
f9c876865b02-3 | to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and on... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
f9c876865b02-4 | Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
f9c876865b02-5 | Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agno... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
f9c876865b02-6 | Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
f9c876865b02-7 | Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
fir... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
f9c876865b02-8 | stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_ref... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.FireworksChat.html |
6d511800b2a0-0 | langchain.llms.sagemaker_endpoint.ContentHandlerBase¶
class langchain.llms.sagemaker_endpoint.ContentHandlerBase[source]¶
A handler class to transform input from LLM to a
format that SageMaker endpoint expects.
Similarly, the class handles transforming output from the
SageMaker endpoint to a format that LLM class expec... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.ContentHandlerBase.html |
a1a8e5d818d6-0 | langchain.llms.loading.load_llm¶
langchain.llms.loading.load_llm(file: Union[str, Path]) → BaseLLM[source]¶
Load LLM from file.
Examples using load_llm¶
AzureML Online Endpoint
Serialization | https://api.python.langchain.com/en/latest/llms/langchain.llms.loading.load_llm.html |
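load_llm reverses llm.save: a config written to disk is read back and the model reconstructed. The round trip can be sketched with stdlib JSON; the helper names here are hypothetical, and the "_type" key mirrors how LangChain tags the LLM class in serialized files.

```python
import json
from pathlib import Path

# Hypothetical helpers sketching the save/load round trip behind
# llm.save(...) and load_llm(...).
def save_llm_config(config: dict, path: str) -> None:
    # Serialize the LLM's constructor parameters to a file.
    Path(path).write_text(json.dumps(config, indent=2))

def load_llm_config(path: str) -> dict:
    # Read the parameters back; the real load_llm would then
    # instantiate the class named by the "_type" tag.
    return json.loads(Path(path).read_text())
```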
76efa8b93c43-0 | langchain.llms.sagemaker_endpoint.SagemakerEndpoint¶
class langchain.llms.sagemaker_endpoint.SagemakerEndpoint[source]¶
Bases: LLM
Sagemaker Inference Endpoint models.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-1 | param endpoint_kwargs: Optional[Dict] = None¶
Optional attributes passed to the invoke_endpoint
function. See `boto3`_ docs for more info.
.. _boto3: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
param endpoint_name: str = ''¶
The name of the endpoint from the deployed Sagemaker model.
Must be u... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-2 | async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-3 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-4 | batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
classmethod cons... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-5 | classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metada... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-6 | to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-7 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-8 | **kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
76efa8b93c43-9 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
ee81e7e8f3c8-0 | langchain.llms.azureml_endpoint.LlamaContentFormatter¶
class langchain.llms.azureml_endpoint.LlamaContentFormatter[source]¶
Content formatter for LLaMa
Attributes
accepts
The MIME type of the response data returned from the endpoint
content_type
The MIME type of the input data passed to the endpoint
Methods
__init__()
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.LlamaContentFormatter.html |
7e37de8f1fa1-0 | langchain.llms.aviary.Aviary¶
class langchain.llms.aviary.Aviary[source]¶
Bases: LLM
Aviary hosted models.
Aviary is a backend for hosted models. You can
find out more about aviary at
http://github.com/ray-project/aviary
To get a list of the models supported on an
aviary, follow the instructions on the website to
insta... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
7e37de8f1fa1-1 | Whether to print out response text.
param version: Optional[str] = None¶
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
7e37de8f1fa1-2 | API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptVal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
7e37de8f1fa1-3 | to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and on... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
7e37de8f1fa1-4 | Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
7e37de8f1fa1-5 | Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agno... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
7e37de8f1fa1-6 | Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
7e37de8f1fa1-7 | Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
fir... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
7e37de8f1fa1-8 | stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_ref... | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.Aviary.html |
1346a752aa35-0 | langchain.llms.deepinfra.DeepInfra¶
class langchain.llms.deepinfra.DeepInfra[source]¶
Bases: LLM
DeepInfra models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports te... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
1346a752aa35-1 | Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, max_concurrency: Optional[int] = None, **kwargs: Any) → List[str]¶
async agenerate(prompts: List[str], stop: Optional[Li... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
1346a752aa35-2 | text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwarg... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
1346a752aa35-3 | first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
1346a752aa35-4 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
1346a752aa35-5 | first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which co... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
1346a752aa35-6 | json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
1346a752aa35-7 | to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
1346a752aa35-8 | classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Out... | https://api.python.langchain.com/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
6ad36d61d9f9-0 | langchain.llms.symblai_nebula.Nebula¶
class langchain.llms.symblai_nebula.Nebula[source]¶
Bases: LLM
Nebula Service models.
To use, you should have the environment variable NEBULA_SERVICE_URL,
NEBULA_SERVICE_PATH and NEBULA_SERVICE_API_KEY set with your Nebula
Service, or pass it as a named parameter to the constructor... | https://api.python.langchain.com/en/latest/llms/langchain.llms.symblai_nebula.Nebula.html |
6ad36d61d9f9-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache... | https://api.python.langchain.com/en/latest/llms/langchain.llms.symblai_nebula.Nebula.html |
6ad36d61d9f9-2 | need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any languag... | https://api.python.langchain.com/en/latest/llms/langchain.llms.symblai_nebula.Nebula.html |
6ad36d61d9f9-3 | Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off a... | https://api.python.langchain.com/en/latest/llms/langchain.llms.symblai_nebula.Nebula.html |
6ad36d61d9f9-4 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creat... | https://api.python.langchain.com/en/latest/llms/langchain.llms.symblai_nebula.Nebula.html |
6ad36d61d9f9-5 | Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these subst... | https://api.python.langchain.com/en/latest/llms/langchain.llms.symblai_nebula.Nebula.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Cal...
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want ...
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM¶
class langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM[source]¶
Bases: SelfHostedPipeline
HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html |
Construct the pipeline remotely using an auxiliary function.
The load function needs to be importable to be imported
and run on the server, i.e. in a module and not a REPL or closure.
Then, initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager...
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool...
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.Runnable[~langchain.schema.runnable.Input, ~langchain.schema.runnable.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException]] = (<class 'Exception'>,)) → RunnableWithFallbacks[Input, Output]¶
langchain.llms.vllm.VLLM¶
class langchain.llms.vllm.VLLM[source]¶
Bases: BaseLLM
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param best_of: Optional[int] = None¶
Number of output sequences that are gener... | https://api.python.langchain.com/en/latest/llms/langchain.llms.vllm.VLLM.html |
The number of GPUs to use for distributed execution with tensor parallelism.
param top_k: int = -1¶
Integer that controls the number of top tokens to consider.
param top_p: float = 1.0¶
Float that controls the cumulative probability of the top tokens to consider.
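The `top_k` and `top_p` parameters above constrain which tokens are eligible for sampling: keep the k most probable tokens, then keep the smallest prefix whose cumulative probability reaches `top_p` (with `top_k = -1` disabling the top-k filter, matching the default shown). The sketch below is our own stdlib simplification of that standard technique, not vLLM's implementation.

```python
# Illustrative top-k / top-p (nucleus) candidate filtering over a
# token -> probability map. Not vLLM's implementation.

from typing import Dict

def filter_candidates(probs: Dict[str, float], top_k: int = -1,
                      top_p: float = 1.0) -> Dict[str, float]:
    # Rank tokens by probability, most likely first.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        # top-k filter: keep only the k most probable tokens.
        ranked = ranked[:top_k]
    kept, cum = {}, 0.0
    for tok, p in ranked:
        # top-p filter: stop once cumulative probability reaches top_p.
        kept[tok] = p
        cum += p
        if cum >= top_p:
            break
    return kept

filter_candidates({"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}, top_p=0.75)
# keeps "a" and "b": 0.5 + 0.3 already reaches the 0.75 cumulative cutoff
```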
param trust_remote_code: Optional[bool] = False¶
Trust remote code (e.g., from HuggingFace) when downloading the model and tokenizer.
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclu...
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
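The contract above (the per-message token counts are summed) can be sketched directly. The whitespace tokenizer here is a stand-in of our own; real models count tokens with their own tokenizers (e.g. BPE), so the numbers will differ.

```python
# Sketch of the get_num_tokens_from_messages contract: sum the token
# counts across all messages. Whitespace tokenization is a stand-in.

from typing import List

def num_tokens(text: str) -> int:
    # Stand-in tokenizer: whitespace splitting, not a real model tokenizer.
    return len(text.split())

def num_tokens_from_messages(messages: List[str]) -> int:
    # The documented return value: sum of tokens across the messages.
    return sum(num_tokens(m) for m in messages)

num_tokens_from_messages(["You are a helpful assistant.", "What is 2 + 2?"])
# → 10 (5 + 5 "tokens" under the whitespace stand-in)
```

A check like this is what the docs suggest using before a call, to see whether an input will fit in a model's context window.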
get_token_ids(text: str) → List[int]¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[Promp...
langchain.llms.llamacpp.LlamaCpp¶
class langchain.llms.llamacpp.LlamaCpp[source]¶
Bases: LLM
llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
fr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html |
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
Any additional parameters to pass to llama_cpp.Llama.
param model_path: str [Required]¶
The path to the Llama model file.
param n_batch: Optional[int] = 8¶
Number of tokens to process in parallel.
param temperature: Optional[float] = 0.8¶
The temperature to use for sampling.
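Temperature reshapes the sampling distribution: logits are divided by the temperature before the softmax, so values below 1.0 sharpen the distribution toward the top logit and values above 1.0 flatten it. The sketch below illustrates that standard technique with the stdlib; it is not llama.cpp's internal implementation.

```python
# Illustrative temperature-scaled softmax over raw logits. Standard
# technique, not llama.cpp internals.

import math
from typing import List

def softmax_with_temperature(logits: List[float],
                             temperature: float = 0.8) -> List[float]:
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

sharp = softmax_with_temperature([2.0, 1.0, 0.1], temperature=0.5)
flat = softmax_with_temperature([2.0, 1.0, 0.1], temperature=2.0)
# sharp[0] > flat[0]: lower temperature concentrates mass on the top logit
```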
param top_k: Optional[int] = 40¶
The top-k value to use for sampling.
param top_p: Optional[float] = 0.95¶
The top-p value to use for sampling.
param use_mlock: bool = False¶
Force system to keep model in RAM.
param use_mmap: Optional[bool] ...
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.