| id | text | source |
|---|---|---|
796629b81c5d-1 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html |
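The FakeListLLM chunk above describes an LLM that replays canned responses. A minimal pure-Python stand-in sketching that behavior (the class name is hypothetical; this is not the langchain implementation):

```python
class FakeListLLMSketch:
    """Minimal stand-in for langchain.llms.fake.FakeListLLM:
    replays pre-set responses in order, ignoring the prompt."""

    def __init__(self, responses):
        self.responses = responses
        self.i = 0

    def __call__(self, prompt: str, stop=None) -> str:
        # Return the next canned response regardless of the prompt.
        response = self.responses[self.i]
        self.i = (self.i + 1) % len(self.responses)
        return response


llm = FakeListLLMSketch(responses=["first", "second"])
print(llm("any prompt"))      # first
print(llm("another prompt"))  # second
```

A fake LLM like this is useful for testing chains deterministically without network calls.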
796629b81c5d-2 | Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html |
ea23a66c5063-0 | langchain.llms.modal.Modal¶
class langchain.llms.modal.Modal(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: str = '', model_k... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
ea23a66c5063-1 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
ea23a66c5063-2 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predi... | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
ea23a66c5063-3 | Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html |
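`lc_secrets` above maps constructor argument names to secret ids such as `{"openai_api_key": "OPENAI_API_KEY"}`. A hypothetical helper sketching how such a map could be resolved against environment variables (`resolve_secrets` is illustrative, not a langchain function):

```python
import os

def resolve_secrets(lc_secrets: dict) -> dict:
    """Resolve a {constructor_arg: SECRET_ENV_VAR} map against os.environ.
    Hypothetical helper for illustration; not part of langchain."""
    resolved = {}
    for arg_name, secret_id in lc_secrets.items():
        value = os.environ.get(secret_id)
        if value is None:
            raise KeyError(f"secret {secret_id!r} not set for {arg_name!r}")
        resolved[arg_name] = value
    return resolved

os.environ["OPENAI_API_KEY"] = "sk-example"  # demo value only
print(resolve_secrets({"openai_api_key": "OPENAI_API_KEY"}))
# {'openai_api_key': 'sk-example'}
```

Keeping only the env-var *name* in serialized form is what lets an LLM object round-trip through serialization without leaking the secret value.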
9cd988e95e84-0 | langchain.llms.ctransformers.CTransformers¶
class langchain.llms.ctransformers.CTransformers(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None,... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
9cd988e95e84-1 | The name of the model file in repo or directory.
param model_type: Optional[str] = None¶
The model type.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[U... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
9cd988e95e84-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
9cd988e95e84-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that ctransformers package is installed.
property lc_attrib... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html |
c7a53ecd1582-0 | langchain.llms.anthropic.Anthropic¶
class langchain.llms.anthropic.Anthropic(*, client: Any = None, model: str = 'claude-v1', max_tokens_to_sample: int = 256, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, streaming: bool = False, default_request_timeout: Optional[Union... | https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html |
c7a53ecd1582-1 | raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param AI... | https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html |
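The Anthropic example above wraps a raw prompt in Human/Assistant turn markers. A dependency-free sketch of that wrapping; the two string constants are assumed to mirror `anthropic.HUMAN_PROMPT` and `anthropic.AI_PROMPT`:

```python
# Assumed to mirror anthropic.HUMAN_PROMPT / anthropic.AI_PROMPT.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

def format_anthropic_prompt(raw_prompt: str) -> str:
    """Wrap a raw prompt in the Human/Assistant turn markers that the
    Anthropic completion API expects."""
    return f"{HUMAN_PROMPT} {raw_prompt}{AI_PROMPT}"

print(repr(format_anthropic_prompt("What are the biggest risks facing humanity?")))
```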
c7a53ecd1582-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html |
c7a53ecd1582-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html |
c7a53ecd1582-4 | BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Example
prompt =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html |
bef2261e0c36-0 | langchain.llms.amazon_api_gateway.AmazonAPIGateway¶
class langchain.llms.amazon_api_gateway.AmazonAPIGateway(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
bef2261e0c36-1 | param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
bef2261e0c36-2 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
bef2261e0c36-3 | constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html |
1d228ea51195-0 | langchain.llms.llamacpp.LlamaCpp¶
class langchain.llms.llamacpp.LlamaCpp(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None,... | https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html |
1d228ea51195-1 | Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param cal... | https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html |
1d228ea51195-2 | param n_parts: int = -1¶
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
param n_threads: Optional[int] = None¶
Number of threads to use.
If None, the number of threads is automatically determined.
param repeat_penalty: Optional[float] = 1.1¶
The penalty to apply to repe... | https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html |
1d228ea51195-3 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html |
1d228ea51195-4 | get_num_tokens(text: str) → int[source]¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html |
1d228ea51195-5 | stop: Optional list of stop words to use when generating.
Returns: A generator representing the stream of tokens being generated.
Yields: Dictionary-like objects containing a string token and metadata.
See llama-cpp-python docs and below for more.
Example:from langchain.llms import LlamaCpp
llm = LlamaCpp(
model_pa... | https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html |
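The stream API above yields dictionary-like objects containing a string token and metadata. A hypothetical generator (no llama-cpp-python dependency) sketching that shape, assuming chunks look like `{"choices": [{"text": ...}]}`:

```python
from typing import Dict, Iterator, List, Optional

def fake_token_stream(tokens: List[str],
                      stop: Optional[List[str]] = None) -> Iterator[Dict]:
    """Yield chunks shaped like the stream described above: a string
    token plus metadata, halting early when a stop word is reached."""
    for i, tok in enumerate(tokens):
        if stop is not None and tok in stop:
            return  # stop word reached; end the stream
        yield {"choices": [{"text": tok}], "index": i}

for chunk in fake_token_stream(["Hello", ",", " world", "\n"], stop=["\n"]):
    print(chunk["choices"][0]["text"], end="")
print()
```

Consuming chunk-by-chunk like this is what lets a caller render tokens as they arrive instead of waiting for the full completion.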
7c131ae1de42-0 | langchain.llms.self_hosted.SelfHostedPipeline¶
class langchain.llms.self_hosted.SelfHostedPipeline(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None,... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
7c131ae1de42-1 | )
def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"]
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
    model_load_fn=load_pipeline,
    hardware=gpu,
    model_reqs=model_reqs, inference_fn=inference_fn
)
Example for <2GB model (can be ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
7c131ae1de42-2 | param callbacks: Callbacks = None¶
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _generate_text>¶
Inference function to send to the remote hardware.
param load_fn_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model load functio... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
7c131ae1de42-3 | Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
7c131ae1de42-4 | Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html |
080f37d2ca29-0 | langchain.llms.promptlayer_openai.PromptLayerOpenAIChat¶
class langchain.llms.promptlayer_openai.PromptLayerOpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: O... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
080f37d2ca29-1 | Generation object.
Example
from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_specia... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
080f37d2ca29-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
080f37d2ca29-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
080f37d2ca29-4 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
c804a99938cf-0 | langchain.llms.writer.Writer¶
class langchain.llms.writer.Writer(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, writer_org_id: Optional[str... | https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html |
c804a99938cf-1 | param logprobs: bool = False¶
Whether to return log probabilities.
param max_tokens: Optional[int] = None¶
Maximum number of tokens to generate.
param min_tokens: Optional[int] = None¶
Minimum number of tokens to generate.
param model_id: str = 'palmyra-instruct'¶
Model name to use.
param n: Optional[int] = None¶
How m... | https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html |
c804a99938cf-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html |
c804a99938cf-3 | Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html |
5829508095cd-0 | langchain.llms.mosaicml.MosaicML¶
class langchain.llms.mosaicml.MosaicML(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: str =... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html |
5829508095cd-1 | Endpoint URL to use.
param inject_instruction_format: bool = False¶
Whether to inject the instruction format into the prompt.
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param mosaicml_api_token: Optional[str] = None¶
param retry_sleep: float = 1.0¶
How long to try sleeping for i... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html |
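`retry_sleep` above controls the pause between retries when the MosaicML endpoint pushes back. A hypothetical retry loop sketching how such a parameter is typically used (`call_with_retry` is illustrative, not langchain code):

```python
import time

def call_with_retry(fn, max_retries: int = 3, retry_sleep: float = 1.0):
    """Call fn(), sleeping retry_sleep seconds between failed attempts.
    Hypothetical helper; langchain's real retry logic may differ."""
    last_exc = None
    for _ in range(max_retries):
        try:
            return fn()
        except RuntimeError as exc:  # stand-in for a rate-limit error
            last_exc = exc
            time.sleep(retry_sleep)
    raise last_exc

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 2:
        raise RuntimeError("rate limited")
    return "ok"

print(call_with_retry(flaky, retry_sleep=0.01))  # ok
```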
5829508095cd-2 | Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html |
5829508095cd-3 | validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Valid... | https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html |
f33ededc423e-0 | langchain.llms.bedrock.Bedrock¶
class langchain.llms.bedrock.Bedrock(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, reg... | https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html |
f33ededc423e-1 | param model_id: str [Required]¶
Id of the model to call, e.g., amazon.titan-tg1-large; this is
equivalent to the modelId property in the list-foundation-models API
param model_kwargs: Optional[Dict] = None¶
Keyword arguments to pass to the model.
param region_name: Optional[str] = None¶
The AWS region, e.g., us-west-2.... | https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html |
f33ededc423e-2 | Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optiona... | https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html |
f33ededc423e-3 | Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImple... | https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html |
873c3557a30a-0 | langchain.llms.pipelineai.PipelineAI¶
class langchain.llms.pipelineai.PipelineAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, pipeline_ke... | https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html |
873c3557a30a-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html |
873c3557a30a-2 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html |
873c3557a30a-3 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html |
dfc53e433420-0 | langchain.llms.sagemaker_endpoint.ContentHandlerBase¶
class langchain.llms.sagemaker_endpoint.ContentHandlerBase[source]¶
Bases: Generic[INPUT_TYPE, OUTPUT_TYPE]
A handler class to transform input from LLM to a
format that SageMaker endpoint expects. Similarly,
the class also handles transforming output from the
SageM... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.ContentHandlerBase.html |
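ContentHandlerBase converts LLM input into the payload a SageMaker endpoint expects and parses the endpoint's response back. A stdlib-only sketch of that contract, assuming a JSON request/response shape (the class below is hypothetical and does not subclass the real langchain base):

```python
import json

class JSONContentHandlerSketch:
    """Sketch of the ContentHandlerBase contract described above:
    serialize the prompt into request bytes, parse response bytes back.
    Stdlib-only; does not subclass the real langchain base class."""
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        return json.loads(output.decode("utf-8"))["generated_text"]

handler = JSONContentHandlerSketch()
body = handler.transform_input("Hello", {"temperature": 0.1})
print(handler.transform_output(b'{"generated_text": "Hi there"}'))  # Hi there
```

Splitting the byte-level (de)serialization out into a handler is what lets one LLM wrapper talk to endpoints with different payload formats.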
54b1f8c87d3c-0 | langchain.llms.ai21.AI21PenaltyData¶
class langchain.llms.ai21.AI21PenaltyData(*, scale: int = 0, applyToWhitespaces: bool = True, applyToPunctuations: bool = True, applyToNumbers: bool = True, applyToStopwords: bool = True, applyToEmojis: bool = True)[source]¶
Bases: BaseModel
Parameters for AI21 penalty data.
Create ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21PenaltyData.html |
627103ebfe17-0 | langchain.llms.predictionguard.PredictionGuard¶
class langchain.llms.predictionguard.PredictionGuard(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]]... | https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html |
627103ebfe17-1 | Model name to use.
param output: Optional[Dict[str, Any]] = None¶
The output type or structure for controlling the LLM output.
param stop: Optional[List[str]] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.75¶
A non-negative float that tunes the degree of rand... | https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html |
627103ebfe17-2 | Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the... | https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html |
627103ebfe17-3 | validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Valid... | https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html |
0b3210d693ed-0 | langchain.llms.huggingface_pipeline.HuggingFacePipeline¶
class langchain.llms.huggingface_pipeline.HuggingFacePipeline(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: O... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
0b3210d693ed-1 | param callbacks: Callbacks = None¶
param model_id: str = 'gpt2'¶
Model name to use.
param model_kwargs: Optional[dict] = None¶
Keyword arguments passed to the model.
param pipeline_kwargs: Optional[dict] = None¶
Keyword arguments passed to the pipeline.
param tags: Optional[List[str]] = None¶
Tags to add to the run t... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
0b3210d693ed-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → LLM[source]¶
Construct the pipeline object from model_id and task.
generate(prompts: List[str],... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
0b3210d693ed-3 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, Ser... | https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html |
91a818efe2b8-0 | langchain.llms.bananadev.Banana¶
class langchain.llms.bananadev.Banana(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model_key: str = '', ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html |
91a818efe2b8-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[st... | https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html |
91a818efe2b8-2 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int... | https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html |
91a818efe2b8-3 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html |
636318da72d4-0 | langchain.llms.aviary.get_completions¶
langchain.llms.aviary.get_completions(model: str, prompt: str, use_prompt_format: bool = True, version: str = '') → Dict[str, Union[str, float, int]][source]¶
Get completions from Aviary models. | https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_completions.html |
ce3f859cdea8-0 | langchain.llms.loading.load_llm_from_config¶
langchain.llms.loading.load_llm_from_config(config: dict) → BaseLLM[source]¶
Load LLM from Config Dict. | https://api.python.langchain.com/en/latest/llms/langchain.llms.loading.load_llm_from_config.html |
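`load_llm_from_config` dispatches on the config dict to build an LLM. A hypothetical registry-based sketch of that pattern, using a stand-in `FakeLLM` class instead of real langchain LLMs:

```python
class FakeLLM:
    """Stand-in for a real LLM class; records its constructor kwargs."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

REGISTRY = {"fake": FakeLLM}  # the real loader maps many _type names

def load_llm_from_config_sketch(config: dict):
    """Dispatch on config["_type"] and pass the remaining keys to the
    matching constructor, mirroring the loader pattern described above."""
    config = dict(config)  # avoid mutating the caller's dict
    llm_type = config.pop("_type")
    if llm_type not in REGISTRY:
        raise ValueError(f"Loading {llm_type} LLM not supported")
    return REGISTRY[llm_type](**config)

llm = load_llm_from_config_sketch({"_type": "fake", "temperature": 0.5})
print(llm.kwargs)  # {'temperature': 0.5}
```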
7b45f439f63e-0 | langchain.llms.openai.OpenAIChat¶
class langchain.llms.openai.OpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None,... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html |
7b45f439f63e-1 | param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶
Set of special tokens that are not allowed.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param model_kwargs: Dict[s... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html |
7b45f439f63e-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html |
7b45f439f63e-3 | get_token_ids(text: str) → List[int][source]¶
Get the token IDs using the tiktoken package.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predi... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html |
7b45f439f63e-4 | Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html |
02b5a5fafa0a-0 | langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM¶
class langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callba... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html |
02b5a5fafa0a-1 | import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceLLM(
    model_id="google/flan-t5-large", task="text2text-generation",
    hardware=gpu
)
Example passing a fn that generates a pipeline (because the pipeline is not serializable): from langchain.llms import SelfHostedHuggi... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html |
02b5a5fafa0a-2 | param load_fn_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model load function.
param model_id: str = 'gpt2'¶
Hugging Face model_id to load the model.
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param model_load_fn: Callable = <function _load_transformer>¶
Fun... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html |
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html |
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save th... | https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html |
langchain.llms.textgen.TextGen¶
class langchain.llms.textgen.TextGen(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model_url: str, max_new... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
Parameters below taken from the text-generation-webui API example:
https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py
Example
from langchain.llms import TextGen
llm = TextGen(model_url="http://localhost:8500")
Create a new model by parsing and validating input data from keyword argumen... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
Only 0 or high values are a good idea in most cases.
param num_beams: Optional[int] = 1¶
Number of beams
param penalty_alpha: Optional[float] = 0¶
Penalty Alpha
param repetition_penalty: Optional[float] = 1.18¶
Exponential penalty factor for repeating prior tokens. 1 means no penalty,
higher value = less repetition, lo... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
param typical_p: Optional[float] = 1¶
If not set to 1, select only tokens that are at least this much more likely to
appear than random tokens, given the prior text.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
genera... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attr... | https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html |
langchain.llms.utils.enforce_stop_tokens¶
langchain.llms.utils.enforce_stop_tokens(text: str, stop: List[str]) → str[source]¶
Cut off the text as soon as any stop words occur. | https://api.python.langchain.com/en/latest/llms/langchain.llms.utils.enforce_stop_tokens.html |
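The documented behavior can be sketched with a minimal reimplementation (a sketch consistent with the docstring above, not necessarily the exact library code):

```python
import re
from typing import List


def enforce_stop_tokens(text: str, stop: List[str]) -> str:
    """Cut off the text as soon as any stop sequence occurs."""
    # Split on the first occurrence of any stop sequence and keep the prefix.
    pattern = "|".join(re.escape(s) for s in stop)
    return re.split(pattern, text, maxsplit=1)[0]


print(enforce_stop_tokens("Answer: 42\nObservation: tool output", ["\nObservation:"]))
# → Answer: 42
```

This is the standard trick agents use to prevent a model from hallucinating tool output past a stop sequence the serving API did not enforce.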
langchain.llms.sagemaker_endpoint.SagemakerEndpoint¶
class langchain.llms.sagemaker_endpoint.SagemakerEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
The content handler class that provides the input and output transform functions
to handle formats between the LLM and the endpoint.
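To illustrate the interface, here is a hedged sketch of a content handler for a hypothetical JSON endpoint. The attribute and method names follow the pattern documented for LangChain content handlers, but the payload shape (`{"inputs": ...}` in, `[{"generated_text": ...}]` out) is an assumption about one particular model container and must be adapted to yours:

```python
import json


class SimpleJSONContentHandler:
    """Illustrative handler translating between the LLM and a JSON endpoint.

    The request/response payload shape is an assumption; adjust it to your
    model container's contract.
    """

    content_type = "application/json"  # MIME type sent to the endpoint
    accepts = "application/json"       # MIME type expected back

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Serialize the prompt plus any model parameters into the request body.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Extract the generated text from the endpoint's JSON response.
        return json.loads(output)[0]["generated_text"]


handler = SimpleJSONContentHandler()
print(handler.transform_input("Tell me a joke", {"temperature": 0.7}))
```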
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If n... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **... | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html |
langchain.llms.databricks.get_repl_context¶
langchain.llms.databricks.get_repl_context() → Any[source]¶
Gets the notebook REPL context if running inside a Databricks notebook.
Returns None otherwise. | https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_repl_context.html |
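A minimal sketch of the documented behavior: attempt the Databricks-runtime-only import and fall back to None outside a notebook. The `dbruntime.databricks_repl_context` module path is what Databricks runtimes are known to expose, but treat the exact import as an assumption rather than the library's verbatim implementation:

```python
from typing import Any


def get_repl_context() -> Any:
    """Return the Databricks notebook REPL context, or None if unavailable."""
    try:
        # Only importable inside a Databricks runtime (assumed module path).
        from dbruntime.databricks_repl_context import get_context
        return get_context()
    except ImportError:
        return None


print(get_repl_context())  # None outside a Databricks notebook
```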
langchain.llms.beam.Beam¶
class langchain.llms.beam.Beam(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model_name: str = '', name: str = '... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
6252bee07afe-1 | "safetensors",
"xformers",],
max_length=50)
llm._deploy()
call_result = llm._call(input)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param app_id: Optional[str] = None¶
param beam_client_id: ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[Bas... | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
langchain.llms.aviary.get_models¶
langchain.llms.aviary.get_models() → List[str][source]¶
List available models.
langchain.llms.openai.update_token_usage¶
langchain.llms.openai.update_token_usage(keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]) → None[source]¶
Update token usage. | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.update_token_usage.html |
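Given the signature, a plausible sketch accumulates the requested usage counters from an OpenAI-style response dict into a running total in place. The `"usage"` key with `prompt_tokens`/`completion_tokens`/`total_tokens` counters follows OpenAI's completion response format; the exact library code may differ:

```python
from typing import Any, Dict, Set


def update_token_usage(
    keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]
) -> None:
    """Accumulate the requested usage counters from a response, in place."""
    # Only touch counters that the response actually reports.
    for key in keys.intersection(response["usage"]):
        token_usage[key] = token_usage.get(key, 0) + response["usage"][key]


usage: Dict[str, Any] = {}
update_token_usage(
    {"prompt_tokens", "completion_tokens"},
    {"usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12}},
    usage,
)
print(usage)  # prompt_tokens and completion_tokens accumulated; total_tokens skipped
```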
langchain.llms.openai.OpenAI¶
class langchain.llms.openai.OpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html |
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶
S... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html |
param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param requ... | https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html |