langchain.llms.fake.FakeListLLM

async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int
  Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config
  Bases: object
  Configuration for this pydantic object.
  arbitrary_types_allowed = True
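A minimal usage sketch, assuming the class's responses constructor field (the constructor is not shown in the truncated listing above): FakeListLLM returns canned strings in order, which makes chain logic testable without network calls.

    from langchain.llms.fake import FakeListLLM

    # The fake LLM returns each canned response in sequence, regardless of prompt.
    llm = FakeListLLM(responses=["first canned answer", "second canned answer"])

    print(llm("any prompt"))  # -> "first canned answer"
    print(llm("any prompt"))  # -> "second canned answer"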
https://api.python.langchain.com/en/latest/llms/langchain.llms.fake.FakeListLLM.html
langchain.llms.modal.Modal

class langchain.llms.modal.Modal(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: str = '', model_kwargs: Dict[str, Any] = None)[source]
Bases: LLM

Wrapper around Modal large language models.

To use, you should have the modal-client python package installed. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class.

Example
    from langchain.llms import Modal
    modal = Modal(endpoint_url="")

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param endpoint_url: str = ''
  Model endpoint to use.
param model_kwargs: Dict[str, Any] [Optional]
  Holds any model parameters valid for the create call that are not explicitly specified.
param tags: Optional[List[str]] = None
  Tags to add to the run trace.
param verbose: bool [Optional]
  Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
  Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator build_extra » all fields[source]
  Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int
  Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config[source]
  Bases: object
  Configuration for this pydantic object.
  extra = 'forbid'
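A slightly fuller sketch than the one-liner above; the endpoint URL is a placeholder, and a deployed Modal web endpoint that accepts a prompt and returns generated text is assumed:

    from langchain.llms import Modal

    # endpoint_url below is hypothetical; substitute your deployed endpoint.
    llm = Modal(
        endpoint_url="https://your-workspace--your-app.modal.run",
        model_kwargs={"max_tokens": 128, "temperature": 0.7},  # forwarded with the call
    )
    print(llm("Describe serverless GPUs in one sentence."))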
https://api.python.langchain.com/en/latest/llms/langchain.llms.modal.Modal.html
langchain.llms.ctransformers.CTransformers

class langchain.llms.ctransformers.CTransformers(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str, model_type: Optional[str] = None, model_file: Optional[str] = None, config: Optional[Dict[str, Any]] = None, lib: Optional[str] = None)[source]
Bases: LLM

Wrapper around the C Transformers LLM interface.

To use, you should have the ctransformers python package installed. See https://github.com/marella/ctransformers

Example
    from langchain.llms import CTransformers
    llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param config: Optional[Dict[str, Any]] = None
  The config parameters. See https://github.com/marella/ctransformers#config
param lib: Optional[str] = None
  The path to a shared library, or one of avx2, avx, basic.
param model: str [Required]
  The path to a model file or directory, or the name of a Hugging Face Hub model repo.
param model_file: Optional[str] = None
  The name of the model file in the repo or directory.
param model_type: Optional[str] = None
  The model type.
param tags: Optional[List[str]] = None
  Tags to add to the run trace.
param verbose: bool [Optional]
  Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
  Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int
  Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
  Validate that the ctransformers package is installed.
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config
  Bases: object
  Configuration for this pydantic object.
  arbitrary_types_allowed = True
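A sketch under the parameters above; the Hub repo name and config values are illustrative, not tuned recommendations:

    from langchain.llms import CTransformers

    # config keys follow the ctransformers Config documentation linked above.
    llm = CTransformers(
        model="marella/gpt-2-ggml",   # assumed Hugging Face Hub repo holding a GGML file
        model_type="gpt2",
        config={"max_new_tokens": 64, "temperature": 0.8},
    )
    print(llm("AI is going to"))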
https://api.python.langchain.com/en/latest/llms/langchain.llms.ctransformers.CTransformers.html
langchain.llms.anthropic.Anthropic

class langchain.llms.anthropic.Anthropic(*, client: Any = None, model: str = 'claude-v1', max_tokens_to_sample: int = 256, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, streaming: bool = False, default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None, anthropic_api_url: Optional[str] = None, anthropic_api_key: Optional[str] = None, HUMAN_PROMPT: Optional[str] = None, AI_PROMPT: Optional[str] = None, count_tokens: Optional[Callable[[str], int]] = None, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None)[source]
Bases: LLM, _AnthropicCommon

Wrapper around Anthropic's large language models.

To use, you should have the anthropic python package installed, and the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the constructor.

Example
    import anthropic
    from langchain.llms import Anthropic
    model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")

    # Simplest invocation, automatically wrapped with HUMAN_PROMPT
    # and AI_PROMPT.
    response = model("What are the biggest risks facing humanity?")

    # Or if you want to use the chat mode, build a few-shot prompt, or
    # put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
    raw_prompt = "What are the biggest risks facing humanity?"
    prompt = f"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}"
    response = model(prompt)

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param AI_PROMPT: Optional[str] = None
param HUMAN_PROMPT: Optional[str] = None
param anthropic_api_key: Optional[str] = None
param anthropic_api_url: Optional[str] = None
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None
param count_tokens: Optional[Callable[[str], int]] = None
param default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None
  Timeout for requests to the Anthropic Completion API. Default is 600 seconds.
param max_tokens_to_sample: int = 256
  Denotes the number of tokens to predict per generation.
param model: str = 'claude-v1'
  Model name to use.
param streaming: bool = False
  Whether to stream the results.
param tags: Optional[List[str]] = None
  Tags to add to the run trace.
param temperature: Optional[float] = None
  A non-negative float that tunes the degree of randomness in generation.
param top_k: Optional[int] = None
  Number of most likely tokens to consider at each step.
param top_p: Optional[float] = None
  Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]
  Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
  Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int[source]
  Calculate the number of tokens.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
validator raise_warning » all fields[source]
  Raise a warning that this class is deprecated.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator[source]
  Call Anthropic completion_stream and return the resulting generator.
  BETA: this is a beta feature while we figure out the right abstraction.
  Once that happens, this interface could change.
  Parameters:
    prompt – The prompt to pass into the model.
    stop – Optional list of stop words to use when generating.
  Returns: A generator representing the stream of tokens from Anthropic.
  Example:
    prompt = "Write a poem about a stream."
    prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
    generator = anthropic.stream(prompt)
    for token in generator:
        print(token, end="", flush=True)
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields
  Validate that the API key and python package exist in the environment.
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config
  Bases: object
  Configuration for this pydantic object.
  arbitrary_types_allowed = True
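A short sketch of batched generation with the parameters documented above; LLMResult.generations holds one list of Generation objects per input prompt:

    from langchain.llms import Anthropic

    # Assumes ANTHROPIC_API_KEY is set in the environment.
    llm = Anthropic(model="claude-v1", max_tokens_to_sample=256, temperature=0.7)

    result = llm.generate([
        "Summarize the water cycle in two sentences.",
        "Name three uses of a paperclip.",
    ])
    for generations in result.generations:
        print(generations[0].text)  # first completion for each prompt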
https://api.python.langchain.com/en/latest/llms/langchain.llms.anthropic.Anthropic.html
langchain.llms.amazon_api_gateway.AmazonAPIGateway

class langchain.llms.amazon_api_gateway.AmazonAPIGateway(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, api_url: str, headers: Optional[Dict] = None, model_kwargs: Optional[Dict] = None, content_handler: ContentHandlerAmazonAPIGateway = <langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>)[source]
Bases: LLM

Wrapper around a custom Amazon API Gateway endpoint.

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param api_url: str [Required]
  API Gateway URL.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param content_handler: ContentHandlerAmazonAPIGateway = <langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>
  The content handler class that provides input and output transform functions to handle formats between the LLM and the endpoint.
param headers: Optional[Dict] = None
  API Gateway HTTP headers to send, e.g. for authentication.
param model_kwargs: Optional[Dict] = None
  Keyword arguments to pass to the model.
param tags: Optional[List[str]] = None
  Tags to add to the run trace.
param verbose: bool [Optional]
  Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
  Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int
  Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config[source]
  Bases: object
  Configuration for this pydantic object.
  extra = 'forbid'
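A sketch with placeholder URL and header values; the header shown assumes API-key authorization is configured on the gateway:

    from langchain.llms.amazon_api_gateway import AmazonAPIGateway

    llm = AmazonAPIGateway(
        api_url="https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>",
        headers={"x-api-key": "<your-api-key>"},   # placeholder auth header
        model_kwargs={"temperature": 0.3},
    )
    print(llm("Explain Amazon API Gateway in one sentence."))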
https://api.python.langchain.com/en/latest/llms/langchain.llms.amazon_api_gateway.AmazonAPIGateway.html
langchain.llms.llamacpp.LlamaCpp

class langchain.llms.llamacpp.LlamaCpp(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model_path: str, lora_base: Optional[str] = None, lora_path: Optional[str] = None, n_ctx: int = 512, n_parts: int = -1, seed: int = -1, f16_kv: bool = True, logits_all: bool = False, vocab_only: bool = False, use_mlock: bool = False, n_threads: Optional[int] = None, n_batch: Optional[int] = 8, n_gpu_layers: Optional[int] = None, suffix: Optional[str] = None, max_tokens: Optional[int] = 256, temperature: Optional[float] = 0.8, top_p: Optional[float] = 0.95, logprobs: Optional[int] = None, echo: Optional[bool] = False, stop: Optional[List[str]] = [], repeat_penalty: Optional[float] = 1.1, top_k: Optional[int] = 40, last_n_tokens_size: Optional[int] = 64, use_mmap: Optional[bool] = True, streaming: bool = True)[source]
Bases: LLM

Wrapper around the llama.cpp model.

To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor. Check out: https://github.com/abetlen/llama-cpp-python

Example
    from langchain.llms import LlamaCpp
    llm = LlamaCpp(model_path="/path/to/llama/model")

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param echo: Optional[bool] = False
  Whether to echo the prompt.
param f16_kv: bool = True
  Use half-precision for the key/value cache.
param last_n_tokens_size: Optional[int] = 64
  The number of tokens to look back at when applying the repeat_penalty.
param logits_all: bool = False
  Return logits for all tokens, not just the last token.
param logprobs: Optional[int] = None
  The number of logprobs to return. If None, no logprobs are returned.
param lora_base: Optional[str] = None
  The path to the Llama LoRA base model.
param lora_path: Optional[str] = None
  The path to the Llama LoRA. If None, no LoRA is loaded.
param max_tokens: Optional[int] = 256
  The maximum number of tokens to generate.
param model_path: str [Required]
  The path to the Llama model file.
param n_batch: Optional[int] = 8
  Number of tokens to process in parallel. Should be a number between 1 and n_ctx.
param n_ctx: int = 512
  Token context window.
param n_gpu_layers: Optional[int] = None
  Number of layers to be loaded into GPU memory. Default None.
param n_parts: int = -1
  Number of parts to split the model into. If -1, the number of parts is automatically determined.
param n_threads: Optional[int] = None
  Number of threads to use. If None, the number of threads is automatically determined.
param repeat_penalty: Optional[float] = 1.1
  The penalty to apply to repeated tokens.
param seed: int = -1
  Seed. If -1, a random seed is used.
param stop: Optional[List[str]] = []
  A list of strings to stop generation when encountered.
param streaming: bool = True
  Whether to stream the results, token by token.
param suffix: Optional[str] = None
  A suffix to append to the generated text. If None, no suffix is appended.
param tags: Optional[List[str]] = None
  Tags to add to the run trace.
param temperature: Optional[float] = 0.8
  The temperature to use for sampling.
param top_k: Optional[int] = 40
  The top-k value to use for sampling.
param top_p: Optional[float] = 0.95
  The top-p value to use for sampling.
param use_mlock: bool = False
  Force the system to keep the model in RAM.
param use_mmap: Optional[bool] = True
  Whether to keep the model loaded in RAM.
param verbose: bool [Optional]
  Whether to print out response text.
param vocab_only: bool = False
  Only load the vocabulary, no weights.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
  Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int[source]
  Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None) → Generator[Dict, None, None][source]
  Yields result objects as they are generated in real time.
  BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change.
  It also calls the callback manager's on_llm_new_token event with similar parameters to the OpenAI LLM class method of the same name.
  Parameters:
    prompt – The prompt to pass into the model.
    stop – Optional list of stop words to use when generating.
  Returns: A generator representing the stream of tokens being generated.
  Yields: Dictionary-like objects containing a string token and metadata. See the llama-cpp-python docs and below for more.
  Example:
    from langchain.llms import LlamaCpp
    llm = LlamaCpp(
        model_path="/path/to/local/model.bin",
        temperature=0.5,
    )
    for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'", stop=["'", "\n"]):
        result = chunk["choices"][0]
        print(result["text"], end="", flush=True)
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
  Validate that the llama-cpp-python library is installed.
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config
  Bases: object
  Configuration for this pydantic object.
  arbitrary_types_allowed = True
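For contrast with the streaming example above, a non-streaming sketch; the model path is a placeholder and the parameter values are illustrative:

    from langchain.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="/path/to/ggml-model.bin",  # placeholder local model file
        n_ctx=2048,
        n_threads=8,
        temperature=0.5,
        max_tokens=128,
        streaming=False,
    )
    # get_num_tokens uses the loaded model's own tokenizer.
    print(llm.get_num_tokens("How many tokens is this sentence?"))
    print(llm("Q: Name the planets in the solar system. A:", stop=["Q:"]))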
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
langchain.llms.self_hosted.SelfHostedPipeline

class langchain.llms.self_hosted.SelfHostedPipeline(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, pipeline_ref: Any = None, client: Any = None, inference_fn: Callable = <function _generate_text>, hardware: Any = None, model_load_fn: Callable, load_fn_kwargs: Optional[dict] = None, model_reqs: List[str] = ['./', 'torch'])[source]
Bases: LLM

Run model inference on self-hosted remote hardware.

Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).

To use, you should have the runhouse python package installed.

Example for custom pipeline and inference functions:
    from langchain.llms import SelfHostedPipeline
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
    import runhouse as rh

    def load_pipeline():
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        return pipeline(
            "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
        )

    def inference_fn(pipeline, prompt, stop=None):
        return pipeline(prompt)[0]["generated_text"]

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    llm = SelfHostedPipeline(
        model_load_fn=load_pipeline,
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
        inference_fn=inference_fn,
    )

Example for a <2GB model (can be serialized and sent directly to the server):
    from langchain.llms import SelfHostedPipeline
    import runhouse as rh

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    my_model = ...
    llm = SelfHostedPipeline.from_pipeline(
        pipeline=my_model,
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
    )

Example passing a model path for larger models:
    from langchain.llms import SelfHostedPipeline
    import runhouse as rh
    import pickle
    from transformers import pipeline

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    generator = pipeline(model="gpt2")
    rh.blob(pickle.dumps(generator), path="models/pipeline.pkl").save().to(gpu, path="models")
    llm = SelfHostedPipeline.from_pipeline(
        pipeline="models/pipeline.pkl",
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
    )

Init the pipeline with an auxiliary function. The load function must be in global scope to be imported and run on the server, i.e. in a module and not a REPL or closure. Then, initialize the remote inference function.

param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param hardware: Any = None
  Remote hardware to send the inference function to.
param inference_fn: Callable = <function _generate_text>
  Inference function to send to the remote hardware.
param load_fn_kwargs: Optional[dict] = None
  Keyword arguments to pass to the model load function.
param model_load_fn: Callable [Required]
  Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'torch']
  Requirements to install on the hardware to run inference with the model.
param tags: Optional[List[str]] = None
  Tags to add to the run trace.
param verbose: bool [Optional]
  Whether to print out response text.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
  Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM[source]
  Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int
  Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config[source]
  Bases: object
  Configuration for this pydantic object.
  extra = 'forbid'
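A condensed sketch of the from_pipeline path documented above; the cluster name and model are illustrative:

    import runhouse as rh
    from langchain.llms import SelfHostedPipeline
    from transformers import pipeline

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")  # auto-launched instance

    # A small model's pipeline object can be serialized and shipped directly.
    pipe = pipeline("text-generation", model="gpt2")
    llm = SelfHostedPipeline.from_pipeline(
        pipeline=pipe,
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
        device=0,  # GPU index on the remote host
    )
    print(llm("Once upon a time"))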
https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted.SelfHostedPipeline.html
langchain.llms.promptlayer_openai.PromptLayerOpenAIChat

class langchain.llms.promptlayer_openai.PromptLayerOpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model_name: str = 'gpt-3.5-turbo', model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_proxy: Optional[str] = None, max_retries: int = 6, prefix_messages: List = None, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)[source]
Bases: OpenAIChat

Wrapper around OpenAI large language models.

To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer key respectively.

All parameters that can be passed to the OpenAIChat LLM can also be passed here. The PromptLayerOpenAIChat adds two optional parameters:
  pl_tags – List of strings to tag the request with.
  return_pl_id – If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object.

Example
    from langchain.llms import PromptLayerOpenAIChat
    openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}
  Set of special tokens that are allowed.
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'
  Set of special tokens that are not allowed.
param max_retries: int = 6
  Maximum number of retries to make when generating.
param model_kwargs: Dict[str, Any] [Optional]
  Holds any model parameters valid for the create call that are not explicitly specified.
param model_name: str = 'gpt-3.5-turbo'
  Model name to use.
param openai_api_base: Optional[str] = None
param openai_api_key: Optional[str] = None
param openai_proxy: Optional[str] = None
param pl_tags: Optional[List[str]] = None
param prefix_messages: List [Optional]
  Series of messages for Chat input.
param return_pl_id: Optional[bool] = False
param streaming: bool = False
  Whether to stream the results or not.
param tags: Optional[List[str]] = None
  Tags to add to the run trace.
param verbose: bool [Optional]
  Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
  Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator build_extra » all fields
  Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int
  Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs using the tiktoken package.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields
  Validate that the API key and python package exist in the environment.
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config
  Bases: object
  Configuration for this pydantic object.
  arbitrary_types_allowed = True
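A sketch of the two PromptLayer-specific parameters in use; the generation_info key name at the end is an assumption, not confirmed by the listing above:

    from langchain.llms import PromptLayerOpenAIChat

    # Assumes OPENAI_API_KEY and PROMPTLAYER_API_KEY are set in the environment.
    llm = PromptLayerOpenAIChat(
        model_name="gpt-3.5-turbo",
        pl_tags=["langchain", "experiment-1"],  # tags visible in the PromptLayer dashboard
        return_pl_id=True,                      # surface the request ID in generation_info
    )

    result = llm.generate(["What is the capital of France?"])
    generation = result.generations[0][0]
    print(generation.text)
    print(generation.generation_info.get("pl_request_id"))  # assumed key name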
https://api.python.langchain.com/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html
langchain.llms.writer.Writer

class langchain.llms.writer.Writer(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, writer_org_id: Optional[str] = None, model_id: str = 'palmyra-instruct', min_tokens: Optional[int] = None, max_tokens: Optional[int] = None, temperature: Optional[float] = None, top_p: Optional[float] = None, stop: Optional[List[str]] = None, presence_penalty: Optional[float] = None, repetition_penalty: Optional[float] = None, best_of: Optional[int] = None, logprobs: bool = False, n: Optional[int] = None, writer_api_key: Optional[str] = None, base_url: Optional[str] = None)[source]
Bases: LLM

Wrapper around Writer large language models.

To use, you should have the environment variables WRITER_API_KEY and WRITER_ORG_ID set with your API key and organization ID respectively.

Example
    from langchain import Writer
    writer = Writer(model_id="palmyra-base")

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.

param base_url: Optional[str] = None
  Base URL to use; if None, it is decided based on the model name.
param best_of: Optional[int] = None
  Generates this many completions server-side and returns the "best".
param cache: Optional[bool] = None
param callback_manager: Optional[BaseCallbackManager] = None
param callbacks: Callbacks = None
param logprobs: bool = False
  Whether to return log probabilities.
param max_tokens: Optional[int] = None
  Maximum number of tokens to generate.
param min_tokens: Optional[int] = None
  Minimum number of tokens to generate.
param model_id: str = 'palmyra-instruct'
  Model name to use.
param n: Optional[int] = None
  How many completions to generate.
param presence_penalty: Optional[float] = None
  Penalizes repeated tokens regardless of frequency.
param repetition_penalty: Optional[float] = None
  Penalizes repeated tokens according to frequency.
param stop: Optional[List[str]] = None
  Sequences at which completion generation will stop.
param tags: Optional[List[str]] = None
  Tags to add to the run trace.
param temperature: Optional[float] = None
  What sampling temperature to use.
param top_p: Optional[float] = None
  Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]
  Whether to print out response text.
param writer_api_key: Optional[str] = None
  Writer API key.
param writer_org_id: Optional[str] = None
  Writer organization ID.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str
  Check the cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
dict(**kwargs: Any) → Dict
  Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult
  Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult
  Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int
  Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
  Get the number of tokens in the messages.
get_token_ids(text: str) → List[int]
  Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
  Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
  Predict message from messages.
validator raise_deprecation » all fields
  Raise a deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None
  Save the LLM.
  Parameters: file_path – Path to the file to save the LLM to.
  Example:
    llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose
  If verbose is None, set it. This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
validator validate_environment » all fields[source]
  Validate that the API key and organization ID exist in the environment.
property lc_attributes: Dict
  Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
  Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].
property lc_secrets: Dict[str, str]
  Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
  Return whether or not the class is serializable.
model Config[source]
  Bases: object
  Configuration for this pydantic object.
  extra = 'forbid'
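A minimal sketch under the environment-variable setup described above; the sampling values are illustrative:

    from langchain import Writer

    # Assumes WRITER_API_KEY and WRITER_ORG_ID are set; they can also be passed
    # as the writer_api_key / writer_org_id constructor parameters.
    llm = Writer(
        model_id="palmyra-instruct",
        temperature=0.7,
        max_tokens=200,
        stop=["\n\n"],
    )
    print(llm("Write a tagline for a note-taking app."))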
https://api.python.langchain.com/en/latest/llms/langchain.llms.writer.Writer.html
langchain.llms.mosaicml.MosaicML¶ class langchain.llms.mosaicml.MosaicML(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict', inject_instruction_format: bool = False, model_kwargs: Optional[dict] = None, retry_sleep: float = 1.0, mosaicml_api_token: Optional[str] = None)[source]¶ Bases: LLM Wrapper around MosaicML’s LLM inference service. To use, you should have the environment variable MOSAICML_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.llms import MosaicML endpoint_url = ( "https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict" ) mosaic_llm = MosaicML( endpoint_url=endpoint_url, mosaicml_api_token="my-api-key" ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'¶ Endpoint URL to use. param inject_instruction_format: bool = False¶
Whether to inject the instruction format into the prompt. param model_kwargs: Optional[dict] = None¶ Keyword arguments to pass to the model. param mosaicml_api_token: Optional[str] = None¶ param retry_sleep: float = 1.0¶ How long to sleep between retries if a rate limit is encountered. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages.
dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and Python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
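A hedged call sketch building on the constructor example above; it assumes MOSAICML_API_TOKEN is set and that the default MPT-7B-Instruct endpoint accepts a max_new_tokens model kwarg:

from langchain.llms import MosaicML

# inject_instruction_format=True wraps the raw prompt in the MPT
# instruction template before sending it to the endpoint.
llm = MosaicML(
    inject_instruction_format=True,
    model_kwargs={"max_new_tokens": 128},  # forwarded to the hosted model
)

print(llm("What is one difference between a list and a tuple in Python?"))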
https://api.python.langchain.com/en/latest/llms/langchain.llms.mosaicml.MosaicML.html
langchain.llms.bedrock.Bedrock¶ class langchain.llms.bedrock.Bedrock(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, region_name: Optional[str] = None, credentials_profile_name: Optional[str] = None, model_id: str, model_kwargs: Optional[Dict] = None)[source]¶ Bases: LLM LLM provider to invoke Bedrock models. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Bedrock service. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param credentials_profile_name: Optional[str] = None¶ The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html param model_id: str [Required]¶
ID of the model to call, e.g., amazon.titan-tg1-large; this is equivalent to the modelId property in the list-foundation-models API. param model_kwargs: Optional[Dict] = None¶ Keyword arguments to pass to the model. param region_name: Optional[str] = None¶ The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if it is not provided here. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python
llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that AWS credentials and the required Python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
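The page above has no inline example, so here is a minimal sketch, assuming boto3 credentials are configured and the account has been granted access to the named model; the model_id and kwargs are illustrative:

from langchain.llms import Bedrock

# Uses the default boto3 credential chain unless credentials_profile_name
# is supplied; region_name falls back to AWS_DEFAULT_REGION.
llm = Bedrock(
    model_id="amazon.titan-tg1-large",   # illustrative; see list-foundation-models
    region_name="us-west-2",
    model_kwargs={"temperature": 0.2},   # provider-specific generation kwargs
)

print(llm("Summarize the benefits of infrastructure as code in two sentences."))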
https://api.python.langchain.com/en/latest/llms/langchain.llms.bedrock.Bedrock.html
langchain.llms.pipelineai.PipelineAI¶ class langchain.llms.pipelineai.PipelineAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, pipeline_key: str = '', pipeline_kwargs: Dict[str, Any] = None, pipeline_api_key: Optional[str] = None)[source]¶ Bases: LLM, BaseModel Wrapper around PipelineAI large language models. To use, you should have the pipeline-ai python package installed, and the environment variable PIPELINE_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example from langchain import PipelineAI pipeline = PipelineAI(pipeline_key="") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param pipeline_api_key: Optional[str] = None¶ param pipeline_key: str = ''¶ The id or tag of the target pipeline param pipeline_kwargs: Dict[str, Any] [Optional]¶ Holds any pipeline parameters valid for create call not explicitly specified. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and Python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
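A slightly fuller sketch than the example above, assuming PIPELINE_API_KEY is set; the pipeline key and kwargs are placeholders for a real deployment:

from langchain import PipelineAI

llm = PipelineAI(
    pipeline_key="my-pipeline:v1",        # placeholder id/tag of the target pipeline
    pipeline_kwargs={"max_length": 100},  # forwarded to the pipeline run
)

print(llm("Explain what a deployed inference pipeline is."))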
https://api.python.langchain.com/en/latest/llms/langchain.llms.pipelineai.PipelineAI.html
langchain.llms.sagemaker_endpoint.ContentHandlerBase¶ class langchain.llms.sagemaker_endpoint.ContentHandlerBase[source]¶ Bases: Generic[INPUT_TYPE, OUTPUT_TYPE] A handler class to transform input from the LLM to a format that the SageMaker endpoint expects. Similarly, the class also handles transforming output from the SageMaker endpoint to a format that the LLM class expects. Methods __init__() transform_input(prompt, model_kwargs) Transforms the input to a format that the model can accept as the request body. transform_output(output) Transforms the output from the model to a string that the LLM class expects. Attributes accepts The MIME type of the response data returned from the endpoint content_type The MIME type of the input data passed to the endpoint abstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) → bytes[source]¶ Transforms the input to a format that the model can accept as the request body. Should return bytes or a seekable file-like object in the format specified in the content_type request header. abstract transform_output(output: bytes) → OUTPUT_TYPE[source]¶ Transforms the output from the model to a string that the LLM class expects. accepts: Optional[str] = 'text/plain'¶ The MIME type of the response data returned from the endpoint content_type: Optional[str] = 'text/plain'¶ The MIME type of the input data passed to the endpoint
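A hedged sketch of a concrete subclass for a JSON-in/JSON-out endpoint; the request and response shapes below are assumptions and must match whatever model is actually deployed:

import json
from typing import Dict

from langchain.llms.sagemaker_endpoint import ContentHandlerBase


class JSONContentHandler(ContentHandlerBase[str, str]):
    """Assumes the endpoint accepts {"inputs": ..., "parameters": ...}
    and returns a JSON list like [{"generated_text": ...}]."""

    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Serialize the prompt and generation kwargs into the request body.
        payload = {"inputs": prompt, "parameters": model_kwargs}
        return json.dumps(payload).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Pull the generated text back out of the endpoint's response bytes.
        response = json.loads(output.decode("utf-8"))
        return response[0]["generated_text"]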
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.ContentHandlerBase.html
langchain.llms.ai21.AI21PenaltyData¶ class langchain.llms.ai21.AI21PenaltyData(*, scale: int = 0, applyToWhitespaces: bool = True, applyToPunctuations: bool = True, applyToNumbers: bool = True, applyToStopwords: bool = True, applyToEmojis: bool = True)[source]¶ Bases: BaseModel Parameters for AI21 penalty data. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param applyToEmojis: bool = True¶ param applyToNumbers: bool = True¶ param applyToPunctuations: bool = True¶ param applyToStopwords: bool = True¶ param applyToWhitespaces: bool = True¶ param scale: int = 0¶
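A hedged sketch of the penalty object in use; routing it through the AI21 wrapper's presencePenalty field and the model name shown are assumptions that mirror the AI21 API's camelCase naming:

from langchain.llms import AI21
from langchain.llms.ai21 import AI21PenaltyData

# Raise the penalty scale but leave whitespace unpenalized.
penalty = AI21PenaltyData(scale=3, applyToWhitespaces=False)

# AI21_API_KEY is read from the environment when not passed explicitly;
# "j2-jumbo-instruct" is an illustrative model name.
llm = AI21(model="j2-jumbo-instruct", presencePenalty=penalty)

print(llm("Write a haiku about the sea."))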
https://api.python.langchain.com/en/latest/llms/langchain.llms.ai21.AI21PenaltyData.html
langchain.llms.predictionguard.PredictionGuard¶ class langchain.llms.predictionguard.PredictionGuard(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: Optional[str] = 'MPT-7B-Instruct', output: Optional[Dict[str, Any]] = None, max_tokens: int = 256, temperature: float = 0.75, token: Optional[str] = None, stop: Optional[List[str]] = None)[source]¶ Bases: LLM Wrapper around Prediction Guard large language models. To use, you should have the predictionguard python package installed, and the environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass it as a named parameter to the constructor. To use Prediction Guard’s API along with OpenAI models, set the environment variable OPENAI_API_KEY with your OpenAI API key as well. Example pgllm = PredictionGuard(model="MPT-7B-Instruct", token="my-access-token", output={ "type": "boolean" }) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param max_tokens: int = 256¶ Denotes the number of tokens to predict per generation. param model: Optional[str] = 'MPT-7B-Instruct'¶ Model name to use.
param output: Optional[Dict[str, Any]] = None¶ The output type or structure for controlling the LLM output. param stop: Optional[List[str]] = None¶ param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.75¶ A non-negative float that tunes the degree of randomness in generation. param token: Optional[str] = None¶ Your Prediction Guard access token. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages.
dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the access token and Python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
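Continuing the constructor example above, a hedged call sketch; with output={"type": "boolean"} the service is asked to constrain the completion structure, and PREDICTIONGUARD_TOKEN is assumed to be set:

from langchain.llms import PredictionGuard

pgllm = PredictionGuard(
    model="MPT-7B-Instruct",
    output={"type": "boolean"},  # constrain the structure of the completion
)

print(pgllm("Is the sky blue on a clear day?"))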
https://api.python.langchain.com/en/latest/llms/langchain.llms.predictionguard.PredictionGuard.html
langchain.llms.huggingface_pipeline.HuggingFacePipeline¶ class langchain.llms.huggingface_pipeline.HuggingFacePipeline(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, pipeline: Any = None, model_id: str = 'gpt2', model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None)[source]¶ Bases: LLM Wrapper around HuggingFace Pipeline API. To use, you should have the transformers python package installed. Only supports text-generation, text2text-generation and summarization for now. Example using from_model_id: from langchain.llms import HuggingFacePipeline hf = HuggingFacePipeline.from_model_id( model_id="gpt2", task="text-generation", pipeline_kwargs={"max_new_tokens": 10}, ) Example passing pipeline in directly: from langchain.llms import HuggingFacePipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) hf = HuggingFacePipeline(pipeline=pipe) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶
param model_id: str = 'gpt2'¶ Model name to use. param model_kwargs: Optional[dict] = None¶ Keyword arguments passed to the model. param pipeline_kwargs: Optional[dict] = None¶ Keyword arguments passed to the pipeline. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = -1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) → LLM[source]¶ Construct the pipeline object from model_id and task. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters
file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
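Either construction path above yields an ordinary LLM that runs locally; a short end-to-end sketch (downloads gpt2 on first use, no API key required):

from langchain.llms import HuggingFacePipeline

# device=-1 keeps inference on the CPU; pipeline_kwargs are forwarded
# to the underlying transformers pipeline.
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    device=-1,
    pipeline_kwargs={"max_new_tokens": 20},
)

print(hf("Once upon a time"))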
https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
langchain.llms.bananadev.Banana¶ class langchain.llms.bananadev.Banana(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model_key: str = '', model_kwargs: Dict[str, Any] = None, banana_api_key: Optional[str] = None)[source]¶ Bases: LLM Wrapper around Banana large language models. To use, you should have the banana-dev python package installed, and the environment variable BANANA_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import Banana banana = Banana(model_key="") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param banana_api_key: Optional[str] = None¶ param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param model_key: str = ''¶ model endpoint to use param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and Python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
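A hedged call sketch expanding the example above; BANANA_API_KEY is assumed to be set and the model key is a placeholder for a real banana.dev deployment:

from langchain.llms import Banana

banana = Banana(
    model_key="your-model-key",          # placeholder deployment key
    model_kwargs={"temperature": 0.7},   # forwarded to the deployed model
)

print(banana("Tell me a fun fact about bananas."))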
https://api.python.langchain.com/en/latest/llms/langchain.llms.bananadev.Banana.html
langchain.llms.aviary.get_completions¶ langchain.llms.aviary.get_completions(model: str, prompt: str, use_prompt_format: bool = True, version: str = '') → Dict[str, Union[str, float, int]][source]¶ Get completions from Aviary models.
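A hedged call sketch; the model name is illustrative and must be one served by the target Aviary deployment, which the aviary module is assumed to locate via environment configuration:

from langchain.llms.aviary import get_completions

# Returns a dict containing the generated text plus metadata fields.
result = get_completions(
    model="mosaicml/mpt-7b-instruct",  # illustrative model name
    prompt="What is Ray?",
)
print(result)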
https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_completions.html
langchain.llms.loading.load_llm_from_config¶ langchain.llms.loading.load_llm_from_config(config: dict) → BaseLLM[source]¶ Load an LLM from a config dict.
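A hedged sketch; the config layout mirrors what llm.dict() and llm.save() produce, with the _type key selecting the LLM class (the OpenAI field names shown are illustrative):

from langchain.llms.loading import load_llm_from_config

# "_type" selects the LLM class; the remaining keys are passed to
# that class's constructor.
config = {
    "_type": "openai",
    "model_name": "text-davinci-003",
    "temperature": 0.7,
}
llm = load_llm_from_config(config)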
https://api.python.langchain.com/en/latest/llms/langchain.llms.loading.load_llm_from_config.html
langchain.llms.openai.OpenAIChat¶ class langchain.llms.openai.OpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model_name: str = 'gpt-3.5-turbo', model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_proxy: Optional[str] = None, max_retries: int = 6, prefix_messages: List = None, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all')[source]¶ Bases: BaseLLM Wrapper around OpenAI Chat large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import OpenAIChat openaichat = OpenAIChat(model_name="gpt-3.5-turbo") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶ Set of special tokens that are allowed. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶ param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶ Set of special tokens that are not allowed. param max_retries: int = 6¶ Maximum number of retries to make when generating. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'gpt-3.5-turbo'¶ Model name to use. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param prefix_messages: List [Optional]¶ Series of messages for Chat input. param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int][source]¶
Get the token IDs using the tiktoken package. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and Python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object
Configuration for this pydantic object. arbitrary_types_allowed = True¶
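A hedged sketch showing prefix_messages, which prepends chat context to every completion; the dict format mirroring the OpenAI chat API is an assumption here:

from langchain.llms import OpenAIChat

# OPENAI_API_KEY is read from the environment when not passed explicitly.
openaichat = OpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[
        {"role": "system", "content": "You answer in exactly one sentence."}
    ],
)

print(openaichat("What is LangChain?"))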
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAIChat.html
langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM¶ class langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _generate_text>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable = <function _load_transformer>, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'transformers', 'torch'], model_id: str = 'gpt2', task: str = 'text-generation', device: int = 0, model_kwargs: ~typing.Optional[dict] = None)[source]¶ Bases: SelfHostedPipeline Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Only supports text-generation, text2text-generation and summarization for now. Example using from_model_id: from langchain.llms import SelfHostedHuggingFaceLLM import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") hf = SelfHostedHuggingFaceLLM( model_id="google/flan-t5-large", task="text2text-generation", hardware=gpu ) Example passing fn that generates a pipeline (because the pipeline is not serializable): from langchain.llms import SelfHostedHuggingFaceLLM from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def get_pipeline(): model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer ) return pipe hf = SelfHostedHuggingFaceLLM( model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu) Construct the pipeline remotely using an auxiliary function. The load function needs to be importable so that it can be imported and run on the server, i.e. defined in a module and not in a REPL or closure. Then, initialize the remote inference function. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param device: int = 0¶ Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc. param hardware: Any = None¶ Remote hardware to send the inference function to. param inference_fn: Callable = <function _generate_text>¶ Inference function to send to the remote hardware. param load_fn_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model load function. param model_id: str = 'gpt2'¶ Hugging Face model_id to load the model. param model_kwargs: Optional[dict] = None¶ Keyword arguments to pass to the model. param model_load_fn: Callable = <function _load_transformer>¶ Function to load the model remotely on the server. param model_reqs: List[str] = ['./', 'transformers', 'torch']¶ Requirements to install on the hardware to run inference with the model. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param task: str = 'text-generation'¶ Hugging Face task ("text-generation", "text2text-generation" or "summarization"). param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶ Init the SelfHostedPipeline from a pipeline object or string. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. e.g. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. e.g. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
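An end-to-end sketch tying the examples above together; launching the cluster provisions real cloud hardware, and the prompt and model choices are illustrative:

import runhouse as rh
from langchain.llms import SelfHostedHuggingFaceLLM

# Launches (or reuses) the named A100 cluster via runhouse.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")

llm = SelfHostedHuggingFaceLLM(
    model_id="google/flan-t5-large",
    task="text2text-generation",
    hardware=gpu,
)

# The prompt is shipped to the cluster, run through the remote
# pipeline, and the generated text is returned.
print(llm("Translate to French: I love programming."))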
https://api.python.langchain.com/en/latest/llms/langchain.llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM.html
langchain.llms.textgen.TextGen¶ class langchain.llms.textgen.TextGen(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model_url: str, max_new_tokens: Optional[int] = 250, do_sample: bool = True, temperature: Optional[float] = 1.3, top_p: Optional[float] = 0.1, typical_p: Optional[float] = 1, epsilon_cutoff: Optional[float] = 0, eta_cutoff: Optional[float] = 0, repetition_penalty: Optional[float] = 1.18, top_k: Optional[float] = 40, min_length: Optional[int] = 0, no_repeat_ngram_size: Optional[int] = 0, num_beams: Optional[int] = 1, penalty_alpha: Optional[float] = 0, length_penalty: Optional[float] = 1, early_stopping: bool = False, seed: int = -1, add_bos_token: bool = True, truncation_length: Optional[int] = 2048, ban_eos_token: bool = False, skip_special_tokens: bool = True, stopping_strings: Optional[List[str]] = [], streaming: bool = False)[source]¶ Bases: LLM Wrapper around the text-generation-webui model. To use, you should have the text-generation-webui installed, a model loaded, and --api added as a command-line option. Suggested installation: use the one-click installer for your OS: https://github.com/oobabooga/text-generation-webui#one-click-installers Parameters below are taken from the text-generation-webui API example:
https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py Example from langchain.llms import TextGen llm = TextGen(model_url="http://localhost:8500") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param add_bos_token: bool = True¶ Add the bos_token to the beginning of prompts. Disabling this can make the replies more creative. param ban_eos_token: bool = False¶ Ban the eos_token. Forces the model to never end the generation prematurely. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param do_sample: bool = True¶ Do sample param early_stopping: bool = False¶ Early stopping param epsilon_cutoff: Optional[float] = 0¶ Epsilon cutoff param eta_cutoff: Optional[float] = 0¶ ETA cutoff param length_penalty: Optional[float] = 1¶ Length Penalty param max_new_tokens: Optional[int] = 250¶ The maximum number of tokens to generate. param min_length: Optional[int] = 0¶ Minimum generation length in tokens. param model_url: str [Required]¶ The full URL to the textgen webui including http[s]://host:port param no_repeat_ngram_size: Optional[int] = 0¶ If not set to 0, specifies the length of token sets that are completely blocked from repeating at all. Higher values = blocks larger phrases, lower values = blocks words or letters from repeating. Only 0 or high values are a good idea in most cases.
param num_beams: Optional[int] = 1¶ Number of beams param penalty_alpha: Optional[float] = 0¶ Penalty Alpha param repetition_penalty: Optional[float] = 1.18¶ Exponential penalty factor for repeating prior tokens. 1 means no penalty, higher value = less repetition, lower value = more repetition. param seed: int = -1¶ Seed (-1 for random) param skip_special_tokens: bool = True¶ Skip special tokens. Some specific models need this unset. param stopping_strings: Optional[List[str]] = []¶ A list of strings to stop generation when encountered. param streaming: bool = False¶ Whether to stream the results, token by token (currently unimplemented). param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: Optional[float] = 1.3¶ Primary factor to control randomness of outputs. 0 = deterministic (only the most likely token is used). Higher value = more randomness. param top_k: Optional[float] = 40¶ Similar to top_p, but select instead only the top_k most likely tokens. Higher value = higher range of possible random results. param top_p: Optional[float] = 0.1¶ If not set to 1, select tokens with probabilities adding up to less than this number. Higher value = higher range of possible random results. param truncation_length: Optional[int] = 2048¶ Truncate the prompt up to this length. The leftmost tokens are removed if the prompt exceeds this length. Most models require this to be at most 2048. param typical_p: Optional[float] = 1¶
param typical_p: Optional[float] = 1¶ If not set to 1, select only tokens that are at least this much more likely to appear than random tokens, given the prior text. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages.
dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise a deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) validator set_verbose  »  verbose¶ If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
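For a concrete usage sketch — the server URL, sampling values, and stop string below are illustrative assumptions, not required defaults; any text-generation-webui instance started with --api should work: .. code-block:: python

from langchain.llms import TextGen

# Assumed local server; adjust host/port to your webui instance.
llm = TextGen(
    model_url="http://localhost:5000",
    max_new_tokens=200,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.15,
    stopping_strings=["\nHuman:"],  # hypothetical stop string
)
print(llm("Write a haiku about the sea."))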
https://api.python.langchain.com/en/latest/llms/langchain.llms.textgen.TextGen.html
langchain.llms.utils.enforce_stop_tokens¶ langchain.llms.utils.enforce_stop_tokens(text: str, stop: List[str]) → str[source]¶ Cut off the text as soon as any stop words occur.
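A minimal usage sketch; the output shown is the expected behavior, not a captured run: .. code-block:: python

from langchain.llms.utils import enforce_stop_tokens

text = "Paris is the capital of France.\nHuman: And Germany?"
# Cuts the text at the first occurrence of any stop sequence.
print(enforce_stop_tokens(text, stop=["\nHuman:"]))
# Expected: "Paris is the capital of France."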
https://api.python.langchain.com/en/latest/llms/langchain.llms.utils.enforce_stop_tokens.html
langchain.llms.sagemaker_endpoint.SagemakerEndpoint¶ class langchain.llms.sagemaker_endpoint.SagemakerEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, endpoint_name: str = '', region_name: str = '', credentials_profile_name: Optional[str] = None, content_handler: LLMContentHandler, model_kwargs: Optional[Dict] = None, endpoint_kwargs: Optional[Dict] = None)[source]¶ Bases: LLM Wrapper around custom Sagemaker Inference Endpoints. To use, you must supply the endpoint name from your deployed Sagemaker model and the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]¶
The content handler class that provides the input and output transform functions to handle formats between the LLM and the endpoint. param credentials_profile_name: Optional[str] = None¶ The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html param endpoint_kwargs: Optional[Dict] = None¶ Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html param endpoint_name: str = ''¶ The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. param model_kwargs: Optional[Dict] = None¶ Keyword arguments to pass to the model. param region_name: str = ''¶ The AWS region where the Sagemaker model is deployed, e.g. us-west-2. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise a deprecation warning if callback_manager is used. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that AWS credentials and the required python package exist in the environment. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
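As a hedged end-to-end sketch: the endpoint name, region, and JSON payload keys below are assumptions that depend on your deployed model, not fixed requirements. .. code-block:: python

import json
from langchain.llms.sagemaker_endpoint import LLMContentHandler, SagemakerEndpoint

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # The payload shape is model-specific; "inputs" is an assumption.
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Likewise, the response key used here is an assumption.
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="my-endpoint",  # hypothetical endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
)
print(llm("Tell me a joke"))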
https://api.python.langchain.com/en/latest/llms/langchain.llms.sagemaker_endpoint.SagemakerEndpoint.html
langchain.llms.databricks.get_repl_context¶ langchain.llms.databricks.get_repl_context() → Any[source]¶ Gets the notebook REPL context if running inside a Databricks notebook. Returns None otherwise.
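A small sketch of the typical environment-detection use (the available context attributes depend on the Databricks runtime, so none are assumed here): .. code-block:: python

from langchain.llms.databricks import get_repl_context

ctx = get_repl_context()
if ctx is not None:
    print("Running inside a Databricks notebook")
else:
    print("Not in a Databricks notebook")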
https://api.python.langchain.com/en/latest/llms/langchain.llms.databricks.get_repl_context.html
langchain.llms.beam.Beam¶ class langchain.llms.beam.Beam(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model_name: str = '', name: str = '', cpu: str = '', memory: str = '', gpu: str = '', python_version: str = '', python_packages: List[str] = [], max_length: str = '', url: str = '', model_kwargs: Dict[str, Any] = None, beam_client_id: str = '', beam_client_secret: str = '', app_id: Optional[str] = None)[source]¶ Bases: LLM Wrapper around the Beam API for the gpt2 large language model. To use, you should have the beam-sdk python package installed, and the environment variable BEAM_CLIENT_ID set with your client id and BEAM_CLIENT_SECRET set with your client secret. Information on how to get these is available here: https://docs.beam.cloud/account/api-keys. The wrapper can then be called as follows, where the name, cpu, memory, gpu, python version, and python packages can be updated accordingly. Once deployed, the instance can be called. Example llm = Beam(model_name="gpt2", name="langchain-gpt2", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate",
"safetensors", "xformers",], max_length=50) llm._deploy() call_result = llm._call(input) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param app_id: Optional[str] = None¶ param beam_client_id: str = ''¶ param beam_client_secret: str = ''¶ param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param cpu: str = ''¶ param gpu: str = ''¶ param max_length: str = ''¶ param memory: str = ''¶ param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = ''¶ param name: str = ''¶ param python_packages: List[str] = []¶ param python_version: str = ''¶ param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param url: str = ''¶ model endpoint to use param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set¶ app_creation() → None[source]¶ Creates a Python file which will contain your Beam app definition. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator build_extra  »  all fields[source]¶ Build extra kwargs from additional params that were passed in. dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int¶ Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. get_token_ids(text: str) → List[int]¶ Get the token IDs present in the text. predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Predict text from text. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Predict message from messages. validator raise_deprecation  »  all fields¶ Raise a deprecation warning if callback_manager is used. run_creation() → None[source]¶ Creates a Python file which will be deployed on beam. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_environment  »  all fields[source]¶ Validate that the API key and python package exist in the environment. property authorization: str¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. extra = 'forbid'¶
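Putting the pieces together, a hedged deployment sketch; the credentials and resource values are placeholders (see the API-keys link above): .. code-block:: python

import os
from langchain.llms.beam import Beam

os.environ["BEAM_CLIENT_ID"] = "your-client-id"          # placeholder
os.environ["BEAM_CLIENT_SECRET"] = "your-client-secret"  # placeholder

llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=["transformers", "torch"],
    max_length="50",
)
llm._deploy()  # creates and deploys the Beam app
print(llm._call("Running machine learning on a remote GPU"))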
https://api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html
langchain.llms.aviary.get_models¶ langchain.llms.aviary.get_models() → List[str][source]¶ List available models.
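A one-line usage sketch; this assumes an Aviary backend is reachable (e.g. configured via environment variables such as AVIARY_URL): .. code-block:: python

from langchain.llms.aviary import get_models

print(get_models())  # e.g. a list of model identifier strings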
https://api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_models.html
langchain.llms.openai.update_token_usage¶ langchain.llms.openai.update_token_usage(keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]) → None[source]¶ Update token usage.
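A hedged sketch of the expected accumulation behavior; the dictionaries below mimic an OpenAI-style response, and the printed result is what one would expect, not a captured run: .. code-block:: python

from langchain.llms.openai import update_token_usage

token_usage = {"total_tokens": 10}
response = {"usage": {"total_tokens": 7, "prompt_tokens": 5}}
# Adds the counts for the requested keys into the running tally.
update_token_usage({"total_tokens", "prompt_tokens"}, response, token_usage)
print(token_usage)  # expected: {"total_tokens": 17, "prompt_tokens": 5}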
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.update_token_usage.html
langchain.llms.openai.OpenAI¶ class langchain.llms.openai.OpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None)[source]¶ Bases: BaseOpenAI Wrapper around OpenAI large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class.
Example from langchain.llms import OpenAI openai = OpenAI(model_name="text-davinci-003") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶ Set of special tokens that are allowed. param batch_size: int = 20¶ Batch size to use when passing multiple documents to generate. param best_of: int = 1¶ Generates best_of completions server-side and returns the “best”. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param client: Any = None¶ param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶ Set of special tokens that are not allowed. param frequency_penalty: float = 0¶ Penalizes repeated tokens according to frequency. param logit_bias: Optional[Dict[str, float]] [Optional]¶ Adjust the probability of specific tokens being generated. param max_retries: int = 6¶ Maximum number of retries to make when generating. param max_tokens: int = 256¶ The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'text-davinci-003' (alias 'model')¶ Model name to use.
param n: int = 1¶ How many completions to generate for each prompt. param openai_api_base: Optional[str] = None¶ param openai_api_key: Optional[str] = None¶ param openai_organization: Optional[str] = None¶ param openai_proxy: Optional[str] = None¶ param presence_penalty: float = 0¶ Penalizes repeated tokens. param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶ Timeout for requests to OpenAI completion API. Default is 600 seconds. param streaming: bool = False¶ Whether to stream the results or not. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use. param tiktoken_model_name: Optional[str] = None¶ The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. param top_p: float = 1¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text.
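To illustrate the tiktoken_model_name note above, a hedged sketch of pointing token counting at a tiktoken-supported model while talking to an OpenAI-compatible endpoint; the base URL, key, and model names are placeholders: .. code-block:: python

from langchain.llms import OpenAI

llm = OpenAI(
    model_name="my-hosted-model",              # hypothetical non-OpenAI model
    openai_api_base="https://example.com/v1",  # hypothetical compatible API
    openai_api_key="sk-...",                   # placeholder
    tiktoken_model_name="text-davinci-003",    # lets tiktoken count tokens
)
print(llm.get_num_tokens("How many tokens is this?"))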
https://api.python.langchain.com/en/latest/llms/langchain.llms.openai.OpenAI.html