| id | text | source |
|---|---|---|
46297f691d32-0 | langchain.env.get_runtime_environment¶
langchain.env.get_runtime_environment() → dict[source]¶
Get information about the environment. | https://api.python.langchain.com/en/latest/env/langchain.env.get_runtime_environment.html |
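The row above documents `get_runtime_environment()` only as "Get information about the environment," returning a `dict`. As a rough illustration of what such a helper typically collects, here is a stdlib-only sketch; the function name suffix, the chosen keys, and the values are assumptions for illustration, not langchain's actual implementation.

```python
import platform
import sys


def get_runtime_environment_sketch() -> dict:
    # Illustrative stand-in (not langchain's actual keys or logic):
    # gather basic interpreter and OS metadata from the stdlib.
    return {
        "platform": platform.platform(),
        "runtime": "python",
        "runtime_version": sys.version.split()[0],
    }


env = get_runtime_environment_sketch()
```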
6fae2db9e22f-0 | langchain.client.runner_utils.run_llm¶
langchain.client.runner_utils.run_llm(llm: BaseLanguageModel, inputs: Dict[str, Any], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]], *, tags: Optional[List[str]] = None, input_mapper: Optional[Callable[[Dict], Any]] = None) → Union[LLMResult, ChatResul... | https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_llm.html |
3f1033e94d3b-0 | langchain.client.runner_utils.run_on_dataset¶
langchain.client.runner_utils.run_on_dataset(dataset_name: str, llm_or_chain_factory: Union[Callable[[], Chain], BaseLanguageModel], *, num_repetitions: int = 1, project_name: Optional[str] = None, verbose: bool = False, client: Optional[LangChainPlusClient] = None, tags: O... | https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_on_dataset.html |
3f1033e94d3b-1 | has inputs with keys that differ from what is expected by your chain
or agent.
Returns
A dictionary containing the run’s project name and the resulting model outputs. | https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_on_dataset.html |
06d59f053586-0 | langchain.client.runner_utils.InputFormatError¶
class langchain.client.runner_utils.InputFormatError[source]¶
Bases: Exception
Raised when the input format is invalid.
add_note()¶
Exception.add_note(note) –
add a note to the exception
with_traceback()¶
Exception.with_traceback(tb) –
set self.__traceback__ to tb and ret... | https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.InputFormatError.html |
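The row above shows that `InputFormatError` is a plain `Exception` subclass raised when the input format is invalid. The snippet below defines a local stand-in with the same shape to show the raise/catch pattern; the `check_inputs` helper and its message text are hypothetical, not part of langchain.

```python
class InputFormatError(Exception):
    """Local stand-in mirroring the documented class: a plain
    Exception subclass raised when the input format is invalid."""


def check_inputs(inputs: dict, expected_key: str) -> None:
    # Hypothetical validation helper, not a langchain function.
    if expected_key not in inputs:
        raise InputFormatError(f"missing expected input key: {expected_key!r}")


try:
    check_inputs({"question": "What is 2 + 2?"}, expected_key="query")
    caught = None
except InputFormatError as err:
    caught = str(err)
```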
28f7b23533e4-0 | langchain.client.runner_utils.run_on_examples¶
langchain.client.runner_utils.run_on_examples(examples: Iterator[Example], llm_or_chain_factory: Union[Callable[[], Chain], BaseLanguageModel], *, num_repetitions: int = 1, project_name: Optional[str] = None, verbose: bool = False, client: Optional[LangChainPlusClient] = N... | https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_on_examples.html |
940b60cf6e9e-0 | langchain.client.runner_utils.run_llm_or_chain¶
langchain.client.runner_utils.run_llm_or_chain(example: Example, llm_or_chain_factory: Union[Callable[[], Chain], BaseLanguageModel], n_repetitions: int, *, tags: Optional[List[str]] = None, callbacks: Optional[List[BaseCallbackHandler]] = None, input_mapper: Optional[Cal... | https://api.python.langchain.com/en/latest/client/langchain.client.runner_utils.run_llm_or_chain.html |
9049d48bf928-0 | langchain.evaluation.qa.generate_chain.QAGenerateChain¶
class langchain.evaluation.qa.generate_chain.QAGenerateChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, ta... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.generate_chain.QAGenerateChain.html |
9049d48bf928-1 | There are many different types of memory - please see memory docs
for the full catalog.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Require... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.generate_chain.QAGenerateChain.html |
9049d48bf928-2 | chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[Base... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.generate_chain.QAGenerateChain.html |
9049d48bf928-3 | Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallba... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.generate_chain.QAGenerateChain.html |
9049d48bf928-4 | Create outputs from response.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
classmethod from_llm(llm: BaseLanguageModel, **kwargs: Any) → QAGenerateChain[source]¶
Load QA Generate Chain from LLM.
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.generate_chain.QAGenerateChain.html |
9049d48bf928-5 | Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union[Path, str]) → None¶
... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.generate_chain.QAGenerateChain.html |
22baae1484c8-0 | langchain.evaluation.qa.eval_chain.QAEvalChain¶
class langchain.evaluation.qa.eval_chain.QAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[Lis... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html |
22baae1484c8-1 | There are many different types of memory - please see memory docs
for the full catalog.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Require... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html |
22baae1484c8-2 | chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[Base... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html |
22baae1484c8-3 | include_run_info – Whether to include run info in the response. Defaults
to False.
async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → dict[source]¶
async agenerate(i... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html |
22baae1484c8-4 | Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCal... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html |
22baae1484c8-5 | Returns
The evaluation results containing the score or value.
Return type
dict
classmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'result', 'answer'], output_parser=None, partial_variables={}, template="You are a teacher grading a quiz.\nYou are given a questi... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html |
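The `from_llm` signature above shows that `QAEvalChain`'s default prompt takes the variables `query`, `result`, and `answer` and opens with "You are a teacher grading a quiz." Only those details are taken from the signature; the rest of the template body below is a hypothetical stand-in (the full text is truncated above), used just to show how the three variables are wired in.

```python
# Opening line and variable names come from the signature above;
# the remaining template lines are illustrative placeholders.
TEMPLATE = (
    "You are a teacher grading a quiz.\n"
    "QUESTION: {query}\n"
    "STUDENT ANSWER: {result}\n"
    "TRUE ANSWER: {answer}\n"
    "GRADE:"
)

graded_prompt = TEMPLATE.format(
    query="In what year did Apollo 11 land on the Moon?",
    result="1969",
    answer="1969",
)
```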
22baae1484c8-6 | Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kw... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html |
22baae1484c8-7 | Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstruc... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html |
eae3b3826c83-0 | langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser¶
class langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser(*, eval_chain_output_key: str = 'text')[source]¶
Bases: BaseOutputParser[EvaluationResult]
Parse the output of a run.
Create a new model by parsing and validating input data from ke... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser.html |
eae3b3826c83-1 | serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_K... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser.html |
180615ed4062-0 | langchain.evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper¶
class langchain.evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper(*, prediction_map: Dict[str, str], input_map: Dict[str, str], answer_map: Optional[Dict[str, str]] = None)[source]¶
Bases: RunEvaluatorInputMapper, B... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper.html |
35162ba909cc-0 | langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain¶
class langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html |
35162ba909cc-1 | Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full ca... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html |
35162ba909cc-2 | chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False,... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html |
35162ba909cc-3 | Returns
The evaluation result.
Return type
dict
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCal... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html |
35162ba909cc-4 | available tothe agent.
output_parser (Optional[TrajectoryOutputParser]) – The output parser
used to parse the chain output into a score.
return_reasoning (bool) – Whether to return the
reasoning along with the score.
Returns
The TrajectoryEvalChain object.
Return type
TrajectoryEvalChain
static get_agent_trajectory(ste... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html |
35162ba909cc-5 | to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Get the input keys for the chain.
Returns
The input keys.
Return type
List[str]
property lc_attributes: Dict¶
Return a list of attribute names that should be included... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html |
439dd6719aba-0 | langchain.evaluation.criteria.eval_chain.CriteriaEvalChain¶
class langchain.evaluation.criteria.eval_chain.CriteriaEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-1 | >>> llm = ChatAnthropic()
>>> criteria = {"my-custom-criterion": "Is the submission the most amazing ever?"}
>>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to for... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-2 | If false, will return a bunch of extra information about the generation.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these t... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-3 | Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any],... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-4 | method.
Returns
The evaluation results.
Return type
dict
Examples
>>> from langchain.llms import OpenAI
>>> from langchain.evaluation.criteria import CriteriaEvalChain
>>> llm = OpenAI()
>>> criteria = "conciseness"
>>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
>>> await chain.aevaluate_strings(
... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-5 | Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-6 | >>> from langchain.evaluation.criteria import CriteriaEvalChain
>>> llm = OpenAI()
>>> criteria = "conciseness"
>>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
>>> chain.evaluate_strings(
prediction="The answer is 42.",
reference="42",
input="What is the answer to life, the un... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-7 | An instance of the CriteriaEvalChain class.
Return type
CriteriaEvalChain
Examples
>>> from langchain.llms import OpenAI
>>> from langchain.evaluation.criteria import CriteriaEvalChain
>>> llm = OpenAI()
>>> criteria = {
"hallucination": (
"Does this submission contain information"
" not... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-8 | Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) →... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
439dd6719aba-9 | 'coherence': 'Is the submission coherent, well-structured, and organized?'}
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html |
dcea0d285ff8-0 | langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator¶ | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html |
dcea0d285ff8-1 | langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator(llm: BaseChatModel, agent_tools: Union[Sequence[str], Sequence[BaseTool]], *, input_key: str = 'input', prediction_key: str = 'output', tool_input_key: str = 'input', tool_output_key: str = 'output', prompt: BasePromptTemplate = ChatPromptTemp... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html |
dcea0d285ff8-2 | to the United States by France, as a symbol of the two countries' friendship. It was erected atop an American-designed ...\n[END_AGENT_TRAJECTORY]\n\n[RESPONSE]\nThe AI language model's final answer to the question was: There are different ways to measure the length of the United States, but if we use the distance betw... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html |
dcea0d285ff8-3 | used for current events or specific questions.The tools were not used in a helpful way. The model did not use too many steps to answer the question.The model did not use the appropriate tools to answer the question. \nJudgment: Given the good reasoning in the final answer but otherwise poor performance, we give the ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html |
dcea0d285ff8-4 | template_format='f-string', validate_template=True), additional_kwargs={})]), evaluation_name: str = 'Agent Trajectory', **kwargs: Any) → RunEvaluatorChain[source]¶ | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html |
dcea0d285ff8-5 | Get an eval chain for grading a model’s response against a map of criteria. | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html |
cfcd19e7e989-0 | langchain.evaluation.run_evaluators.implementations.TrajectoryEvalOutputParser¶
class langchain.evaluation.run_evaluators.implementations.TrajectoryEvalOutputParser(*, eval_chain_output_key: str = 'text', evaluation_name: str = 'Agent Trajectory', evaluator_info: dict = None)[source]¶
Bases: RunEvaluatorOutputParser
Cr... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.TrajectoryEvalOutputParser.html |
cfcd19e7e989-1 | Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
seriali... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.TrajectoryEvalOutputParser.html |
484d68b820e9-0 | langchain.evaluation.loading.load_dataset¶
langchain.evaluation.loading.load_dataset(uri: str) → List[Dict][source]¶ | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.loading.load_dataset.html |
21f113ec2121-0 | langchain.evaluation.schema.PairwiseStringEvaluator¶
class langchain.evaluation.schema.PairwiseStringEvaluator(*args, **kwargs)[source]¶
Bases: Protocol
A protocol for comparing the output of two models.
Methods
__init__(*args, **kwargs)
aevaluate_string_pairs(prediction, prediction_b)
Evaluate the output string pairs.... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.PairwiseStringEvaluator.html |
21f113ec2121-1 | as callbacks and optional reference strings.
Returns
A dictionary containing the preference, scores, and/or other information. | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.PairwiseStringEvaluator.html |
Return type
dict | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.PairwiseStringEvaluator.html |
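`PairwiseStringEvaluator` is documented above as a `Protocol` whose `evaluate_string_pairs`/`aevaluate_string_pairs` methods return a dict containing the preference. The sketch below mirrors that structural interface with a toy exact-match implementation; the keyword-only parameter list and the `ExactMatchPairwise` class are assumptions for illustration, not langchain code.

```python
from typing import Optional, Protocol


class PairwiseStringEvaluatorSketch(Protocol):
    # Structural stand-in for the documented protocol; the real class
    # also defines an async aevaluate_string_pairs counterpart.
    def evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
    ) -> dict:
        ...


class ExactMatchPairwise:
    # Toy implementation, not a langchain class: prefer whichever
    # prediction exactly matches the reference string.
    def evaluate_string_pairs(self, *, prediction, prediction_b, reference=None):
        if reference == prediction:
            return {"preference": "A"}
        if reference == prediction_b:
            return {"preference": "B"}
        return {"preference": None}


evaluator: PairwiseStringEvaluatorSketch = ExactMatchPairwise()
result = evaluator.evaluate_string_pairs(
    prediction="4", prediction_b="five", reference="4"
)
```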
2b340355eb60-0 | langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser¶
class langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser[source]¶
Bases: BaseOutputParser[dict]
A parser for the output of the PairwiseStringEvalChain.
Create a new model by parsing and validating input data from keywo... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser.html |
2b340355eb60-1 | Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶ | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser.html |
f5f01a185f92-0 | langchain.evaluation.schema.StringEvaluator¶
class langchain.evaluation.schema.StringEvaluator(*args, **kwargs)[source]¶
Bases: Protocol
Protocol for evaluating strings.
Methods
__init__(*args, **kwargs)
aevaluate_strings(*, prediction[, ...])
Asynchronously evaluate Chain or LLM output, based on optional
evaluate_stri... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.StringEvaluator.html |
e539a40fc55d-0 | langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser¶
class langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser[source]¶
Bases: BaseOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be par... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser.html |
e539a40fc55d-1 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser.html |
242721d96a87-0 | langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain¶
class langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
242721d96a87-1 | # . ” by explaining what the formula means.
[[B]]”# }
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
para... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
242721d96a87-2 | These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
242721d96a87-3 | Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
242721d96a87-4 | score: The preference score, which is 1 for ‘A’, 0 for ‘B’,and 0.5 for None.
Return type
dict
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[L... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
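The scoring convention documented above (1 for ‘A’, 0 for ‘B’, and 0.5 for None) can be sketched as a small mapping; the function name is made up for illustration.

```python
def preference_to_score(preference):
    # Documented convention for PairwiseStringEvalChain results:
    # 1 for 'A', 0 for 'B', and 0.5 for None (no preference).
    return {"A": 1.0, "B": 0.0}.get(preference, 0.5)
```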
242721d96a87-5 | Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
242721d96a87-6 | Initialize the PairwiseStringEvalChain from an LLM.
Parameters
llm (BaseLanguageModel) – The LLM to use.
prompt (PromptTemplate, optional) – The prompt to use.
require_reference (bool, optional) – Whether to require a reference
string. Defaults to False.
**kwargs (Any) – Additional keyword arguments.
Returns
The initia... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
242721d96a87-7 | Validate and prep outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
242721d96a87-8 | model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶ | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html |
ad41d7cfcf5f-0 | langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser¶
class langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser(*, eval_chain_output_key: str = 'text', evaluation_name: str)[source]¶
Bases: RunEvaluatorOutputParser
Parse a criteria results into an evaluation result.
Create a new... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser.html |
ad41d7cfcf5f-1 | property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is s... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser.html |
2d6c61991bb6-0 | langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser¶
class langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser(*, eval_chain_output_key: str = 'text', evaluation_name: str, choices_map: Optional[Dict[str, int]] = None)[source]¶
Bases: RunEvaluatorOutputParser
Parse a feedback run... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser.html |
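`ChoicesOutputParser` above takes a `choices_map: Optional[Dict[str, int]]`, suggesting a mapping from textual verdicts to integer scores. The snippet below is a simplified re-implementation of that idea only; the exact parsing rule, the `parse_choice` name, and the example mapping are all hypothetical.

```python
def parse_choice(text: str, choices_map: dict) -> int:
    # Hypothetical parsing rule (the real parser's logic is truncated
    # above): look the stripped model verdict up in choices_map.
    return choices_map[text.strip()]


choices_map = {"CORRECT": 1, "INCORRECT": 0}  # illustrative mapping only
score = parse_choice(" CORRECT\n", choices_map)
```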
2d6c61991bb6-1 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser.html |
e9a8a50049d6-0 | langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval¶
class langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval(score, reasoning)[source]¶
Bases: NamedTuple
Create new instance of TrajectoryEval(score, reasoning)
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(va... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval.html |
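`TrajectoryEval` is documented above as a `NamedTuple` with `score` and `reasoning` fields (hence the inherited `count` and `index` methods). A minimal sketch of the same shape, with the concrete field types assumed:

```python
from typing import NamedTuple


class TrajectoryEvalSketch(NamedTuple):
    # Mirrors the documented TrajectoryEval(score, reasoning) fields;
    # the field types here are assumptions for illustration.
    score: float
    reasoning: str


ev = TrajectoryEvalSketch(score=4.0, reasoning="Tools were used appropriately.")
```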
743732075ea4-0 | langchain.evaluation.run_evaluators.implementations.TrajectoryInputMapper¶
class langchain.evaluation.run_evaluators.implementations.TrajectoryInputMapper(*, tool_descriptions: List[str], agent_input_key: str = 'input', agent_output_key: str = 'output', tool_input_key: str = 'input', tool_output_key: str = 'output')[so... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.TrajectoryInputMapper.html |
a8244ee27be8-0 | langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser¶
class langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser[source]¶
Bases: BaseOutputParser[dict]
A parser for the output of the CriteriaEvalChain.
Create a new model by parsing and validating input data from keyword arguments.
Raises V... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser.html |
a8244ee27be8-1 | Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶ | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser.html |
d37c8b4ff4f0-0 | langchain.evaluation.qa.eval_chain.CotQAEvalChain¶
class langchain.evaluation.qa.eval_chain.CotQAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Option... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html |
d37c8b4ff4f0-1 | There are many different types of memory - please see memory docs
for the full catalog.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Require... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html |
d37c8b4ff4f0-2 | chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[Base... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html |
d37c8b4ff4f0-3 | include_run_info – Whether to include run info in the response. Defaults
to False.
async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict¶
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = N... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html |
d37c8b4ff4f0-4 | Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html |
d37c8b4ff4f0-5 | classmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'context', 'result'], output_parser=None, partial_variables={}, template="You are a teacher grading a quiz.\nYou are given a question, the context the question is about, and the student's answer. You are asked... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html |
d37c8b4ff4f0-6 | Returns
the loaded QA eval chain.
Return type
ContextQAEvalChain
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from in... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html |
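The `from_llm` signature above shows the default grading prompt taking `query`, `context`, and `result` as input variables. A sketch of how those variables get filled in, using plain `str.format` in place of LangChain's `PromptTemplate` (only the first template line is quoted in the source; the rest of the layout here is an assumption):

```python
# Assumed layout of the teacher-grading prompt; variable names come from
# the documented input_variables=['query', 'context', 'result'].
template = (
    "You are a teacher grading a quiz.\n"
    "QUESTION: {query}\n"
    "CONTEXT: {context}\n"
    "STUDENT ANSWER: {result}\n"
    "GRADE:"
)
prompt = template.format(
    query="What is 2 + 2?",
    context="Basic arithmetic.",
    result="4",
)
assert "STUDENT ANSWER: 4" in prompt
```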
d37c8b4ff4f0-7 | Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
T... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html |
511eca462af6-0 | langchain.evaluation.run_evaluators.implementations.get_criteria_evaluator¶
langchain.evaluation.run_evaluators.implementations.get_criteria_evaluator(llm: BaseLanguageModel, criteria: Union[Mapping[str, str], Sequence[str], str], *, input_key: str = 'input', prediction_key: str = 'output', prompt: BasePromptTemplate =... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_criteria_evaluator.html |
b5fc1976334d-0 | langchain.evaluation.run_evaluators.base.RunEvaluatorChain¶
class langchain.evaluation.run_evaluators.base.RunEvaluatorChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html |
b5fc1976334d-1 | for the full catalog.
param output_parser: RunEvaluatorOutputParser [Required]¶
Parse the output of the eval chain into feedback.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html |
b5fc1976334d-2 | include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → ... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html |
b5fc1976334d-3 | Evaluate an example.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
validator raise_deprecation » all fields¶
Raise deprecation war... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html |
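`prep_inputs` validates a chain's inputs and coerces a bare value into the dict the chain expects. An illustrative sketch of that idea (the key name `"input"` and the exact error messages are assumptions, not the library's real implementation):

```python
# Hypothetical sketch of the prep_inputs validation pattern.
def prep_inputs(inputs, input_keys=("input",)):
    if not isinstance(inputs, dict):
        if len(input_keys) != 1:
            raise ValueError("A single value requires exactly one input key.")
        inputs = {input_keys[0]: inputs}
    missing = set(input_keys) - set(inputs)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")
    return inputs

assert prep_inputs("hello") == {"input": "hello"}
assert prep_inputs({"input": "hi"}) == {"input": "hi"}
```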
b5fc1976334d-4 | e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Output keys this chain expects.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html |
fb2e0cdeef08-0 | langchain.evaluation.qa.eval_chain.ContextQAEvalChain¶
class langchain.evaluation.qa.eval_chain.ContextQAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html |
fb2e0cdeef08-1 | There are many different types of memory - please see memory docs
for the full catalog.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Require... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html |
fb2e0cdeef08-2 | chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[Base... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html |
fb2e0cdeef08-3 | include_run_info – Whether to include run info in the response. Defaults
to False.
async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict[source]¶
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChain... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html |
fb2e0cdeef08-4 | Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html |
fb2e0cdeef08-5 | classmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'context', 'result'], output_parser=None, partial_variables={}, template="You are a teacher grading a quiz.\nYou are given a question, the context the question is about, and the student's answer. You are asked... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html |
fb2e0cdeef08-6 | Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kw... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html |
fb2e0cdeef08-7 | Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstruc... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html |
b8d706005954-0 | langchain.evaluation.run_evaluators.implementations.get_qa_evaluator¶
langchain.evaluation.run_evaluators.implementations.get_qa_evaluator(llm: BaseLanguageModel, *, prompt: Union[PromptTemplate, str] = PromptTemplate(input_variables=['query', 'result', 'answer'], output_parser=None, partial_variables={}, template="You... | https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_qa_evaluator.html |
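`get_qa_evaluator` is a factory: it binds an LLM, a grading prompt, and key names into a reusable evaluator. A stdlib-only sketch of the factory pattern (all names here are hypothetical; the real function returns a LangChain run evaluator, not a plain callable):

```python
# Hypothetical sketch of the evaluator-factory idea behind get_qa_evaluator:
# bind a grading function and key names into a callable evaluator.
def make_qa_evaluator(grade_fn, *, input_key="input", prediction_key="output"):
    def evaluate(example, run_outputs):
        return grade_fn(example[input_key], run_outputs[prediction_key])
    return evaluate

evaluator = make_qa_evaluator(lambda question, answer: {"score": int(bool(answer))})
assert evaluator({"input": "2+2?"}, {"output": "4"}) == {"score": 1}
```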
5a9fad4055d3-0 | langchain.callbacks.wandb_callback.WandbCallbackHandler¶
class langchain.callbacks.wandb_callback.WandbCallbackHandler(job_type: Optional[str] = None, project: Optional[str] = 'langchain_callback_demo', entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = Non... | https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html |
5a9fad4055d3-1 | Flush the tracker and reset the session.
get_custom_callback_meta()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(ser... | https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html |
5a9fad4055d3-2 | ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
raise_error
run_inline
flush_tracker(langchain_asset: Any = None, ... | https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html |
5a9fad4055d3-3 | Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = No... | https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.wandb_callback.WandbCallbackHandler.html |
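The `WandbCallbackHandler` methods above follow the standard callback-handler pattern: a class exposing `on_*` hooks that the runtime invokes at each lifecycle event. A minimal stdlib-only stand-in illustrating that pattern (this logs to a list instead of Weights & Biases; it is not the real handler):

```python
# Hypothetical minimal callback handler recording chain lifecycle events.
class LoggingCallbackHandler:
    def __init__(self):
        self.records = []

    def on_chain_start(self, serialized, inputs, **kwargs):
        # Called when a chain starts running.
        self.records.append(("chain_start", inputs))

    def on_chain_end(self, outputs, **kwargs):
        # Called when a chain finishes running.
        self.records.append(("chain_end", outputs))

handler = LoggingCallbackHandler()
handler.on_chain_start({"name": "demo"}, {"question": "hi"})
handler.on_chain_end({"text": "hello"})
assert handler.records[0] == ("chain_start", {"question": "hi"})
```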