code: string, lengths 66 to 870k
docstring: string, lengths 19 to 26.7k
func_name: string, lengths 1 to 138
language: string, 1 class
repo: string, lengths 7 to 68
path: string, lengths 5 to 324
url: string, lengths 46 to 389
license: string, 7 classes
def split_list(lst, n): """Split a list into n (roughly) equal-sized chunks""" chunk_size = math.ceil(len(lst) / n) # ceiling division return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
Split a list into n (roughly) equal-sized chunks
split_list
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/model_video_general.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/model_video_general.py
Apache-2.0
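The split_list row above is complete enough to run as-is. A minimal usage sketch (note that math.ceil(len(lst) / n) is ceiling division, and the function can return fewer than n chunks when the list is short):

```python
import math

def split_list(lst, n):
    """Split a list into n (roughly) equal-sized chunks."""
    chunk_size = math.ceil(len(lst) / n)  # ceiling division
    return [lst[i:i + chunk_size] for i in range(0, len(lst), chunk_size)]

print(split_list(list(range(10)), 3))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
print(split_list([1, 2, 3, 4], 3))     # [[1, 2], [3, 4]] -- only 2 chunks, not 3
```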
def split_list(lst, n): """Split a list into n (roughly) equal-sized chunks""" chunk_size = math.ceil(len(lst) / n) # ceiling division return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
Split a list into n (roughly) equal-sized chunks
split_list
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/model_video_qa.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/model_video_qa.py
Apache-2.0
def split_list(lst, n): """Split a list into n (roughly) equal-sized chunks""" chunk_size = math.ceil(len(lst) / n) # ceiling division return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
Split a list into n (roughly) equal-sized chunks
split_list
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/model_vqa.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/model_vqa.py
Apache-2.0
def split_list(lst, n): """Split a list into n (roughly) equal-sized chunks""" chunk_size = math.ceil(len(lst) / n) # ceiling division return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
Split a list into n (roughly) equal-sized chunks
split_list
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/model_vqa_scienceqa.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/model_vqa_scienceqa.py
Apache-2.0
def annotate(prediction_set, caption_files, output_dir): """ Evaluates question and answer pairs using GPT-3 and returns a score for correctness. """ for file in caption_files: key = file[:-5] # Strip file extension qa_set = prediction_set[key] question = qa_set['q'] answ...
Evaluates question and answer pairs using GPT-3 and returns a score for correctness.
annotate
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_1_correctness.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_1_correctness.py
Apache-2.0
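The annotate() rows in these evaluation scripts all key the prediction set by filename minus a fixed 5-character suffix (file[:-5], i.e. stripping ".json"). A small sketch of that keying, with os.path.splitext shown as the more robust alternative (the filenames here are illustrative):

```python
import os

def key_from_filename(filename):
    """Strip a 5-character '.json' suffix, as annotate() does with file[:-5]."""
    return filename[:-5]

caption_files = ["video_001.json", "video_002.json"]
print([key_from_filename(f) for f in caption_files])  # ['video_001', 'video_002']

# More robust: works for any extension length.
print(os.path.splitext("video_001.json")[0])  # video_001
```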
def main(): """ Main function to control the flow of the program. """ # Parse arguments. args = parse_args() file = args.pred_path try: pred_contents = json.load(file) except: pred_contents = read_jsonl(file) # Dictionary to store the count of occurrences for each v...
Main function to control the flow of the program.
main
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_1_correctness.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_1_correctness.py
Apache-2.0
def annotate(prediction_set, caption_files, output_dir): """ Evaluates question and answer pairs using GPT-3 and returns a score for detailed orientation. """ for file in caption_files: key = file[:-5] # Strip file extension qa_set = prediction_set[key] question = qa_set['q']...
Evaluates question and answer pairs using GPT-3 and returns a score for detailed orientation.
annotate
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_2_detailed_orientation.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_2_detailed_orientation.py
Apache-2.0
def main(): """ Main function to control the flow of the program. """ # Parse arguments. args = parse_args() file = args.pred_path try: pred_contents = json.load(file) except: pred_contents = read_jsonl(file) # Dictionary to store the count of occurrences for each v...
Main function to control the flow of the program.
main
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_2_detailed_orientation.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_2_detailed_orientation.py
Apache-2.0
def annotate(prediction_set, caption_files, output_dir): """ Evaluates question and answer pairs using GPT-3 and returns a score for contextual understanding. """ for file in caption_files: key = file[:-5] # Strip file extension qa_set = prediction_set[key] question = qa_set[...
Evaluates question and answer pairs using GPT-3 and returns a score for contextual understanding.
annotate
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_3_context.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_3_context.py
Apache-2.0
def main(): """ Main function to control the flow of the program. """ # Parse arguments. args = parse_args() file = args.pred_path try: pred_contents = json.load(file) except: pred_contents = read_jsonl(file) # Dictionary to store the count of occurrences for each v...
Main function to control the flow of the program.
main
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_3_context.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_3_context.py
Apache-2.0
def annotate(prediction_set, caption_files, output_dir): """ Evaluates question and answer pairs using GPT-3 and returns a score for temporal understanding. """ for file in caption_files: key = file[:-5] # Strip file extension qa_set = prediction_set[key] question = qa_set['q...
Evaluates question and answer pairs using GPT-3 and returns a score for temporal understanding.
annotate
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_4_temporal.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_4_temporal.py
Apache-2.0
def main(): """ Main function to control the flow of the program. """ # Parse arguments. args = parse_args() file = args.pred_path try: pred_contents = json.load(file) except: pred_contents = read_jsonl(file) # Dictionary to store the count of occurrences for each v...
Main function to control the flow of the program.
main
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_4_temporal.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_4_temporal.py
Apache-2.0
def annotate(prediction_set, caption_files, output_dir): """ Evaluates question and answer pairs using GPT-3 and returns a score for consistency. """ for file in caption_files: key = file[:-5] # Strip file extension qa_set = prediction_set[key] question1 = qa_set['q1'] ...
Evaluates question and answer pairs using GPT-3 and returns a score for consistency.
annotate
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_5_consistency.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_5_consistency.py
Apache-2.0
def main(): """ Main function to control the flow of the program. """ # Parse arguments. args = parse_args() file = args.pred_path try: pred_contents = json.load(file) except: pred_contents = read_jsonl(file) # Dictionary to store the count of occurrences for each v...
Main function to control the flow of the program.
main
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_benchmark_5_consistency.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_benchmark_5_consistency.py
Apache-2.0
def get_pred_idx(prediction, choices, options): """ Get the index (e.g. 2) from the prediction (e.g. 'C') """ if prediction in options[:len(choices)]: return options.index(prediction) else: return random.choice(range(len(choices)))
Get the index (e.g. 2) from the prediction (e.g. 'C')
get_pred_idx
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_science_qa.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_science_qa.py
Apache-2.0
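get_pred_idx above is shown in full; a quick usage sketch (the random fallback fires when the model emitted something other than a valid option letter):

```python
import random

def get_pred_idx(prediction, choices, options):
    """Get the index (e.g. 2) from the prediction (e.g. 'C')."""
    if prediction in options[:len(choices)]:
        return options.index(prediction)
    # invalid prediction: fall back to a random valid choice index
    return random.choice(range(len(choices)))

print(get_pred_idx('C', ['cat', 'dog', 'bird'], ['A', 'B', 'C', 'D', 'E']))  # 2
```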
def annotate(prediction_set, caption_files, output_dir): """ Evaluates question and answer pairs using GPT-3 and returns a score for correctness. """ for file in caption_files: key = file[:-5] # Strip file extension qa_set = prediction_set[key] question = qa_set['q'] answ...
Evaluates question and answer pairs using GPT-3 and returns a score for correctness.
annotate
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_video_qa.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_video_qa.py
Apache-2.0
def main(): """ Main function to control the flow of the program. """ # Parse arguments. args = parse_args() file = args.pred_path try: pred_contents = json.load(file) except: pred_contents = read_jsonl(file) # Dictionary to store the count of occurrences for each v...
Main function to control the flow of the program.
main
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/eval/evaluate/evaluate_video_qa.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/eval/evaluate/evaluate_video_qa.py
Apache-2.0
def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): # type: (Tensor, float, float, float, float) -> Tensor r"""Fills the input Tensor with values drawn from a truncated normal distribution. The values are effectively drawn from the normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` ...
Fills the input Tensor with values drawn from a truncated normal distribution. The values are effectively drawn from the normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` with values outside :math:`[a, b]` redrawn until they are within the bounds. The method used for generating the rand...
trunc_normal_
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/model/cluster.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/model/cluster.py
Apache-2.0
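The trunc_normal_ row is truncated; per its docstring, it fills a tensor from N(mean, std^2) with values outside [a, b] redrawn until in bounds (PyTorch implements this efficiently via the inverse CDF). A pure-Python rejection-sampling sketch of the same semantics, not the actual implementation:

```python
import random

def trunc_normal(mean=0.0, std=1.0, a=-2.0, b=2.0):
    """One draw from N(mean, std^2), redrawn until it falls inside [a, b]."""
    while True:
        x = random.gauss(mean, std)
        if a <= x <= b:
            return x

samples = [trunc_normal() for _ in range(1000)]
assert all(-2.0 <= s <= 2.0 for s in samples)
```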
def drop_path(x, drop_prob: float = 0., training: bool = False): """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). """ if drop_prob == 0. or not training: return x keep_prob = 1 - drop_prob shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with dif...
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
drop_path
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/model/cluster.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/model/cluster.py
Apache-2.0
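drop_path above applies stochastic depth to torch tensors batch-wise. A pure-Python, single-vector sketch of the rule (zero the residual branch with probability drop_prob, scale survivors by 1/keep_prob so the expected value is unchanged):

```python
import random

def drop_path_sample(x, drop_prob=0.0, training=False):
    """Stochastic depth for ONE sample: drop the branch with probability
    drop_prob; survivors are rescaled so E[output] == x."""
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    keep = random.random() < keep_prob  # one Bernoulli draw per sample
    return [v / keep_prob for v in x] if keep else [0.0 for _ in x]

print(drop_path_sample([1.0, 2.0], drop_prob=0.5, training=False))  # [1.0, 2.0]
```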
def index_points(points, idx): """Sample features following the index. Returns: new_points: indexed points data, [B, S, C] Args: points: input points data, [B, N, C] idx: sample index data, [B, S] """ device = points.device B = points.shape[0] view_shape = list(idx....
Sample features following the index. Returns: new_points: indexed points data, [B, S, C] Args: points: input points data, [B, N, C] idx: sample index data, [B, S]
index_points
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/model/cluster.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/model/cluster.py
Apache-2.0
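index_points is a batched gather. A nested-list sketch of its shape contract ([B, N, C] indexed by [B, S] yields [B, S, C]), standing in for the tensorized torch version:

```python
def index_points(points, idx):
    """Gather per-batch rows: new_points[b][s] = points[b][idx[b][s]].

    points: [B, N, C] nested lists; idx: [B, S]; returns [B, S, C]."""
    return [[points[b][i] for i in batch_idx] for b, batch_idx in enumerate(idx)]

points = [[[0, 0], [1, 1], [2, 2]]]    # B=1, N=3, C=2
print(index_points(points, [[2, 0]]))  # [[[2, 2], [0, 0]]]
```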
def cluster_dpc_knn(token_dict, cluster_num, k=5, token_mask=None): """Cluster tokens with DPC-KNN algorithm. Return: idx_cluster (Tensor[B, N]): cluster index of each token. cluster_num (int): actual cluster number. The same with input cluster number Args: token_dict (di...
Cluster tokens with DPC-KNN algorithm. Return: idx_cluster (Tensor[B, N]): cluster index of each token. cluster_num (int): actual cluster number. The same with input cluster number Args: token_dict (dict): dict for token information cluster_num (int): cluster number ...
cluster_dpc_knn
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/model/cluster.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/model/cluster.py
Apache-2.0
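cluster_dpc_knn is truncated, but its docstring names the algorithm: DPC-KNN, density peaks clustering with a k-nearest-neighbour density estimate. A single-batch, pure-Python sketch of the idea (density from kNN distances, a distance indicator to the nearest higher-density token, centers = top density*indicator, then nearest-center assignment), not the tensorized original:

```python
import math

def cluster_dpc_knn(tokens, cluster_num, k=2):
    """DPC-KNN sketch for one batch of tokens ([N, C] nested lists)."""
    n = len(tokens)
    d2 = [[sum((a - b) ** 2 for a, b in zip(tokens[i], tokens[j]))
           for j in range(n)] for i in range(n)]
    # 1. density: exp(-mean squared distance to the k nearest neighbours)
    density = []
    for i in range(n):
        knn = sorted(d2[i][j] for j in range(n) if j != i)[:k]
        density.append(math.exp(-sum(knn) / len(knn)))
    # 2. indicator: distance to the nearest strictly-higher-density token
    score = []
    for i in range(n):
        higher = [d2[i][j] for j in range(n) if density[j] > density[i]]
        indicator = min(higher) if higher else max(d2[i])
        score.append(density[i] * indicator)
    # 3. centers: top cluster_num by density * indicator; 4. assign to nearest
    centers = sorted(range(n), key=lambda i: -score[i])[:cluster_num]
    return [min(centers, key=lambda c: d2[i][c]) for i in range(n)]

# two obvious blobs -> two clusters
labels = cluster_dpc_knn([[0.0], [0.1], [5.0], [5.1]], 2)
print(labels[0] == labels[1], labels[2] == labels[3], labels[0] != labels[2])
```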
def merge_tokens(token_dict, idx_cluster, cluster_num, token_weight=None): """Merge tokens in the same cluster into a single token. Implemented by torch.index_add(). Flops: B*N*(C+2) Return: out_dict (dict): dict for output token information Args: token_dict (dict): dict for input token...
Merge tokens in the same cluster into a single token. Implemented by torch.index_add(). Flops: B*N*(C+2) Return: out_dict (dict): dict for output token information Args: token_dict (dict): dict for input token information idx_cluster (Tensor[B, N]): cluster index of each token. ...
merge_tokens
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/model/cluster.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/model/cluster.py
Apache-2.0
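merge_tokens is truncated; its docstring says tokens sharing a cluster id are combined via torch.index_add(). A single-batch, pure-Python sketch of the weighted-average merge it describes:

```python
def merge_tokens(tokens, idx_cluster, cluster_num, weights=None):
    """Average token vectors that share a cluster id (single batch).

    tokens: [N, C]; idx_cluster: [N] with ids in [0, cluster_num); weights: [N]."""
    n, c = len(tokens), len(tokens[0])
    weights = weights or [1.0] * n
    sums = [[0.0] * c for _ in range(cluster_num)]
    norm = [0.0] * cluster_num
    for vec, cid, w in zip(tokens, idx_cluster, weights):
        norm[cid] += w
        for j in range(c):
            sums[cid][j] += w * vec[j]
    # small epsilon guards empty clusters, as index_add-based code often does
    return [[s / max(norm[k], 1e-6) for s in sums[k]] for k in range(cluster_num)]

print(merge_tokens([[1.0], [3.0], [10.0]], [0, 0, 1], 2))  # [[2.0], [10.0]]
```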
def setup_for_distributed(is_master): """ This function disables printing when not in master process """ import builtins as __builtin__ builtin_print = __builtin__.print def print(*args, **kwargs): force = kwargs.pop("force", False) if is_master or force: builtin_pr...
This function disables printing when not in master process
setup_for_distributed
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/model/multimodal_encoder/utils.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/model/multimodal_encoder/utils.py
Apache-2.0
def download_cached_file(url, check_hash=True, progress=False): """ Download a file from a URL and cache it locally. If the file already exists, it is not downloaded again. If distributed, only the main process downloads the file, and the other processes wait for the file to be downloaded. """ def ...
Download a file from a URL and cache it locally. If the file already exists, it is not downloaded again. If distributed, only the main process downloads the file, and the other processes wait for the file to be downloaded.
download_cached_file
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/model/multimodal_encoder/utils.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/model/multimodal_encoder/utils.py
Apache-2.0
def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, past_key_value: Optional[Tuple[torch.Tensor]] = None, output_attentions: bool = False, use_cache: bool = False, ) -> Tuple[torch.Tensor, Optional[torch....
Input shape: Batch x Time x Channel; attention_mask: [bsz, q_len]
forward
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/train/llama_flash_attn_monkey_patch.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/train/llama_flash_attn_monkey_patch.py
Apache-2.0
def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str): """Collects the state dict and dump to disk.""" if getattr(trainer.args, "tune_mm_mlp_adapter", False): # Only save Adapter keys_to_match = ['mm_projector', "ctm", "block"] ...
Collects the state dict and dump to disk.
safe_save_model_for_hf_trainer
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/train/train.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/train/train.py
Apache-2.0
def smart_tokenizer_and_embedding_resize( special_tokens_dict: Dict, tokenizer: transformers.PreTrainedTokenizer, model: transformers.PreTrainedModel, ): """Resize tokenizer and embedding. Note: This is the unoptimized version that may make your embedding size not be divisible by 64. ...
Resize tokenizer and embedding. Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
smart_tokenizer_and_embedding_resize
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/train/train.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/train/train.py
Apache-2.0
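smart_tokenizer_and_embedding_resize is truncated; in this family of training scripts the usual trick is to initialize each newly added token's embedding to the mean of the existing rows. A plain-Python sketch of that initialization (assumed behaviour for illustration, not the verbatim torch code):

```python
def resize_embeddings(embeddings, num_new_tokens):
    """Append num_new_tokens rows, each set to the mean of the existing rows
    (a common initialization for newly added special tokens)."""
    dim = len(embeddings[0])
    mean = [sum(row[j] for row in embeddings) / len(embeddings) for j in range(dim)]
    return embeddings + [list(mean) for _ in range(num_new_tokens)]

emb = resize_embeddings([[0.0, 2.0], [2.0, 4.0]], 1)
print(emb[-1])  # [1.0, 3.0]
```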
def _add_speaker_and_signal(header, source, get_conversation=True): """Add speaker and start/end signal on each round.""" BEGIN_SIGNAL = "### " END_SIGNAL = "\n" conversation = header for sentence in source: from_str = sentence["from"] if from_str.lower() == "human": from...
Add speaker and start/end signal on each round.
_add_speaker_and_signal
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/train/train.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/train/train.py
Apache-2.0
def preprocess( sources: Sequence[str], tokenizer: transformers.PreTrainedTokenizer, has_image: bool = False ) -> Dict: """ Given a list of sources, each is a conversation list. This transform: 1. Add signal '### ' at the beginning of each sentence, with end signal '\n'; 2. Concaten...
Given a list of sources, each is a conversation list. This transform: 1. Add signal '### ' at the beginning of each sentence, with end signal '\n'; 2. Concatenate conversations together; 3. Tokenize the concatenated conversation; 4. Make a deepcopy as the target. Mask human words with IGNORE_INDEX. ...
preprocess
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/train/train.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/train/train.py
Apache-2.0
def make_supervised_data_module(tokenizer: transformers.PreTrainedTokenizer, data_args) -> Dict: """Make dataset and collator for supervised fine-tuning.""" train_dataset = LazySupervisedDataset(tokenizer=tokenizer, data_args=data_args) data_collator = DataCollatorForSupervis...
Make dataset and collator for supervised fine-tuning.
make_supervised_data_module
python
PKU-YuanGroup/Chat-UniVi
ChatUniVi/train/train.py
https://github.com/PKU-YuanGroup/Chat-UniVi/blob/master/ChatUniVi/train/train.py
Apache-2.0
def __init__(self, ipc_socket, callback=None, quit_callback=None): """Create the wrapper. *ipc_socket* is the pipe name. (Not including \\\\.\\pipe\\) *callback(json_data)* is the function for receiving events. *quit_callback* is called when the socket connection dies. """ ...
Create the wrapper. *ipc_socket* is the pipe name. (Not including \\.\pipe\) *callback(json_data)* is the function for receiving events. *quit_callback* is called when the socket connection dies.
__init__
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def send(self, data): """Send *data* to the pipe, encoded as JSON.""" try: self.socket.send_bytes(json.dumps(data).encode('utf-8') + b'\n') except OSError as ex: if len(ex.args) == 1 and ex.args[0] == "handle is closed": raise BrokenPipeError("handle is cl...
Send *data* to the pipe, encoded as JSON.
send
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def run(self): """Process pipe events. Do not run this directly. Use *start*.""" data = b'' try: while True: current_data = self.socket.recv_bytes(2048) if current_data == b'': break data += current_data ...
Process pipe events. Do not run this directly. Use *start*.
run
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def __init__(self, ipc_socket, callback=None, quit_callback=None): """Create the wrapper. *ipc_socket* is the path to the socket. *callback(json_data)* is the function for receiving events. *quit_callback* is called when the socket connection dies. """ self.ipc_socket = ...
Create the wrapper. *ipc_socket* is the path to the socket. *callback(json_data)* is the function for receiving events. *quit_callback* is called when the socket connection dies.
__init__
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def send(self, data): """Send *data* to the socket, encoded as JSON.""" if self.socket is None: raise BrokenPipeError("socket is closed") self.socket.send(json.dumps(data).encode('utf-8') + b'\n')
Send *data* to the socket, encoded as JSON.
send
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def run(self): """Process socket events. Do not run this directly. Use *start*.""" data = b'' try: while True: current_data = self.socket.recv(1024) if current_data == b'': break data += current_data ...
Process socket events. Do not run this directly. Use *start*.
run
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
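Both the pipe and socket run() loops above use the same framing: accumulate recv chunks in a bytes buffer and parse complete newline-terminated JSON lines, carrying any partial line forward to the next recv. A small sketch of that framing step:

```python
import json

def feed(buffer, chunk):
    """Append raw bytes, decode every complete newline-terminated JSON line,
    and return (leftover partial line, decoded messages)."""
    buffer += chunk
    *lines, rest = buffer.split(b'\n')  # rest is the trailing partial line
    messages = [json.loads(line) for line in lines if line.strip()]
    return rest, messages

rest, msgs = feed(b'', b'{"event": "pause"}\n{"event":')
print(msgs)  # [{'event': 'pause'}]
print(rest)  # b'{"event":'
```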
def __init__(self, ipc_socket, mpv_location=None, **kwargs): """ Create and start the MPV process. Will block until socket/pipe is available. *ipc_socket* is the path to the Unix/Linux socket or name of the Windows pipe. *mpv_location* is the path to mpv. If left unset it tries the one ...
Create and start the MPV process. Will block until socket/pipe is available. *ipc_socket* is the path to the Unix/Linux socket or name of the Windows pipe. *mpv_location* is the path to mpv. If left unset it tries the one in the PATH. All other arguments are forwarded to MPV as comman...
__init__
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def __init__(self, ipc_socket, callback=None, quit_callback=None): """Create the wrapper. *ipc_socket* is the path to the Unix/Linux socket or name of the Windows pipe. *callback(event_name, data)* is the function for receiving events. *quit_callback* is called when the socket connectio...
Create the wrapper. *ipc_socket* is the path to the Unix/Linux socket or name of the Windows pipe. *callback(event_name, data)* is the function for receiving events. *quit_callback* is called when the socket connection to MPV dies.
__init__
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def event_callback(self, data): """Internal callback for receiving events from MPV.""" if "request_id" in data: self.cid_result[data["request_id"]] = data self.cid_wait[data["request_id"]].set() elif "event" in data: self.callback(data["event"], data)
Internal callback for receiving events from MPV.
event_callback
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def command(self, command, *args): """ Issue a command to MPV. Will block until completed or timeout is reached. *command* is the name of the MPV command. All further arguments are forwarded to the MPV command. Throws TimeoutError if the timeout of 120 seconds is reached. ...
Issue a command to MPV. Will block until completed or timeout is reached. *command* is the name of the MPV command. All further arguments are forwarded to the MPV command. Throws TimeoutError if the timeout of 120 seconds is reached.
command
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def run(self): """Process socket events. Do not run this directly. Use *start*.""" while True: event = self.queue.get() if event == "quit": break try: event[0](*event[1]) except Exception: log.error("EventHan...
Process socket events. Do not run this directly. Use *start*.
run
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def __init__(self, start_mpv=True, ipc_socket=None, mpv_location=None, log_handler=None, loglevel=None, quit_callback=None, **kwargs): """ Create the interface to MPV and process instance. *start_mpv* will start an MPV process if true. (Default: True) *ipc_socket* is th...
Create the interface to MPV and process instance. *start_mpv* will start an MPV process if true. (Default: True) *ipc_socket* is the path to the Unix/Linux socket or name of Windows pipe. (Default: Random Temp File) *mpv_location* is the location of MPV for *start_mpv*. (Default: Use M...
__init__
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def bind_event(self, name, callback): """ Bind a callback to an MPV event. *name* is the MPV event name. *callback(event_data)* is the function to call. """ if name not in self.event_bindings: self.event_bindings[name] = set() self.event_bindings[name...
Bind a callback to an MPV event. *name* is the MPV event name. *callback(event_data)* is the function to call.
bind_event
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def on_event(self, name): """ Decorator to bind a callback to an MPV event. @on_event(name) def my_callback(event_data): pass """ def wrapper(func): self.bind_event(name, func) return func return wrapper
Decorator to bind a callback to an MPV event. @on_event(name) def my_callback(event_data): pass
on_event
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def on_key_press(self, name): """ Decorator to bind a callback to an MPV keypress event. @on_key_press(key_name) def my_callback(): pass """ def wrapper(func): self.bind_key_press(name, func) return func return wrapper
Decorator to bind a callback to an MPV keypress event. @on_key_press(key_name) def my_callback(): pass
on_key_press
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def bind_key_press(self, name, callback): """ Bind a callback to an MPV keypress event. *name* is the key symbol. *callback()* is the function to call. """ self.keybind_lock.acquire() keybind_id = self.keybind_id self.keybind_id += 1 self.keybind_...
Bind a callback to an MPV keypress event. *name* is the key symbol. *callback()* is the function to call.
bind_key_press
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def bind_property_observer(self, name, callback): """ Bind a callback to an MPV property change. *name* is the property name. *callback(name, data)* is the function to call. Returns a unique observer ID needed to destroy the observer. """ self.observer_lock.acqu...
Bind a callback to an MPV property change. *name* is the property name. *callback(name, data)* is the function to call. Returns a unique observer ID needed to destroy the observer.
bind_property_observer
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def property_observer(self, name): """ Decorator to bind a callback to an MPV property change. @property_observer(property_name) def my_callback(name, data): pass """ def wrapper(func): self.bind_property_observer(name, func) return fu...
Decorator to bind a callback to an MPV property change. @property_observer(property_name) def my_callback(name, data): pass
property_observer
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
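on_event, on_key_press, and property_observer above all follow the same decorator-that-registers pattern: the decorator stores the function in a binding table and returns it unchanged. A minimal standalone sketch of that pattern (the Registry class is illustrative, not from the source):

```python
class Registry:
    """Toy event registry mirroring the bind/decorator pairing above."""
    def __init__(self):
        self.bindings = {}

    def bind(self, name, callback):
        self.bindings.setdefault(name, set()).add(callback)

    def on(self, name):
        def wrapper(func):
            self.bind(name, func)  # register, then hand the function back
            return func
        return wrapper

reg = Registry()

@reg.on("pause")
def handle_pause(data):
    return f"paused: {data}"

print(handle_pause in reg.bindings["pause"])  # True
```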
def wait_for_property(self, name): """ Waits for the value of a property to change. *name* is the name of the property. """ event = threading.Event() first_event = True def handler(*_): nonlocal first_event if first_event == True: ...
Waits for the value of a property to change. *name* is the name of the property.
wait_for_property
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
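wait_for_property above leans on a subtlety of property observers: MPV fires the callback once immediately with the *current* value, so the first notification must be ignored and only the second (a real change) releases the waiter. A threading.Event sketch of that logic, with a simulated observer in place of MPV:

```python
import threading

def wait_for_change(subscribe):
    """Block until the second observer callback (the first reports the
    current value and is skipped). Returns True if a change arrived."""
    event = threading.Event()
    first = True

    def handler(*_):
        nonlocal first
        if first:
            first = False  # initial value notification, not a change
        else:
            event.set()

    subscribe(handler)
    event.wait(timeout=1.0)
    return event.is_set()

# Simulated property: one initial notification, then one real change.
def subscribe(h):
    h("volume", 50)                                          # current value
    threading.Timer(0.05, lambda: h("volume", 60)).start()   # the change

print(wait_for_change(subscribe))  # True
```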
def terminate(self, join=True): """Terminate the connection to MPV and process (if *start_mpv* is used).""" if self.mpv_process: self.mpv_process.stop() if self.mpv_inter: self.mpv_inter.stop(join) self.event_handler.stop(join)
Terminate the connection to MPV and process (if *start_mpv* is used).
terminate
python
kjtsune/embyToLocalPlayer
utils/python_mpv_jsonipc.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/python_mpv_jsonipc.py
Apache-2.0
def get_series_single_season(self, ser_id, season_num, translations='', info_only=False): """ ser_id: Trakt ID, Trakt slug, or IMDB ID translations: specific 2 digit country language code return: [{.., ids:ep_ids}, ..] not standard ids_item, not type field """ trans = f'?...
ser_id: Trakt ID, Trakt slug, or IMDB ID translations: specific 2 digit country language code return: [{.., ids:ep_ids}, ..] not standard ids_item, not type field
get_series_single_season
python
kjtsune/embyToLocalPlayer
utils/trakt_api.py
https://github.com/kjtsune/embyToLocalPlayer/blob/master/utils/trakt_api.py
Apache-2.0
async def math_guardrail( context: RunContextWrapper[None], agent: Agent, input: str | list[TResponseInputItem] ) -> GuardrailFunctionOutput: """This is an input guardrail function, which happens to call an agent to check if the input is a math homework question. """ result = await Runner.run(guardr...
This is an input guardrail function, which happens to call an agent to check if the input is a math homework question.
math_guardrail
python
openai/openai-agents-python
examples/agent_patterns/input_guardrails.py
https://github.com/openai/openai-agents-python/blob/master/examples/agent_patterns/input_guardrails.py
MIT
async def update_seat( context: RunContextWrapper[AirlineAgentContext], confirmation_number: str, new_seat: str ) -> str: """ Update the seat for a given confirmation number. Args: confirmation_number: The confirmation number for the flight. new_seat: The new seat to update to. """ ...
Update the seat for a given confirmation number. Args: confirmation_number: The confirmation number for the flight. new_seat: The new seat to update to.
update_seat
python
openai/openai-agents-python
examples/customer_service/main.py
https://github.com/openai/openai-agents-python/blob/master/examples/customer_service/main.py
MIT
def get_weather(city: str) -> str: """Get the weather for a given city.""" print(f"[debug] get_weather called with city: {city}") choices = ["sunny", "cloudy", "rainy", "snowy"] return f"The weather in {city} is {random.choice(choices)}."
Get the weather for a given city.
get_weather
python
openai/openai-agents-python
examples/voice/static/main.py
https://github.com/openai/openai-agents-python/blob/master/examples/voice/static/main.py
MIT
def compose(self) -> ComposeResult: """Create child widgets for the app.""" with Container(): yield Header(id="session-display") yield AudioStatusIndicator(id="status-indicator") yield RichLog(id="bottom-pane", wrap=True, highlight=True, markup=True)
Create child widgets for the app.
compose
python
openai/openai-agents-python
examples/voice/streamed/main.py
https://github.com/openai/openai-agents-python/blob/master/examples/voice/streamed/main.py
MIT
def get_weather(city: str) -> str: """Get the weather for a given city.""" print(f"[debug] get_weather called with city: {city}") choices = ["sunny", "cloudy", "rainy", "snowy"] return f"The weather in {city} is {random.choice(choices)}."
Get the weather for a given city.
get_weather
python
openai/openai-agents-python
examples/voice/streamed/my_workflow.py
https://github.com/openai/openai-agents-python/blob/master/examples/voice/streamed/my_workflow.py
MIT
def __init__(self, secret_word: str, on_start: Callable[[str], None]): """ Args: secret_word: The secret word to guess. on_start: A callback that is called when the workflow starts. The transcription is passed in as an argument. """ self._input_his...
Args: secret_word: The secret word to guess. on_start: A callback that is called when the workflow starts. The transcription is passed in as an argument.
__init__
python
openai/openai-agents-python
examples/voice/streamed/my_workflow.py
https://github.com/openai/openai-agents-python/blob/master/examples/voice/streamed/my_workflow.py
MIT
def as_tool( self, tool_name: str | None, tool_description: str | None, custom_output_extractor: Callable[[RunResult], Awaitable[str]] | None = None, ) -> Tool: """Transform this agent into a tool, callable by other agents. This is different from handoffs in two ways...
Transform this agent into a tool, callable by other agents. This is different from handoffs in two ways: 1. In handoffs, the new agent receives the conversation history. In this tool, the new agent receives generated input. 2. In handoffs, the new agent takes over the conversation. I...
as_tool
python
openai/openai-agents-python
src/agents/agent.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/agent.py
MIT
async def get_system_prompt(self, run_context: RunContextWrapper[TContext]) -> str | None: """Get the system prompt for the agent.""" if isinstance(self.instructions, str): return self.instructions elif callable(self.instructions): if inspect.iscoroutinefunction(self.inst...
Get the system prompt for the agent.
get_system_prompt
python
openai/openai-agents-python
src/agents/agent.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/agent.py
MIT
async def get_mcp_tools(self) -> list[Tool]:
    """Fetches the available tools from the MCP servers."""
    convert_schemas_to_strict = self.mcp_config.get("convert_schemas_to_strict", False)
    return await MCPUtil.get_all_function_tools(self.mcp_servers, convert_schemas_to_strict)
Fetches the available tools from the MCP servers.
get_mcp_tools
python
openai/openai-agents-python
src/agents/agent.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/agent.py
MIT
async def get_all_tools(self, run_context: RunContextWrapper[Any]) -> list[Tool]: """All agent tools, including MCP tools and function tools.""" mcp_tools = await self.get_mcp_tools() async def _check_tool_enabled(tool: Tool) -> bool: if not isinstance(tool, FunctionTool): ...
All agent tools, including MCP tools and function tools.
get_all_tools
python
openai/openai-agents-python
src/agents/agent.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/agent.py
MIT
def __init__(self, output_type: type[Any], strict_json_schema: bool = True): """ Args: output_type: The type of the output. strict_json_schema: Whether the JSON schema is in strict mode. We **strongly** recommend setting this to True, as it increases the likelihoo...
Args: output_type: The type of the output. strict_json_schema: Whether the JSON schema is in strict mode. We **strongly** recommend setting this to True, as it increases the likelihood of correct JSON input.
__init__
python
openai/openai-agents-python
src/agents/agent_output.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/agent_output.py
MIT
def json_schema(self) -> dict[str, Any]:
    """The JSON schema of the output type."""
    if self.is_plain_text():
        raise UserError("Output type is plain text, so no JSON schema is available")
    return self._output_schema
The JSON schema of the output type.
json_schema
python
openai/openai-agents-python
src/agents/agent_output.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/agent_output.py
MIT
def validate_json(self, json_str: str) -> Any: """Validate a JSON string against the output type. Returns the validated object, or raises a `ModelBehaviorError` if the JSON is invalid. """ validated = _json.validate_json(json_str, self._type_adapter, partial=False) if self._is_wr...
Validate a JSON string against the output type. Returns the validated object, or raises a `ModelBehaviorError` if the JSON is invalid.
validate_json
python
openai/openai-agents-python
src/agents/agent_output.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/agent_output.py
MIT
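A minimal stdlib-only sketch of the `validate_json` contract — parse, check the decoded shape, raise on bad input. It stands in for the Pydantic `TypeAdapter` validation the real method performs:

```python
import json
from typing import Any

def validate_json(json_str: str, required_keys: set[str]) -> dict[str, Any]:
    # Parse first; any decode failure becomes a ValueError (the real code
    # raises a ModelBehaviorError instead).
    try:
        obj = json.loads(json_str)
    except json.JSONDecodeError as exc:
        raise ValueError(f"invalid JSON: {exc}") from exc
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    missing = required_keys - obj.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return obj

parsed = validate_json('{"city": "Oslo", "temp": 7}', {"city", "temp"})
```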
def to_call_args(self, data: BaseModel) -> tuple[list[Any], dict[str, Any]]: """ Converts validated data from the Pydantic model into (args, kwargs), suitable for calling the original function. """ positional_args: list[Any] = [] keyword_args: dict[str, Any] = {} ...
Converts validated data from the Pydantic model into (args, kwargs), suitable for calling the original function.
to_call_args
python
openai/openai-agents-python
src/agents/function_schema.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/function_schema.py
MIT
def generate_func_documentation( func: Callable[..., Any], style: DocstringStyle | None = None ) -> FuncDocumentation: """ Extracts metadata from a function docstring, in preparation for sending it to an LLM as a tool. Args: func: The function to extract documentation from. style: The s...
Extracts metadata from a function docstring, in preparation for sending it to an LLM as a tool. Args: func: The function to extract documentation from. style: The style of the docstring to use for parsing. If not provided, we will attempt to auto-detect the style. Returns: ...
generate_func_documentation
python
openai/openai-agents-python
src/agents/function_schema.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/function_schema.py
MIT
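The docstring-extraction step described above can be approximated with the stdlib alone. This sketch pulls the function name, summary line, and parameter names; it omits the style-aware docstring parsing (`DocstringStyle`) the original supports:

```python
import inspect
from typing import Any, Callable

def func_documentation(func: Callable[..., Any]) -> dict[str, Any]:
    # Name, first docstring line, and parameter names -- a rough FuncDocumentation.
    doc = inspect.getdoc(func) or ""
    summary = doc.splitlines()[0] if doc else None
    params = list(inspect.signature(func).parameters)
    return {"name": func.__name__, "description": summary, "params": params}

def fetch(url: str, timeout: float = 5.0) -> str:
    """Fetch a URL and return its body."""
    return ""

meta = func_documentation(fetch)
```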
def function_schema( func: Callable[..., Any], docstring_style: DocstringStyle | None = None, name_override: str | None = None, description_override: str | None = None, use_docstring_info: bool = True, strict_json_schema: bool = True, ) -> FuncSchema: """ Given a python function, extract...
Given a python function, extracts a `FuncSchema` from it, capturing the name, description, parameter descriptions, and other metadata. Args: func: The function to extract the schema from. docstring_style: The style of the docstring to use for parsing. If not provided, we will a...
function_schema
python
openai/openai-agents-python
src/agents/function_schema.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/function_schema.py
MIT
def input_guardrail( func: _InputGuardrailFuncSync[TContext_co] | _InputGuardrailFuncAsync[TContext_co] | None = None, *, name: str | None = None, ) -> ( InputGuardrail[TContext_co] | Callable[ [_InputGuardrailFuncSync[TContext_co] | _InputGuardrailFuncAsync[TContext_co]], In...
Decorator that transforms a sync or async function into an `InputGuardrail`. It can be used directly (no parentheses) or with keyword args, e.g.: @input_guardrail def my_sync_guardrail(...): ... @input_guardrail(name="guardrail_name") async def my_async_guardrail(...): ... ...
input_guardrail
python
openai/openai-agents-python
src/agents/guardrail.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/guardrail.py
MIT
def output_guardrail( func: _OutputGuardrailFuncSync[TContext_co] | _OutputGuardrailFuncAsync[TContext_co] | None = None, *, name: str | None = None, ) -> ( OutputGuardrail[TContext_co] | Callable[ [_OutputGuardrailFuncSync[TContext_co] | _OutputGuardrailFuncAsync[TContext_co]], ...
Decorator that transforms a sync or async function into an `OutputGuardrail`. It can be used directly (no parentheses) or with keyword args, e.g.: @output_guardrail def my_sync_guardrail(...): ... @output_guardrail(name="guardrail_name") async def my_async_guardrail(...): ... ...
output_guardrail
python
openai/openai-agents-python
src/agents/guardrail.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/guardrail.py
MIT
def handoff( agent: Agent[TContext], tool_name_override: str | None = None, tool_description_override: str | None = None, on_handoff: OnHandoffWithInput[THandoffInput] | OnHandoffWithoutInput | None = None, input_type: type[THandoffInput] | None = None, input_filter: Callable[[HandoffInputData],...
Create a handoff from an agent. Args: agent: The agent to handoff to, or a function that returns an agent. tool_name_override: Optional override for the name of the tool that represents the handoff. tool_description_override: Optional override for the description of the tool that ...
handoff
python
openai/openai-agents-python
src/agents/handoffs.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/handoffs.py
MIT
def to_input_item(self) -> TResponseInputItem: """Converts this item into an input item suitable for passing to the model.""" if isinstance(self.raw_item, dict): # We know that input items are dicts, so we can ignore the type error return self.raw_item # type: ignore eli...
Converts this item into an input item suitable for passing to the model.
to_input_item
python
openai/openai-agents-python
src/agents/items.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/items.py
MIT
def to_input_items(self) -> list[TResponseInputItem]: """Convert the output into a list of input items suitable for passing to the model.""" # We happen to know that the shape of the Pydantic output items are the same as the # equivalent TypedDict input items, so we can just convert each one. ...
Convert the output into a list of input items suitable for passing to the model.
to_input_items
python
openai/openai-agents-python
src/agents/items.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/items.py
MIT
def extract_last_content(cls, message: TResponseOutputItem) -> str: """Extracts the last text content or refusal from a message.""" if not isinstance(message, ResponseOutputMessage): return "" last_content = message.content[-1] if isinstance(last_content, ResponseOutputText)...
Extracts the last text content or refusal from a message.
extract_last_content
python
openai/openai-agents-python
src/agents/items.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/items.py
MIT
def extract_last_text(cls, message: TResponseOutputItem) -> str | None: """Extracts the last text content from a message, if any. Ignores refusals.""" if isinstance(message, ResponseOutputMessage): last_content = message.content[-1] if isinstance(last_content, ResponseOutputText)...
Extracts the last text content from a message, if any. Ignores refusals.
extract_last_text
python
openai/openai-agents-python
src/agents/items.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/items.py
MIT
def input_to_new_input_list( cls, input: str | list[TResponseInputItem] ) -> list[TResponseInputItem]: """Converts a string or list of input items into a list of input items.""" if isinstance(input, str): return [ { "content": input, ...
Converts a string or list of input items into a list of input items.
input_to_new_input_list
python
openai/openai-agents-python
src/agents/items.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/items.py
MIT
def text_message_outputs(cls, items: list[RunItem]) -> str:
    """Concatenates all the text content from a list of message output items."""
    text = ""
    for item in items:
        if isinstance(item, MessageOutputItem):
            text += cls.text_message_output(item)
    return text
Concatenates all the text content from a list of message output items.
text_message_outputs
python
openai/openai-agents-python
src/agents/items.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/items.py
MIT
def text_message_output(cls, message: MessageOutputItem) -> str:
    """Extracts all the text content from a single message output item."""
    text = ""
    for item in message.raw_item.content:
        if isinstance(item, ResponseOutputText):
            text += item.text
    return text
Extracts all the text content from a single message output item.
text_message_output
python
openai/openai-agents-python
src/agents/items.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/items.py
MIT
def tool_call_output_item( cls, tool_call: ResponseFunctionToolCall, output: str ) -> FunctionCallOutput: """Creates a tool call output item from a tool call and its output.""" return { "call_id": tool_call.call_id, "output": output, "type": "function_call...
Creates a tool call output item from a tool call and its output.
tool_call_output_item
python
openai/openai-agents-python
src/agents/items.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/items.py
MIT
async def on_agent_start(
    self, context: RunContextWrapper[TContext], agent: Agent[TContext]
) -> None:
    """Called before the agent is invoked. Called each time the current agent changes."""
    pass
Called before the agent is invoked. Called each time the current agent changes.
on_agent_start
python
openai/openai-agents-python
src/agents/lifecycle.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/lifecycle.py
MIT
async def on_agent_end(
    self,
    context: RunContextWrapper[TContext],
    agent: Agent[TContext],
    output: Any,
) -> None:
    """Called when the agent produces a final output."""
    pass
Called when the agent produces a final output.
on_agent_end
python
openai/openai-agents-python
src/agents/lifecycle.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/lifecycle.py
MIT
async def on_start(self, context: RunContextWrapper[TContext], agent: Agent[TContext]) -> None:
    """Called before the agent is invoked. Called each time the running agent is changed to this agent."""
    pass
Called before the agent is invoked. Called each time the running agent is changed to this agent.
on_start
python
openai/openai-agents-python
src/agents/lifecycle.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/lifecycle.py
MIT
async def on_end(
    self,
    context: RunContextWrapper[TContext],
    agent: Agent[TContext],
    output: Any,
) -> None:
    """Called when the agent produces a final output."""
    pass
Called when the agent produces a final output.
on_end
python
openai/openai-agents-python
src/agents/lifecycle.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/lifecycle.py
MIT
async def on_handoff(
    self,
    context: RunContextWrapper[TContext],
    agent: Agent[TContext],
    source: Agent[TContext],
) -> None:
    """Called when the agent is being handed off to. The `source` is the agent that is handing off to this agent."""
    pass
Called when the agent is being handed off to. The `source` is the agent that is handing off to this agent.
on_handoff
python
openai/openai-agents-python
src/agents/lifecycle.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/lifecycle.py
MIT
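The lifecycle methods above are no-op coroutines meant to be overridden. A sketch of a subclass that counts starts and ends — the base class and driver here are stand-ins, not the library's `AgentHooks`:

```python
import asyncio
from typing import Any

class BaseHooks:
    async def on_start(self, agent: Any) -> None:
        pass
    async def on_end(self, agent: Any, output: Any) -> None:
        pass

class CountingHooks(BaseHooks):
    def __init__(self) -> None:
        self.starts = 0
        self.ends = 0
    async def on_start(self, agent: Any) -> None:
        self.starts += 1
    async def on_end(self, agent: Any, output: Any) -> None:
        self.ends += 1

async def run_with_hooks(hooks: BaseHooks) -> str:
    # A toy run loop: fire on_start, produce output, fire on_end.
    await hooks.on_start(agent="demo")
    output = "done"
    await hooks.on_end(agent="demo", output=output)
    return output

hooks = CountingHooks()
result = asyncio.run(run_with_hooks(hooks))
```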
def resolve(self, override: ModelSettings | None) -> ModelSettings: """Produce a new ModelSettings by overlaying any non-None values from the override on top of this instance.""" if override is None: return self changes = { field.name: getattr(override, field.nam...
Produce a new ModelSettings by overlaying any non-None values from the override on top of this instance.
resolve
python
openai/openai-agents-python
src/agents/model_settings.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/model_settings.py
MIT
async def run_demo_loop(agent: Agent[Any], *, stream: bool = True) -> None: """Run a simple REPL loop with the given agent. This utility allows quick manual testing and debugging of an agent from the command line. Conversation state is preserved across turns. Enter ``exit`` or ``quit`` to stop the loop...
Run a simple REPL loop with the given agent. This utility allows quick manual testing and debugging of an agent from the command line. Conversation state is preserved across turns. Enter ``exit`` or ``quit`` to stop the loop. Args: agent: The starting agent to run. stream: Whether to s...
run_demo_loop
python
openai/openai-agents-python
src/agents/repl.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/repl.py
MIT
def final_output_as(self, cls: type[T], raise_if_incorrect_type: bool = False) -> T: """A convenience method to cast the final output to a specific type. By default, the cast is only for the typechecker. If you set `raise_if_incorrect_type` to True, we'll raise a TypeError if the final output is...
A convenience method to cast the final output to a specific type. By default, the cast is only for the typechecker. If you set `raise_if_incorrect_type` to True, we'll raise a TypeError if the final output is not of the given type. Args: cls: The type to cast the final output to. ...
final_output_as
python
openai/openai-agents-python
src/agents/result.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/result.py
MIT
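`final_output_as` is essentially a typed cast with an optional runtime check; a stand-alone sketch of that shape:

```python
from typing import TypeVar

T = TypeVar("T")

def final_output_as(output: object, cls: type[T], raise_if_incorrect_type: bool = False) -> T:
    # By default this only narrows the static type; opt in to a runtime check.
    if raise_if_incorrect_type and not isinstance(output, cls):
        raise TypeError(f"Final output is not of type {cls.__name__}")
    return output  # type: ignore[return-value]

value = final_output_as("42", str, raise_if_incorrect_type=True)
```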
def to_input_list(self) -> list[TResponseInputItem]: """Creates a new input list, merging the original input with all the new items generated.""" original_items: list[TResponseInputItem] = ItemHelpers.input_to_new_input_list(self.input) new_items = [item.to_input_item() for item in self.new_item...
Creates a new input list, merging the original input with all the new items generated.
to_input_list
python
openai/openai-agents-python
src/agents/result.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/result.py
MIT
def last_response_id(self) -> str | None:
    """Convenience method to get the response ID of the last model response."""
    if not self.raw_responses:
        return None
    return self.raw_responses[-1].response_id
Convenience method to get the response ID of the last model response.
last_response_id
python
openai/openai-agents-python
src/agents/result.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/result.py
MIT
def cancel(self) -> None:
    """Cancels the streaming run, stopping all background tasks and marking the run as complete."""
    self._cleanup_tasks()  # Cancel all running tasks
    self.is_complete = True  # Mark the run as complete to stop event streaming
    # Optionally, clear the event q...
Cancels the streaming run, stopping all background tasks and marking the run as complete.
cancel
python
openai/openai-agents-python
src/agents/result.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/result.py
MIT
def _create_error_details(self) -> RunErrorDetails: """Return a `RunErrorDetails` object considering the current attributes of the class.""" return RunErrorDetails( input=self.input, new_items=self.new_items, raw_responses=self.raw_responses, last_agent=se...
Return a `RunErrorDetails` object considering the current attributes of the class.
_create_error_details
python
openai/openai-agents-python
src/agents/result.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/result.py
MIT
async def run( cls, starting_agent: Agent[TContext], input: str | list[TResponseInputItem], *, context: TContext | None = None, max_turns: int = DEFAULT_MAX_TURNS, hooks: RunHooks[TContext] | None = None, run_config: RunConfig | None = None, previo...
Run a workflow starting at the given agent. The agent will run in a loop until a final output is generated. The loop runs like so: 1. The agent is invoked with the given input. 2. If there is a final output (i.e. the agent produces something of type `agent.output_type`, the loop term...
run
python
openai/openai-agents-python
src/agents/run.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/run.py
MIT
def run_sync( cls, starting_agent: Agent[TContext], input: str | list[TResponseInputItem], *, context: TContext | None = None, max_turns: int = DEFAULT_MAX_TURNS, hooks: RunHooks[TContext] | None = None, run_config: RunConfig | None = None, previou...
Run a workflow synchronously, starting at the given agent. Note that this just wraps the `run` method, so it will not work if there's already an event loop (e.g. inside an async function, or in a Jupyter notebook or async context like FastAPI). For those cases, use the `run` method instead. ...
run_sync
python
openai/openai-agents-python
src/agents/run.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/run.py
MIT
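`run_sync` is documented as a thin wrapper over the async `run` that fails when an event loop is already running. The shape of such a wrapper, sketched with stdlib asyncio (the runner here is a stand-in):

```python
import asyncio

async def run(prompt: str) -> str:
    # Stand-in for the async workflow runner.
    await asyncio.sleep(0)
    return f"echo: {prompt}"

def run_sync(prompt: str) -> str:
    # asyncio.run raises RuntimeError if an event loop is already running,
    # which is why the docstring steers async contexts (FastAPI, notebooks)
    # to the async `run` instead.
    return asyncio.run(run(prompt))

result = run_sync("hi")
```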
def run_streamed( cls, starting_agent: Agent[TContext], input: str | list[TResponseInputItem], context: TContext | None = None, max_turns: int = DEFAULT_MAX_TURNS, hooks: RunHooks[TContext] | None = None, run_config: RunConfig | None = None, previous_respo...
Run a workflow starting at the given agent in streaming mode. The returned result object contains a method you can use to stream semantic events as they are generated. The agent will run in a loop until a final output is generated. The loop runs like so: 1. The agent is invoked with the given i...
run_streamed
python
openai/openai-agents-python
src/agents/run.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/run.py
MIT
def ensure_strict_json_schema( schema: dict[str, Any], ) -> dict[str, Any]: """Mutates the given JSON schema to ensure it conforms to the `strict` standard that the OpenAI API expects. """ if schema == {}: return _EMPTY_SCHEMA return _ensure_strict_json_schema(schema, path=(), root=schem...
Mutates the given JSON schema to ensure it conforms to the `strict` standard that the OpenAI API expects.
ensure_strict_json_schema
python
openai/openai-agents-python
src/agents/strict_schema.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/strict_schema.py
MIT
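The strict-schema pass can be sketched as a recursive walk that forces `additionalProperties: false` on every object node. This is a simplification of what the real function does (it also handles `$ref`, unions, and defaults):

```python
from typing import Any

def ensure_strict(schema: dict[str, Any]) -> dict[str, Any]:
    # Recursively mark every object schema as closed.
    if schema.get("type") == "object":
        schema["additionalProperties"] = False
        for sub in schema.get("properties", {}).values():
            if isinstance(sub, dict):
                ensure_strict(sub)
    if isinstance(schema.get("items"), dict):
        ensure_strict(schema["items"])
    return schema

strict = ensure_strict({
    "type": "object",
    "properties": {"tags": {"type": "array", "items": {"type": "object", "properties": {}}}},
})
```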
def function_tool( func: ToolFunction[...], *, name_override: str | None = None, description_override: str | None = None, docstring_style: DocstringStyle | None = None, use_docstring_info: bool = True, failure_error_function: ToolErrorFunction | None = None, strict_mode: bool = True, ...
Overload for usage as @function_tool (no parentheses).
function_tool
python
openai/openai-agents-python
src/agents/tool.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/tool.py
MIT
def function_tool( func: ToolFunction[...] | None = None, *, name_override: str | None = None, description_override: str | None = None, docstring_style: DocstringStyle | None = None, use_docstring_info: bool = True, failure_error_function: ToolErrorFunction | None = default_tool_error_functi...
Decorator to create a FunctionTool from a function. By default, we will: 1. Parse the function signature to create a JSON schema for the tool's parameters. 2. Use the function's docstring to populate the tool's description. 3. Use the function's docstring to populate argument descriptions. The docs...
function_tool
python
openai/openai-agents-python
src/agents/tool.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/tool.py
MIT
def maybe_reset_tool_choice( cls, agent: Agent[Any], tool_use_tracker: AgentToolUseTracker, model_settings: ModelSettings ) -> ModelSettings: """Resets tool choice to None if the agent has used tools and the agent's reset_tool_choice flag is True.""" if agent.reset_tool_choice is Tr...
Resets tool choice to None if the agent has used tools and the agent's reset_tool_choice flag is True.
maybe_reset_tool_choice
python
openai/openai-agents-python
src/agents/_run_impl.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/_run_impl.py
MIT
def remove_all_tools(handoff_input_data: HandoffInputData) -> HandoffInputData: """Filters out all tool items: file search, web search and function calls+output.""" history = handoff_input_data.input_history new_items = handoff_input_data.new_items filtered_history = ( _remove_tool_types_from_...
Filters out all tool items: file search, web search and function calls+output.
remove_all_tools
python
openai/openai-agents-python
src/agents/extensions/handoff_filters.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/extensions/handoff_filters.py
MIT
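The handoff filter above drops tool-related items from the history; the filtering itself reduces to a comprehension over item types. The type names below are plausible placeholders, not necessarily the library's exact literals:

```python
from typing import Any

TOOL_TYPES = {"function_call", "function_call_output", "file_search_call", "web_search_call"}

def remove_tool_items(history: list[dict[str, Any]]) -> list[dict[str, Any]]:
    # Keep everything that is not a tool call or tool output.
    return [item for item in history if item.get("type") not in TOOL_TYPES]

history = [
    {"type": "message", "content": "hi"},
    {"type": "function_call", "name": "get_weather"},
    {"type": "function_call_output", "output": "sunny"},
]
cleaned = remove_tool_items(history)
```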
def get_main_graph(agent: Agent) -> str: """ Generates the main graph structure in DOT format for the given agent. Args: agent (Agent): The agent for which the graph is to be generated. Returns: str: The DOT format string representing the graph. """ parts = [ """ di...
Generates the main graph structure in DOT format for the given agent. Args: agent (Agent): The agent for which the graph is to be generated. Returns: str: The DOT format string representing the graph.
get_main_graph
python
openai/openai-agents-python
src/agents/extensions/visualization.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/extensions/visualization.py
MIT
def get_all_nodes( agent: Agent, parent: Agent | None = None, visited: set[str] | None = None ) -> str: """ Recursively generates the nodes for the given agent and its handoffs in DOT format. Args: agent (Agent): The agent for which the nodes are to be generated. Returns: str: The ...
Recursively generates the nodes for the given agent and its handoffs in DOT format. Args: agent (Agent): The agent for which the nodes are to be generated. Returns: str: The DOT format string representing the nodes.
get_all_nodes
python
openai/openai-agents-python
src/agents/extensions/visualization.py
https://github.com/openai/openai-agents-python/blob/master/src/agents/extensions/visualization.py
MIT
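The visualization helpers emit Graphviz DOT. Generating a node-and-edge string for a tiny agent graph is straightforward; the formatting details here are assumptions, not the library's exact output:

```python
def agent_graph_dot(agents: list[str], handoffs: list[tuple[str, str]]) -> str:
    # Emit a minimal digraph: one box per agent, one edge per handoff.
    lines = ["digraph G {"]
    for name in agents:
        lines.append(f'  "{name}" [shape=box];')
    for src, dst in handoffs:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

dot = agent_graph_dot(["triage", "billing"], [("triage", "billing")])
```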