| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def test_items_to_messages_with_function_output_item():
"""
A function call output item should be converted into a tool role message
dict with the appropriate tool_call_id and content.
"""
func_output_item: FunctionCallOutput = {
"type": "function_call_output",
"call_id": "somecall",... |
A function call output item should be converted into a tool role message
dict with the appropriate tool_call_id and content.
| test_items_to_messages_with_function_output_item | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
def test_extract_all_and_text_content_for_strings_and_lists():
"""
The converter provides helpers for extracting user-supplied message content
either as a simple string or as a list of `input_text` dictionaries.
When passed a bare string, both `extract_all_content` and
`extract_text_content` should ... |
The converter provides helpers for extracting user-supplied message content
either as a simple string or as a list of `input_text` dictionaries.
When passed a bare string, both `extract_all_content` and
`extract_text_content` should return the string unchanged.
When passed a list of input dictionar... | test_extract_all_and_text_content_for_strings_and_lists | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
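The string-versus-list behavior described above can be sketched as follows. This is an illustrative stand-in only; the SDK's real `extract_all_content` and `extract_text_content` helpers handle more part types than this.

```python
def extract_text_content(content):
    """A bare string passes through unchanged; a list of `input_text`
    part dicts is joined into a single string."""
    if isinstance(content, str):
        return content
    return "".join(
        part["text"] for part in content if part.get("type") == "input_text"
    )
```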
def test_items_to_messages_handles_system_and_developer_roles():
"""
Roles other than `user` (e.g. `system` and `developer`) need to be
converted appropriately whether provided as simple dicts or as full
`message` typed dicts.
"""
sys_items: list[TResponseInputItem] = [{"role": "system", "conten... |
Roles other than `user` (e.g. `system` and `developer`) need to be
converted appropriately whether provided as simple dicts or as full
`message` typed dicts.
| test_items_to_messages_handles_system_and_developer_roles | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
def test_maybe_input_message_allows_message_typed_dict():
"""
The `Converter.maybe_input_message` should recognize a dict with
"type": "message" and a supported role as an input message. Ensure
that such dicts are passed through by `items_to_messages`.
"""
# Construct a dict with the proper requ... |
The `Converter.maybe_input_message` should recognize a dict with
"type": "message" and a supported role as an input message. Ensure
that such dicts are passed through by `items_to_messages`.
| test_maybe_input_message_allows_message_typed_dict | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
def test_tool_call_conversion():
"""
Test that tool calls are converted correctly.
"""
function_call = ResponseFunctionToolCallParam(
id="tool1",
call_id="abc",
name="math",
arguments="{}",
type="function_call",
)
messages = Converter.items_to_messages([f... |
Test that tool calls are converted correctly.
| test_tool_call_conversion | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
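The tool-call conversion exercised above can be sketched like this. It is a hedged illustration, not the library's converter; the output shape assumes the Chat Completions `tool_calls` format.

```python
def function_call_to_assistant_message(item: dict) -> dict:
    """Map a `function_call` item onto an assistant message that carries
    one tool call (Chat Completions `tool_calls` shape assumed)."""
    return {
        "role": "assistant",
        "tool_calls": [
            {
                "id": item["call_id"],
                "type": "function",
                "function": {"name": item["name"], "arguments": item["arguments"]},
            }
        ],
    }
```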
def test_input_message_with_all_roles(role: str):
"""
The `Converter.maybe_input_message` should recognize a dict with
"type": "message" and a supported role as an input message. Ensure
that such dicts are passed through by `items_to_messages`.
"""
# Construct a dict with the proper required key... |
The `Converter.maybe_input_message` should recognize a dict with
"type": "message" and a supported role as an input message. Ensure
that such dicts are passed through by `items_to_messages`.
| test_input_message_with_all_roles | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
def test_item_reference_errors():
"""
    Test that item references raise a UserError.
"""
with pytest.raises(UserError):
Converter.items_to_messages(
[
{
"type": "item_reference",
"id": "item1",
}
... |
    Test that item references raise a UserError.
| test_item_reference_errors | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
def test_unknown_object_errors():
"""
    Test that unknown objects raise a UserError.
"""
with pytest.raises(UserError, match="Unhandled item type or structure"):
# Purposely ignore the type error
Converter.items_to_messages([TestObject()]) # type: ignore |
    Test that unknown objects raise a UserError.
| test_unknown_object_errors | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
def test_assistant_messages_in_history():
"""
Test that assistant messages are added to the history.
"""
messages = Converter.items_to_messages(
[
{
"role": "user",
"content": "Hello",
},
{
"role": "assistant",
... |
Test that assistant messages are added to the history.
| test_assistant_messages_in_history | python | openai/openai-agents-python | tests/test_openai_chatcompletions_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_chatcompletions_converter.py | MIT |
def test_convert_tool_choice_standard_values():
"""
Make sure that the standard tool_choice values map to themselves or
to "auto"/"required"/"none" as appropriate, and that special string
values map to the appropriate dicts.
"""
assert Converter.convert_tool_choice(None) is NOT_GIVEN
assert ... |
Make sure that the standard tool_choice values map to themselves or
to "auto"/"required"/"none" as appropriate, and that special string
values map to the appropriate dicts.
| test_convert_tool_choice_standard_values | python | openai/openai-agents-python | tests/test_openai_responses_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_responses_converter.py | MIT |
def test_get_response_format_plain_text_and_json_schema():
"""
For plain text output (default, or output type of `str`), the converter
should return NOT_GIVEN, indicating no special response format constraint.
If an output schema is provided for a structured type, the converter
should return a `form... |
For plain text output (default, or output type of `str`), the converter
should return NOT_GIVEN, indicating no special response format constraint.
If an output schema is provided for a structured type, the converter
should return a `format` dict with the schema and strictness. The exact
JSON schema... | test_get_response_format_plain_text_and_json_schema | python | openai/openai-agents-python | tests/test_openai_responses_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_responses_converter.py | MIT |
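The plain-text-versus-schema split described above can be sketched as below. Field names inside the `format` dict (`name`, `strict`) are assumptions for illustration; the real converter builds the response-format param the Responses API expects.

```python
NOT_GIVEN = object()  # stand-in for the openai SDK's NOT_GIVEN sentinel

def get_response_format(output_schema=None):
    """Plain text output -> NOT_GIVEN; a structured output schema ->
    a json_schema format dict with strictness enabled."""
    if output_schema is None:
        return NOT_GIVEN
    return {
        "format": {
            "type": "json_schema",
            "name": "final_output",
            "schema": output_schema,
            "strict": True,
        }
    }
```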
def test_convert_tools_basic_types_and_includes():
"""
Construct a variety of tool types and make sure `convert_tools` returns
a matching list of tool param dicts and the expected includes. Also
check that only a single computer tool is allowed.
"""
# Simple function tool
tool_fn = function_... |
Construct a variety of tool types and make sure `convert_tools` returns
a matching list of tool param dicts and the expected includes. Also
check that only a single computer tool is allowed.
| test_convert_tools_basic_types_and_includes | python | openai/openai-agents-python | tests/test_openai_responses_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_responses_converter.py | MIT |
def test_convert_tools_includes_handoffs():
"""
When handoff objects are included, `convert_tools` should append their
tool param dicts after tools and include appropriate descriptions.
"""
agent = Agent(name="support", handoff_description="Handles support")
handoff_obj = handoff(agent)
conv... |
When handoff objects are included, `convert_tools` should append their
tool param dicts after tools and include appropriate descriptions.
| test_convert_tools_includes_handoffs | python | openai/openai-agents-python | tests/test_openai_responses_converter.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_openai_responses_converter.py | MIT |
def test_bad_cast_doesnt_raise():
"""Bad casts shouldn't error unless we ask for it."""
result = create_run_result(1)
result.final_output_as(str)
result = create_run_result("test")
result.final_output_as(Foo) | Bad casts shouldn't error unless we ask for it. | test_bad_cast_doesnt_raise | python | openai/openai-agents-python | tests/test_result_cast.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_result_cast.py | MIT |
def test_bad_cast_with_param_raises():
"""Bad casts should raise a TypeError when we ask for it."""
result = create_run_result(1)
with pytest.raises(TypeError):
result.final_output_as(str, raise_if_incorrect_type=True)
result = create_run_result("test")
with pytest.raises(TypeError):
... | Bad casts should raise a TypeError when we ask for it. | test_bad_cast_with_param_raises | python | openai/openai-agents-python | tests/test_result_cast.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_result_cast.py | MIT |
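The cast semantics exercised by these two tests can be sketched with a hypothetical helper mirroring `final_output_as`: lenient by default, raising `TypeError` only when asked.

```python
def final_output_as(output, cls, raise_if_incorrect_type=False):
    """Return the output as-is; optionally raise TypeError when the
    runtime type does not match the requested class."""
    if raise_if_incorrect_type and not isinstance(output, cls):
        raise TypeError(
            f"expected {cls.__name__}, got {type(output).__name__}"
        )
    return output
```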
async def test_model_provider_on_run_config_is_used_for_agent_model_name() -> None:
"""
When the agent's ``model`` attribute is a string and no explicit model override is
provided in the ``RunConfig``, the ``Runner`` should resolve the model using the
``model_provider`` on the ``RunConfig``.
"""
... |
When the agent's ``model`` attribute is a string and no explicit model override is
provided in the ``RunConfig``, the ``Runner`` should resolve the model using the
``model_provider`` on the ``RunConfig``.
| test_model_provider_on_run_config_is_used_for_agent_model_name | python | openai/openai-agents-python | tests/test_run_config.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_run_config.py | MIT |
async def test_run_config_model_name_override_takes_precedence() -> None:
"""
When a model name string is set on the RunConfig, then that name should be looked up
using the RunConfig's model_provider, and should override any model on the agent.
"""
fake_model = FakeModel(initial_output=[get_text_mes... |
When a model name string is set on the RunConfig, then that name should be looked up
using the RunConfig's model_provider, and should override any model on the agent.
| test_run_config_model_name_override_takes_precedence | python | openai/openai-agents-python | tests/test_run_config.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_run_config.py | MIT |
async def test_run_config_model_override_object_takes_precedence() -> None:
"""
When a concrete Model instance is set on the RunConfig, then that instance should be
returned by Runner._get_model regardless of the agent's model.
"""
fake_model = FakeModel(initial_output=[get_text_message("override-ob... |
When a concrete Model instance is set on the RunConfig, then that instance should be
returned by Runner._get_model regardless of the agent's model.
| test_run_config_model_override_object_takes_precedence | python | openai/openai-agents-python | tests/test_run_config.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_run_config.py | MIT |
async def test_agent_model_object_is_used_when_present() -> None:
"""
If the agent has a concrete Model object set as its model, and the RunConfig does
not specify a model override, then that object should be used directly without
consulting the RunConfig's model_provider.
"""
fake_model = FakeM... |
If the agent has a concrete Model object set as its model, and the RunConfig does
not specify a model override, then that object should be used directly without
consulting the RunConfig's model_provider.
| test_agent_model_object_is_used_when_present | python | openai/openai-agents-python | tests/test_run_config.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_run_config.py | MIT |
def test_should_reset_tool_choice_direct(self):
"""
Test the _should_reset_tool_choice method directly with various inputs
to ensure it correctly identifies cases where reset is needed.
"""
agent = Agent(name="test_agent")
# Case 1: Empty tool use tracker should not chan... |
Test the _should_reset_tool_choice method directly with various inputs
to ensure it correctly identifies cases where reset is needed.
| test_should_reset_tool_choice_direct | python | openai/openai-agents-python | tests/test_tool_choice_reset.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_choice_reset.py | MIT |
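The reset rule being tested can be sketched as follows. This is an assumption-laden simplification of the runner's private `_should_reset_tool_choice`: reset only when tools were actually used and the current `tool_choice` would force another call.

```python
def should_reset_tool_choice(tool_choice, used_tool_names) -> bool:
    """Reset when tools were used and tool_choice is 'required' or a
    specific tool name (anything other than unset/'auto'/'none')."""
    if not used_tool_names:
        return False
    return tool_choice not in (None, "auto", "none")
```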
async def test_required_tool_choice_with_multiple_runs(self):
"""
Test scenario 1: When multiple runs are executed with tool_choice="required", ensure each
run works correctly and doesn't get stuck in an infinite loop. Also verify that tool_choice
remains "required" between runs.
... |
Test scenario 1: When multiple runs are executed with tool_choice="required", ensure each
run works correctly and doesn't get stuck in an infinite loop. Also verify that tool_choice
remains "required" between runs.
| test_required_tool_choice_with_multiple_runs | python | openai/openai-agents-python | tests/test_tool_choice_reset.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_choice_reset.py | MIT |
async def test_required_with_stop_at_tool_name(self):
"""
Test scenario 2: When using required tool_choice with stop_at_tool_names behavior, ensure
        it correctly stops at the specified tool.
"""
# Set up fake model to return a tool call for second_tool
fake_model = FakeMode... |
Test scenario 2: When using required tool_choice with stop_at_tool_names behavior, ensure
    it correctly stops at the specified tool.
| test_required_with_stop_at_tool_name | python | openai/openai-agents-python | tests/test_tool_choice_reset.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_choice_reset.py | MIT |
async def test_specific_tool_choice(self):
"""
Test scenario 3: When using a specific tool choice name, ensure it doesn't cause infinite
loops.
"""
# Set up fake model to return a text message
fake_model = FakeModel()
fake_model.set_next_output([get_text_message("... |
Test scenario 3: When using a specific tool choice name, ensure it doesn't cause infinite
loops.
| test_specific_tool_choice | python | openai/openai-agents-python | tests/test_tool_choice_reset.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_choice_reset.py | MIT |
async def test_required_with_single_tool(self):
"""
Test scenario 4: When using required tool_choice with only one tool, ensure it doesn't cause
infinite loops.
"""
# Set up fake model to return a tool call followed by a text message
fake_model = FakeModel()
fake_... |
Test scenario 4: When using required tool_choice with only one tool, ensure it doesn't cause
infinite loops.
| test_required_with_single_tool | python | openai/openai-agents-python | tests/test_tool_choice_reset.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_choice_reset.py | MIT |
async def test_dont_reset_tool_choice_if_not_required(self):
"""
Test scenario 5: When agent.reset_tool_choice is False, ensure tool_choice is not reset.
"""
# Set up fake model to return a tool call followed by a text message
fake_model = FakeModel()
fake_model.add_multi... |
Test scenario 5: When agent.reset_tool_choice is False, ensure tool_choice is not reset.
| test_dont_reset_tool_choice_if_not_required | python | openai/openai-agents-python | tests/test_tool_choice_reset.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_choice_reset.py | MIT |
async def test_custom_tool_use_behavior_sync() -> None:
"""If tool_use_behavior is a sync function, we should call it and propagate its return."""
def behavior(
context: RunContextWrapper, results: list[FunctionToolResult]
) -> ToolsToFinalOutputResult:
assert len(results) == 3
retu... | If tool_use_behavior is a sync function, we should call it and propagate its return. | test_custom_tool_use_behavior_sync | python | openai/openai-agents-python | tests/test_tool_use_behavior.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_use_behavior.py | MIT |
async def test_custom_tool_use_behavior_async() -> None:
"""If tool_use_behavior is an async function, we should await it and propagate its return."""
async def behavior(
context: RunContextWrapper, results: list[FunctionToolResult]
) -> ToolsToFinalOutputResult:
assert len(results) == 3
... | If tool_use_behavior is an async function, we should await it and propagate its return. | test_custom_tool_use_behavior_async | python | openai/openai-agents-python | tests/test_tool_use_behavior.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_use_behavior.py | MIT |
async def test_invalid_tool_use_behavior_raises() -> None:
"""If tool_use_behavior is invalid, we should raise a UserError."""
agent = Agent(name="test")
# Force an invalid value; mypy will complain, so ignore the type here.
agent.tool_use_behavior = "bad_value" # type: ignore[assignment]
tool_resu... | If tool_use_behavior is invalid, we should raise a UserError. | test_invalid_tool_use_behavior_raises | python | openai/openai-agents-python | tests/test_tool_use_behavior.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_tool_use_behavior.py | MIT |
def get_span(processor: TracingProcessor) -> SpanImpl[AgentSpanData]:
"""Create a minimal agent span for testing processors."""
return SpanImpl(
trace_id="test_trace_id",
span_id="test_span_id",
parent_id=None,
processor=processor,
span_data=AgentSpanData(name="test_agent... | Create a minimal agent span for testing processors. | get_span | python | openai/openai-agents-python | tests/test_trace_processor.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_trace_processor.py | MIT |
def test_batch_trace_processor_scheduled_export(mocked_exporter):
"""
Tests that items are automatically exported when the schedule_delay expires.
We mock time.time() so we can trigger the condition without waiting in real time.
"""
with patch("time.time") as mock_time:
base_time = 1000.0
... |
Tests that items are automatically exported when the schedule_delay expires.
We mock time.time() so we can trigger the condition without waiting in real time.
| test_batch_trace_processor_scheduled_export | python | openai/openai-agents-python | tests/test_trace_processor.py | https://github.com/openai/openai-agents-python/blob/master/tests/test_trace_processor.py | MIT |
async def test_streaming_context():
"""This ensures that FastAPI streaming works. The context for this test is that the Runner
method was called in one async context, and the streaming was ended in another context,
leading to a tracing error because the context was closed in the wrong context. This test
... | This ensures that FastAPI streaming works. The context for this test is that the Runner
method was called in one async context, and the streaming was ended in another context,
leading to a tracing error because the context was closed in the wrong context. This test
ensures that this actually works.
| test_streaming_context | python | openai/openai-agents-python | tests/fastapi/test_streaming_context.py | https://github.com/openai/openai-agents-python/blob/master/tests/fastapi/test_streaming_context.py | MIT |
async def test_server_caching_works(
mock_list_tools: AsyncMock, mock_initialize: AsyncMock, mock_stdio_client
):
"""Test that if we turn caching on, the list of tools is cached and not fetched from the server
on each call to `list_tools()`.
"""
server = MCPServerStdio(
params={
... | Test that if we turn caching on, the list of tools is cached and not fetched from the server
on each call to `list_tools()`.
| test_server_caching_works | python | openai/openai-agents-python | tests/mcp/test_caching.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_caching.py | MIT |
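The caching behavior under test can be sketched with a synchronous stand-in (the real `MCPServerStdio` is async); `fetch_count` is an illustrative counter, not part of the SDK.

```python
class CachingToolServer:
    """Wraps a tool-fetching callable; with caching on, only the first
    list_tools() call hits the underlying server."""

    def __init__(self, fetch, cache_tools_list=False):
        self._fetch = fetch
        self._cache_enabled = cache_tools_list
        self._cache = None
        self.fetch_count = 0  # how many times the server was actually hit

    def list_tools(self):
        if self._cache_enabled and self._cache is not None:
            return self._cache
        self.fetch_count += 1
        tools = self._fetch()
        if self._cache_enabled:
            self._cache = tools
        return tools

    def invalidate_tools_cache(self):
        self._cache = None
```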
async def test_async_ctx_manager_works(
mock_list_tools: AsyncMock, mock_initialize: AsyncMock, mock_stdio_client
):
"""Test that the async context manager works."""
server = MCPServerStdio(
params={
"command": tee,
},
cache_tools_list=True,
)
tools = [
M... | Test that the async context manager works. | test_async_ctx_manager_works | python | openai/openai-agents-python | tests/mcp/test_connect_disconnect.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_connect_disconnect.py | MIT |
async def test_manual_connect_disconnect_works(
mock_list_tools: AsyncMock, mock_initialize: AsyncMock, mock_stdio_client
):
"""Test that the async context manager works."""
server = MCPServerStdio(
params={
"command": tee,
},
cache_tools_list=True,
)
tools = [
    ... | Test that manual connect/disconnect works. | test_manual_connect_disconnect_works | python | openai/openai-agents-python | tests/mcp/test_connect_disconnect.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_connect_disconnect.py | MIT |
async def test_get_all_function_tools():
"""Test that the get_all_function_tools function returns all function tools from a list of MCP
servers.
"""
names = ["test_tool_1", "test_tool_2", "test_tool_3", "test_tool_4", "test_tool_5"]
schemas = [
{},
{},
{},
Foo.model_j... | Test that the get_all_function_tools function returns all function tools from a list of MCP
servers.
| test_get_all_function_tools | python | openai/openai-agents-python | tests/mcp/test_mcp_util.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_mcp_util.py | MIT |
async def test_invoke_mcp_tool():
"""Test that the invoke_mcp_tool function invokes an MCP tool and returns the result."""
server = FakeMCPServer()
server.add_tool("test_tool_1", {})
ctx = RunContextWrapper(context=None)
tool = MCPTool(name="test_tool_1", inputSchema={})
await MCPUtil.invoke_m... | Test that the invoke_mcp_tool function invokes an MCP tool and returns the result. | test_invoke_mcp_tool | python | openai/openai-agents-python | tests/mcp/test_mcp_util.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_mcp_util.py | MIT |
async def test_agent_convert_schemas_true():
"""Test that setting convert_schemas_to_strict to True converts non-strict schemas to strict.
- 'foo' tool is already strict and remains strict.
- 'bar' tool is non-strict and becomes strict (additionalProperties set to False, etc).
"""
strict_schema = Fo... | Test that setting convert_schemas_to_strict to True converts non-strict schemas to strict.
- 'foo' tool is already strict and remains strict.
- 'bar' tool is non-strict and becomes strict (additionalProperties set to False, etc).
| test_agent_convert_schemas_true | python | openai/openai-agents-python | tests/mcp/test_mcp_util.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_mcp_util.py | MIT |
async def test_agent_convert_schemas_false():
"""Test that setting convert_schemas_to_strict to False leaves tool schemas as non-strict.
- 'foo' tool remains strict.
- 'bar' tool remains non-strict (additionalProperties remains True).
"""
strict_schema = Foo.model_json_schema()
non_strict_schema... | Test that setting convert_schemas_to_strict to False leaves tool schemas as non-strict.
- 'foo' tool remains strict.
- 'bar' tool remains non-strict (additionalProperties remains True).
| test_agent_convert_schemas_false | python | openai/openai-agents-python | tests/mcp/test_mcp_util.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_mcp_util.py | MIT |
async def test_agent_convert_schemas_unset():
"""Test that leaving convert_schemas_to_strict unset (defaulting to False) leaves tool schemas
as non-strict.
- 'foo' tool remains strict.
- 'bar' tool remains non-strict.
"""
strict_schema = Foo.model_json_schema()
non_strict_schema = Baz.json_s... | Test that leaving convert_schemas_to_strict unset (defaulting to False) leaves tool schemas
as non-strict.
- 'foo' tool remains strict.
- 'bar' tool remains non-strict.
| test_agent_convert_schemas_unset | python | openai/openai-agents-python | tests/mcp/test_mcp_util.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_mcp_util.py | MIT |
async def test_util_adds_properties():
"""The MCP spec doesn't require the inputSchema to have `properties`, so we need to add it
if it's missing.
"""
schema = {
"type": "object",
"description": "Test tool",
}
server = FakeMCPServer()
server.add_tool("test_tool", schema)
... | The MCP spec doesn't require the inputSchema to have `properties`, so we need to add it
if it's missing.
| test_util_adds_properties | python | openai/openai-agents-python | tests/mcp/test_mcp_util.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_mcp_util.py | MIT |
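The fix-up described above is small enough to sketch directly: since the MCP spec does not require `inputSchema` to carry `properties`, an empty one is added when missing (illustrative helper, not the SDK's `MCPUtil`).

```python
def ensure_properties(schema: dict) -> dict:
    """Return a copy of an MCP inputSchema with a `properties` key,
    adding an empty dict when the server omitted it."""
    out = dict(schema)
    out.setdefault("properties", {})
    return out
```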
async def test_runner_calls_mcp_tool(streaming: bool):
"""Test that the runner calls an MCP tool when the model produces a tool call."""
server = FakeMCPServer()
server.add_tool("test_tool_1", {})
server.add_tool("test_tool_2", {})
server.add_tool("test_tool_3", {})
model = FakeModel()
agent... | Test that the runner calls an MCP tool when the model produces a tool call. | test_runner_calls_mcp_tool | python | openai/openai-agents-python | tests/mcp/test_runner_calls_mcp.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_runner_calls_mcp.py | MIT |
async def test_runner_asserts_when_mcp_tool_not_found(streaming: bool):
"""Test that the runner asserts when an MCP tool is not found."""
server = FakeMCPServer()
server.add_tool("test_tool_1", {})
server.add_tool("test_tool_2", {})
server.add_tool("test_tool_3", {})
model = FakeModel()
agen... | Test that the runner asserts when an MCP tool is not found. | test_runner_asserts_when_mcp_tool_not_found | python | openai/openai-agents-python | tests/mcp/test_runner_calls_mcp.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_runner_calls_mcp.py | MIT |
async def test_runner_works_with_multiple_mcp_servers(streaming: bool):
"""Test that the runner works with multiple MCP servers."""
server1 = FakeMCPServer()
server1.add_tool("test_tool_1", {})
server2 = FakeMCPServer()
server2.add_tool("test_tool_2", {})
server2.add_tool("test_tool_3", {})
... | Test that the runner works with multiple MCP servers. | test_runner_works_with_multiple_mcp_servers | python | openai/openai-agents-python | tests/mcp/test_runner_calls_mcp.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_runner_calls_mcp.py | MIT |
async def test_runner_errors_when_mcp_tools_clash(streaming: bool):
"""Test that the runner errors when multiple servers have the same tool name."""
server1 = FakeMCPServer()
server1.add_tool("test_tool_1", {})
server1.add_tool("test_tool_2", {})
server2 = FakeMCPServer()
server2.add_tool("test... | Test that the runner errors when multiple servers have the same tool name. | test_runner_errors_when_mcp_tools_clash | python | openai/openai-agents-python | tests/mcp/test_runner_calls_mcp.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_runner_calls_mcp.py | MIT |
async def test_runner_calls_mcp_tool_with_args(streaming: bool):
"""Test that the runner calls an MCP tool when the model produces a tool call."""
server = FakeMCPServer()
await server.connect()
server.add_tool("test_tool_1", {})
server.add_tool("test_tool_2", Foo.model_json_schema())
server.add... | Test that the runner calls an MCP tool when the model produces a tool call. | test_runner_calls_mcp_tool_with_args | python | openai/openai-agents-python | tests/mcp/test_runner_calls_mcp.py | https://github.com/openai/openai-agents-python/blob/master/tests/mcp/test_runner_calls_mcp.py | MIT |
async def test_litellm_kwargs_forwarded(monkeypatch):
"""
Test that kwargs from ModelSettings are forwarded to litellm.acompletion.
"""
captured: dict[str, object] = {}
async def fake_acompletion(model, messages=None, **kwargs):
captured.update(kwargs)
msg = Message(role="assistant"... |
Test that kwargs from ModelSettings are forwarded to litellm.acompletion.
| test_litellm_kwargs_forwarded | python | openai/openai-agents-python | tests/models/test_kwargs_functionality.py | https://github.com/openai/openai-agents-python/blob/master/tests/models/test_kwargs_functionality.py | MIT |
async def test_openai_chatcompletions_kwargs_forwarded(monkeypatch):
"""
Test that kwargs from ModelSettings are forwarded to OpenAI chat completions API.
"""
captured: dict[str, object] = {}
class MockChatCompletions:
async def create(self, **kwargs):
captured.update(kwargs)
... |
Test that kwargs from ModelSettings are forwarded to OpenAI chat completions API.
| test_openai_chatcompletions_kwargs_forwarded | python | openai/openai-agents-python | tests/models/test_kwargs_functionality.py | https://github.com/openai/openai-agents-python/blob/master/tests/models/test_kwargs_functionality.py | MIT |
async def test_empty_kwargs_handling(monkeypatch):
"""
Test that empty or None kwargs are handled gracefully.
"""
captured: dict[str, object] = {}
async def fake_acompletion(model, messages=None, **kwargs):
captured.update(kwargs)
msg = Message(role="assistant", content="test respon... |
Test that empty or None kwargs are handled gracefully.
| test_empty_kwargs_handling | python | openai/openai-agents-python | tests/models/test_kwargs_functionality.py | https://github.com/openai/openai-agents-python/blob/master/tests/models/test_kwargs_functionality.py | MIT |
async def test_extra_body_is_forwarded(monkeypatch):
"""
Forward `extra_body` entries into litellm.acompletion kwargs.
This ensures that user-provided parameters (e.g. cached_content)
arrive alongside default arguments.
"""
captured: dict[str, object] = {}
async def fake_acompletion(model,... |
Forward `extra_body` entries into litellm.acompletion kwargs.
This ensures that user-provided parameters (e.g. cached_content)
arrive alongside default arguments.
| test_extra_body_is_forwarded | python | openai/openai-agents-python | tests/models/test_litellm_extra_body.py | https://github.com/openai/openai-agents-python/blob/master/tests/models/test_litellm_extra_body.py | MIT |
def verify_serialization(model_settings: ModelSettings) -> None:
"""Verify that ModelSettings can be serialized to a JSON string."""
json_dict = model_settings.to_json_dict()
json_string = json.dumps(json_dict)
assert json_string is not None | Verify that ModelSettings can be serialized to a JSON string. | verify_serialization | python | openai/openai-agents-python | tests/model_settings/test_serialization.py | https://github.com/openai/openai-agents-python/blob/master/tests/model_settings/test_serialization.py | MIT |
def test_basic_serialization() -> None:
"""Tests whether ModelSettings can be serialized to a JSON string."""
    # First, let's create a ModelSettings instance
model_settings = ModelSettings(
temperature=0.5,
top_p=0.9,
max_tokens=100,
)
    # Now, let's serialize the ModelSettings ... | Tests whether ModelSettings can be serialized to a JSON string. | test_basic_serialization | python | openai/openai-agents-python | tests/model_settings/test_serialization.py | https://github.com/openai/openai-agents-python/blob/master/tests/model_settings/test_serialization.py | MIT |
def test_all_fields_serialization() -> None:
"""Tests whether ModelSettings can be serialized to a JSON string."""
    # First, let's create a ModelSettings instance
model_settings = ModelSettings(
temperature=0.5,
top_p=0.9,
frequency_penalty=0.0,
presence_penalty=0.0,
t... | Tests whether ModelSettings can be serialized to a JSON string. | test_all_fields_serialization | python | openai/openai-agents-python | tests/model_settings/test_serialization.py | https://github.com/openai/openai-agents-python/blob/master/tests/model_settings/test_serialization.py | MIT |
def test_extra_args_resolve() -> None:
"""Test that extra_args are properly merged in the resolve method."""
base_settings = ModelSettings(
temperature=0.5, extra_args={"param1": "base_value", "param2": "base_only"}
)
override_settings = ModelSettings(
top_p=0.9, extra_args={"param1": "... | Test that extra_args are properly merged in the resolve method. | test_extra_args_resolve | python | openai/openai-agents-python | tests/model_settings/test_serialization.py | https://github.com/openai/openai-agents-python/blob/master/tests/model_settings/test_serialization.py | MIT |
def test_extra_args_resolve_with_none() -> None:
"""Test that resolve works properly when one side has None extra_args."""
# Base with extra_args, override with None
base_settings = ModelSettings(extra_args={"param1": "value1"})
override_settings = ModelSettings(temperature=0.8)
resolved = base_set... | Test that resolve works properly when one side has None extra_args. | test_extra_args_resolve_with_none | python | openai/openai-agents-python | tests/model_settings/test_serialization.py | https://github.com/openai/openai-agents-python/blob/master/tests/model_settings/test_serialization.py | MIT |
def test_extra_args_resolve_both_none() -> None:
"""Test that resolve works when both sides have None extra_args."""
base_settings = ModelSettings(temperature=0.5)
override_settings = ModelSettings(top_p=0.9)
resolved = base_settings.resolve(override_settings)
assert resolved.extra_args is None
... | Test that resolve works when both sides have None extra_args. | test_extra_args_resolve_both_none | python | openai/openai-agents-python | tests/model_settings/test_serialization.py | https://github.com/openai/openai-agents-python/blob/master/tests/model_settings/test_serialization.py | MIT |
async def extract_events(result: StreamedAudioResult) -> tuple[list[str], list[bytes]]:
"""Collapse pipeline stream events to simple labels for ordering assertions."""
flattened: list[str] = []
audio_chunks: list[bytes] = []
async for ev in result.stream():
if ev.type == "voice_stream_event_aud... | Collapse pipeline stream events to simple labels for ordering assertions. | extract_events | python | openai/openai-agents-python | tests/voice/helpers.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/helpers.py | MIT |
def create_mock_websocket(messages: list[str]) -> AsyncMock:
"""
Creates a mock websocket (AsyncMock) that will return the provided incoming_messages
from __aiter__() as if they came from the server.
"""
mock_ws = AsyncMock()
mock_ws.__aenter__.return_value = mock_ws
# The incoming_messages... |
Creates a mock websocket (AsyncMock) that will return the provided incoming_messages
from __aiter__() as if they came from the server.
| create_mock_websocket | python | openai/openai-agents-python | tests/voice/test_openai_stt.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_stt.py | MIT |
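The behaviour described above (an object whose `__aiter__` replays canned server messages) can also be hand-rolled without `AsyncMock`; a minimal sketch:

```python
import asyncio

class FakeWebSocket:
    """Minimal stand-in that yields canned server messages via `async for`."""

    def __init__(self, messages):
        self._messages = list(messages)

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc_info):
        return False

    def __aiter__(self):
        # An async generator satisfies the async-iterator protocol.
        return self._replay()

    async def _replay(self):
        for message in self._messages:
            yield message

async def collect(ws):
    received = []
    async with ws:
        async for message in ws:
            received.append(message)
    return received

result = asyncio.run(collect(FakeWebSocket(["hello", "world"])))
print(result)  # ['hello', 'world']
```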
async def test_non_json_messages_should_crash():
"""This tests that non-JSON messages will raise an exception"""
# Setup: mock websockets.connect
mock_ws = create_mock_websocket(["not a json message"])
with patch("websockets.connect", return_value=mock_ws):
# Instantiate the session
inpu... | This tests that non-JSON messages will raise an exception | test_non_json_messages_should_crash | python | openai/openai-agents-python | tests/voice/test_openai_stt.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_stt.py | MIT |
async def test_session_connects_and_configures_successfully():
"""
Test that the session:
1) Connects to the correct URL with correct headers.
2) Receives a 'session.created' event.
3) Sends an update message for session config.
4) Receives a 'session.updated' event.
"""
# Setup: mock we... |
Test that the session:
1) Connects to the correct URL with correct headers.
2) Receives a 'session.created' event.
3) Sends an update message for session config.
4) Receives a 'session.updated' event.
| test_session_connects_and_configures_successfully | python | openai/openai-agents-python | tests/voice/test_openai_stt.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_stt.py | MIT |
async def test_stream_audio_sends_correct_json():
"""
Test that when audio is placed on the input queue, the session:
1) Base64-encodes the data.
2) Sends the correct JSON message over the websocket.
"""
# Simulate a single "transcription_session.created" and "transcription_session.updated" even... |
Test that when audio is placed on the input queue, the session:
1) Base64-encodes the data.
2) Sends the correct JSON message over the websocket.
| test_stream_audio_sends_correct_json | python | openai/openai-agents-python | tests/voice/test_openai_stt.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_stt.py | MIT |
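The two steps the test checks (base64-encode the chunk, wrap it in a JSON message) can be sketched directly; the message `type` string used here is a placeholder for illustration, not necessarily the field the session actually sends:

```python
import base64
import json

def audio_chunk_to_message(chunk: bytes) -> str:
    """Base64-encode raw audio bytes and wrap them in a JSON websocket message."""
    payload = {
        "type": "input_audio_buffer.append",  # assumed message type, for illustration
        "audio": base64.b64encode(chunk).decode("ascii"),
    }
    return json.dumps(payload)

message = audio_chunk_to_message(b"\x01\x02\x03\x04")
decoded = json.loads(message)
# Round-trip: decoding the base64 field recovers the original bytes.
assert base64.b64decode(decoded["audio"]) == b"\x01\x02\x03\x04"
```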
async def test_transcription_event_puts_output_in_queue():
"""
Test that a 'conversation.item.input_audio_transcription.completed' event
yields a transcript from transcribe_turns().
"""
mock_ws = create_mock_websocket(
[
json.dumps({"type": "transcription_session.created"}),
... |
Test that a 'conversation.item.input_audio_transcription.completed' event
yields a transcript from transcribe_turns().
| test_transcription_event_puts_output_in_queue | python | openai/openai-agents-python | tests/voice/test_openai_stt.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_stt.py | MIT |
async def test_timeout_waiting_for_created_event(monkeypatch):
"""
If the 'session.created' event does not arrive before SESSION_CREATION_TIMEOUT,
the session should raise a TimeoutError.
"""
time_gen = fake_time(increment=30) # increment by 30 seconds each time
# Define a replacement function... |
If the 'session.created' event does not arrive before SESSION_CREATION_TIMEOUT,
the session should raise a TimeoutError.
| test_timeout_waiting_for_created_event | python | openai/openai-agents-python | tests/voice/test_openai_stt.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_stt.py | MIT |
async def test_session_error_event():
"""
If the session receives an event with "type": "error", it should propagate an exception
and put an ErrorSentinel in the output queue.
"""
mock_ws = create_mock_websocket(
[
json.dumps({"type": "transcription_session.created"}),
... |
If the session receives an event with "type": "error", it should propagate an exception
and put an ErrorSentinel in the output queue.
| test_session_error_event | python | openai/openai-agents-python | tests/voice/test_openai_stt.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_stt.py | MIT |
async def test_inactivity_timeout():
"""
Test that if no events arrive in EVENT_INACTIVITY_TIMEOUT ms,
_handle_events breaks out and a SessionCompleteSentinel is placed in the output queue.
"""
# We'll feed only the creation + updated events. Then do nothing.
# The handle_events loop should even... |
Test that if no events arrive in EVENT_INACTIVITY_TIMEOUT ms,
_handle_events breaks out and a SessionCompleteSentinel is placed in the output queue.
| test_inactivity_timeout | python | openai/openai-agents-python | tests/voice/test_openai_stt.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_stt.py | MIT |
def _make_fake_openai_client(fake_create) -> SimpleNamespace:
"""Construct an object with nested audio.speech.with_streaming_response.create."""
return SimpleNamespace(
audio=SimpleNamespace(
speech=SimpleNamespace(with_streaming_response=SimpleNamespace(create=fake_create))
)
) | Construct an object with nested audio.speech.with_streaming_response.create. | _make_fake_openai_client | python | openai/openai-agents-python | tests/voice/test_openai_tts.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_tts.py | MIT |
async def test_openai_tts_default_voice_and_instructions() -> None:
"""If no voice is specified, OpenAITTSModel uses its default voice and passes instructions."""
chunks = [b"abc", b"def"]
captured: dict[str, object] = {}
def fake_create(
*, model: str, voice: str, input: str, response_format: ... | If no voice is specified, OpenAITTSModel uses its default voice and passes instructions. | test_openai_tts_default_voice_and_instructions | python | openai/openai-agents-python | tests/voice/test_openai_tts.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_tts.py | MIT |
async def test_openai_tts_custom_voice_and_instructions() -> None:
"""Specifying voice and instructions are forwarded to the API."""
chunks = [b"x"]
captured: dict[str, object] = {}
def fake_create(
*, model: str, voice: str, input: str, response_format: str, extra_body: dict[str, Any]
) ->... | Specifying voice and instructions are forwarded to the API. | test_openai_tts_custom_voice_and_instructions | python | openai/openai-agents-python | tests/voice/test_openai_tts.py | https://github.com/openai/openai-agents-python/blob/master/tests/voice/test_openai_tts.py | MIT |
def domain_has_ip(resolver, domain):
""" Return true if the domain has at least one IP (IPv4 or IPv6)"""
len_dns_a = 0
len_dns_aaaa = 0
try:
dns_response = resolver.resolve(domain, RdataType.A)
len_dns_a = len(dns_response.rrset)
except (NoAnswer, NXDOMAIN, LifetimeTimeout, NoNameser... | Return true if the domain has at least one IP (IPv4 or IPv6) | domain_has_ip | python | quenhus/uBlock-Origin-dev-filter | src/clean_data/main.py | https://github.com/quenhus/uBlock-Origin-dev-filter/blob/master/src/clean_data/main.py | Unlicense |
def closest_vec_to(self, vec2_pt):
'''
produces a vector normal to this line passing through the given point vec2_pt
'''
delta_pt = self.point - vec2_pt
dp = delta_pt.dot(self.ray)
return self.ray * dp - delta_pt |
produces a vector normal to this line passing through the given point vec2_pt
| closest_vec_to | python | autorope/donkeycar | donkeycar/geom.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/geom.py | MIT |
def cross_track_error(self, vec2_pt):
'''
a signed magnitude of distance from line segment
'''
err_vec = self.closest_vec_to(vec2_pt)
mag = err_vec.mag()
err_vec.scale(1.0 / mag)
sign = 1.
if err_vec.cross(self.ray) < 0.0:
sign = -1.
re... |
a signed magnitude of distance from line segment
| cross_track_error | python | autorope/donkeycar | donkeycar/geom.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/geom.py | MIT |
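The geometry above can be reproduced with plain tuples; a sketch assuming a unit-length `ray`, a dot product, and the usual 2-D scalar cross product `x1*y2 - y1*x2`:

```python
import math

def cross_track_error(point, ray, pt):
    """Signed distance from pt to the line through `point` along unit vector `ray`.

    Mirrors the snippet: project the offset onto the ray, subtract to get the
    normal component, then sign the magnitude with a 2-D cross product.
    """
    delta = (point[0] - pt[0], point[1] - pt[1])
    dp = delta[0] * ray[0] + delta[1] * ray[1]
    err = (ray[0] * dp - delta[0], ray[1] * dp - delta[1])
    mag = math.hypot(*err)
    cross = err[0] * ray[1] - err[1] * ray[0]
    return -mag if cross < 0.0 else mag

# A point at (0, 1), one unit off a line running along +x through the origin:
print(cross_track_error((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))  # -1.0
```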
def from_axis_angle(self, axis, angle):
'''
construct a quat from an normalized axis vector and radian rotation about that axis
'''
sinha = math.sin(angle * 0.5)
cosha = math.cos(angle * 0.5)
self.w = cosha
self.x = sinha * axis.x
self.y = sinha * axis.y
... |
construct a quat from an normalized axis vector and radian rotation about that axis
| from_axis_angle | python | autorope/donkeycar | donkeycar/la.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/la.py | MIT |
def to_axis_angle(self):
'''
returns a normalized axis vector and radian rotation about that axis
'''
halfa = math.acos(self.w)
sinha = math.sin(halfa)
axis = Vec3()
if sinha != 0.0:
axis.x = self.x / sinha
axis.y = self.y / sinha
... |
returns a normalized axis vector and radian rotation about that axis
| to_axis_angle | python | autorope/donkeycar | donkeycar/la.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/la.py | MIT |
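The two conversions above are inverses of each other; a self-contained round-trip sketch using plain `(w, x, y, z)` tuples and a unit rotation axis:

```python
import math

def quat_from_axis_angle(axis, angle):
    """Build (w, x, y, z) from a unit axis and a rotation angle in radians."""
    sinha, cosha = math.sin(angle * 0.5), math.cos(angle * 0.5)
    return (cosha, sinha * axis[0], sinha * axis[1], sinha * axis[2])

def quat_to_axis_angle(q):
    """Recover (axis, angle) from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    halfa = math.acos(w)
    sinha = math.sin(halfa)
    axis = (x / sinha, y / sinha, z / sinha) if sinha != 0.0 else (0.0, 0.0, 0.0)
    return axis, halfa * 2.0

# 90-degree rotation about z round-trips through the quaternion form.
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
axis, angle = quat_to_axis_angle(q)
assert math.isclose(angle, math.pi / 2)
assert all(math.isclose(a, b) for a, b in zip(axis, (0.0, 0.0, 1.0)))
```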
def scale(im, size=128):
"""
accepts: PIL image, size of square sides
returns: PIL image scaled so sides length == size
"""
size = (size,size)
im.thumbnail(size, Image.ANTIALIAS)
return im |
accepts: PIL image, size of square sides
returns: PIL image scaled so sides length == size
| scale | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def img_to_binary(img, format='jpeg'):
'''
accepts: PIL image
returns: binary stream (used to save to database)
'''
f = BytesIO()
try:
img.save(f, format=format)
except Exception as e:
raise e
return f.getvalue() |
accepts: PIL image
returns: binary stream (used to save to database)
| img_to_binary | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def arr_to_img(arr):
'''
accepts: numpy array with shape (Height, Width, Channels)
returns: binary stream (used to save to database)
'''
arr = np.uint8(arr)
img = Image.fromarray(arr)
return img |
accepts: numpy array with shape (Height, Width, Channels)
returns: binary stream (used to save to database)
| arr_to_img | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def binary_to_img(binary):
'''
accepts: binary file object from BytesIO
returns: PIL image
'''
if binary is None or len(binary) == 0:
return None
img = BytesIO(binary)
try:
img = Image.open(img)
return img
except:
return None |
accepts: binary file object from BytesIO
returns: PIL image
| binary_to_img | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def rgb2gray(rgb):
"""
Convert normalized numpy image array with shape (w, h, 3) into greyscale
image of shape (w, h)
:param rgb: normalized [0,1] float32 numpy image array or [0,255] uint8
numpy image array with shape(w,h,3)
:return: normalized [0,1] float32 numpy ima... |
Convert normalized numpy image array with shape (w, h, 3) into greyscale
image of shape (w, h)
:param rgb: normalized [0,1] float32 numpy image array or [0,255] uint8
numpy image array with shape(w,h,3)
:return: normalized [0,1] float32 numpy image array shape(w,h) or
... | rgb2gray | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
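The snippet is truncated, so the exact conversion weights are not visible here; a minimal NumPy sketch assuming the common ITU-R BT.601 luma weights:

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Collapse an (h, w, 3) image to (h, w) with assumed BT.601 luma weights.

    Works for float images in [0, 1] and uint8 images in [0, 255]; the output
    keeps the input's value range but is always float.
    """
    weights = np.array([0.299, 0.587, 0.114])  # assumed weights, sum to 1.0
    return rgb.astype(np.float64) @ weights

white = np.full((2, 2, 3), 255, dtype=np.uint8)
gray = rgb_to_gray(white)
assert gray.shape == (2, 2)
assert np.allclose(gray, 255.0)
```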
def load_pil_image(filename, cfg):
"""Loads an image from a file path as a PIL image. Also handles resizing.
Args:
filename (string): path to the image file
cfg (object): donkey configuration file
Returns: a PIL image.
"""
try:
img = Image.open(filename)
if img.heig... | Loads an image from a file path as a PIL image. Also handles resizing.
Args:
filename (string): path to the image file
cfg (object): donkey configuration file
Returns: a PIL image.
| load_pil_image | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def load_image_sized(filename, image_width, image_height, image_depth):
"""Loads an image from a file path as a PIL image. Also handles resizing.
Args:
filename (string): path to the image file
image_width: width in pixels of the output image
image_height: height in pixels of the output... | Loads an image from a file path as a PIL image. Also handles resizing.
Args:
filename (string): path to the image file
image_width: width in pixels of the output image
image_height: height in pixels of the output image
image_depth: depth of the output image (1 for greyscale)
Re... | load_image_sized | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def most_recent_file(dir_path, ext=''):
'''
return the most recent file given a directory path and extension
'''
query = dir_path + '/*' + ext
newest = min(glob.iglob(query), key=os.path.getctime)
return newest |
return the most recent file given a directory path and extension
| most_recent_file | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
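One detail worth flagging: `min(..., key=os.path.getctime)` in the snippet selects the *oldest* file by creation time, so returning the newest needs `max`. A sketch keyed on modification time instead, which is easier to control portably (the behaviour shown is the corrected intent, not the snippet as written):

```python
import glob
import os
import tempfile

def most_recent_file(dir_path, ext=""):
    """Return the newest file (by modification time) matching dir_path/*ext."""
    return max(glob.iglob(dir_path + "/*" + ext), key=os.path.getmtime)

with tempfile.TemporaryDirectory() as d:
    older = os.path.join(d, "a.txt")
    newer = os.path.join(d, "b.txt")
    for path in (older, newer):
        open(path, "w").close()
    # Force distinct, ordered modification times instead of sleeping.
    os.utime(older, (0, 0))
    os.utime(newer, (1_000_000_000, 1_000_000_000))
    assert most_recent_file(d, ".txt") == newer
```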
def zip_dir(dir_path, zip_path):
"""
Create and save a zipfile of a one level directory
"""
file_paths = glob.glob(dir_path + "/*") #create path to search for files.
zf = zipfile.ZipFile(zip_path, 'w')
dir_name = os.path.basename(dir_path)
for p in file_paths:
file_name = os.path.ba... |
Create and save a zipfile of a one level directory
| zip_dir | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def linear_bin(a, N=15, offset=1, R=2.0):
'''
create a bin of length N
map val A to range R
offset one hot bin by offset, commonly R/2
'''
a = a + offset
b = round(a / (R / (N - offset)))
arr = np.zeros(N)
b = clamp(b, 0, N - 1)
arr[int(b)] = 1
return arr |
create a bin of length N
map val A to range R
offset one hot bin by offset, commonly R/2
| linear_bin | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def linear_unbin(arr, N=15, offset=-1, R=2.0):
'''
    perform inverse linear_bin, taking
one hot encoded arr, and get max value
rescale given R range and offset
'''
b = np.argmax(arr)
a = b * (R / (N + offset)) + offset
return a |
perform inverse linear_bin, taking
one hot encoded arr, and get max value
rescale given R range and offset
| linear_unbin | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def map_range(x, X_min, X_max, Y_min, Y_max):
'''
Linear mapping between two ranges of values
'''
X_range = X_max - X_min
Y_range = Y_max - Y_min
XY_ratio = X_range/Y_range
y = ((x-X_min) / XY_ratio + Y_min) // 1
return int(y) |
Linear mapping between two ranges of values
| map_range | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
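As a worked example, mapping a joystick axis in [-1, 1] onto a PWM pulse range of [1000, 2000] (an illustrative choice of ranges, not one taken from the snippet):

```python
def map_range(x, X_min, X_max, Y_min, Y_max):
    """Linearly map x from [X_min, X_max] into [Y_min, Y_max], floored to int."""
    XY_ratio = (X_max - X_min) / (Y_max - Y_min)
    y = ((x - X_min) / XY_ratio + Y_min) // 1
    return int(y)

# Joystick centre maps to the midpoint of the PWM range, extremes to the ends.
assert map_range(-1, -1, 1, 1000, 2000) == 1000
assert map_range(0, -1, 1, 1000, 2000) == 1500
assert map_range(1, -1, 1, 1000, 2000) == 2000
```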
def map_range_float(x, X_min, X_max, Y_min, Y_max):
'''
Same as map_range but supports floats return, rounded to 2 decimal places
'''
X_range = X_max - X_min
Y_range = Y_max - Y_min
XY_ratio = X_range/Y_range
y = ((x-X_min) / XY_ratio + Y_min)
# print("y= {}".format(y))
return rou... |
Same as map_range but supports floats return, rounded to 2 decimal places
| map_range_float | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def map_frange(x, X_min, X_max, Y_min, Y_max):
'''
Linear mapping between two ranges of values
map from x range to y range
'''
X_range = X_max - X_min
Y_range = Y_max - Y_min
XY_ratio = X_range/Y_range
y = ((x-X_min) / XY_ratio + Y_min)
return y |
Linear mapping between two ranges of values
map from x range to y range
| map_frange | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def merge_two_dicts(x, y):
"""Given two dicts, merge them into a new dict as a shallow copy."""
z = x.copy()
z.update(y)
return z | Given two dicts, merge them into a new dict as a shallow copy. | merge_two_dicts | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def param_gen(params):
'''
Accepts a dictionary of parameter options and returns
a list of dictionary with the permutations of the parameters.
'''
for p in itertools.product(*params.values()):
yield dict(zip(params.keys(), p )) |
Accepts a dictionary of parameter options and returns
a list of dictionary with the permutations of the parameters.
| param_gen | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
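The generator walks the Cartesian product of the per-key option lists, yielding one settings dict per combination; for example:

```python
import itertools

def param_gen(params):
    """Yield one dict per combination of the per-key option lists."""
    for values in itertools.product(*params.values()):
        yield dict(zip(params.keys(), values))

grid = list(param_gen({"lr": [0.001, 0.01], "batch": [64]}))
print(grid)
# [{'lr': 0.001, 'batch': 64}, {'lr': 0.01, 'batch': 64}]
```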
def get_model_by_type(model_type: str, cfg: 'Config') -> Union['KerasPilot', 'FastAiPilot']:
'''
given the string model_type and the configuration settings in cfg
create a Keras model and return it.
'''
from donkeycar.parts.keras import KerasCategorical, KerasLinear, \
KerasInferred, KerasIM... |
given the string model_type and the configuration settings in cfg
create a Keras model and return it.
| get_model_by_type | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def get_test_img(keras_pilot):
"""
query the input to see what it likes make an image capable of using with
that test model
:param keras_pilot: input keras pilot
:return np.ndarry(np.uint8): numpy random img array
"""
try:
count, h, w, ch = keras_pilot.get_input_shape(... |
query the input to see what it likes make an image capable of using with
that test model
:param keras_pilot: input keras pilot
:return np.ndarry(np.uint8): numpy random img array
| get_test_img | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def train_test_split(data_list: List[Any],
shuffle: bool = True,
test_size: float = 0.2) -> Tuple[List[Any], List[Any]]:
'''
take a list, split it into two sets while selecting a
random element in order to shuffle the results.
use the test_size to choose the spl... |
take a list, split it into two sets while selecting a
random element in order to shuffle the results.
use the test_size to choose the split percent.
shuffle is always True, left there to be backwards compatible
| train_test_split | python | autorope/donkeycar | donkeycar/utils.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/utils.py | MIT |
def add(self, part, inputs=[], outputs=[],
threaded=False, run_condition=None):
"""
Method to add a part to the vehicle drive loop.
Parameters
----------
part: class
donkey vehicle part has run() attribute
inputs : list
... |
Method to add a part to the vehicle drive loop.
Parameters
----------
part: class
donkey vehicle part has run() attribute
inputs : list
Channel names to get from memory.
outputs : list
Channel names to save to ... | add | python | autorope/donkeycar | donkeycar/vehicle.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/vehicle.py | MIT |
def start(self, rate_hz=10, max_loop_count=None, verbose=False):
"""
Start vehicle's main drive loop.
This is the main thread of the vehicle. It starts all the new
threads for the threaded parts then starts an infinite loop
that runs each part and updates the memory.
Pa... |
Start vehicle's main drive loop.
This is the main thread of the vehicle. It starts all the new
threads for the threaded parts then starts an infinite loop
that runs each part and updates the memory.
Parameters
----------
rate_hz : int
The max frequ... | start | python | autorope/donkeycar | donkeycar/vehicle.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/vehicle.py | MIT |
def servo_duty_cycle(pulse_ms, frequency=60):
"""
Formula for working out the servo duty_cycle at 16 bit input
"""
period_ms = 1.0 / frequency * 1000.0
duty_cycle = int(pulse_ms / 1000 / (period_ms / 65535.0))
return duty_cycle |
Formula for working out the servo duty_cycle at 16 bit input
| servo_duty_cycle | python | autorope/donkeycar | donkeycar/contrib/robohat/code-robocarstore.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/contrib/robohat/code-robocarstore.py | MIT |
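Worked through at the default 60 Hz: the PWM period is 1000/60 ≈ 16.67 ms, so a neutral 1.5 ms pulse occupies about 9 % of the period, i.e. roughly 0.09 × 65535 ≈ 5898 counts. A sketch (the snippet's `pulse_ms` argument is divided by 1000 before use, so callers effectively pass microseconds; the parameter is renamed here to make that explicit):

```python
def servo_duty_cycle(pulse_us, frequency=60):
    """Convert a pulse width in microseconds to a 16-bit PWM duty-cycle count."""
    period_ms = 1.0 / frequency * 1000.0          # ~16.667 ms at 60 Hz
    return int(pulse_us / 1000 / (period_ms / 65535.0))

print(servo_duty_cycle(1500))  # 5898 -- neutral 1.5 ms pulse
print(servo_duty_cycle(0))     # 0
```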
def state_changed(control):
"""
Reads the RC channel and smooths value
"""
prev = control.value
control.channel.pause()
for i in range(0, len(control.channel)):
val = control.channel[i]
# prevent ranges outside of control space
if(val < 1000 or val > 2000):
co... |
Reads the RC channel and smooths value
| state_changed | python | autorope/donkeycar | donkeycar/contrib/robohat/code-robocarstore.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/contrib/robohat/code-robocarstore.py | MIT |
def servo_duty_cycle(pulse_ms, frequency = 60):
"""
Formula for working out the servo duty_cycle at 16 bit input
"""
period_ms = 1.0 / frequency * 1000.0
duty_cycle = int(pulse_ms / 1000 / (period_ms / 65535.0))
return duty_cycle |
Formula for working out the servo duty_cycle at 16 bit input
| servo_duty_cycle | python | autorope/donkeycar | donkeycar/contrib/robohat/code.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/contrib/robohat/code.py | MIT |
def state_changed(control):
"""
Reads the RC channel and smooths value
"""
control.channel.pause()
for i in range(0, len(control.channel)):
val = control.channel[i]
# prevent ranges outside of control space
if(val < 1000 or val > 2000):
continue
# set new ... |
Reads the RC channel and smooths value
| state_changed | python | autorope/donkeycar | donkeycar/contrib/robohat/code.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/contrib/robohat/code.py | MIT |
def load_config(config_path, myconfig='myconfig.py'):
"""
load a config from the given path
"""
conf = os.path.expanduser(config_path)
if not os.path.exists(conf):
logger.error(f"No config file at location: {conf}. Add --config to "
f"specify location or run from dir con... |
load a config from the given path
| load_config | python | autorope/donkeycar | donkeycar/management/base.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/management/base.py | MIT |
def create_car(self, path, template='complete', overwrite=False):
"""
This script sets up the folder structure for donkey to work.
It must run without donkey installed so that people installing with
docker can build the folder structure for docker to mount to.
"""
# thes... |
This script sets up the folder structure for donkey to work.
It must run without donkey installed so that people installing with
docker can build the folder structure for docker to mount to.
| create_car | python | autorope/donkeycar | donkeycar/management/base.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/management/base.py | MIT |
def run(self, args):
'''
Load the images from a tub and create a movie from them.
Movie
'''
args, parser = self.parse_args(args)
from donkeycar.management.makemovie import MakeMovie
mm = MakeMovie()
mm.run(args, parser) |
Load the images from a tub and create a movie from them.
Movie
| run | python | autorope/donkeycar | donkeycar/management/base.py | https://github.com/autorope/donkeycar/blob/master/donkeycar/management/base.py | MIT |