| code (string, 66-870k chars) | docstring (string, 19-26.7k chars) | func_name (string, 1-138 chars) | language (1 class) | repo (string, 7-68 chars) | path (string, 5-324 chars) | url (string, 46-389 chars) | license (7 classes) |
|---|---|---|---|---|---|---|---|
def test_connect_to_mainnet_by_default(mocker):
"""
Tests the condition where mainnet is configured as the default network
and no --network option is passed. It should avoid running the tests
to be safe.
"""
cfg = mocker.MagicMock()
cfg.network = "ethereum:mainnet:node"
runner = PytestA... |
Tests the condition where mainnet is configured as the default network
and no --network option is passed. It should avoid running the tests
to be safe.
| test_connect_to_mainnet_by_default | python | ApeWorX/ape | tests/functional/test_test.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_test.py | Apache-2.0 |
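The row above tests a safety guard: when mainnet is the configured default network and no `--network` flag is given, the test runner should refuse to run. A minimal sketch of such a guard, assuming a `provider:network:node`-style network-choice string (the helper name is hypothetical, not Ape's API):

```python
def assert_safe_network(network_choice: str) -> None:
    """Hypothetical guard mirroring the behavior under test: refuse to
    run tests when the configured network is live mainnet (not a fork)."""
    parts = network_choice.split(":")
    # e.g. "ethereum:mainnet:node" -> the network name is the middle segment.
    network = parts[1] if len(parts) > 1 else network_choice
    if network == "mainnet":
        raise RuntimeError("Refusing to run tests against mainnet")


# A forked mainnet is typically named "mainnet-fork" and is allowed.
assert_safe_network("ethereum:local:test")
assert_safe_network("ethereum:mainnet-fork:foundry")
```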
def test_names(self, fixture_map):
"""
Show that we have both the initialized fixtures as well
as the properly injected isolation fixtures. Order is
EXTREMELY important here! It determines the order in which
fixtures run; isolation fixtures should run before their sister fixtures.
... |
Show that we have both the initialized fixtures as well
as the properly injected isolation fixtures. Order is
EXTREMELY important here! It determines the order in which
fixtures run; isolation fixtures should run before their sister fixtures.
Function isolation is expected even when not ... | test_names | python | ApeWorX/ape | tests/functional/test_test.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_test.py | Apache-2.0 |
def test_isolate(
self, isolation_manager, owner, vyper_contract_instance, empty_snapshot_registry
):
"""
Low-level test simulating how pytest interacts with these yield-based
isolation fixtures.
"""
start_number = vyper_contract_instance.myNumber()
session = ... |
Low-level test simulating how pytest interacts with these yield-based
isolation fixtures.
| test_isolate | python | ApeWorX/ape | tests/functional/test_test.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_test.py | Apache-2.0 |
def test_parse_rich_tree(vyper_contract_instance):
"""
Show that when full selector is set as the method ID,
the tree-output only shows the short method name.
"""
contract_id = vyper_contract_instance.contract_type.name
method_id = vyper_contract_instance.contract_type.methods["setAddress"].sele... |
Show that when full selector is set as the method ID,
the tree-output only shows the short method name.
| test_parse_rich_tree | python | ApeWorX/ape | tests/functional/test_trace.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_trace.py | Apache-2.0 |
def test_transaction_trace_basic_approach_on_failed_call(chain, vyper_contract_instance, not_owner):
"""
Show we can use the basic approach for failed calls.
"""
tx = vyper_contract_instance.setNumber(0, sender=not_owner, raise_on_revert=False)
trace = TransactionTrace.model_validate(
{
... |
Show we can use the basic approach for failed calls.
| test_transaction_trace_basic_approach_on_failed_call | python | ApeWorX/ape | tests/functional/test_trace.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_trace.py | Apache-2.0 |
def test_call_trace_debug_trace_call_not_supported(owner, vyper_contract_instance):
"""
When using EthTester, we can still see the top-level trace of a call.
"""
tx = {"to": vyper_contract_instance.address, "from": owner.address}
trace = CallTrace(tx=tx)
actual = f"{trace}"
assert actual == ... |
When using EthTester, we can still see the top-level trace of a call.
| test_call_trace_debug_trace_call_not_supported | python | ApeWorX/ape | tests/functional/test_trace.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_trace.py | Apache-2.0 |
def test_type_1_transactions_using_access_list(ethereum, access_list, key):
"""
If not given a type and only given an accessList, the assumed type is 1,
an "access-list" transaction.
"""
data = {key: access_list}
txn = ethereum.create_transaction(**data)
assert txn.type == 1 |
If not given a type and only given an accessList, the assumed type is 1,
an "access-list" transaction.
| test_type_1_transactions_using_access_list | python | ApeWorX/ape | tests/functional/test_transaction.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_transaction.py | Apache-2.0 |
def test_type_2_transactions_with_max_fee_and_access_list(ethereum, access_list, key):
"""
Dynamic-fee txns also support access lists, so the presence of max_fee
with access_list implies a type 2 txn.
"""
data = {"max_fee": 1000000000, key: access_list}
txn = ethereum.create_transaction(**data)
... |
Dynamic-fee txns also support access lists, so the presence of max_fee
with access_list implies a type 2 txn.
| test_type_2_transactions_with_max_fee_and_access_list | python | ApeWorX/ape | tests/functional/test_transaction.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_transaction.py | Apache-2.0 |
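The two rows above describe a transaction-type inference rule: an access list alone implies a type-1 ("access list") transaction, while a dynamic-fee field such as a max fee implies type 2, access list or not. A hypothetical re-implementation of that rule (not Ape's actual code; field names are illustrative):

```python
def infer_tx_type(tx: dict) -> int:
    """Sketch of the inference rule: explicit type wins, then dynamic-fee
    fields imply type 2, then accessList alone implies type 1, else type 0."""
    if "type" in tx:
        return tx["type"]
    if "max_fee" in tx or "maxFeePerGas" in tx:
        return 2
    if "accessList" in tx:
        return 1
    return 0


assert infer_tx_type({"accessList": []}) == 1
assert infer_tx_type({"max_fee": 10**9, "accessList": []}) == 2
```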
def test_txn_hash_when_access_list_is_raw(ethereum, owner):
"""
Tests against a condition I was never able to reproduce where
a transaction's access list contained bytes-values and that
caused the serialization to error.
"""
txn = ethereum.create_transaction(accessList=ACCESS_LIST_HEXBYTES, typ... |
Tests against a condition I was never able to reproduce where
a transaction's access list contained bytes-values and that
caused the serialization to error.
| test_txn_hash_when_access_list_is_raw | python | ApeWorX/ape | tests/functional/test_transaction.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_transaction.py | Apache-2.0 |
def test_str_when_data_is_bytes(ethereum):
"""
Tests against a condition that would cause transactions to
fail with string-encoding errors.
"""
txn = ethereum.create_transaction(data=HexBytes("0x123"))
actual = str(txn)
assert isinstance(actual, str) |
Tests against a condition that would cause transactions to
fail with string-encoding errors.
| test_str_when_data_is_bytes | python | ApeWorX/ape | tests/functional/test_transaction.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_transaction.py | Apache-2.0 |
def test_str_when_data_is_long_shows_first_4_bytes(vyper_contract_instance):
"""
Tests against a condition that would cause transactions to
fail with string-encoding errors.
"""
txn = vyper_contract_instance.setNumber.as_transaction(123)
actual = str(txn)
assert isinstance(actual, str)
a... |
Tests against a condition that would cause transactions to
fail with string-encoding errors.
| test_str_when_data_is_long_shows_first_4_bytes | python | ApeWorX/ape | tests/functional/test_transaction.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_transaction.py | Apache-2.0 |
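The two rows above test that stringifying a transaction never hits encoding errors, and that long calldata is abbreviated to its first 4 bytes (the method ID). A hypothetical helper sketching that rendering behavior (the function name is illustrative, not Ape's API):

```python
def abbreviate_calldata(data: bytes) -> str:
    """Render calldata as hex; anything longer than a 4-byte method ID
    is truncated with an ellipsis so str() stays short and safe."""
    if len(data) > 4:
        return "0x" + data[:4].hex() + "..."
    return "0x" + data.hex()


assert abbreviate_calldata(bytes.fromhex("0123")) == "0x0123"
assert abbreviate_calldata(bytes(range(10))) == "0x00010203..."
```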
def test_override_annotated_fields():
"""
This test is to prove that a user may use an `int` for a base-class
when the API field is described as a `HexInt`.
"""
class MyTransaction(TransactionAPI):
@property
def txn_hash(self) -> HexBytes:
return HexBytes("")
de... |
This test is to prove that a user may use an `int` for a base-class
when the API field is described as a `HexInt`.
| test_override_annotated_fields | python | ApeWorX/ape | tests/functional/test_transaction.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_transaction.py | Apache-2.0 |
def test_none(self):
"""
Was getting unhelpful conversion errors here. We should instead
let Pydantic fail as it normally does in this situation.
"""
class MyModel(BaseModel):
an_int: HexInt
expected = ".*Input should be a valid integer.*"
with pytes... |
Was getting unhelpful conversion errors here. We should instead
let Pydantic fail as it normally does in this situation.
| test_none | python | ApeWorX/ape | tests/functional/test_types.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/test_types.py | Apache-2.0 |
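The row above wants plain Pydantic-style validation errors (".*Input should be a valid integer.*") rather than an unhelpful conversion error when `None` is passed to a `HexInt` field. A standalone sketch of such a coercion, assuming the essential behavior is "accept ints, parse hex strings, reject everything else plainly" (this is a hypothetical stand-in, not Pydantic's or Ape's implementation):

```python
def coerce_hex_int(value):
    """Hypothetical HexInt-style coercion: ints pass through, hex strings
    are parsed, and any other input raises a plain, readable error."""
    if isinstance(value, bool):
        raise TypeError("Input should be a valid integer")
    if isinstance(value, int):
        return value
    if isinstance(value, str):
        return int(value, 16)
    raise TypeError("Input should be a valid integer")
```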
def test_separaters(convert, sep):
"""
Show that separators, such as commas and underscores, are OK
in currency-string values, e.g. "10,000 ETH" is valid.
"""
currency_str = f"10{sep}000 ETHER"
actual = convert(currency_str, int)
expected = TEN_THOUSAND_ETHER_IN_WEI
assert actual == exp... |
Show that separators, such as commas and underscores, are OK
in currency-string values, e.g. "10,000 ETH" is valid.
| test_separaters | python | ApeWorX/ape | tests/functional/conversion/test_ether.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/conversion/test_ether.py | Apache-2.0 |
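The row above shows that separators in currency strings are harmless because they can simply be stripped before parsing the numeric part. A minimal sketch of such a converter (function and unit names are illustrative, not Ape's conversion API):

```python
from decimal import Decimal


def currency_to_wei(value: str) -> int:
    """Strip comma/underscore separators from the amount, then scale
    by the unit's wei multiplier."""
    amount, unit = value.split()
    amount = amount.replace(",", "").replace("_", "")
    units = {"wei": 1, "gwei": 10**9, "ether": 10**18}
    return int(Decimal(amount) * units[unit.lower()])


assert currency_to_wei("10,000 ETHER") == 10_000 * 10**18
assert currency_to_wei("10_000 ether") == currency_to_wei("10000 ether")
```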
def test_convert_logs_and_passes_errors_from_is_convertible(conversion_manager, ape_caplog):
"""
When checking if something is convertible, and is_convertible errors
for whatever reason, log the error and consider it "not convertible".
More than likely, the value isn't convertible by that converter and the error is a plugin error.
... |
When checking if something is convertible, and is_convertible errors
for whatever reason, log the error and consider it "not convertible".
More than likely, the value isn't convertible by that converter and the error is a plugin error.
| test_convert_logs_and_passes_errors_from_is_convertible | python | ApeWorX/ape | tests/functional/conversion/test_misc.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/conversion/test_misc.py | Apache-2.0 |
def contract_with_call_depth_geth(
owner, geth_provider, get_contract_type, leaf_contract_geth, middle_contract_geth
):
"""
This contract has methods that make calls to other local contracts
and is used for any testing that requires nested calls, such as
call trees or event-name clashes.
"""
... |
This contract has methods that make calls to other local contracts
and is used for any testing that requires nested calls, such as
call trees or event-name clashes.
| contract_with_call_depth_geth | python | ApeWorX/ape | tests/functional/geth/conftest.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/conftest.py | Apache-2.0 |
def test_contract_call_show_trace(geth_contract, geth_account):
"""
Show that `show_trace=True` does not corrupt the value.
Note: The provider uses `debug_traceCall` to get the result instead of
`eth_call`.
"""
geth_contract.setNumber(203, sender=geth_account)
actual = geth_contract.myNumber(... |
Show that `show_trace=True` does not corrupt the value.
Note: The provider uses `debug_traceCall` to get the result instead of
`eth_call`.
| test_contract_call_show_trace | python | ApeWorX/ape | tests/functional/geth/test_contract.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_contract.py | Apache-2.0 |
def test_revert_out_of_gas_error(geth_account, geth_second_account, geth_provider):
"""
Attempt to transact with not quite enough gas. We should get an error saying
we ran out of gas.
"""
with pytest.raises(OutOfGasError) as err:
geth_account.transfer(geth_second_account, 1, gas_limit=1)
... |
Attempt to transact with not quite enough gas. We should get an error saying
we ran out of gas.
| test_revert_out_of_gas_error | python | ApeWorX/ape | tests/functional/geth/test_contract.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_contract.py | Apache-2.0 |
def test_get_proxy_from_explorer(
mock_explorer,
create_mock_sepolia,
safe_proxy_container,
geth_account,
vyper_contract_container,
geth_provider,
chain,
):
"""
Simulates getting a contract from Etherscan for the first time
but that contract is a proxy. We expect both proxy ... |
Simulates getting a contract from Etherscan for the first time
but that contract is a proxy. We expect both proxy and target ABIs
to be cached under the proxy's address.
| test_get_proxy_from_explorer | python | ApeWorX/ape | tests/functional/geth/test_contracts_cache.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_contracts_cache.py | Apache-2.0 |
def mock_geth_sepolia(ethereum, geth_provider, geth_contract):
"""
Temporarily tricks Ape into thinking the local network
is Sepolia so we can test features that require a live
network.
"""
# Ensuring contract exists before hack.
# This allows the network to be past genesis which is more rea... |
Temporarily tricks Ape into thinking the local network
is Sepolia so we can test features that require a live
network.
| mock_geth_sepolia | python | ApeWorX/ape | tests/functional/geth/test_network_manager.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_network_manager.py | Apache-2.0 |
def test_parse_network_choice_evmchains(networks, connection_str):
"""
Show we can (without having a plugin installed) connect to a network
that evm-chains knows about.
"""
with networks.parse_network_choice(connection_str) as moon_provider:
assert moon_provider.network.name == "moonriver"
... |
Show we can (without having a plugin installed) connect to a network
that evm-chains knows about.
| test_parse_network_choice_evmchains | python | ApeWorX/ape | tests/functional/geth/test_network_manager.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_network_manager.py | Apache-2.0 |
def test_uri_non_dev_and_not_configured(mocker, ethereum):
"""
If the URI was not configured and we are not using a dev
network (local or -fork), then it should fail, rather than
use local-host.
"""
network = ethereum.sepolia.model_copy(deep=True)
# NOTE: This may fail if using real network... |
If the URI was not configured and we are not using a dev
network (local or -fork), then it should fail, rather than
use local-host.
| test_uri_non_dev_and_not_configured | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_connect_to_chain_that_started_poa(mock_web3, web3_factory, ethereum):
"""
Ensure that when connecting to a chain that
started out as PoA, such as Sepolia, we include
the right middleware. Note: even if the chain
is no longer PoA, we still need the middleware
to fetch blocks during the P... |
Ensure that when connecting to a chain that
started out as PoA, such as Sepolia, we include
the right middleware. Note: even if the chain
is no longer PoA, we still need the middleware
to fetch blocks during the PoA portion of the chain.
| test_connect_to_chain_that_started_poa | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_connect_using_only_ipc_for_uri_already_connected(project, networks, geth_provider):
"""
Shows we can remote-connect to a node that is already running when it exposes its IPC path.
"""
ipc_path = geth_provider.ipc_path
with project.temp_config(node={"ethereum": {"local": {"uri": f"{ipc_path}... |
Shows we can remote-connect to a node that is already running when it exposes its IPC path.
| test_connect_using_only_ipc_for_uri_already_connected | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_get_block_pending(geth_provider, geth_account, geth_second_account, accounts):
"""
Pending timestamps can be weird.
This ensures we can check them in various strange states of geth.
"""
actual = geth_provider.get_block("latest")
assert isinstance(actual, Block)
snap = geth_provid... |
Pending timestamps can be weird.
This ensures we can check them in various strange states of geth.
| test_get_block_pending | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_get_pending_block(geth_provider, geth_account, geth_second_account, accounts):
"""
Pending timestamps can be weird.
This ensures we can check them in various strange states of geth.
"""
actual = geth_provider.get_block("latest")
assert isinstance(actual, Block)
snap = geth_provid... |
Pending timestamps can be weird.
This ensures we can check them in various strange states of geth.
| test_get_pending_block | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_send_transaction_exceed_block_gas_limit(chain, geth_provider, geth_contract, geth_account):
"""
Shows that the local geth node will retry the transaction
with a new gas value if this happens, automatically.
"""
transaction = geth_contract.setNumber.as_transaction(23333322101, sender=geth_account)... |
Shows that the local geth node will retry the transaction
with a new gas value if this happens, automatically.
| test_send_transaction_exceed_block_gas_limit | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_send_call_base_class_block_id(networks, ethereum, mocker):
"""
Testing a case where there was a bug in the base class for most providers.
Note: can't use ape-node as-is, as it overrides `send_call()`.
"""
provider = mocker.MagicMock()
provider.network.name = "mainnet"
def hacked_send_ca... |
Testing a case where there was a bug in the base class for most providers.
Note: can't use ape-node as-is, as it overrides `send_call()`.
| test_send_call_base_class_block_id | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_send_call_handles_contract_type_failure(mocker, geth_provider, tx_for_call, mock_web3):
"""
Fixes an issue where we would get a recursion error during
handling a CALL failure, which would happen during proxy detection.
"""
orig_web3 = geth_provider._web3
def sfx(rpc, arguments, *args, ... |
Fixes an issue where we would get a recursion error during
handling a CALL failure, which would happen during proxy detection.
| test_send_call_handles_contract_type_failure | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_send_call_skip_trace(mocker, geth_provider, ethereum, tx_for_call):
"""
When we pass skip_trace=True to `send_call` (as proxy-checking does), we should
also not bother with any traces in exception handling for that call, as proxy-
checks fail consistently and getting their traces is unnecessary.... |
When we pass skip_trace=True to `send_call` (as proxy-checking does), we should
also not bother with any traces in exception handling for that call, as proxy-
checks fail consistently and getting their traces is unnecessary.
| test_send_call_skip_trace | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_disconnect_does_not_delete_unrelated_files_in_given_data_dir(networks):
"""
One time, I used a data-dir containing other files I didn't want to lose. GethDev
deleted the entire folder during `.disconnect()`, and it was tragic. Ensure this does
not happen to anyone else.
"""
with create_... |
One time, I used a data-dir containing other files I didn't want to lose. GethDev
deleted the entire folder during `.disconnect()`, and it was tragic. Ensure this does
not happen to anyone else.
| test_disconnect_does_not_delete_unrelated_files_in_given_data_dir | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_is_rpc_ready_false(self, mocker, data_folder):
"""
Both Geth and Reth nodes raise simple URLError when the node is not running.
"""
urlopen_patch = mocker.patch("ape_node.provider.urlopen")
urlopen_patch.side_effect = URLError("Unable to connect")
geth_dev = Geth... |
Both Geth and Reth nodes raise simple URLError when the node is not running.
| test_is_rpc_ready_false | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_is_rpc_ready_true_geth(self, mocker, data_folder):
"""
Geth has no error when the RPC is ready.
"""
urlopen_patch = mocker.patch("ape_node.provider.urlopen")
urlopen_patch.return_value = None
geth_dev = GethDevProcess.from_uri("path/to/geth.ipc", data_folder)
... |
Geth has no error when the RPC is ready.
| test_is_rpc_ready_true_geth | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_is_rpc_ready_true_reth(self, mocker, data_folder):
"""
Reth raises HTTPError("Method not found") when the RPC is ready.
"""
urlopen_patch = mocker.patch("ape_node.provider.urlopen")
urlopen_patch.side_effect = HTTPError("127.0.0.1", 404, "method not found", 0, 0) # type... |
Reth raises HTTPError("Method not found") when the RPC is ready.
| test_is_rpc_ready_true_reth | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_command_reth(self, mocker, data_folder, ignore_bin_check):
"""
Showing we get usable kwargs for a reth --dev node.
"""
reth_dev = GethDevProcess.from_uri(
"path/to/reth.ipc", data_folder, executable=["reth", "node"], verify_bin=False
)
actual = reth_d... |
Showing we get usable kwargs for a reth --dev node.
| test_command_reth | python | ApeWorX/ape | tests/functional/geth/test_provider.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_provider.py | Apache-2.0 |
def test_await_confirmations_zero_confirmations(mocker, geth_account, geth_contract):
"""
We still need to wait for the nonce to increase when required confirmations is 0.
Otherwise, we sometimes ran into nonce-issues when transacting too fast with
the same account.
"""
tx = geth_contract.setNum... |
We still need to wait for the nonce to increase when required confirmations is 0.
Otherwise, we sometimes ran into nonce-issues when transacting too fast with
the same account.
| test_await_confirmations_zero_confirmations | python | ApeWorX/ape | tests/functional/geth/test_receipt.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_receipt.py | Apache-2.0 |
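The row above explains that even with zero required confirmations, the receipt wait must still block until the sender's nonce has moved past the transaction's nonce, or back-to-back transactions from one account hit nonce errors. A hypothetical sketch of that wait loop (helper name and polling shape are illustrative):

```python
def await_nonce_increase(get_nonce, start_nonce: int, max_polls: int = 100) -> bool:
    """Poll the sender's nonce until it exceeds the nonce the transaction
    was sent with; give up after max_polls attempts."""
    for _ in range(max_polls):
        if get_nonce() > start_nonce:
            return True
    return False


# Simulate a node whose reported nonce catches up after a few polls.
state = {"nonce": 5, "polls": 0}

def fake_get_nonce() -> int:
    state["polls"] += 1
    if state["polls"] >= 3:
        state["nonce"] = 6
    return state["nonce"]

assert await_nonce_increase(fake_get_nonce, 5) is True
```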
def test_return_value_tuple(geth_provider):
"""
Tests against a bug where a trace in a certain state (HH returning a tuple) was
unable to get the correct return_value.
"""
transaction_hash = "0xa4803961e06c673b255ca6af78d00df3c0ebef0b2f23325a1457eaaf20914e8e"
abi = MethodABI(
type="funct... |
Tests against a bug where a trace in a certain state (HH returning a tuple) was
unable to get the correct return_value.
| test_return_value_tuple | python | ApeWorX/ape | tests/functional/geth/test_trace.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/geth/test_trace.py | Apache-2.0 |
def test_decode_data_missing_trailing_zeroes(
collection, topics, log_data_missing_trailing_zeroes, ape_caplog
):
"""
This test is for a time when Alchemy gave us log data that was missing trailing zeroes.
When using strict=False, it was able to properly decode. In this case, in Ape, we warn
the... |
This test is for a time when Alchemy gave us log data that was missing trailing zeroes.
When using strict=False, it was able to properly decode. In this case, in Ape, we warn
the user and still proceed to decode the log.
| test_decode_data_missing_trailing_zeroes | python | ApeWorX/ape | tests/functional/utils/test_abi.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/utils/test_abi.py | Apache-2.0 |
def test_get_release_retry(self, mock_release, github_client, mock_session, version):
"""
Ensure that after failing to get a release, we re-attempt without a v-prefix.
"""
opposite = version.lstrip("v") if version.startswith("v") else f"v{version}"
def side_effect(method, ur... |
Ensure that after failing to get a release, we re-attempt without a v-prefix.
| test_get_release_retry | python | ApeWorX/ape | tests/functional/utils/test_github.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/utils/test_github.py | Apache-2.0 |
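The row above describes a retry scheme: if looking up a release by tag fails, retry with the v-prefix toggled (added or stripped). A small sketch of that logic, assuming a `fetch` callable that raises `KeyError` on a missing tag (both names are hypothetical, not the GitHub client's API):

```python
def get_release_with_retry(fetch, version: str):
    """Try the tag as given; on a miss, toggle the leading "v" and retry."""
    try:
        return fetch(version)
    except KeyError:
        alternative = version.lstrip("v") if version.startswith("v") else f"v{version}"
        return fetch(alternative)


# Releases tagged without a v-prefix are still found when asked with one.
releases = {"0.8.0": "release-data"}
assert get_release_with_retry(releases.__getitem__, "v0.8.0") == "release-data"
```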
def test_available_plugins_handles_401(self, mocker, github_client, mock_session, ape_caplog):
"""
When you get a 401 from using a token, Ape's GitHub client should not
only warn the user but retry the request w/o authorization, as it likely
will still work.
"""
mock_sess... |
When you get a 401 from using a token, Ape's GitHub client should not
only warn the user but retry the request w/o authorization, as it likely
will still work.
| test_available_plugins_handles_401 | python | ApeWorX/ape | tests/functional/utils/test_github.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/utils/test_github.py | Apache-2.0 |
def test_path_match_recurse_dir(path):
"""
Testing a specific way of excluding all the files in a directory.
"""
excl = "exclude_dir/**"
assert path_match(path, excl) |
Testing a specific way of excluding all the files in a directory.
| test_path_match_recurse_dir | python | ApeWorX/ape | tests/functional/utils/test_os.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/utils/test_os.py | Apache-2.0 |
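The row above tests excluding every file under a directory with an `exclude_dir/**` pattern. A simplified stand-in for the matcher under test, assuming a glob-based implementation: `fnmatch`'s `*` is not path-aware, so a `**` pattern matches across nested directories too.

```python
import fnmatch


def path_match(path: str, pattern: str) -> bool:
    """Simplified glob matcher (assumption: the real implementation is
    glob-based). fnmatch translates `*` to `.*`, which crosses `/`."""
    return fnmatch.fnmatch(path, pattern)


assert path_match("exclude_dir/nested/file.txt", "exclude_dir/**")
assert not path_match("other_dir/file.txt", "exclude_dir/**")
```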
def test_setitem_user_agent_parts_exist(self, headers):
"""
Tests the case where user-agents share a sub-set of each other; the shared parts should not be duplicated.
"""
headers["User-Agent"] = "test0/1.0"
# The beginning of the user-agent is already present.
# It shouldn't add ... |
Tests the case where user-agents share a sub-set of each other; the shared parts should not be duplicated.
| test_setitem_user_agent_parts_exist | python | ApeWorX/ape | tests/functional/utils/test_rpc.py | https://github.com/ApeWorX/ape/blob/master/tests/functional/utils/test_rpc.py | Apache-2.0 |
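The row above tests that setting a User-Agent whose parts already appear in the header does not duplicate them. A hypothetical merge helper sketching that de-duplication (the function name is illustrative, not Ape's headers API):

```python
def add_user_agent(existing: str, addition: str) -> str:
    """Merge space-separated User-Agent parts, appending only the parts
    not already present in the existing header value."""
    parts = existing.split() if existing else []
    for piece in addition.split():
        if piece not in parts:
            parts.append(piece)
    return " ".join(parts)


assert add_user_agent("test0/1.0 test1/2.0", "test0/1.0") == "test0/1.0 test1/2.0"
assert add_user_agent("test0/1.0", "test1/2.0") == "test0/1.0 test1/2.0"
```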
def pytest_collection_modifyitems(session, config, items):
"""
Filter out tests marked to be skipped using ``skip_projects``
and the ``skip_projects_except`` decorators.
"""
modified_items = []
for item in items:
item_name_parts = item.name.split("[")
item_name_parts = [p.strip("... |
Filter out tests marked to be skipped using ``skip_projects``
and the ``skip_projects_except`` decorators.
| pytest_collection_modifyitems | python | ApeWorX/ape | tests/integration/cli/conftest.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/conftest.py | Apache-2.0 |
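The hook in the row above filters collected tests by the project named in their parametrized ID. A simplified sketch of the filtering step, operating on plain test-ID strings like `"test_x[project]"` (a hypothetical reduction of the real hook, which works on pytest item objects):

```python
def filter_items(item_names, skipped_projects):
    """Drop test IDs whose bracketed project parameter is in the skip set;
    unparametrized tests are always kept."""
    kept = []
    for name in item_names:
        parts = [p.rstrip("]") for p in name.split("[")]
        project = parts[1] if len(parts) > 1 else None
        if project in skipped_projects:
            continue
        kept.append(name)
    return kept


names = ["test_a[good]", "test_a[bad]", "test_b"]
assert filter_items(names, {"bad"}) == ["test_a[good]", "test_b"]
```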
def project_dir_map():
"""
Ensure only copying projects once to prevent `TooManyOpenFilesError`.
"""
class ProjectDirCache:
project_map: dict[str, Path] = {}
def load(self, name: str) -> Path:
base_path = Path(__file__).parent / "projects"
if name in self.projec... |
Ensure only copying projects once to prevent `TooManyOpenFilesError`.
| project_dir_map | python | ApeWorX/ape | tests/integration/cli/conftest.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/conftest.py | Apache-2.0 |
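The fixture in the row above copies each test project only once to avoid exhausting file handles. A self-contained sketch of such a load-once cache, assuming the essential behavior is "copy on first request, reuse the path afterward" (class shape is illustrative):

```python
import shutil
import tempfile
from pathlib import Path


class ProjectDirCache:
    """Copy a project directory to a temp location the first time it is
    requested; later loads reuse the same copied path."""

    def __init__(self, source_root: Path):
        self.source_root = source_root
        self.project_map: dict[str, Path] = {}

    def load(self, name: str) -> Path:
        if name not in self.project_map:
            dest = Path(tempfile.mkdtemp()) / name
            shutil.copytree(self.source_root / name, dest)
            self.project_map[name] = dest
        return self.project_map[name]
```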
def ape_plugins_runner(config):
"""
Use a subprocess runner so we can manipulate site-packages and see the results.
"""
class PluginSubprocessRunner(ApeSubprocessRunner):
def __init__(self):
super().__init__("plugins", data_folder=config.DATA_FOLDER)
def invoke_list(self, arguments:... |
Use a subprocess runner so we can manipulate site-packages and see the results.
| ape_plugins_runner | python | ApeWorX/ape | tests/integration/cli/conftest.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/conftest.py | Apache-2.0 |
def test_import_alias_is_really_long(ape_cli, runner):
"""
For entropy-related use-cases regarding aliases, we
must ensure long aliases are supported.
"""
long_alias = "this is a long alias that i am going to use and you can't stop me"
result = runner.invoke(
ape_cli,
("accounts",... |
For entropy-related use-cases regarding aliases, we
must ensure long aliases are supported.
| test_import_alias_is_really_long | python | ApeWorX/ape | tests/integration/cli/test_accounts.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_accounts.py | Apache-2.0 |
def test_compile_when_sources_change_problematically(ape_cli, runner, integ_project, clean_cache):
"""
There was a bug where, when sources changed but had errors, the old sources continued to be used and the errors were swallowed.
"""
source_path = integ_project.contracts_folder / "Interface.json"
... |
There was a bug where, when sources changed but had errors, the old sources continued to be used and the errors were swallowed.
| test_compile_when_sources_change_problematically | python | ApeWorX/ape | tests/integration/cli/test_compile.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_compile.py | Apache-2.0 |
def test_compile_when_source_contains_return_characters(
ape_cli, runner, integ_project, clean_cache
):
"""
This tests a bugfix where a source file contained return-characters
and that triggered endless re-compiles because it technically contains more bytes than the ones that show up in the text.
... |
This tests a bugfix where a source file contained return-characters
and that triggered endless re-compiles because it technically contains more bytes than the ones that show up in the text.
| test_compile_when_source_contains_return_characters | python | ApeWorX/ape | tests/integration/cli/test_compile.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_compile.py | Apache-2.0 |
def test_console_natspecs(integ_project, solidity_contract_type, console_runner):
"""
This test shows that the various natspec integrations with ABI-backed
types work in ``ape console``.
"""
contract_code = solidity_contract_type.model_dump_json(by_alias=True)
# flake8: noqa
cmd_ls = [
... |
This test shows that the various natspec integrations with ABI-backed
types work in ``ape console``.
| test_console_natspecs | python | ApeWorX/ape | tests/integration/cli/test_console.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_console.py | Apache-2.0 |
def test_try_run_script_missing_cli_decorator(scripts_runner, integ_project):
"""
Shows that we cannot run a script defining a `cli()` method without
it being a click command. The script is not recognized, so you get
a usage error.
"""
scripts_runner.project = integ_project
result = scripts_... |
Shows that we cannot run a script defining a `cli()` method without
it being a click command. The script is not recognized, so you get
a usage error.
| test_try_run_script_missing_cli_decorator | python | ApeWorX/ape | tests/integration/cli/test_run.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_run.py | Apache-2.0 |
def test_scripts_module_already_installed(integ_project, scripts_runner, mocker):
"""
Make sure that if there is for some reason a Python module named `scripts`
installed, it does not interfere with Ape's scripting mechanism.
"""
scripts_runner.project = integ_project
mock_scripts = mocker.Magic... |
Make sure that if there is for some reason a Python module named `scripts`
installed, it does not interfere with Ape's scripting mechanism.
| test_scripts_module_already_installed | python | ApeWorX/ape | tests/integration/cli/test_run.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_run.py | Apache-2.0 |
def test_run_recompiles_if_needed(runner, ape_cli, scripts_runner, integ_project):
"""
Ensure that when a change is made to a contract,
when we run a script, the project re-compiles first.
"""
scripts_runner.project = integ_project
# Ensure we begin compiled.
runner.invoke(ape_cli, ("comp... |
Ensure that when a change is made to a contract,
when we run a script, the project re-compiles first.
| test_run_recompiles_if_needed | python | ApeWorX/ape | tests/integration/cli/test_run.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_run.py | Apache-2.0 |
def test_verbosity(runner, ape_cli):
"""
Tests against an issue where `ape test -v debug` would fail because of
an invalid type check from click; only appeared in `ape test` command
for some reason.
"""
# NOTE: Only using `--fixtures` flag to avoid running tests (just prints fixtures).
cmd = (... |
Tests again an issue where `ape test -v debug` would fail because of
an invalid type check from click; only appeared in `ape test` command
for some reason.
| test_verbosity | python | ApeWorX/ape | tests/integration/cli/test_test.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_test.py | Apache-2.0 |
def test_vvv(runner, ape_cli, integ_project, v_arg):
"""
Showing you can somehow use pytest's -v flag without
messing up Ape.
"""
here = integ_project.path
os.chdir(integ_project.path)
name = f"test_{v_arg.replace('-', '_')}"
TEST = f"""
def {name}():
assert True
""".lst... |
Showing you can somehow use pytest's -v flag without
messing up Ape.
| test_vvv | python | ApeWorX/ape | tests/integration/cli/test_test.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_test.py | Apache-2.0 |
def test_gas_when_estimating(geth_provider, setup_pytester, integ_project, pytester, geth_account):
"""
Shows that gas reports still work when estimating gas.
"""
cfg = integ_project.config.model_dump(by_alias=True, mode="json")
cfg["test"]["gas"] = {"reports": ["terminal"]}
geth_account.transfe... |
Shows that gas reports still work when estimating gas.
| test_gas_when_estimating | python | ApeWorX/ape | tests/integration/cli/test_test.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_test.py | Apache-2.0 |
def test_coverage(geth_provider, setup_pytester, integ_project, pytester, geth_account):
"""
Ensures the --coverage flag works.
For better coverage tests, see ape-vyper because the Vyper
plugin is what implements the `trace_source()` method which does the bulk
of the coverage work.
"""
geth_... |
Ensures the --coverage flag works.
For better coverage tests, see ape-vyper because the Vyper
plugin is what implements the `trace_source()` method which does the bulk
of the coverage work.
| test_coverage | python | ApeWorX/ape | tests/integration/cli/test_test.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/test_test.py | Apache-2.0 |
def do_skip(self, project: str, module: str, test: str) -> bool:
"""
Returns ``True`` if a test has been marked to be
skipped for the given project using the ``skip_project`` or
``skip_project_except`` decorators.
"""
if project not in self.projects:
# Not a p... |
Returns ``True`` if a test has been marked to be
skipped for the given project using the ``skip_project`` or
``skip_project_except`` decorators.
| do_skip | python | ApeWorX/ape | tests/integration/cli/utils.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/utils.py | Apache-2.0 |
def skip_projects(self, method: Callable, *projects: str):
"""
Call this method to record a 'skip'.
The ``skip_project`` decorator calls this method
on the test method it wraps.
"""
assert hasattr(method, "__name__") and hasattr(method, "__module__")
... |
Call this method to record a 'skip'.
The ``skip_project`` decorator calls this method
on the test method it wraps.
| skip_projects | python | ApeWorX/ape | tests/integration/cli/utils.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/utils.py | Apache-2.0 |
def skip_projects_except(self, method: Callable, *projects: str):
"""
Call this method to record 'skip's for each project that is not
in the given list. The ``skip_project_except`` decorator calls
this method on the test method it wraps.
"""
assert hasattr(... |
Call this method to record 'skip's for each project that is not
in the given list. The ``skip_project_except`` decorator calls
this method on the test method it wraps.
| skip_projects_except | python | ApeWorX/ape | tests/integration/cli/utils.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/utils.py | Apache-2.0 |
def skip_projects(*names: str):
"""
Use this decorator to cause a CLI integration test
not to run for the given projects.
"""
def decorator(f):
project_skipper.skip_projects(f, *names)
return f
return decorator |
Use this decorator to cause a CLI integration test
not to run for the given projects.
| skip_projects | python | ApeWorX/ape | tests/integration/cli/utils.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/utils.py | Apache-2.0 |
def skip_projects_except(*names: str):
"""
Use this decorator to cause a CLI integration test
to only run for the given projects.
"""
def decorator(f):
project_skipper.skip_projects_except(f, *names)
return f
return decorator |
Use this decorator to cause a CLI integration test
to only run for the given projects.
| skip_projects_except | python | ApeWorX/ape | tests/integration/cli/utils.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/utils.py | Apache-2.0 |
def test_extra_account(chain):
"""
Show we can fund accounts from the config option.
"""
addr = "0x63c7f11162dBFC374DC6f5C0B3Aa26C618846a85"
actual = chain.provider.get_balance(addr)
assert actual > 0 |
Show we can fund accounts from the config option.
| test_extra_account | python | ApeWorX/ape | tests/integration/cli/projects/geth/tests/test_using_local_geth.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/geth/tests/test_using_local_geth.py | Apache-2.0 |
def test_using_contract_with_same_type_and_method_call(accounts, project):
"""
Deploy the same contract from the ``contract`` fixture and call a method
that gets called elsewhere in the test suite. This shows that we amass
results across all instances of contract types when making the gas report.
""... |
Deploy the same contract from the ``contract`` fixture and call a method
that gets called elsewhere in the test suite. This shows that we amass
results across all instances of contract types when making the gas report.
| test_using_contract_with_same_type_and_method_call | python | ApeWorX/ape | tests/integration/cli/projects/geth/tests/test_using_local_geth.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/geth/tests/test_using_local_geth.py | Apache-2.0 |
def test_two_contracts_with_same_symbol(accounts, project):
"""
Tests against the scenario of using 2 tokens with the same symbol.
There was almost a bug where the contract IDs clashed.
This is to help prevent future bugs related to this.
"""
receiver = accounts[-1]
sender = accounts[-2]
token... |
Tests against the scenario of using 2 tokens with the same symbol.
There was almost a bug where the contract IDs clashed.
This is to help prevent future bugs related to this.
| test_two_contracts_with_same_symbol | python | ApeWorX/ape | tests/integration/cli/projects/geth/tests/test_using_local_geth.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/geth/tests/test_using_local_geth.py | Apache-2.0 |
def test_call_method_excluded_from_cli_options(accounts, contract):
"""
Call a method so that we can intentionally ignore it via command
line options and test that it does not show in the report.
"""
receipt = contract.fooAndBar(sender=accounts[9])
assert not receipt.failed |
Call a method so that we can intentionally ignore it via command
line options and test that it does not show in the report.
| test_call_method_excluded_from_cli_options | python | ApeWorX/ape | tests/integration/cli/projects/geth/tests/test_using_local_geth.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/geth/tests/test_using_local_geth.py | Apache-2.0 |
def test_call_method_excluded_from_config(accounts, contract):
"""
Call a method excluded in the ``ape-config.yaml`` file
for asserting it does not show in gas report.
"""
account = accounts[-4]
receipt = contract.setAddress(account.address, sender=account)
assert not receipt.failed |
Call a method excluded in the ``ape-config.yaml`` file
for asserting it does not show in gas report.
| test_call_method_excluded_from_config | python | ApeWorX/ape | tests/integration/cli/projects/geth/tests/test_using_local_geth.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/geth/tests/test_using_local_geth.py | Apache-2.0 |
def cli():
"""
This script tests the scenario when a cli script is missing
a click-decorator. The script itself is not runnable by Ape,
but it will cause a warning. Primarily, it is important that
it does not cause the entire scripts-integration to fail.
""" |
This script tests the scenario when a cli script is missing
a click-decorator. The script itself is not runnable by Ape,
but it will cause a warning. Primarily, it is important that
it does not cause the entire scripts-integration to fail.
| cli | python | ApeWorX/ape | tests/integration/cli/projects/script/scripts/error_forgot_click.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/script/scripts/error_forgot_click.py | Apache-2.0 |
def test_isolation_with_session_module_and_function(chain, session_one, session_two, function_one):
"""
The sessions should be used, so that is 6.
Function is 1 and the module 3.
Also, setup does a transfer - that bumps up another 1.
Expected is 11.
"""
# NOTE: Module is on autouse=True
... |
The sessions should be used, so that is 6.
Function is 1 and the module 3.
Also, setup does a transfer - that bumps up another 1.
Expected is 11.
| test_isolation_with_session_module_and_function | python | ApeWorX/ape | tests/integration/cli/projects/test/tests/test_fixture_isolation.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/test/tests/test_fixture_isolation.py | Apache-2.0 |
def functional_fixture_using_session(chain, session_one):
"""
Showing that transactions in a functional-scoped
fixture that uses a session-scoped fixture don't
persist on-chain.
"""
_ = session_one
chain.mine()
return 11 # expected: 10 built up plus this 1.
Showing that transactions in a functional-scoped
fixture that uses a session-scoped fixture don't
persist on-chain.
| functional_fixture_using_session | python | ApeWorX/ape | tests/integration/cli/projects/test/tests/test_fixture_isolation.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/test/tests/test_fixture_isolation.py | Apache-2.0 |
def test_use_parametrized_transaction_again(chain, parametrized_transaction):
"""
Should not have invalidated parametrized fixture.
"""
starting = 10 # All session + module
assert chain.blocks.height == starting + parametrized_transaction |
Should not have invalidated parametrized fixture.
| test_use_parametrized_transaction_again | python | ApeWorX/ape | tests/integration/cli/projects/test/tests/test_fixture_isolation.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/test/tests/test_fixture_isolation.py | Apache-2.0 |
def test_use_isolate_in_test(chain, parametrized_transaction):
"""
Show the isolation we control doesn't affect
the isolation fixtures.
"""
_ = parametrized_transaction # Using this for complexity.
start_block = chain.blocks.height
with chain.isolate():
chain.mine()
assert c... |
Show the isolation we control doesn't affect
the isolation fixtures.
| test_use_isolate_in_test | python | ApeWorX/ape | tests/integration/cli/projects/test/tests/test_fixture_isolation.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/test/tests/test_fixture_isolation.py | Apache-2.0 |
def main():
"""
Cause an uncaught contract logic error to test traceback output.
"""
account = ape.accounts.test_accounts[0]
contract = account.deploy(ape.project.ContractA)
# Fails.
contract.setNumber(5, sender=account) |
Cause an uncaught contract logic error to test traceback output.
| main | python | ApeWorX/ape | tests/integration/cli/projects/with-contracts/scripts/txerr.py | https://github.com/ApeWorX/ape/blob/master/tests/integration/cli/projects/with-contracts/scripts/txerr.py | Apache-2.0 |
def batch_parallelize(algos, fn, batch_size):
"""
Algorithms are coroutines that yield items to be processed in parallel.
We concurrently run the algorithm on all items in the batch.
"""
inputs = []
for i, algo in enumerate(algos):
inputs.append((i, next(algo)))
results = [None] * le... |
Algorithms are coroutines that yield items to be processed in parallel.
We concurrently run the algorithm on all items in the batch.
| batch_parallelize | python | openai/sparse_autoencoder | sparse_autoencoder/explanations.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/explanations.py | MIT |
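The coroutine-driving pattern described above can be illustrated in pure Python. This is only a sketch consistent with the docstring, not the library's implementation: the real function also takes a `batch_size`, which is omitted here, and `batch_parallelize_sketch` is a hypothetical name. Each generator yields an item, `fn` is applied to all pending items concurrently, and the results are sent back into the generators until they finish:

```python
from concurrent.futures import ThreadPoolExecutor

def batch_parallelize_sketch(algos, fn):
    # algos: list of started generators; each yields an item and finally
    # returns a value via StopIteration. fn processes yielded items.
    inputs = [(i, next(a)) for i, a in enumerate(algos)]
    results = [None] * len(algos)
    done = 0
    with ThreadPoolExecutor() as pool:
        while done < len(algos):
            # Process every pending item concurrently.
            outs = list(pool.map(lambda p: (p[0], fn(p[1])), inputs))
            inputs = []
            for i, out in outs:
                try:
                    # Resume the coroutine with the processed result.
                    inputs.append((i, algos[i].send(out)))
                except StopIteration as e:
                    results[i] = e.value
                    done += 1
    return results
```

The key idea is that `StopIteration.value` carries each coroutine's return value, so the driver can collect final results in order.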
def triton_sparse_transpose_dense_matmul(
sparse_indices: torch.Tensor,
sparse_values: torch.Tensor,
dense: torch.Tensor,
N: int,
BLOCK_SIZE_AK=128,
) -> torch.Tensor:
"""
calculates sparse.T @ dense (i.e. reducing along the collated dimension of sparse)
dense must be contiguous along dim... |
calculates sparse.T @ dense (i.e. reducing along the collated dimension of sparse)
dense must be contiguous along dim 0 (in other words, dense.T is contiguous)
sparse_indices is shape (A, k)
sparse_values is shape (A, k)
dense is shape (A, B)
output is shape (N, B)
| triton_sparse_transpose_dense_matmul | python | openai/sparse_autoencoder | sparse_autoencoder/kernels.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/kernels.py | MIT |
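The shapes in the docstring fully specify the operation. As an illustration only (this is a NumPy reference, not the Triton kernel, and `sparse_transpose_dense_matmul_ref` is a hypothetical name), each `values[a, k] * dense[a]` row is scattered into `out[indices[a, k]]`:

```python
import numpy as np

def sparse_transpose_dense_matmul_ref(sparse_indices, sparse_values, dense, N):
    # sparse_indices, sparse_values: (A, k); dense: (A, B); returns (N, B).
    # out[n] accumulates values[a, k] * dense[a] wherever indices[a, k] == n.
    A, B = dense.shape
    out = np.zeros((N, B), dtype=dense.dtype)
    for a in range(A):
        for idx, val in zip(sparse_indices[a], sparse_values[a]):
            out[idx] += val * dense[a]
    return out
```

A reference like this is useful as the spec that a fast kernel (which typically flattens to COO pairs and accumulates with atomics) must reproduce.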
def triton_sparse_transpose_dense_matmul_kernel(
coo_indices_ptr,
coo_values_ptr,
dense_ptr,
out_ptr,
stride_da,
stride_db,
B,
N,
AK,
BLOCK_SIZE_AK: tl.constexpr,
BLOCK_SIZE_B: tl.constexpr,
):
"""
coo_indices is shape (2, AK)
coo_values is shape (AK,)
dense i... |
coo_indices is shape (2, AK)
coo_values is shape (AK,)
dense is shape (A, B), contiguous along B
out is shape (N, B)
| triton_sparse_transpose_dense_matmul_kernel | python | openai/sparse_autoencoder | sparse_autoencoder/kernels.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/kernels.py | MIT |
def triton_sparse_dense_matmul(
sparse_indices: torch.Tensor,
sparse_values: torch.Tensor,
dense: torch.Tensor,
) -> torch.Tensor:
"""
calculates sparse @ dense (i.e. reducing along the uncollated dimension of sparse)
dense must be contiguous along dim 0 (in other words, dense.T is contiguous)
... |
calculates sparse @ dense (i.e. reducing along the uncollated dimension of sparse)
dense must be contiguous along dim 0 (in other words, dense.T is contiguous)
sparse_indices is shape (A, k)
sparse_values is shape (A, k)
dense is shape (N, B)
output is shape (A, B)
| triton_sparse_dense_matmul | python | openai/sparse_autoencoder | sparse_autoencoder/kernels.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/kernels.py | MIT |
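For intuition only (again a NumPy reference with a hypothetical name, not the kernel), the gather-then-combine that these shapes describe is: each output row is a k-term linear combination of rows of `dense` selected by that row's indices:

```python
import numpy as np

def sparse_dense_matmul_ref(sparse_indices, sparse_values, dense):
    # sparse_indices, sparse_values: (A, K); dense: (N, B); returns (A, B).
    # out[a] = sum_k values[a, k] * dense[indices[a, k]]
    A, K = sparse_indices.shape
    out = np.zeros((A, dense.shape[1]), dtype=dense.dtype)
    for a in range(A):
        out[a] = sparse_values[a] @ dense[sparse_indices[a]]
    return out
```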
def triton_sparse_dense_matmul_kernel(
sparse_indices_ptr,
sparse_values_ptr,
dense_ptr,
out_ptr,
stride_dn,
stride_db,
A,
B,
N,
K,
BLOCK_SIZE_K: tl.constexpr,
BLOCK_SIZE_B: tl.constexpr,
):
"""
sparse_indices is shape (A, K)
sparse_values is shape (A, K)
... |
sparse_indices is shape (A, K)
sparse_values is shape (A, K)
dense is shape (N, B), contiguous along B
out is shape (A, B)
| triton_sparse_dense_matmul_kernel | python | openai/sparse_autoencoder | sparse_autoencoder/kernels.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/kernels.py | MIT |
def triton_dense_dense_sparseout_matmul(
dense1: torch.Tensor,
dense2: torch.Tensor,
at_indices: torch.Tensor,
) -> torch.Tensor:
"""
dense1: shape (A, B)
dense2: shape (B, N)
at_indices: shape (A, K)
out values: shape (A, K)
calculates dense1 @ dense2 only for the indices in at_indi... |
dense1: shape (A, B)
dense2: shape (B, N)
at_indices: shape (A, K)
out values: shape (A, K)
calculates dense1 @ dense2 only for the indices in at_indices
equivalent to (dense1 @ dense2).gather(1, at_indices)
| triton_dense_dense_sparseout_matmul | python | openai/sparse_autoencoder | sparse_autoencoder/kernels.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/kernels.py | MIT |
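The `gather` equivalence stated in the docstring is the easiest way to sanity-check an implementation. In NumPy the same spec (illustrative only; the real kernel avoids materializing the full product) is `np.take_along_axis` on the dense result:

```python
import numpy as np

def dense_dense_sparseout_ref(dense1, dense2, at_indices):
    # dense1: (A, B); dense2: (B, N); at_indices: (A, K) -> out: (A, K).
    # Compute the full (A, N) product, then keep only the requested
    # columns per row. A fused kernel would skip the full product.
    full = dense1 @ dense2
    return np.take_along_axis(full, at_indices, axis=1)
```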
def triton_dense_dense_sparseout_matmul_kernel(
dense1_ptr,
dense2_ptr,
at_indices_ptr,
out_ptr,
stride_d1a,
stride_d1b,
stride_d2b,
stride_d2n,
A,
B,
N,
K,
BLOCK_SIZE_B: tl.constexpr,
BLOCK_SIZE_N: tl.constexpr,
BLOCK_SIZE_K: tl.constexpr,
):
"""
dens... |
dense1: shape (A, B)
dense2: shape (B, N)
at_indices: shape (A, K)
out values: shape (A, K)
| triton_dense_dense_sparseout_matmul_kernel | python | openai/sparse_autoencoder | sparse_autoencoder/kernels.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/kernels.py | MIT |
def autoencoder_loss(
reconstruction: torch.Tensor,
original_input: torch.Tensor,
latent_activations: torch.Tensor,
l1_weight: float,
) -> torch.Tensor:
"""
:param reconstruction: output of Autoencoder.decode (shape: [batch, n_inputs])
:param original_input: input of Autoencoder.encode (shap... |
:param reconstruction: output of Autoencoder.decode (shape: [batch, n_inputs])
:param original_input: input of Autoencoder.encode (shape: [batch, n_inputs])
:param latent_activations: output of Autoencoder.encode (shape: [batch, n_latents])
:param l1_weight: weight of L1 loss
:return: loss (shape: ... | autoencoder_loss | python | openai/sparse_autoencoder | sparse_autoencoder/loss.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/loss.py | MIT |
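Given only the signature above, a plausible sketch of such a loss is reconstruction MSE plus `l1_weight` times an L1 penalty on the latents. This NumPy version is illustrative only, with a hypothetical name; the repository's actual implementation normalizes both terms:

```python
import numpy as np

def autoencoder_loss_sketch(reconstruction, original_input, latent_activations, l1_weight):
    # Reconstruction term: mean squared error over all elements.
    mse = np.mean((reconstruction - original_input) ** 2)
    # Sparsity term: mean absolute latent activation, scaled by l1_weight.
    l1 = np.mean(np.abs(latent_activations))
    return mse + l1_weight * l1
```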
def normalized_mean_squared_error(
reconstruction: torch.Tensor,
original_input: torch.Tensor,
) -> torch.Tensor:
"""
:param reconstruction: output of Autoencoder.decode (shape: [batch, n_inputs])
:param original_input: input of Autoencoder.encode (shape: [batch, n_inputs])
:return: normalized m... |
:param reconstruction: output of Autoencoder.decode (shape: [batch, n_inputs])
:param original_input: input of Autoencoder.encode (shape: [batch, n_inputs])
:return: normalized mean squared error (shape: [1])
| normalized_mean_squared_error | python | openai/sparse_autoencoder | sparse_autoencoder/loss.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/loss.py | MIT |
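The docstring does not pin down the normalization. One common choice (an assumption here, with a hypothetical function name) divides each example's squared error by that example's mean squared input, then averages over the batch:

```python
import numpy as np

def normalized_mse_sketch(reconstruction, original_input):
    # Per-example squared error relative to per-example input power.
    num = ((reconstruction - original_input) ** 2).mean(axis=1)
    den = (original_input ** 2).mean(axis=1)
    return (num / den).mean()
```

Under this definition a perfect reconstruction scores 0 and predicting all zeros scores exactly 1, which makes the metric easy to compare across layers.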
def normalized_L1_loss(
latent_activations: torch.Tensor,
original_input: torch.Tensor,
) -> torch.Tensor:
"""
:param latent_activations: output of Autoencoder.encode (shape: [batch, n_latents])
:param original_input: input of Autoencoder.encode (shape: [batch, n_inputs])
:return: normalized L1 ... |
:param latent_activations: output of Autoencoder.encode (shape: [batch, n_latents])
:param original_input: input of Autoencoder.encode (shape: [batch, n_inputs])
:return: normalized L1 loss (shape: [1])
| normalized_L1_loss | python | openai/sparse_autoencoder | sparse_autoencoder/loss.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/loss.py | MIT |
def __init__(
self, n_latents: int, n_inputs: int, activation: Callable = nn.ReLU(), tied: bool = False,
normalize: bool = False
) -> None:
"""
:param n_latents: dimension of the autoencoder latent
:param n_inputs: dimensionality of the original data (e.g. residual stream, num... |
:param n_latents: dimension of the autoencoder latent
:param n_inputs: dimensionality of the original data (e.g. residual stream, number of MLP hidden units)
:param activation: activation function
:param tied: whether to tie the encoder and decoder weights
| __init__ | python | openai/sparse_autoencoder | sparse_autoencoder/model.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/model.py | MIT |
def encode_pre_act(self, x: torch.Tensor, latent_slice: slice = slice(None)) -> torch.Tensor:
"""
:param x: input data (shape: [batch, n_inputs])
:param latent_slice: slice of latents to compute
Example: latent_slice = slice(0, 10) to compute only the first 10 latents.
:retur... |
:param x: input data (shape: [batch, n_inputs])
:param latent_slice: slice of latents to compute
Example: latent_slice = slice(0, 10) to compute only the first 10 latents.
:return: autoencoder latents before activation (shape: [batch, n_latents])
| encode_pre_act | python | openai/sparse_autoencoder | sparse_autoencoder/model.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/model.py | MIT |
def encode(self, x: torch.Tensor) -> tuple[torch.Tensor, dict[str, Any]]:
"""
:param x: input data (shape: [batch, n_inputs])
:return: autoencoder latents (shape: [batch, n_latents])
"""
x, info = self.preprocess(x)
return self.activation(self.encode_pre_act(x)), info |
:param x: input data (shape: [batch, n_inputs])
:return: autoencoder latents (shape: [batch, n_latents])
| encode | python | openai/sparse_autoencoder | sparse_autoencoder/model.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/model.py | MIT |
def decode(self, latents: torch.Tensor, info: dict[str, Any] | None = None) -> torch.Tensor:
"""
:param latents: autoencoder latents (shape: [batch, n_latents])
:return: reconstructed data (shape: [batch, n_inputs])
"""
ret = self.decoder(latents) + self.pre_bias
if self.... |
:param latents: autoencoder latents (shape: [batch, n_latents])
:return: reconstructed data (shape: [batch, n_inputs])
| decode | python | openai/sparse_autoencoder | sparse_autoencoder/model.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/model.py | MIT |
def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
:param x: input data (shape: [batch, n_inputs])
:return: autoencoder latents pre activation (shape: [batch, n_latents])
autoencoder latents (shape: [batch, n_latents])
... |
:param x: input data (shape: [batch, n_inputs])
:return: autoencoder latents pre activation (shape: [batch, n_latents])
autoencoder latents (shape: [batch, n_latents])
reconstructed data (shape: [batch, n_inputs])
| forward | python | openai/sparse_autoencoder | sparse_autoencoder/model.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/model.py | MIT |
def v1(location, layer_index):
"""
Details:
- Number of autoencoder latents: 32768
- Number of training tokens: ~64M
- Activation function: ReLU
- L1 regularization strength: 0.01
- Layer normed inputs: false
- NeuronRecord files:
`az://openaipublic/sparse-autoencoder/gpt2-small/... |
Details:
- Number of autoencoder latents: 32768
- Number of training tokens: ~64M
- Activation function: ReLU
- L1 regularization strength: 0.01
- Layer normed inputs: false
- NeuronRecord files:
`az://openaipublic/sparse-autoencoder/gpt2-small/{location}/collated_activations/{layer... | v1 | python | openai/sparse_autoencoder | sparse_autoencoder/paths.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/paths.py | MIT |
def v5_32k(location, layer_index):
"""
Details:
- Number of autoencoder latents: 2**15 = 32768
- Number of training tokens: TODO
- Activation function: TopK(32)
- L1 regularization strength: n/a
- Layer normed inputs: true
"""
assert location in ["resid_delta_attn", "resid_delta_mlp... |
Details:
- Number of autoencoder latents: 2**15 = 32768
- Number of training tokens: TODO
- Activation function: TopK(32)
- L1 regularization strength: n/a
- Layer normed inputs: true
| v5_32k | python | openai/sparse_autoencoder | sparse_autoencoder/paths.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/paths.py | MIT |
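The `TopK(32)` activation named in these release notes keeps only the k largest pre-activations per example and zeroes the rest. A minimal NumPy sketch (the released models implement this in torch; the name here is hypothetical):

```python
import numpy as np

def topk_activation(pre_acts, k):
    # Zero everything except the k largest entries in each row.
    out = np.zeros_like(pre_acts)
    top = np.argpartition(pre_acts, -k, axis=1)[:, -k:]  # indices of k largest
    np.put_along_axis(out, top, np.take_along_axis(pre_acts, top, axis=1), axis=1)
    return out
```

Because exactly k latents are nonzero per example, the L0 sparsity is fixed by construction, which is why no L1 regularization strength is listed for these models.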
def v5_128k(location, layer_index):
"""
Details:
- Number of autoencoder latents: 2**17 = 131072
- Number of training tokens: TODO
- Activation function: TopK(32)
- L1 regularization strength: n/a
- Layer normed inputs: true
"""
assert location in ["resid_delta_attn", "resid_delta_ml... |
Details:
- Number of autoencoder latents: 2**17 = 131072
- Number of training tokens: TODO
- Activation function: TopK(32)
- L1 regularization strength: n/a
- Layer normed inputs: true
| v5_128k | python | openai/sparse_autoencoder | sparse_autoencoder/paths.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/paths.py | MIT |
def unit_norm_decoder_grad_adjustment_(autoencoder) -> None:
"""project out gradient information parallel to the dictionary vectors - assumes that the decoder is already unit normed"""
assert autoencoder.decoder.weight.grad is not None
triton_add_mul_(
autoencoder.decoder.weight.grad,
torc... | project out gradient information parallel to the dictionary vectors - assumes that the decoder is already unit normed | unit_norm_decoder_grad_adjustment_ | python | openai/sparse_autoencoder | sparse_autoencoder/train.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/train.py | MIT |
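The one-line docstring above describes a standard projection: for each unit-norm dictionary vector `w`, subtract `(g·w) w` from its gradient `g`, so updates cannot change the vector's norm to first order. A NumPy sketch, assuming rows of `weight` are the unit-norm dictionary vectors (the function name is hypothetical):

```python
import numpy as np

def project_out_parallel_grad(grad, weight):
    # grad, weight: same shape; rows of weight assumed unit-norm.
    # Subtract from each gradient row its component along the matching weight row.
    parallel = (grad * weight).sum(axis=1, keepdims=True) * weight
    return grad - parallel
```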
def batch_tensors(
it: Iterable[torch.Tensor],
batch_size: int,
drop_last=True,
stream=None,
) -> Iterator[torch.Tensor]:
"""
input is iterable of tensors of shape [batch_old, ...]
output is iterable of tensors of shape [batch_size, ...]
batch_old does not need to be divisible by batch_s... |
input is iterable of tensors of shape [batch_old, ...]
output is iterable of tensors of shape [batch_size, ...]
batch_old does not need to be divisible by batch_size
| batch_tensors | python | openai/sparse_autoencoder | sparse_autoencoder/train.py | https://github.com/openai/sparse_autoencoder/blob/master/sparse_autoencoder/train.py | MIT |
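The contract above (arbitrary incoming chunk sizes, fixed outgoing batch size) can be illustrated without torch. A NumPy sketch with a hypothetical name that buffers leftovers between chunks:

```python
import numpy as np

def rebatch(chunks, batch_size, drop_last=True):
    # chunks: iterable of arrays of shape [batch_old, ...]; yields [batch_size, ...].
    buf, n = [], 0
    for chunk in chunks:
        buf.append(chunk)
        n += len(chunk)
        while n >= batch_size:
            cat = np.concatenate(buf)
            yield cat[:batch_size]
            rest = cat[batch_size:]
            # Carry the remainder into the next iteration.
            buf, n = ([rest] if len(rest) else []), len(rest)
    if not drop_last and n:
        yield np.concatenate(buf)
```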
def bootstrap_statistic(observed_sample, compute_statistic, num_trials):
"""
Creates num_trials resamples of the initial sample.
Returns an array of the provided statistic for those samples.
* observed_sample: the initial sample, as an array.
* compute_statistic: a function that takes a sampl... |
Creates num_trials resamples of the initial sample.
Returns an array of the provided statistic for those samples.
* observed_sample: the initial sample, as an array.
* compute_statistic: a function that takes a sample as
an array and returns the statistic for that
... | bootstrap_statistic | python | plasma-umass/ChatDBG | samples/python/ds101.py | https://github.com/plasma-umass/ChatDBG/blob/master/samples/python/ds101.py | Apache-2.0 |
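Since the function body is truncated in this record, here is a minimal sketch consistent with the docstring. The `rng` parameter is an addition for reproducibility, not part of the original signature:

```python
import numpy as np

def bootstrap_statistic(observed_sample, compute_statistic, num_trials, rng=None):
    # Resample with replacement num_trials times; collect the statistic each time.
    rng = np.random.default_rng(rng)
    n = len(observed_sample)
    stats = np.empty(num_trials)
    for i in range(num_trials):
        resample = rng.choice(observed_sample, size=n, replace=True)
        stats[i] = compute_statistic(resample)
    return stats
```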
def stop_handler(event):
"""Sets last error type so we can report it later."""
# Check if the event is a stop event
global last_error_type
if not hasattr(event, "stop_signal"):
last_error_type = "" # Not a real error (e.g., a breakpoint)
return
if event.stop_signal is not None:
... | Sets last error type so we can report it later. | stop_handler | python | plasma-umass/ChatDBG | src/chatdbg/chatdbg_gdb.py | https://github.com/plasma-umass/ChatDBG/blob/master/src/chatdbg/chatdbg_gdb.py | Apache-2.0 |
def llm_debug(self, command: str):
"""
{
"name": "debug",
"description": "The `debug` function runs a GDB command on the stopped program and gets the response.",
"parameters": {
"type": "object",
"properties": {
"com... |
{
"name": "debug",
"description": "The `debug` function runs a GDB command on the stopped program and gets the response.",
"parameters": {
"type": "object",
"properties": {
"command": {
"type": "stri... | llm_debug | python | plasma-umass/ChatDBG | src/chatdbg/chatdbg_gdb.py | https://github.com/plasma-umass/ChatDBG/blob/master/src/chatdbg/chatdbg_gdb.py | Apache-2.0 |
def _is_debug_build(self) -> bool:
"""Returns False if not compiled with debug information."""
target = self._debugger.GetSelectedTarget()
if not target:
return False
for module in target.module_iter():
for cu in module.compile_unit_iter():
for lin... | Returns False if not compiled with debug information. | _is_debug_build | python | plasma-umass/ChatDBG | src/chatdbg/chatdbg_lldb.py | https://github.com/plasma-umass/ChatDBG/blob/master/src/chatdbg/chatdbg_lldb.py | Apache-2.0 |
def get_thread(self) -> Optional[lldb.SBThread]:
"""
Returns a currently stopped thread in the debugged process.
:return: A currently stopped thread or None if no thread is stopped.
"""
process = self._get_process()
if not process:
return None
for thre... |
Returns a currently stopped thread in the debugged process.
:return: A currently stopped thread or None if no thread is stopped.
| get_thread | python | plasma-umass/ChatDBG | src/chatdbg/chatdbg_lldb.py | https://github.com/plasma-umass/ChatDBG/blob/master/src/chatdbg/chatdbg_lldb.py | Apache-2.0 |
def _get_process(self) -> Optional[lldb.SBProcess]:
"""
Get the process that the current target owns.
:return: An lldb object representing the process (lldb.SBProcess) that this target owns.
"""
target = self._debugger.GetSelectedTarget()
return target.process if target e... |
Get the process that the current target owns.
:return: An lldb object representing the process (lldb.SBProcess) that this target owns.
| _get_process | python | plasma-umass/ChatDBG | src/chatdbg/chatdbg_lldb.py | https://github.com/plasma-umass/ChatDBG/blob/master/src/chatdbg/chatdbg_lldb.py | Apache-2.0 |
def llm_debug(self, command: str):
"""
{
"name": "debug",
"description": "The `debug` function runs an LLDB command on the stopped program and gets the response.",
"parameters": {
"type": "object",
"properties": {
"c... |
{
"name": "debug",
"description": "The `debug` function runs an LLDB command on the stopped program and gets the response.",
"parameters": {
"type": "object",
"properties": {
"command": {
"type": "st... | llm_debug | python | plasma-umass/ChatDBG | src/chatdbg/chatdbg_lldb.py | https://github.com/plasma-umass/ChatDBG/blob/master/src/chatdbg/chatdbg_lldb.py | Apache-2.0 |
def onecmd(self, line: str) -> bool:
"""
Override to stash the results in our history.
"""
if not line:
# blank -- let super call back to into onecmd
return super().onecmd(line)
else:
hist_file = CaptureOutput(self.stdout)
self.stdo... |
Override to stash the results in our history.
| onecmd | python | plasma-umass/ChatDBG | src/chatdbg/chatdbg_pdb.py | https://github.com/plasma-umass/ChatDBG/blob/master/src/chatdbg/chatdbg_pdb.py | Apache-2.0 |