| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def pop_statement_description_entry(d):
"""
extracts the description for statements and removes the description entry from the document
a statement can only have one description
example:
the features definition
- or:
- description: statement description
- number: 1
d... |
extracts the description for statements and removes the description entry from the document
a statement can only have one description
example:
the features definition
- or:
- description: statement description
- number: 1
description: feature description
becomes
... | pop_statement_description_entry | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def unique(sequence):
"""deduplicate the items in the given sequence, returning a list with the same order.
via: https://stackoverflow.com/a/58666031
"""
seen = set()
return [x for x in sequence if not (x in seen or seen.add(x))] # type: ignore [func-returns-value] | deduplicate the items in the given sequence, returning a list with the same order.
via: https://stackoverflow.com/a/58666031
| unique | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
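The `unique` row above relies on `set.add` returning `None`; a minimal runnable restatement of that order-preserving deduplication idiom, with a usage example (the `dict.fromkeys` alternative shown is equivalent on Python 3.7+, where dicts preserve insertion order):

```python
def unique(sequence):
    # order-preserving dedup: `seen.add(x)` returns None (falsy),
    # so it records x as a side effect without changing the predicate
    seen = set()
    return [x for x in sequence if not (x in seen or seen.add(x))]

assert unique([3, 1, 3, 2, 1]) == [3, 1, 2]
# since Python 3.7, dicts preserve insertion order, so this is equivalent:
assert list(dict.fromkeys([3, 1, 3, 2, 1])) == [3, 1, 2]
```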
def get_dependencies(self, namespaces: dict[str, list["Rule"]]) -> set[str]:
"""
fetch the names of rules this rule relies upon.
these are only the direct dependencies; a user must
compute the transitive dependency graph themselves, if they want it.
Args:
namespaces: map... |
fetch the names of rules this rule relies upon.
these are only the direct dependencies; a user must
compute the transitive dependency graph themselves, if they want it.
Args:
namespaces: mapping from namespace name to rules in it.
see `index_rules_by_namespace`.
... | get_dependencies | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
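Since `get_dependencies` returns only direct dependencies, a caller wanting the transitive set would perform a standard graph walk. A hedged sketch over a hypothetical name-to-direct-dependencies map (this input shape and function name are illustrative, not capa's actual API):

```python
def transitive_dependencies(direct: dict[str, set[str]], root: str) -> set[str]:
    """Worklist walk over a name -> direct-dependency map
    (a hypothetical shape; not capa's actual return type)."""
    seen: set[str] = set()
    todo = [root]
    while todo:
        name = todo.pop()
        for dep in direct.get(name, set()):
            if dep not in seen:
                seen.add(dep)
                todo.append(dep)
    return seen

deps = {"a": {"b"}, "b": {"c"}, "c": set()}
result = transitive_dependencies(deps, "a")  # reaches "b" directly, "c" transitively
```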
def extract_subscope_rules(self):
"""
scan through the statements of this rule,
replacing subscope statements with `match` references to a newly created rule,
which are yielded from this routine.
note: this mutates the current rule.
example::
for derived_ru... |
scan through the statements of this rule,
replacing subscope statements with `match` references to a newly created rule,
which are yielded from this routine.
note: this mutates the current rule.
example::
for derived_rule in rule.extract_subscope_rules():
... | extract_subscope_rules | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def extract_all_features(self) -> set[Feature]:
"""
recursively extracts all feature statements in this rule.
returns:
set: A set of all feature statements contained within this rule.
"""
if not isinstance(self.statement, ceng.Statement):
# For rules with... |
recursively extracts all feature statements in this rule.
returns:
set: A set of all feature statements contained within this rule.
| extract_all_features | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def get_rules_and_dependencies(rules: list[Rule], rule_name: str) -> Iterator[Rule]:
"""
from the given collection of rules, select a rule and its dependencies (transitively).
"""
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
namespaces... |
from the given collection of rules, select a rule and its dependencies (transitively).
| get_rules_and_dependencies | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def ensure_rule_dependencies_are_met(rules: list[Rule]) -> None:
"""
raise an exception if a rule dependency does not exist.
raises:
InvalidRule: if a dependency is not met.
"""
# we evaluate `rules` multiple times, so if it's a generator, realize it into a list.
rules = list(rules)
n... |
raise an exception if a rule dependency does not exist.
raises:
InvalidRule: if a dependency is not met.
| ensure_rule_dependencies_are_met | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def index_rules_by_namespace(rules: list[Rule]) -> dict[str, list[Rule]]:
"""
compute the rules that fit into each namespace found within the given rules.
for example, given:
- c2/shell :: create reverse shell
- c2/file-transfer :: download and write a file
return the index:
c2/she... |
compute the rules that fit into each namespace found within the given rules.
for example, given:
- c2/shell :: create reverse shell
- c2/file-transfer :: download and write a file
return the index:
c2/shell: [create reverse shell]
c2/file-transfer: [download and write a file]
... | index_rules_by_namespace | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def topologically_order_rules(rules: list[Rule]) -> list[Rule]:
"""
order the given rules such that dependencies show up before dependents.
this means that as we match rules, we can add features for the matches, and these
will be matched by subsequent rules if they follow this order.
assumes that ... |
order the given rules such that dependencies show up before dependents.
this means that as we match rules, we can add features for the matches, and these
will be matched by subsequent rules if they follow this order.
assumes that the rule dependency graph is a DAG.
| topologically_order_rules | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
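The ordering contract described above (dependencies before dependents, assuming a DAG) is exactly what the standard library's `graphlib.TopologicalSorter` provides since Python 3.9. A small sketch with invented rule names:

```python
from graphlib import TopologicalSorter

# toy rule-dependency map (names invented for illustration):
# each rule maps to the set of rules it depends on
deps = {
    "create reverse shell": {"connect socket", "spawn shell"},
    "connect socket": set(),
    "spawn shell": set(),
}

# TopologicalSorter takes node -> predecessors, so dependencies
# are yielded before their dependents, matching the contract above
order = list(TopologicalSorter(deps).static_order())
```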
def _index_rules_by_feature(scope: Scope, rules: list[Rule], scores_by_rule: dict[str, int]) -> _RuleFeatureIndex:
"""
Index the given rules by their minimal set of most "uncommon" features required to match.
If absolutely necessary, provide the Regex/Substring/Bytes features
(which are... |
Index the given rules by their minimal set of most "uncommon" features required to match.
If absolutely necessary, provide the Regex/Substring/Bytes features
(which are not hashable and require a scan) that have to match, too.
| _index_rules_by_feature | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def rec(
rule_name: str,
node: Union[Feature, Statement],
) -> Optional[tuple[int, set[Feature]]]:
"""
Walk through a rule's logic tree, picking the features to use for indexing,
returning the feature and an associated score.
The higher the... |
Walk through a rule's logic tree, picking the features to use for indexing,
returning the feature and an associated score.
The higher the score, the more selective the feature is expected to be.
The score is only used internally, to pick the best feature from within AND ... | rec | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def _get_rules_for_scope(rules, scope) -> list[Rule]:
"""
given a collection of rules, collect the rules that are needed at the given scope.
these rules are ordered topologically.
don't include auto-generated "subscope" rules.
we want to include general "lib" rules here - even i... |
given a collection of rules, collect the rules that are needed at the given scope.
these rules are ordered topologically.
don't include auto-generated "subscope" rules.
we want to include general "lib" rules here - even if they are not dependencies of other rules, see #398
| _get_rules_for_scope | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def _extract_subscope_rules(rules) -> list[Rule]:
"""
process the given sequence of rules.
for each one, extract any embedded subscope rules into their own rule.
process these recursively.
then return a list of the refactored rules.
note: this operation mutates the rules... |
process the given sequence of rules.
for each one, extract any embedded subscope rules into their own rule.
process these recursively.
then return a list of the refactored rules.
note: this operation mutates the rules passed in - they may now have `match` statements
fo... | _extract_subscope_rules | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def filter_rules_by_meta(self, tag: str) -> "RuleSet":
"""
return new rule set with rules filtered based on all meta field values, adds all dependency rules
apply tag-based rule filter assuming that all required rules are loaded
can be used to specify selected rules vs. providing a rules... |
return new rule set with rules filtered based on all meta field values, adds all dependency rules
apply tag-based rule filter assuming that all required rules are loaded
can be used to specify selected rules vs. providing a rules child directory where capa cannot resolve
dependencies fr... | filter_rules_by_meta | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def _match(self, scope: Scope, features: FeatureSet, addr: Address) -> tuple[FeatureSet, ceng.MatchResults]:
"""
Match rules from this ruleset at the given scope against the given features.
This routine should act just like `capa.engine.match`, except that it may be more performant.
It ... |
Match rules from this ruleset at the given scope against the given features.
This routine should act just like `capa.engine.match`, except that it may be more performant.
It uses its knowledge of all the rules to evaluate a minimal set of candidate rules for the given features.
| _match | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def match(
self, scope: Scope, features: FeatureSet, addr: Address, paranoid=False
) -> tuple[FeatureSet, ceng.MatchResults]:
"""
Match rules from this ruleset at the given scope against the given features.
This wrapper around _match exists so that we can assert it matches precisely... |
Match rules from this ruleset at the given scope against the given features.
This wrapper around _match exists so that we can assert it matches precisely
the same as `capa.engine.match`, just faster.
This matcher does not handle some edge cases:
- top level NOT statements
... | match | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def collect_rule_file_paths(rule_paths: list[Path]) -> list[Path]:
"""
collect all rule file paths, including those in subdirectories.
"""
rule_file_paths = []
for rule_path in rule_paths:
if not rule_path.exists():
raise IOError(f"rule path {rule_path} does not exist or cannot b... |
collect all rule file paths, including those in subdirectories.
| collect_rule_file_paths | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
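A hedged sketch of the recursive collection described above, using `pathlib.Path.rglob`. The `.yml` extension and the exact error message are assumptions; this is not capa's exact logic:

```python
import tempfile
from pathlib import Path

def collect_rule_file_paths(rule_paths: list[Path]) -> list[Path]:
    """Expand files and directories into a flat list of rule files
    (".yml" extension assumed here; a sketch, not capa's exact logic)."""
    found: list[Path] = []
    for rule_path in rule_paths:
        if not rule_path.exists():
            raise IOError(f"rule path {rule_path} does not exist")
        if rule_path.is_file():
            found.append(rule_path)
        else:
            # rglob descends into subdirectories, per the docstring above
            found.extend(sorted(rule_path.rglob("*.yml")))
    return found

root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "a.yml").write_text("rule: ...")
(root / "sub" / "b.yml").write_text("rule: ...")
names = {p.name for p in collect_rule_file_paths([root])}
```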
def get_rules(
rule_paths: list[RulePath],
cache_dir=None,
on_load_rule: Callable[[RulePath, int, int], None] = on_load_rule_default,
enable_cache: bool = True,
) -> RuleSet:
"""
args:
rule_paths: list of paths to rules files or directories containing rules files
cache_dir: directory... |
args:
rule_paths: list of paths to rules files or directories containing rules files
cache_dir: directory to use for caching rules, or will use the default detected cache directory if None
on_load_rule: callback to invoke before a rule is loaded, use for progress or cancellation
enable_cach... | get_rules | python | mandiant/capa | capa/rules/__init__.py | https://github.com/mandiant/capa/blob/master/capa/rules/__init__.py | Apache-2.0 |
def get_capa_results(args):
"""
run capa against the file at the given path, using the given rules.
args is a tuple, containing:
rules, signatures, format, backend, os, input_file
as provided via the CLI arguments.
args is a tuple because i'm not quite sure how to unpack multiple arguments u... |
run capa against the file at the given path, using the given rules.
args is a tuple, containing:
rules, signatures, format, backend, os, input_file
as provided via the CLI arguments.
args is a tuple because i'm not quite sure how to unpack multiple arguments using `map`.
returns a dict wi... |
def find_subrule_matches(doc: rd.ResultDocument) -> set[str]:
"""
collect the rule names that have been matched as a subrule match.
this way we can avoid displaying entries for things that are too specific.
"""
matches = set()
def rec(node: rd.Match):
if not node.success:
# ... |
collect the rule names that have been matched as a subrule match.
this way we can avoid displaying entries for things that are too specific.
| find_subrule_matches | python | mandiant/capa | scripts/capa-as-library.py | https://github.com/mandiant/capa/blob/master/scripts/capa-as-library.py | Apache-2.0 |
def render_capabilities(doc: rd.ResultDocument, result):
"""
example::
{'CAPABILITY': {'accept command line arguments': 'host-interaction/cli',
'allocate thread local storage (2 matches)': 'host-interaction/process',
'check for time delay via GetTickCount': 'anti-analysis... |
example::
{'CAPABILITY': {'accept command line arguments': 'host-interaction/cli',
'allocate thread local storage (2 matches)': 'host-interaction/process',
'check for time delay via GetTickCount': 'anti-analysis/anti-debugging/debugger-detection',
'check if p... | render_capabilities | python | mandiant/capa | scripts/capa-as-library.py | https://github.com/mandiant/capa/blob/master/scripts/capa-as-library.py | Apache-2.0 |
def render_attack(doc, result):
"""
example::
{'ATT&CK': {'COLLECTION': ['Input Capture::Keylogging [T1056.001]'],
'DEFENSE EVASION': ['Obfuscated Files or Information [T1027]',
'Virtualization/Sandbox Evasion::System Checks '
'... |
example::
{'ATT&CK': {'COLLECTION': ['Input Capture::Keylogging [T1056.001]'],
'DEFENSE EVASION': ['Obfuscated Files or Information [T1027]',
'Virtualization/Sandbox Evasion::System Checks '
'[T1497.001]'],
'DISCOVERY':... | render_attack | python | mandiant/capa | scripts/capa-as-library.py | https://github.com/mandiant/capa/blob/master/scripts/capa-as-library.py | Apache-2.0 |
def render_mbc(doc, result):
"""
example::
{'MBC': {'ANTI-BEHAVIORAL ANALYSIS': ['Debugger Detection::Timing/Delay Check '
'GetTickCount [B0001.032]',
'Emulator Detection [B0004]',
'Virtual ... |
example::
{'MBC': {'ANTI-BEHAVIORAL ANALYSIS': ['Debugger Detection::Timing/Delay Check '
'GetTickCount [B0001.032]',
'Emulator Detection [B0004]',
'Virtual Machine Detection::Instruction '
... | render_mbc | python | mandiant/capa | scripts/capa-as-library.py | https://github.com/mandiant/capa/blob/master/scripts/capa-as-library.py | Apache-2.0 |
def _populate_artifact(sarif_log: dict, meta_data: dict) -> None:
"""
@param sarif_log: dict - sarif data structure including runs
@param meta_data: dict - Capa meta output
@returns None, updates sarif_log via side-effects
"""
sample = meta_data["sample"]
artifact = {
"location": {"u... |
@param sarif_log: dict - sarif data structure including runs
@param meta_data: dict - Capa meta output
@returns None, updates sarif_log via side-effects
| _populate_artifact | python | mandiant/capa | scripts/capa2sarif.py | https://github.com/mandiant/capa/blob/master/scripts/capa2sarif.py | Apache-2.0 |
def _populate_invocations(sarif_log: dict, meta_data: dict) -> None:
"""
@param sarif_log: dict - sarif data structure including runs
@param meta_data: dict - Capa meta output
@returns None, updates sarif_log via side-effects
"""
analysis_time = meta_data["timestamp"]
argv = meta_data["argv"... |
@param sarif_log: dict - sarif data structure including runs
@param meta_data: dict - Capa meta output
@returns None, updates sarif_log via side-effects
| _populate_invocations | python | mandiant/capa | scripts/capa2sarif.py | https://github.com/mandiant/capa/blob/master/scripts/capa2sarif.py | Apache-2.0 |
def _populate_results(sarif_log: dict, data_rules: dict, ghidra_compat: bool) -> None:
"""
@param sarif_log: dict - sarif data structure including runs
@param data_rules: dict - Capa rules output
@returns None, updates sarif_log via side-effects
"""
results = sarif_log["runs"][0]["results"]
#... |
@param sarif_log: dict - sarif data structure including runs
@param data_rules: dict - Capa rules output
@returns None, updates sarif_log via side-effects
| _populate_results | python | mandiant/capa | scripts/capa2sarif.py | https://github.com/mandiant/capa/blob/master/scripts/capa2sarif.py | Apache-2.0 |
def _add_filler_optional(capa_result: dict, sarif_log: dict) -> None:
"""Update sarif file with just enough fields to pass radare tests"""
base_address = capa_result["meta"]["analysis"]["base_address"]["value"]
# Assume there is only one run, and one binary artifact
artifact = sarif_log["runs"][0]["arti... | Update sarif file with just enough fields to pass radare tests | _add_filler_optional | python | mandiant/capa | scripts/capa2sarif.py | https://github.com/mandiant/capa/blob/master/scripts/capa2sarif.py | Apache-2.0 |
def get_features(rule_path: str) -> set[Feature]:
"""
Extracts all features from a given rule file.
Args:
rule_path (str): The path to the rule file to extract features from.
Returns:
set: A set of all feature statements contained within the rule file.
"""
with Path(rule_path).... |
Extracts all features from a given rule file.
Args:
rule_path (str): The path to the rule file to extract features from.
Returns:
set: A set of all feature statements contained within the rule file.
| get_features | python | mandiant/capa | scripts/detect_duplicate_features.py | https://github.com/mandiant/capa/blob/master/scripts/detect_duplicate_features.py | Apache-2.0 |
def append_func_cmt(bv, va, cmt):
"""
add the given comment to the given function,
if it doesn't already exist.
"""
func = bv.get_function_at(va)
if not func:
raise ValueError("not a function")
if cmt in func.comment:
return
func.comment = func.comment + "\n" + cmt |
add the given comment to the given function,
if it doesn't already exist.
| append_func_cmt | python | mandiant/capa | scripts/import-to-bn.py | https://github.com/mandiant/capa/blob/master/scripts/import-to-bn.py | Apache-2.0 |
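The idempotence pattern in `append_func_cmt` (skip if the comment already appears as a substring) can be sketched independently of the Binary Ninja API. The function name here is invented and the empty-string handling differs slightly from the snippet above, which would produce a leading newline on an empty comment:

```python
def append_comment(existing: str, cmt: str) -> str:
    # idempotent append, mirroring the substring check above:
    # calling twice with the same comment is a no-op
    if cmt in existing:
        return existing
    return cmt if not existing else existing + "\n" + cmt

c1 = append_comment("", "capa: create reverse shell")
c2 = append_comment(c1, "capa: create reverse shell")  # no duplicate added
```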
def append_func_cmt(va, cmt, repeatable=False):
"""
add the given comment to the given function,
if it doesn't already exist.
"""
func = ida_funcs.get_func(va)
if not func:
raise ValueError("not a function")
existing = ida_funcs.get_func_cmt(func, repeatable) or ""
if cmt in exi... |
add the given comment to the given function,
if it doesn't already exist.
| append_func_cmt | python | mandiant/capa | scripts/import-to-ida.py | https://github.com/mandiant/capa/blob/master/scripts/import-to-ida.py | Apache-2.0 |
def _is_static_scope_compatible(parent: Rule, child: Rule) -> bool:
"""
A child rule's scope is compatible if it is equal to or lower than the parent scope.
"""
if parent.scopes.static and not child.scopes.static and child.is_subscope_rule():
# this is ok: the child isn't a ... |
A child rule's scope is compatible if it is equal to or lower than the parent scope.
| _is_static_scope_compatible | python | mandiant/capa | scripts/lint.py | https://github.com/mandiant/capa/blob/master/scripts/lint.py | Apache-2.0 |
def _is_dynamic_scope_compatible(parent: Rule, child: Rule) -> bool:
"""
A child rule's scope is compatible if it is equal to or lower than the parent scope.
"""
if parent.scopes.dynamic and not child.scopes.dynamic and child.is_subscope_rule():
# this is ok: the child isn't... |
A child rule's scope is compatible if it is equal to or lower than the parent scope.
| _is_dynamic_scope_compatible | python | mandiant/capa | scripts/lint.py | https://github.com/mandiant/capa/blob/master/scripts/lint.py | Apache-2.0 |
def lint(ctx: Context):
"""
Returns: dict[str, tuple[int, int]]
- # lints failed
- # lints warned
"""
ret = {}
source_rules = [rule for rule in ctx.rules.rules.values() if not rule.is_subscope_rule()]
n_rules: int = len(source_rules)
with capa.helpers.CapaProgressBar(transie... |
Returns: dict[str, tuple[int, int]]
- # lints failed
- # lints warned
| lint | python | mandiant/capa | scripts/lint.py | https://github.com/mandiant/capa/blob/master/scripts/lint.py | Apache-2.0 |
def collect_samples(samples_path: Path) -> dict[str, Path]:
"""
recurse through the given path, collecting all file paths, indexed by their content sha256, md5, and filename.
"""
samples = {}
for path in samples_path.rglob("*"):
if path.suffix in [".viv", ".idb", ".i64", ".frz", ".fnames"]:
... |
recurse through the given path, collecting all file paths, indexed by their content sha256, md5, and filename.
| collect_samples | python | mandiant/capa | scripts/lint.py | https://github.com/mandiant/capa/blob/master/scripts/lint.py | Apache-2.0 |
def __init__(self):
"""Download and store in memory the STIX data on instantiation."""
if self.kill_chain_name == "":
raise ValueError(f"Kill chain name not specified in class {self.__class__.__name__}")
if self.url == "":
raise ValueError(f"URL not specified in class {s... | Download and store in memory the STIX data on instantiation. | __init__ | python | mandiant/capa | scripts/setup-linter-dependencies.py | https://github.com/mandiant/capa/blob/master/scripts/setup-linter-dependencies.py | Apache-2.0 |
def _remove_deprecated_objects(stix_objects) -> list[AttackPattern]:
"""Remove any revoked or deprecated objects from queries made to the data source."""
return list(
filter(
lambda x: x.get("x_mitre_deprecated", False) is False and x.get("revoked", False) is False,
... | Remove any revoked or deprecated objects from queries made to the data source. | _remove_deprecated_objects | python | mandiant/capa | scripts/setup-linter-dependencies.py | https://github.com/mandiant/capa/blob/master/scripts/setup-linter-dependencies.py | Apache-2.0 |
def _get_parent_technique_from_subtechnique(self, technique: AttackPattern) -> AttackPattern:
"""Get parent technique of a sub technique using the technique ID TXXXX.YYY"""
sub_id = technique["external_references"][0]["external_id"].split(".")[0]
parent_technique = self._remove_deprecated_object... | Get parent technique of a sub technique using the technique ID TXXXX.YYY | _get_parent_technique_from_subtechnique | python | mandiant/capa | scripts/setup-linter-dependencies.py | https://github.com/mandiant/capa/blob/master/scripts/setup-linter-dependencies.py | Apache-2.0 |
def run(self) -> dict[str, dict[str, str]]:
"""Iterate over every technique over every tactic. If the technique is a sub technique, then
we also search for the parent technique name.
"""
logging.info("Starting extraction...")
data: dict[str, dict[str, str]] = {}
for tacti... | Iterate over every technique of every tactic. If the technique is a sub-technique, then
we also search for the parent technique name.
| run | python | mandiant/capa | scripts/setup-linter-dependencies.py | https://github.com/mandiant/capa/blob/master/scripts/setup-linter-dependencies.py | Apache-2.0 |
def _get_tactics(self) -> list[dict]:
"""Override _get_tactics to edit the tactic name for Micro-objective"""
tactics = super()._get_tactics()
# We don't want the Micro-objective string inside objective names
for tactic in tactics:
tactic["name"] = tactic["name"].replace(" Mi... | Override _get_tactics to edit the tactic name for Micro-objective | _get_tactics | python | mandiant/capa | scripts/setup-linter-dependencies.py | https://github.com/mandiant/capa/blob/master/scripts/setup-linter-dependencies.py | Apache-2.0 |
def render_matches_by_function(doc: rd.ResultDocument):
"""
like:
function at 0x1000321a with 33 features:
- get hostname
- initialize Winsock library
function at 0x10003286 with 63 features:
- create thread
- terminate thread
function at 0x100034... |
like:
function at 0x1000321a with 33 features:
- get hostname
- initialize Winsock library
function at 0x10003286 with 63 features:
- create thread
- terminate thread
function at 0x10003415 with 116 features:
- write file
- send d... | render_matches_by_function | python | mandiant/capa | scripts/show-capabilities-by-function.py | https://github.com/mandiant/capa/blob/master/scripts/show-capabilities-by-function.py | Apache-2.0 |
def xfail(condition, reason=None):
"""
context manager that wraps a block that is expected to fail in some cases.
when it does fail (and is expected), then mark this as pytest.xfail.
if it's unexpected, raise an exception, so the test fails.
example::
# this test:
# - passes on Lin... |
context manager that wraps a block that is expected to fail in some cases.
when it does fail (and is expected), then mark this as pytest.xfail.
if it's unexpected, raise an exception, so the test fails.
example::
# this test:
# - passes on Linux if foo() works
# - fails on L... | xfail | python | mandiant/capa | tests/fixtures.py | https://github.com/mandiant/capa/blob/master/tests/fixtures.py | Apache-2.0 |
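The expected-failure wrapper described above can be sketched with `contextlib.contextmanager`. This is a simplified stand-in: the real fixture calls `pytest.xfail(reason)` where this sketch simply swallows the error, and only the re-raise behavior for unexpected failures is shown:

```python
import contextlib

@contextlib.contextmanager
def xfail(condition, reason=None):
    """Sketch: swallow the error when failure is expected
    (the real fixture would call pytest.xfail(reason) instead),
    re-raise when it is not, so the test fails."""
    try:
        yield
    except Exception:
        if not condition:
            raise  # unexpected failure: propagate to fail the test

# expected failure: swallowed
with xfail(True, reason="known broken on this platform"):
    raise ValueError("boom")

# unexpected failure: propagates
raised = False
try:
    with xfail(False):
        raise ValueError("boom")
except ValueError:
    raised = True
```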
def fixup_viv(path: Path, extractor):
"""
vivisect fixups to overcome differences between backends
"""
if "3b13b" in path.name:
# vivisect only recognizes calling thunk function at 0x10001573
extractor.vw.makeFunction(0x10006860)
if "294b8d" in path.name:
# see vivisect/#561
... |
vivisect fixups to overcome differences between backends
| fixup_viv | python | mandiant/capa | tests/fixtures.py | https://github.com/mandiant/capa/blob/master/tests/fixtures.py | Apache-2.0 |
def get_sample_md5_by_name(name):
"""used by IDA tests to ensure the correct IDB is loaded"""
if name == "mimikatz":
return "5f66b82558ca92e54e77f216ef4c066c"
elif name == "kernel32":
return "e80758cf485db142fca1ee03a34ead05"
elif name == "kernel32-64":
return "a8565440629ac87f6f... | used by IDA tests to ensure the correct IDB is loaded | get_sample_md5_by_name | python | mandiant/capa | tests/fixtures.py | https://github.com/mandiant/capa/blob/master/tests/fixtures.py | Apache-2.0 |
def parametrize(params, values, **kwargs):
"""
extend `pytest.mark.parametrize` to pretty-print features.
by default, it renders objects as an opaque value.
ref: https://docs.pytest.org/en/2.9.0/example/parametrize.html#different-options-for-test-ids
rendered ID might look something like:
mi... |
extend `pytest.mark.parametrize` to pretty-print features.
by default, it renders objects as an opaque value.
ref: https://docs.pytest.org/en/2.9.0/example/parametrize.html#different-options-for-test-ids
rendered ID might look something like:
mimikatz-function=0x403BAC-api(CryptDestroyKey)-True... | parametrize | python | mandiant/capa | tests/fixtures.py | https://github.com/mandiant/capa/blob/master/tests/fixtures.py | Apache-2.0 |
def standardize_posix_str(psx_str):
"""fixture test passes the PosixPath to the test data
params: psx_str - PosixPath() to the test data
return: string that matches test-id sample name
"""
if "Practical Malware Analysis Lab" in str(psx_str):
# <PosixPath>/'Practical Malware Analysis Lab 16... | fixture test passes the PosixPath to the test data
params: psx_str - PosixPath() to the test data
return: string that matches test-id sample name
| standardize_posix_str | python | mandiant/capa | tests/test_ghidra_features.py | https://github.com/mandiant/capa/blob/master/tests/test_ghidra_features.py | Apache-2.0 |
def check_input_file(wanted):
"""check that test is running on the loaded sample
params: wanted - PosixPath() passed from test arg
"""
import capa.ghidra.helpers as ghidra_helpers
found = ghidra_helpers.get_file_md5()
sample_name = standardize_posix_str(wanted)
if not found.startswith(fi... | check that test is running on the loaded sample
params: wanted - PosixPath() passed from test arg
| check_input_file | python | mandiant/capa | tests/test_ghidra_features.py | https://github.com/mandiant/capa/blob/master/tests/test_ghidra_features.py | Apache-2.0 |
def nocollect(f):
"don't collect the decorated function as a pytest test"
f.__test__ = False
return f | don't collect the decorated function as a pytest test | nocollect | python | mandiant/capa | tests/test_ida_features.py | https://github.com/mandiant/capa/blob/master/tests/test_ida_features.py | Apache-2.0 |
def match(rules, features, va, scope=Scope.FUNCTION):
"""
use all matching algorithms and verify that they compute the same result.
then, return those results to the caller so they can make their asserts.
"""
features1, matches1 = capa.engine.match(rules, features, va)
ruleset = capa.rules.Rule... |
use all matching algorithms and verify that they compute the same result.
then, return those results to the caller so they can make their asserts.
| match | python | mandiant/capa | tests/test_match.py | https://github.com/mandiant/capa/blob/master/tests/test_match.py | Apache-2.0 |
def test_match_adds_matched_rule_feature():
"""show that using `match` adds a feature for matched rules."""
rule = textwrap.dedent(
"""
rule:
meta:
name: test rule
scopes:
static: function
dynamic: process
... | show that using `match` adds a feature for matched rules. | test_match_adds_matched_rule_feature | python | mandiant/capa | tests/test_match.py | https://github.com/mandiant/capa/blob/master/tests/test_match.py | Apache-2.0 |
def test_match_matched_rules():
"""show that using `match` adds a feature for matched rules."""
rules = [
capa.rules.Rule.from_yaml(
textwrap.dedent(
"""
rule:
meta:
name: test rule1
scopes:
... | show that using `match` adds a feature for matched rules. | test_match_matched_rules | python | mandiant/capa | tests/test_match.py | https://github.com/mandiant/capa/blob/master/tests/test_match.py | Apache-2.0 |
def get_labels_length(file_path):
"""
Return labels and their count in a file.
Args:
file_path (str): The path to the file containing the labels.
Returns:
list: labels; int: The number of labels in the file.
"""
with open(file_path, encoding = "UTF-8") as f:
tokens = [t... |
Return labels and their count in a file.
Args:
file_path (str): The path to the file containing the labels.
Returns:
list: labels; int: The number of labels in the file.
| get_labels_length | python | FutureUniant/Tailor | app/src/algorithm/base/emoti_voice/config/config.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/emoti_voice/config/config.py | Apache-2.0 |
def griffin_lim(magnitudes, stft_fn, n_iters=30):
"""
PARAMS
------
magnitudes: spectrogram magnitudes
stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods
"""
angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size())))
angles = angles.astype(np.float32)
... |
PARAMS
------
magnitudes: spectrogram magnitudes
stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods
| griffin_lim | python | FutureUniant/Tailor | app/src/algorithm/base/emoti_voice/models/prompt_tts_modified/feats.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/emoti_voice/models/prompt_tts_modified/feats.py | Apache-2.0 |
def flat_accuracy(preds, labels):
"""
Function to calculate the accuracy of our predictions vs labels
"""
pred_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat) |
Function to calculate the accuracy of our predictions vs labels
| flat_accuracy | python | FutureUniant/Tailor | app/src/algorithm/base/emoti_voice/models/prompt_tts_modified/simbert.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/emoti_voice/models/prompt_tts_modified/simbert.py | Apache-2.0 |
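The argmax-then-compare computation in `flat_accuracy` can be restated without NumPy for clarity; this pure-Python stand-in performs the same calculation as the vectorized version above:

```python
def flat_accuracy(preds: list[list[float]], labels: list[int]) -> float:
    # pure-Python stand-in for the NumPy version above:
    # take the argmax of each prediction row, then compare to labels
    pred_flat = [max(range(len(row)), key=row.__getitem__) for row in preds]
    return sum(p == t for p, t in zip(pred_flat, labels)) / len(labels)

preds = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]]
labels = [0, 1, 1, 1]
acc = flat_accuracy(preds, labels)  # rows argmax to [0, 1, 0, 1] -> 3/4 correct
```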
def forward(self, xs: torch.Tensor, x_masks: torch.Tensor = None) -> torch.Tensor:
"""Calculate forward propagation.
Args:
xs (Tensor): Batch of input sequences (B, Tmax, idim).
x_masks (ByteTensor): Batch of masks indicating padded part (B, Tmax).
Returns:
... | Calculate forward propagation.
Args:
xs (Tensor): Batch of input sequences (B, Tmax, idim).
x_masks (ByteTensor): Batch of masks indicating padded part (B, Tmax).
Returns:
Tensor: Batch of predicted sequences (B, Tmax, 1).
| forward | python | FutureUniant/Tailor | app/src/algorithm/base/emoti_voice/models/prompt_tts_modified/modules/variance.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/emoti_voice/models/prompt_tts_modified/modules/variance.py | Apache-2.0 |
def resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
r"""ResNeXt-50 32x4d model from
`"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on Im... | ResNeXt-50 32x4d model from
`"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
| resnext50_32x4d | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/face3d/models/networks.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/face3d/models/networks.py | Apache-2.0 |
def resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
r"""ResNeXt-101 32x8d model from
`"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ... | ResNeXt-101 32x8d model from
`"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
| resnext101_32x8d | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/face3d/models/networks.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/face3d/models/networks.py | Apache-2.0 |
def wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
r"""Wide ResNet-50-2 model from
`"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_.
The model is the same as ResNet except for the bottleneck number of channels
which is twice larger in every ... | Wide ResNet-50-2 model from
`"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_.
The model is the same as ResNet except for the bottleneck number of channels
which is twice larger in every block. The number of channels in outer 1x1
convolutions is the same, e.g. last block in ResNet-50 h... | wide_resnet50_2 | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/face3d/models/networks.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/face3d/models/networks.py | Apache-2.0 |
def wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
r"""Wide ResNet-101-2 model from
`"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_.
The model is the same as ResNet except for the bottleneck number of channels
which is twice larger in ever... | Wide ResNet-101-2 model from
`"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_.
The model is the same as ResNet except for the bottleneck number of channels
which is twice larger in every block. The number of channels in outer 1x1
convolutions is the same, e.g. last block in ResNet-50 ... | wide_resnet101_2 | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/face3d/models/networks.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/face3d/models/networks.py | Apache-2.0 |
def align_img(img, lm, lm3D, mask=None, target_size=224., rescale_factor=102.):
"""
Return:
transparams --numpy.array (raw_W, raw_H, scale, tx, ty)
img_new --PIL.Image (target_size, target_size, 3)
lm_new --numpy.array (68, 2), y direction is opposite to ... |
Return:
transparams --numpy.array (raw_W, raw_H, scale, tx, ty)
img_new --PIL.Image (target_size, target_size, 3)
lm_new --numpy.array (68, 2), y direction is opposite to v direction
mask_new --PIL.Image (target_size, target_size)
... | align_img | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/face3d/util/preprocess.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/face3d/util/preprocess.py | Apache-2.0 |
def kp2gaussian(kp, spatial_size, kp_variance):
"""
Transform a keypoint into a Gaussian-like representation
"""
mean = kp['value']
coordinate_grid = make_coordinate_grid(spatial_size, mean.type())
number_of_leading_dimensions = len(mean.shape) - 1
shape = (1,) * number_of_leading_dimensions ... |
Transform a keypoint into a Gaussian-like representation
| kp2gaussian | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/facerender/modules/util.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/facerender/modules/util.py | Apache-2.0 |
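The `kp2gaussian` body above is cut off mid-shape-juggling. A NumPy sketch of the same idea, with the keypoint given in normalized [-1, 1] coordinates (the convention the repo's `make_coordinate_grid` helper suggests; treat the grid layout as an assumption):

```python
import numpy as np

def kp_to_gaussian(kp_xy, spatial_size, kp_variance):
    """Render one keypoint (x, y) in [-1, 1] coordinates as a Gaussian heatmap."""
    h, w = spatial_size
    xs = np.linspace(-1, 1, w)
    ys = np.linspace(-1, 1, h)
    grid = np.stack(np.meshgrid(xs, ys), axis=-1)   # (h, w, 2), last axis = (x, y)
    diff = grid - np.asarray(kp_xy, dtype=float)    # offset of each pixel from the keypoint
    return np.exp(-0.5 * (diff ** 2).sum(axis=-1) / kp_variance)

# heatmap peaks at the pixel nearest the keypoint
heat = kp_to_gaussian((0.25, -0.5), (64, 64), kp_variance=0.01)
```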
def _data_parallel_master(self, intermediates):
"""Reduce the sum and square-sum, compute the statistics, and broadcast it."""
# Always using same "device order" makes the ReduceAdd operation faster.
# Thanks to: Tete Xiao (http://tetexiao.com/)
intermediates = sorted(intermediates, ke... | Reduce the sum and square-sum, compute the statistics, and broadcast it. | _data_parallel_master | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/batchnorm.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/batchnorm.py | Apache-2.0 |
def _compute_mean_std(self, sum_, ssum, size):
"""Compute the mean and standard-deviation with sum and square-sum. This method
also maintains the moving average on the master device."""
assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
mean = sum... | Compute the mean and standard-deviation with sum and square-sum. This method
also maintains the moving average on the master device. | _compute_mean_std | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/batchnorm.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/batchnorm.py | Apache-2.0 |
def __init__(self, master_callback):
"""
Args:
master_callback: a callback to be invoked after having collected messages from slave devices.
"""
self._master_callback = master_callback
self._queue = queue.Queue()
self._registry = collections.OrderedDict()
... |
Args:
master_callback: a callback to be invoked after having collected messages from slave devices.
| __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/comm.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/comm.py | Apache-2.0 |
def register_slave(self, identifier):
"""
Register a slave device.
Args:
identifier: an identifier, usually the device id.
Returns: a `SlavePipe` object which can be used to communicate with the master device.
"""
if self._activated:
assert ... |
Register a slave device.
Args:
identifier: an identifier, usually the device id.
Returns: a `SlavePipe` object which can be used to communicate with the master device.
| register_slave | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/comm.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/comm.py | Apache-2.0 |
def run_master(self, master_msg):
"""
Main entry for the master device in each forward pass.
The messages are first collected from each device (including the master device), and then
a callback is invoked to compute the message to be sent back to each device
(including t... |
Main entry for the master device in each forward pass.
The messages are first collected from each device (including the master device), and then
a callback is invoked to compute the message to be sent back to each device
(including the master device).
Args:
... | run_master | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/comm.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/comm.py | Apache-2.0 |
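The `SyncMaster` rows above describe a rendezvous: each slave posts a message, the master collects one message per registered slave, runs a callback over all of them, and hands each slave its share of the result. A stripped-down, thread-based sketch of that flow (the real class additionally sorts intermediates by device and wraps replies in `SlavePipe` objects):

```python
import queue
import threading

class MiniSyncMaster:
    """Minimal sketch of the master/slave rendezvous described above."""

    def __init__(self, master_callback):
        self._master_callback = master_callback
        self._queue = queue.Queue()    # messages from slaves to the master
        self._results = {}             # one reply queue per registered slave

    def register_slave(self, identifier):
        self._results[identifier] = queue.Queue()

    def run_slave(self, identifier, msg):
        self._queue.put((identifier, msg))
        return self._results[identifier].get()   # block until the master replies

    def run_master(self, master_msg):
        intermediates = [(None, master_msg)]
        for _ in self._results:                  # collect one message per slave
            intermediates.append(self._queue.get())
        results = self._master_callback(intermediates)
        for identifier, res in results[1:]:      # reply to each slave
            self._results[identifier].put(res)
        return results[0][1]

# reduce-sum across the master and two slave threads
master = MiniSyncMaster(lambda msgs: [(i, sum(m for _, m in msgs)) for i, _ in msgs])
master.register_slave(0)
master.register_slave(1)

out = {}
def slave(i):
    out[i] = master.run_slave(i, i + 1)

threads = [threading.Thread(target=slave, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
total = master.run_master(10)    # 10 + 1 + 2 = 13
for t in threads:
    t.join()
```

Blocking `Queue.get` calls are what synchronize the parties: `run_master` cannot return before every slave has posted, and no slave unblocks before the master has replied.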
def execute_replication_callbacks(modules):
"""
Execute a replication callback `__data_parallel_replicate__` on each module created by original replication.
The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
Note that, as all modules are isomorphic, we assign eac... |
Execute a replication callback `__data_parallel_replicate__` on each module created by original replication.
The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
Note that, as all modules are isomorphic, we assign each sub-module a context
(shared among multi... | execute_replication_callbacks | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/replicate.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/replicate.py | Apache-2.0 |
def patch_replication_callback(data_parallel):
"""
Monkey-patch an existing `DataParallel` object. Add the replication callback.
Useful when you have customized `DataParallel` implementation.
Examples:
> sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
> sync_bn = DataParal... |
Monkey-patch an existing `DataParallel` object. Add the replication callback.
Useful when you have customized `DataParallel` implementation.
Examples:
> sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
> sync_bn = DataParallel(sync_bn, device_ids=[0, 1])
> patch_replic... | patch_replication_callback | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/replicate.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/facerender/sync_batchnorm/replicate.py | Apache-2.0 |
def num_frames(length, fsize, fshift):
"""Compute number of time frames of spectrogram
"""
pad = (fsize - fshift)
if length % fshift == 0:
M = (length + pad * 2 - fsize) // fshift + 1
else:
M = (length + pad * 2 - fsize) // fshift + 2
return M | Compute number of time frames of spectrogram
| num_frames | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/utils/audio.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/utils/audio.py | Apache-2.0 |
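The frame-count helper above is complete, so both branches can be checked directly; the sample values (16 kHz-ish lengths, 800-sample frames, 200-sample hop) are illustrative.

```python
def num_frames(length, fsize, fshift):
    """Number of spectrogram time frames for a signal of `length` samples,
    frame size `fsize`, and hop `fshift`, with (fsize - fshift) padding on
    both sides, exactly as in the row above."""
    pad = fsize - fshift
    if length % fshift == 0:
        M = (length + pad * 2 - fsize) // fshift + 1
    else:
        M = (length + pad * 2 - fsize) // fshift + 2
    return M

# length divisible by the hop: (16000 + 1200 - 800) // 200 + 1
print(num_frames(16000, 800, 200))  # → 83
```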
def get_landmark(self, img_np):
"""get landmark with dlib
:return: np.array shape=(68, 2)
"""
with torch.no_grad():
dets = self.predictor.det_net.detect_faces(img_np, 0.97)
if len(dets) == 0:
return None
det = dets[0]
img = img_np[int(det... | get landmark with dlib
:return: np.array shape=(68, 2)
| get_landmark | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/utils/croper.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/utils/croper.py | Apache-2.0 |
def split_coeff(coeffs):
"""
Return:
coeffs_dict -- a dict of torch.tensors
Parameters:
coeffs -- torch.tensor, size (B, 256)
"""
id_coeffs = coeffs[:, :80]
exp_coeffs = coeffs[:, 80: 144]
tex_coeffs = coeffs[:, 144: 224]
... |
Return:
coeffs_dict -- a dict of torch.tensors
Parameters:
coeffs -- torch.tensor, size (B, 256)
| split_coeff | python | FutureUniant/Tailor | app/src/algorithm/base/sadtalker/src/utils/preprocess.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sadtalker/src/utils/preprocess.py | Apache-2.0 |
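The slicing logic of `split_coeff` is visible up to index 224. A NumPy sketch of that split; since the row's own docstring says size (B, 256) but the remaining layout (pose, lighting, translation) is cut off, everything past 224 is lumped into one `rest` block here rather than guessed.

```python
import numpy as np

def split_coeff_np(coeffs):
    """Split a (B, D) coefficient matrix into the named groups visible
    in the row above; indices past 224 are returned as one block."""
    return {
        "id":   coeffs[:, :80],      # identity coefficients
        "exp":  coeffs[:, 80:144],   # expression coefficients
        "tex":  coeffs[:, 144:224],  # texture coefficients
        "rest": coeffs[:, 224:],     # pose / lighting / translation (layout not shown)
    }

batch = np.zeros((2, 256))
shapes = {k: v.shape for k, v in split_coeff_np(batch).items()}
print(shapes)  # {'id': (2, 80), 'exp': (2, 64), 'tex': (2, 80), 'rest': (2, 32)}
```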
def __init__(
self,
model: SAM2Base,
points_per_side: Optional[int] = 32,
points_per_batch: int = 64,
pred_iou_thresh: float = 0.8,
stability_score_thresh: float = 0.95,
stability_score_offset: float = 1.0,
mask_threshold: float = 0.0,
box_nms_thre... |
Using a SAM 2 model, generates masks for the entire image.
Generates a grid of point prompts over the image, then filters
low quality and duplicate masks. The default settings are chosen
for SAM 2 with a HieraL backbone.
Arguments:
model (Sam): The SAM 2 model to use ... | __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/automatic_mask_generator.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/automatic_mask_generator.py | Apache-2.0 |
def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2AutomaticMaskGenerator":
"""
Load a pretrained model from the Hugging Face hub.
Arguments:
model_id (str): The Hugging Face repository ID.
**kwargs: Additional arguments to pass to the model constructor.
Retu... |
Load a pretrained model from the Hugging Face hub.
Arguments:
model_id (str): The Hugging Face repository ID.
**kwargs: Additional arguments to pass to the model constructor.
Returns:
(SAM2AutomaticMaskGenerator): The loaded model.
| from_pretrained | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/automatic_mask_generator.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/automatic_mask_generator.py | Apache-2.0 |
def generate(self, image: np.ndarray) -> List[Dict[str, Any]]:
"""
Generates masks for the given image.
Arguments:
image (np.ndarray): The image to generate masks for, in HWC uint8 format.
Returns:
list(dict(str, any)): A list over records for masks. Each record is... |
Generates masks for the given image.
Arguments:
image (np.ndarray): The image to generate masks for, in HWC uint8 format.
Returns:
list(dict(str, any)): A list over records for masks. Each record is
a dict containing the following keys:
segment... | generate | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/automatic_mask_generator.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/automatic_mask_generator.py | Apache-2.0 |
def postprocess_small_regions(
mask_data: MaskData, min_area: int, nms_thresh: float
) -> MaskData:
"""
Removes small disconnected regions and holes in masks, then reruns
box NMS to remove any new duplicates.
Edits mask_data in place.
Requires open-cv as a dependenc... |
Removes small disconnected regions and holes in masks, then reruns
box NMS to remove any new duplicates.
Edits mask_data in place.
Requires open-cv as a dependency.
| postprocess_small_regions | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/automatic_mask_generator.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/automatic_mask_generator.py | Apache-2.0 |
def __init__(
self,
sam_model: SAM2Base,
mask_threshold=0.0,
max_hole_area=0.0,
max_sprinkle_area=0.0,
**kwargs,
) -> None:
"""
Uses SAM-2 to calculate the image embedding for an image, and then
allows repeated, efficient mask prediction given p... |
Uses SAM-2 to calculate the image embedding for an image, and then
allows repeated, efficient mask prediction given prompts.
Arguments:
sam_model (Sam-2): The model to use for mask prediction.
mask_threshold (float): The threshold to use when converting mask logits
... | __init__ | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2ImagePredictor":
"""
Load a pretrained model from the Hugging Face hub.
Arguments:
model_id (str): The Hugging Face repository ID.
**kwargs: Additional arguments to pass to the model constructor.
Returns:
... |
Load a pretrained model from the Hugging Face hub.
Arguments:
model_id (str): The Hugging Face repository ID.
**kwargs: Additional arguments to pass to the model constructor.
Returns:
(SAM2ImagePredictor): The loaded model.
| from_pretrained | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def set_image(
self,
image: Union[np.ndarray, Image],
) -> None:
"""
Calculates the image embeddings for the provided image, allowing
masks to be predicted with the 'predict' method.
Arguments:
image (np.ndarray or PIL Image): The input image to embed in RG... |
Calculates the image embeddings for the provided image, allowing
masks to be predicted with the 'predict' method.
Arguments:
image (np.ndarray or PIL Image): The input image to embed in RGB format. The image should be in HWC format if np.ndarray, or WHC format if PIL Image
... | set_image | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def set_image_batch(
self,
image_list: List[Union[np.ndarray]],
) -> None:
"""
Calculates the image embeddings for the provided image batch, allowing
masks to be predicted with the 'predict_batch' method.
Arguments:
image_list (List[np.ndarray]): The input ... |
Calculates the image embeddings for the provided image batch, allowing
masks to be predicted with the 'predict_batch' method.
Arguments:
image_list (List[np.ndarray]): The input images to embed in RGB format. The image should be in HWC format if np.ndarray
with pixel values... | set_image_batch | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def predict_batch(
self,
point_coords_batch: List[np.ndarray] = None,
point_labels_batch: List[np.ndarray] = None,
box_batch: List[np.ndarray] = None,
mask_input_batch: List[np.ndarray] = None,
multimask_output: bool = True,
return_logits: bool = False,
no... | This function is very similar to predict(...), however it is used for batched mode, when the model is expected to generate predictions on multiple images.
It returns a tuple of lists of masks, ious, and low_res_masks_logits.
| predict_batch | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def predict(
self,
point_coords: Optional[np.ndarray] = None,
point_labels: Optional[np.ndarray] = None,
box: Optional[np.ndarray] = None,
mask_input: Optional[np.ndarray] = None,
multimask_output: bool = True,
return_logits: bool = False,
normalize_coords... |
Predict masks for the given input prompts, using the currently set image.
Arguments:
point_coords (np.ndarray or None): A Nx2 array of point prompts to the
model. Each point is in (X,Y) in pixels.
point_labels (np.ndarray or None): A length N array of labels for the
... | predict | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def _predict(
self,
point_coords: Optional[torch.Tensor],
point_labels: Optional[torch.Tensor],
boxes: Optional[torch.Tensor] = None,
mask_input: Optional[torch.Tensor] = None,
multimask_output: bool = True,
return_logits: bool = False,
img_idx: int = -1,
... |
Predict masks for the given input prompts, using the currently set image.
Input prompts are batched torch tensors and are expected to already be
transformed to the input frame using SAM2Transforms.
Arguments:
point_coords (torch.Tensor or None): A BxNx2 array of point prompts... | _predict | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def get_image_embedding(self) -> torch.Tensor:
"""
Returns the image embeddings for the currently set image, with
shape 1xCxHxW, where C is the embedding dimension and (H,W) are
the embedding spatial dimension of SAM (typically C=256, H=W=64).
"""
if not self._is_image_se... |
Returns the image embeddings for the currently set image, with
shape 1xCxHxW, where C is the embedding dimension and (H,W) are
the embedding spatial dimension of SAM (typically C=256, H=W=64).
| get_image_embedding | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def reset_predictor(self) -> None:
"""
Resets the image embeddings and other state variables.
"""
self._is_image_set = False
self._features = None
self._orig_hw = None
self._is_batch = False |
Resets the image embeddings and other state variables.
| reset_predictor | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_image_predictor.py | Apache-2.0 |
def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2VideoPredictor":
"""
Load a pretrained model from the Hugging Face hub.
Arguments:
model_id (str): The Hugging Face repository ID.
**kwargs: Additional arguments to pass to the model constructor.
Returns:
... |
Load a pretrained model from the Hugging Face hub.
Arguments:
model_id (str): The Hugging Face repository ID.
**kwargs: Additional arguments to pass to the model constructor.
Returns:
(SAM2VideoPredictor): The loaded model.
| from_pretrained | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _obj_id_to_idx(self, inference_state, obj_id):
"""Map client-side object id to model-side object index."""
obj_idx = inference_state["obj_id_to_idx"].get(obj_id, None)
if obj_idx is not None:
return obj_idx
# This is a new object id not sent to the server before. We only... | Map client-side object id to model-side object index. | _obj_id_to_idx | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _get_orig_video_res_output(self, inference_state, any_res_masks):
"""
Resize the object scores to the original video resolution (video_res_masks)
and apply non-overlapping constraints for final output.
"""
device = inference_state["device"]
video_H = inference_state["... |
Resize the object scores to the original video resolution (video_res_masks)
and apply non-overlapping constraints for final output.
| _get_orig_video_res_output | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _consolidate_temp_output_across_obj(
self,
inference_state,
frame_idx,
is_cond,
run_mem_encoder,
consolidate_at_video_res=False,
):
"""
Consolidate the per-object temporary outputs in `temp_output_dict_per_obj` on
a frame into a single outp... |
Consolidate the per-object temporary outputs in `temp_output_dict_per_obj` on
a frame into a single output for all objects, including
1) fill any missing objects either from `output_dict_per_obj` (if they exist in
`output_dict_per_obj` for this frame) or leave them as placeholder val... | _consolidate_temp_output_across_obj | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _get_empty_mask_ptr(self, inference_state, frame_idx):
"""Get a dummy object pointer based on an empty mask on the current frame."""
# A dummy (empty) mask with a single object
batch_size = 1
mask_inputs = torch.zeros(
(batch_size, 1, self.image_size, self.image_size),
... | Get a dummy object pointer based on an empty mask on the current frame. | _get_empty_mask_ptr | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def propagate_in_video_preflight(self, inference_state):
"""Prepare inference_state and consolidate temporary outputs before tracking."""
# Tracking has started and we don't allow adding new objects until session is reset.
inference_state["tracking_has_started"] = True
batch_size = self.... | Prepare inference_state and consolidate temporary outputs before tracking. | propagate_in_video_preflight | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def propagate_in_video(
self,
inference_state,
start_frame_idx=None,
max_frame_num_to_track=None,
reverse=False,
):
"""Propagate the input points across frames to track in the entire video."""
self.propagate_in_video_preflight(inference_state)
output_... | Propagate the input points across frames to track in the entire video. | propagate_in_video | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _add_output_per_object(
self, inference_state, frame_idx, current_out, storage_key
):
"""
Split a multi-object output into per-object output slices and add them into
`output_dict_per_obj`. The resulting slices share the same tensor storage.
"""
maskmem_features = ... |
Split a multi-object output into per-object output slices and add them into
`output_dict_per_obj`. The resulting slices share the same tensor storage.
| _add_output_per_object | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def reset_state(self, inference_state):
"""Remove all input points or mask in all frames throughout the video."""
self._reset_tracking_results(inference_state)
# Remove all object ids
inference_state["obj_id_to_idx"].clear()
inference_state["obj_idx_to_id"].clear()
infere... | Remove all input points or mask in all frames throughout the video. | reset_state | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _reset_tracking_results(self, inference_state):
"""Reset all tracking inputs and results across the videos."""
for v in inference_state["point_inputs_per_obj"].values():
v.clear()
for v in inference_state["mask_inputs_per_obj"].values():
v.clear()
for v in inf... | Reset all tracking inputs and results across the videos. | _reset_tracking_results | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _get_image_feature(self, inference_state, frame_idx, batch_size):
"""Compute the image features on a given frame."""
# Look up in the cache first
image, backbone_out = inference_state["cached_features"].get(
frame_idx, (None, None)
)
if backbone_out is None:
... | Compute the image features on a given frame. | _get_image_feature | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _run_single_frame_inference(
self,
inference_state,
output_dict,
frame_idx,
batch_size,
is_init_cond_frame,
point_inputs,
mask_inputs,
reverse,
run_mem_encoder,
prev_sam_mask_logits=None,
):
"""Run tracking on a sing... | Run tracking on a single frame based on current inputs and previous memory. | _run_single_frame_inference | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _run_memory_encoder(
self, inference_state, frame_idx, batch_size, high_res_masks, is_mask_from_pts
):
"""
Run the memory encoder on `high_res_masks`. This is usually after applying
non-overlapping constraints to object scores. Since their scores changed, their
memory als... |
Run the memory encoder on `high_res_masks`. This is usually after applying
non-overlapping constraints to object scores. Since their scores changed, their
memory also needs to be computed again with the memory encoder.
| _run_memory_encoder | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _get_maskmem_pos_enc(self, inference_state, current_out):
"""
`maskmem_pos_enc` is the same across frames and objects, so we cache it as
a constant in the inference session to reduce session storage size.
"""
model_constants = inference_state["constants"]
# "out_maskm... |
`maskmem_pos_enc` is the same across frames and objects, so we cache it as
a constant in the inference session to reduce session storage size.
| _get_maskmem_pos_enc | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _clear_non_cond_mem_around_input(self, inference_state, frame_idx):
"""
Remove the non-conditioning memory around the input frame. When users provide
correction clicks, the surrounding frames' non-conditioning memories can still
contain outdated object appearance information and coul... |
Remove the non-conditioning memory around the input frame. When users provide
correction clicks, the surrounding frames' non-conditioning memories can still
contain outdated object appearance information and could confuse the model.
This method clears those non-conditioning memories su... | _clear_non_cond_mem_around_input | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/sam2_video_predictor.py | Apache-2.0 |
def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor:
"""Positionally encode points that are normalized to [0,1]."""
# assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape
coords = 2 * coords - 1
coords = coords @ self.positional_encoding_gaussian_matrix
... | Positionally encode points that are normalized to [0,1]. | _pe_encoding | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/position_encoding.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/position_encoding.py | Apache-2.0 |
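`_pe_encoding` is truncated after the matrix multiply. A standalone NumPy sketch of the same random-Fourier-feature encoding; the sin/cos concatenation completing the tail is the standard recipe for this kind of positional encoding and should be read as an assumption, not the repo's verbatim code.

```python
import numpy as np

def random_fourier_pe(coords, gaussian_matrix):
    """Positionally encode points normalized to [0, 1] (sketch)."""
    coords = 2 * coords - 1                  # [0, 1] -> [-1, 1]
    coords = coords @ gaussian_matrix        # project onto random directions
    coords = 2 * np.pi * coords
    # assumed tail: concatenate sin and cos features
    return np.concatenate([np.sin(coords), np.cos(coords)], axis=-1)

rng = np.random.default_rng(0)
G = rng.standard_normal((2, 64))             # 2-D points -> 64 random frequencies
pts = rng.random((5, 2))                     # five points in [0, 1]^2
enc = random_fourier_pe(pts, G)
print(enc.shape)  # → (5, 128)
```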
def forward(self, size: Tuple[int, int]) -> torch.Tensor:
"""Generate positional encoding for a grid of the specified size."""
h, w = size
device: Any = self.positional_encoding_gaussian_matrix.device
grid = torch.ones((h, w), device=device, dtype=torch.float32)
y_embed = grid.cu... | Generate positional encoding for a grid of the specified size. | forward | python | FutureUniant/Tailor | app/src/algorithm/base/sam2/sam2/modeling/position_encoding.py | https://github.com/FutureUniant/Tailor/blob/master/app/src/algorithm/base/sam2/sam2/modeling/position_encoding.py | Apache-2.0 |