Upload 87 files
- bitagent_subnet-main/.circleci/config.yml +168 -0
- bitagent_subnet-main/.gitignore +184 -0
- bitagent_subnet-main/LICENSE +22 -0
- bitagent_subnet-main/README.md +474 -0
- bitagent_subnet-main/bitagent/__init__.py +11 -0
- bitagent_subnet-main/bitagent/criteria/__init__.py +1 -0
- bitagent_subnet-main/bitagent/criteria/criterion.py +95 -0
- bitagent_subnet-main/bitagent/criteria/default_criteria.py +59 -0
- bitagent_subnet-main/bitagent/criteria/tool_call_criteria.py +422 -0
- bitagent_subnet-main/bitagent/criteria/utils.py +25 -0
- bitagent_subnet-main/bitagent/datasources/__init__.py +1 -0
- bitagent_subnet-main/bitagent/datasources/loaders.py +63 -0
- bitagent_subnet-main/bitagent/datasources/tools.py +194 -0
- bitagent_subnet-main/bitagent/helpers/dockers.py +44 -0
- bitagent_subnet-main/bitagent/helpers/llms.py +85 -0
- bitagent_subnet-main/bitagent/helpers/logging.py +38 -0
- bitagent_subnet-main/bitagent/helpers/sbert.py +45 -0
- bitagent_subnet-main/bitagent/helpers/string_parse.py +34 -0
- bitagent_subnet-main/bitagent/helpers/tool_parsing.py +92 -0
- bitagent_subnet-main/bitagent/miners/__init__.py +9 -0
- bitagent_subnet-main/bitagent/miners/default_miner.py +30 -0
- bitagent_subnet-main/bitagent/miners/mock_miner.py +32 -0
- bitagent_subnet-main/bitagent/protocol.py +66 -0
- bitagent_subnet-main/bitagent/schemas/chat.py +39 -0
- bitagent_subnet-main/bitagent/schemas/tool.py +21 -0
- bitagent_subnet-main/bitagent/tasks/__init__.py +3 -0
- bitagent_subnet-main/bitagent/tasks/constants.py +7 -0
- bitagent_subnet-main/bitagent/tasks/task.py +105 -0
- bitagent_subnet-main/bitagent/tasks/tool_call_task.py +191 -0
- bitagent_subnet-main/bitagent/validator/__init__.py +10 -0
- bitagent_subnet-main/bitagent/validator/constants.py +5 -0
- bitagent_subnet-main/bitagent/validator/forward.py +129 -0
- bitagent_subnet-main/bitagent/validator/initiation.py +151 -0
- bitagent_subnet-main/bitagent/validator/offline_task.py +425 -0
- bitagent_subnet-main/bitagent/validator/reward.py +266 -0
- bitagent_subnet-main/common/__init__.py +29 -0
- bitagent_subnet-main/common/base/__init__.py +0 -0
- bitagent_subnet-main/common/base/miner.py +266 -0
- bitagent_subnet-main/common/base/neuron.py +187 -0
- bitagent_subnet-main/common/base/validator.py +576 -0
- bitagent_subnet-main/common/utils/__init__.py +4 -0
- bitagent_subnet-main/common/utils/config.py +284 -0
- bitagent_subnet-main/common/utils/misc.py +112 -0
- bitagent_subnet-main/common/utils/shell.py +47 -0
- bitagent_subnet-main/common/utils/uids.py +113 -0
- bitagent_subnet-main/common/utils/weight_utils.py +216 -0
- bitagent_subnet-main/contrib/CODE_REVIEW_DOCS.md +72 -0
- bitagent_subnet-main/contrib/CONTRIBUTING.md +213 -0
- bitagent_subnet-main/contrib/DEVELOPMENT_WORKFLOW.md +165 -0
- bitagent_subnet-main/contrib/STYLE.md +348 -0
bitagent_subnet-main/.circleci/config.yml
ADDED
@@ -0,0 +1,168 @@

```yaml
version: 2.1

orbs:
  python: circleci/python@2.1.1
  python-lib: dialogue/python-lib@0.1.55
  # coveralls: coveralls/coveralls@1.0.6

jobs:
  black:
    resource_class: small
    parameters:
      python-version:
        type: string
    docker:
      - image: cimg/python:<< parameters.python-version >>

    steps:
      - checkout

      - restore_cache:
          name: Restore cached black venv
          keys:
            - v1-pypi-py-black-<< parameters.python-version >>

      - run:
          name: Update & Activate black venv
          command: |
            python -m venv env/
            . env/bin/activate
            python -m pip install --upgrade pip
            pip install black

      - save_cache:
          name: Save cached black venv
          paths:
            - "env/"
          key: v1-pypi-py-black-<< parameters.python-version >>

      - run:
          name: Black format check
          command: |
            . env/bin/activate
            black --line-length 79 --exclude '(env|venv|.eggs)' --check .

  pylint:
    resource_class: small
    parameters:
      python-version:
        type: string
    docker:
      - image: cimg/python:<< parameters.python-version >>

    steps:
      - checkout

      - run:
          name: Install Pylint
          command: |
            python -m venv env/
            . env/bin/activate
            pip install pylint

      - run:
          name: Pylint check
          command: |
            . env/bin/activate
            pylint --fail-on=W,E,F --exit-zero ./

  check_compatibility:
    parameters:
      python_version:
        type: string
    docker:
      - image: cimg/python:3.10
    steps:
      - checkout
      - run:
          name: Check if requirements files have changed
          command: ./scripts/check_requirements_changes.sh
      - run:
          name: Install dependencies and Check compatibility
          command: |
            if [ "$REQUIREMENTS_CHANGED" == "true" ]; then
              sudo apt-get update
              sudo apt-get install -y jq curl
              ./scripts/check_compatibility.sh << parameters.python_version >>
            else
              echo "Skipping compatibility checks..."
            fi

  build:
    resource_class: medium
    parallelism: 2
    parameters:
      python-version:
        type: string
    docker:
      - image: cimg/python:<< parameters.python-version >>

    steps:
      - checkout

      - restore_cache:
          name: Restore cached venv
          keys:
            - v1-pypi-py<< parameters.python-version >>-{{ checksum "requirements.txt" }}
            - v1-pypi-py<< parameters.python-version >>

      - run:
          name: Update & Activate venv
          command: |
            python -m venv env/
            . env/bin/activate
            python -m pip install --upgrade pip

      - save_cache:
          name: Save cached venv
          paths:
            - "env/"
          key: v1-pypi-py<< parameters.python-version >>-{{ checksum "requirements.txt" }}

      - run:
          name: Install Bittensor Subnet Template
          command: |
            . env/bin/activate
            pip install -e .

      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results

  coveralls:
    docker:
      - image: cimg/python:3.10
    steps:
      - run:
          name: Combine Coverage
          command: |
            pip3 install --upgrade coveralls
            coveralls --finish --rcfile .coveragerc || echo "Failed to upload coverage"

workflows:
  compatibility_checks:
    jobs:
      - check_compatibility:
          python_version: "3.8"
          name: check-compatibility-3.8
      - check_compatibility:
          python_version: "3.9"
          name: check-compatibility-3.9
      - check_compatibility:
          python_version: "3.10"
          name: check-compatibility-3.10
      - check_compatibility:
          python_version: "3.11"
          name: check-compatibility-3.11

  pr-requirements:
    jobs:
      - black:
          python-version: "3.8.12"
      - pylint:
          python-version: "3.8.12"
      - build:
          matrix:
            parameters:
              python-version: ["3.9.13", "3.10.6", "3.11.4"]
```
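The cache keys in this config embed both the Python version and a checksum of `requirements.txt`, so a cached venv is reused only until the dependency list changes. A minimal sketch of that idea in Python (the hashing scheme below is illustrative only; CircleCI's `{{ checksum ... }}` template computes its own digest):

```python
import hashlib

def cache_key(python_version: str, requirements: bytes) -> str:
    """Mimic the shape of the config's versioned cache key:
    v1-pypi-py<version>-<checksum of requirements.txt>."""
    digest = hashlib.sha256(requirements).hexdigest()[:12]
    return f"v1-pypi-py{python_version}-{digest}"

old = cache_key("3.10.6", b"bittensor==6.9.3\n")
new = cache_key("3.10.6", b"bittensor==7.0.0\n")
print(old != new)  # a changed requirements.txt yields a different key
```

The second, checksum-less `restore_cache` key acts as a fallback: if no exact match exists, the most recent cache for that Python version is restored and `pip` only has to install the delta.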
bitagent_subnet-main/.gitignore
ADDED
@@ -0,0 +1,184 @@

```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
.venvsglang
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

testing/
env/
app.config.js
mnemonics.txt
run_it_all.sh

wandb
TODO
bitagent.data*
.vscode
repo
.cometml-runs
old.*

node_modules
package-lock.json
package.json
*net.py
*.old

notebooks
Notebooks
```
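Most of these entries use standard glob syntax; for instance, `*.py[cod]` covers compiled Python artifacts (`.pyc`, `.pyo`, `.pyd`) in one pattern. A quick way to sanity-check a pattern against a filename is Python's `fnmatch`, which implements the same shell-style globbing for simple patterns (Git adds extra semantics for `/` and `**` that `fnmatch` does not model):

```python
from fnmatch import fnmatch

# *.py[cod] matches compiled artifacts, but not plain .py sources
for name in ("module.pyc", "module.pyo", "module.pyd", "module.py"):
    print(name, fnmatch(name, "*.py[cod]"))
```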
bitagent_subnet-main/LICENSE
ADDED
@@ -0,0 +1,22 @@

```
MIT License

Copyright (c) 2023 Opentensor
Copyright (c) 2023 RogueTensor

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
bitagent_subnet-main/README.md
ADDED
@@ -0,0 +1,474 @@

<div align="center">

# **BitAgent Subnet (#20) on Bittensor** <!-- omit in toc -->
[](https://discord.com/channels/799672011265015819/1175085112703078400)
[](https://opensource.org/licenses/MIT)

---

## Agency for Your World Through Natural Language <!-- omit in toc -->

**Communications:** [BitAgent Discord](https://discord.com/channels/799672011265015819/1194736998250975332)\
**Downstream Applications:** [GoGoAgent](https://gogoagent.ai/) • [MSP Tech](https://msptech.ai)
</div>

---
- [Introduction](#introduction)
- [Get Running](#get-running)
  - [BitAgent](#bitagent)
  - [Validator](#validator)
    - [Dependencies](#dependencies)
    - [Installation](#installation)
    - [vLLM Setup for Validators](#vllm-setup-for-validators)
    - [sglang Setup for Validators](#sglang-setup-for-validators)
    - [Recommended Startup](#recommended-startup)
    - [Alternative Startup](#alternative-startup)
    - [Verify Validator is Working](#verify-validator-is-working)
    - [Hardware Requirements](#validator-hardware-requirements)
  - [Miner](#miner)
    - [Hardware Requirements](#miner-hardware-requirements)
    - [Default Miner](#default-miner)
    - [Miner Emissions](#miner-emissions)
    - [Miner Considerations](#miner-considerations)
      - [Example Task](#example-task)
    - [Miner Feedback](#miner-feedback)
- [Advanced](#advanced)
- [FAQ](#faq)
- [License](#license)

## Introduction

**Quick Pitch**: BitAgent revolutionizes how you manage tasks and workflows across platforms, merging the capabilities of large language models (LLMs) with the convenience of your favorite apps such as web browsers, Discord, and custom integrations. BitAgent empowers users to seamlessly integrate intelligent agents, providing personalized assistance and integrated task automation.

**Key Objective** - provide intelligent agency to simplify and automate tasks in your day-to-day

**GoGoAgent - Our Application** - [https://gogoagent.ai](https://gogoagent.ai) \
**MSPTech - Real world business case** - [https://MSPTech.ai](https://msptech.ai)

**Key Features**
- Working our way up the [Berkeley Function Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html#leaderboard) (BFCL)
- No API / subscription requirements
- Run light models (8B parameter) for huge impact
- FINETUNED MODEL evaluation of tool-calling language model fine-tunes
- MINER HOSTED evaluation of miners running tool-calling language models, allowing applications to scale on top of SN20
- Miners receive [transparent feedback](#miner-feedback)
- And a BONUS for getting this far - are you tired of waiting for registration slots? Check out [register.sh](./scripts/register.sh)

---

## Get Running

- BitAgent is a competitive subnet, meaning miners succeed and fail based on how well they perform on tasks.
- **Make sure to test your miner on Testnet 76 before ever considering registering for Subnet 20.**
- Newly registered miners will start at the median score per validator and move up or down depending on their performance.
- Before getting too far, please make sure you've looked over the [Bittensor documentation](https://docs.bittensor.com/) for your needs.
- The min compute requirements are [noted below for Validators](#hardware-requirements).
- See [FAQ](#faq) for a few more details on compute requirements for validators and miners.
- The minimum requirements for a miner are determined by the resources needed to run a competitive, performant tool-calling LLM.

### BitAgent
This repository requires Python 3.10 or higher.
To install and get running, simply clone this repository and install the requirements:
```bash
git clone https://github.com/RogueTensor/bitagent_subnet
cd bitagent_subnet
# at this point, it's recommended (but not required) that you use a venv; the next two lines are venv-specific
python -m venv .venv # replace .venv with the name you'd like to use for your primary venv
source ./.venv/bin/activate
python -m pip install -e .
```

Then make sure to register your intended wallet (coldkey, hotkey) to Subnet 20:
```bash
btcli subnet register --wallet.path <YOUR PATH: e.g., ~/.bittensor/wallets> --wallet.name $coldkey --wallet.hotkey $hotkey --subtensor.network finney --netuid 20
```

### Validator

#### Dependencies

You must have the following:

- A system with at least 48 GB of VRAM
- Python >= 3.10
- Docker with [GPU support](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)

#### Installation

To set up Docker with GPU support, you can follow either of these guides:

- [Official Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
- [Quick and Dirty Stack Overflow Guide](https://stackoverflow.com/questions/75118992/docker-error-response-from-daemon-could-not-select-device-driver-with-capab)

Install [PM2](https://pm2.io/docs/runtime/guide/installation/) and the [`jq` package](https://jqlang.github.io/jq/) on your system.\
**On Linux**:
```bash
sudo apt update && sudo apt install jq && sudo apt install npm && sudo npm install pm2 -g && pm2 update
```
**On Mac OS**
```bash
brew update && brew install jq && brew install npm && sudo npm install pm2 -g && pm2 update
```

#### vLLM Setup for Validators

Validators must spin up their own LLM (specifically Mistral 7B).
Note: we previously ran the LLMs inside the validator code via the transformers package, but pivoted away from that due to the inefficiency of running models with vanilla transformers. Hosting the models with llama.cpp, oobabooga, vLLM, or TGI is a much better option, as these provide additional functionality.

To run with vLLM, you can do the following:

```bash
sudo docker run -d -p 8000:8000 --gpus all --ipc host --name mistral-instruct docker.io/vllm/vllm-openai:latest --model thesven/Mistral-7B-Instruct-v0.3-GPTQ --max-model-len 8912 --quantization gptq --dtype half --gpu-memory-utilization 0.45
```

This will run the LLM on port 8000. To change the port, change the host port in the `-p <host port>:<container port>` parameter above, and use `--openai-api-base http://localhost:<new_port>/v1` in your params to point SN20 at the vLLM model.
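Since vLLM exposes an OpenAI-compatible API, you can sanity-check the endpoint with a plain HTTP request once the container is up. A minimal sketch (the base URL and model name simply mirror the `docker run` command above; adjust them if you changed the port or model):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # host port from `-p 8000:8000` above
MODEL = "thesven/Mistral-7B-Instruct-v0.3-GPTQ"

def chat_request_body(prompt: str, max_tokens: int = 64) -> dict:
    """JSON body for the OpenAI-compatible /chat/completions route."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query(prompt: str) -> str:
    """Send one chat completion request and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_request_body(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If `query("Say hello")` returns text rather than a connection error, the validator's LLM dependency is reachable.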

#### sglang Setup for Validators

You'll need to create a virtual env and install the requirements for sglang:
```bash
python3 -m venv .venvsglang
# note: change cu121 in this path according to this page: https://docs.flashinfer.ai/installation.html
./.venvsglang/bin/pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
./.venvsglang/bin/pip install -r requirements.sglang.txt
```

**Test that it's working with:**
```bash
.venvsglang/bin/python -m sglang.launch_server --model-path Salesforce/xLAM-7b-r --port 8028 --host 0.0.0.0 --mem-fraction-static 0.40
```

You should not run out of memory, and it should eventually show that the Salesforce model loaded correctly.
|
| 143 |
+
|
| 144 |
+
#### Recommended Startup
|
| 145 |
+
|
| 146 |
+
Make sure you do the [vLLM setup](#vllm-setup-for-validators) above and the [sglang setup](#sglang-setup-for-validators) above.
|
| 147 |
+
|
| 148 |
+
```bash
|
| 149 |
+
# for mainnet with AUTO UPDATES (recommended)
|
| 150 |
+
pm2 start run.sh --name bitagent_validators_autoupdate -- --wallet.path <YOUR PATH: e.g., ~/.bittensor/wallets> --wallet.name <your-wallet-name> --wallet.hotkey <your-wallet-hot-key> --netuid 20
|
| 151 |
+
```
|
| 152 |
+
|
| 153 |
+
Double check everything is working by following [these steps](#verify-validator-is-working).
|
| 154 |
+
|
| 155 |
+
#### Alternative Startup
|
| 156 |
+
|
| 157 |
+
Make sure you do the [vLLM setup](#vllm-setup-for-validators) above and the [sglang setup](#sglang-setup-for-validators) above.
|
| 158 |
+
|
| 159 |
+
```bash
|
| 160 |
+
# for testnet
|
| 161 |
+
python3 neurons/validator.py --netuid 76 --subtensor.network test --wallet.path <YOUR PATH: e.g., ~/.bittensor/wallets> --wallet.name <COLDKEY> --wallet.hotkey <HOTKEY>
|
| 162 |
+
|
| 163 |
+
# for mainnet
|
| 164 |
+
pm2 start neurons/validator.py --interpreter python3 -- --netuid 20 --subtensor.network <LOCAL/FINNEY/TEST> --wallet.path <YOUR PATH: e.g., ~/.bittensor/wallets> --wallet.name <COLDKEY> --wallet.hotkey <HOTKEY> --axon.port <PORT>
|
| 165 |
+
```
|
| 166 |
+
|
| 167 |
+
Double check everything is working by following [these steps](#verify-validator-is-working).
|
| 168 |
+
|
| 169 |
+
#### Verify Validator is Working
|
| 170 |
+
|
| 171 |
+
After you've launched and pm2 is running, here's what to expect:\
|
| 172 |
+
- You'll see a LOT (one per mind) of IsAlive() queries like this:\
|
| 173 |
+
```bash
|
| 174 |
+
1|bitagent | 2024-11-17 23:25:59.156 | TRACE | bittensor:loggingmachine.py:432 | dendrite | <-- | 3354 B | IsAlive | 5GbnkQJ6zfsWa9iX4ZtwKccXZv4s8MTt2LSQFmS8CMgjkSgx | 213.180.0.45:20019 | 200 | Success
|
| 175 |
+
1|bitagent | 2024-11-17 23:26:04.135 | TRACE | bittensor:loggingmachine.py:432 | dendrite | <-- | 3327 B | IsAlive | 5E7eqUChR4WUnRwNAUXRNUZhhjEzTfdeGAvDyf99aygVGYBJ | 176.55.1.98:8091 | 408 | Request timeout after 5.0 seconds
|
| 176 |
+
1|bitagent | 2024-11-17 23:26:04.180 | TRACE | bittensor:loggingmachine.py:432 | dendrite | <-- | 3331 B | IsAlive | 5EHQoRqwMHG3QVVpsSZBPHJD87SEwGn6FhTSR3LCj8XiHVUC | 109.206.196.130:8888 | 408 | Request timeout after 5.0 seconds
|
| 177 |
+
```
|
| 178 |
+
- After the IsAlive() queries, you'll start to see QueryTask queries followed by QueryResult queries, like these:\
```bash
1|bitagent | 2024-11-17 23:53:20.322 | ERROR | bittensor:loggingmachine.py:457 | - ContentTypeError#aefadd84-8586-4faa-9206-e048c2b85114: 404, message='Attempt to decode JSON with unexpected mimetype: text/html', url='http://52.220.128.145:32222/QueryTask' -
1|bitagent | 2024-11-17 23:53:20.323 | TRACE | bittensor:loggingmachine.py:432 | dendrite | <-- | 27205 B | QueryTask | 5GjGiziPatj7mf4is5JaDPJq4jbPnagoeiSHe4TfFERafM7X | 52.220.128.145:32222 | 422 | Failed to parse response: 404, message='Attempt to decode JSON with unexpected mimetype: text/html', url='http://52.220.128.145:32222/QueryTask'
1|bitagent | 2024-11-17 23:53:21.708 | TRACE | bittensor:loggingmachine.py:432 | dendrite | <-- | 27522 B | QueryTask | 5GbnkQJ6zfsWa9iX4ZtwKccXZv4s8MTt2LSQFmS8CMgjkSgx | 213.180.0.45:20019 | 500 | Internal Server Error #b36dc761-1035-44d0-b88d-24fe9ccc7e1e
1|bitagent | 2024-11-17 23:53:21.806 | TRACE | bittensor:loggingmachine.py:432 | dendrite | --> | 5418 B | QueryResult | 5GbnkQJ6zfsWa9iX4ZtwKccXZv4s8MTt2LSQFmS8CMgjkSgx | 213.180.0.45:20019 | 0 | Success
1|bitagent | 2024-11-17 23:53:23.200 | TRACE | bittensor:loggingmachine.py:432 | dendrite | <-- | 5578 B | QueryResult | 5GbnkQJ6zfsWa9iX4ZtwKccXZv4s8MTt2LSQFmS8CMgjkSgx | 213.180.0.45:20019 | 200 | Success
```
- These logs above let you know that the ONLINE / MINER-HOSTED querying is working.
- Finally, you'll want to check that the miners' HF (Hugging Face) models are being evaluated OFFLINE.
- Check your `pm2 log <ID> | grep OFFLINE` output for lines like these (from testnet):\
```bash
1|bitagent | 2024-11-17 23:26:07.154 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Starting offline mode for competition 1-1
1|bitagent | 2024-11-17 23:26:08.831 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Starting offline task
1|bitagent | 2024-11-17 23:26:12.529 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Miner HF model names: [None, 'Salesforce/xLAM-7b-r']
1|bitagent | 2024-11-17 23:26:12.529 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Unique miner HF model names: ['Salesforce/xLAM-7b-r']
1|bitagent | 2024-11-17 23:26:12.529 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Generating tasks
1|bitagent | 2024-11-17 23:28:21.793 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Generated 1000 tasks of 1000 total
1|bitagent | 2024-11-17 23:28:21.793 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Running tasks for model Salesforce/xLAM-7b-r
1|bitagent | 2024-11-17 23:28:21.939 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Starting server for model Salesforce/xLAM-7b-r
1|bitagent | 2024-11-17 23:28:21.941 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Started server for model Salesforce/xLAM-7b-r, waiting for it to start on port 8028 (could take several minutes)
1|bitagent | 2024-11-17 23:29:25.469 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Server for model Salesforce/xLAM-7b-r started
1|bitagent | 2024-11-17 23:29:25.470 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Getting LLM responses for model Salesforce/xLAM-7b-r
1|bitagent | 2024-11-17 23:38:54.257 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Got 1000 LLM responses for model: Salesforce/xLAM-7b-r
1|bitagent | 2024-11-17 23:38:54.258 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Terminating server for model: Salesforce/xLAM-7b-r
1|bitagent | 2024-11-17 23:38:54.965 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Terminated server for model: Salesforce/xLAM-7b-r
1|bitagent | 2024-11-17 23:38:55.030 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Processing rewards for model: Salesforce/xLAM-7b-r, for miners: [160]
1|bitagent | 2024-11-17 23:38:58.530 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE Scattered rewards: [np.float64(0.16442660138718893)]
1|bitagent | 2024-11-17 23:38:58.531 | DEBUG | bittensor:loggingmachine.py:437 | Updated moving avg OFFLINE scores for Competition 1-1: [-0.5 -0.5 -0.5 -0.5 -0.5 -0.5
1|bitagent | 2024-11-17 23:38:58.533 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Deleting model from HF cache: Salesforce/xLAM-7b-r
1|bitagent | 2024-11-17 23:39:01.198 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Model 'Salesforce/xLAM-7b-r' has been removed from the cache.
1|bitagent | 2024-11-17 23:39:01.199 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Finished processing offline tasks
1|bitagent | 2024-11-17 23:39:02.765 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Starting offline mode for competition 1-1
1|bitagent | 2024-11-17 23:39:03.147 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Starting offline task
1|bitagent | 2024-11-17 23:39:03.638 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: Miner HF model names: [None]
1|bitagent | 2024-11-17 23:39:03.638 | DEBUG | bittensor:loggingmachine.py:437 | OFFLINE: No unique miner HF model names to evaluate in OFFLINE mode
```
- If you're seeing all of this output, your validator is working!

#### Validator Hardware Requirements

Validators have hardware requirements. Two LLMs need to run simultaneously:
- The 1st LLM, `thesven/Mistral-7B-Instruct-v0.3-GPTQ`, runs in 10GB to 20GB of VRAM; this model is used to alter tasks before they go out to miners.
- The 2nd LLM is each miner's tool-calling model, fetched from Hugging Face and evaluated OFFLINE (one at a time) for FINETUNED SUBMISSION; it takes 20GB to 30GB of VRAM.

### Miner
If you just want to run the miner without the [script](./scripts/setup_and_run.sh) or are connecting to mainnet:
```bash
# for testing (use testnet 76)
python3 neurons/miner.py --netuid 76 --subtensor.network test --wallet.path <YOUR PATH: e.g., ~/.bittensor/wallets> --wallet.name <COLDKEY> --wallet.hotkey <HOTKEY>

# for mainnet
# --neuron.device: could be cuda:0, cuda:1 depending on which GPU device
# --wallet.path: 8.2.0 has a bug that requires the wallet path to be provided
# --wallet.name / --wallet.hotkey: must be created using the bittensor-cli
# --miner-hf-model-name-to-submit: submit your own fine-tune with this param
# --hf-model-name-to-run: run the best tool-calling LLM you can
# --openai-api-base: point to your vLLM instance of the model you are running
# --logging.debug: run in debug mode; alternatively --logging.trace for trace mode
# --log_level trace: for trace logs
# --axon.port: VERY IMPORTANT - set the port to be one of the open TCP ports on your machine
pm2 start neurons/miner.py --interpreter python3 -- \
    --netuid 20 \
    --subtensor.network <finney/local/test> \
    --neuron.device cuda \
    --wallet.path <YOUR PATH: e.g., ~/.bittensor/wallets> \
    --wallet.name <your wallet> \
    --wallet.hotkey <your hotkey> \
    --miner-hf-model-name-to-submit Salesforce/xLAM-7b-r \
    --hf-model-name-to-run Salesforce/xLAM-7b-r \
    --openai-api-base http://localhost:8000/v1 \
    --logging.debug \
    --log_level trace \
    --axon.port <OPEN TCP PORT>
```

#### Miner Hardware Requirements
Miners will need to run a top tool-calling LLM or a fine-tune of their own, needing a GPU with 20GB to 30GB of VRAM.

#### Default Miner
The default miner is all you need with these modifications:
1) `--miner-hf-model-name-to-submit` - set this to your model's repo path on Hugging Face (HF). \
Example: `--miner-hf-model-name-to-submit Salesforce/xLAM-7b-r`
2) `--hf-model-name-to-run` - this is the model the miner runs to respond to queries sent to it. \
Example: `--hf-model-name-to-run Salesforce/xLAM-7b-r`
3) `--openai-api-base` - this sets the vLLM endpoint that's running your local model. \
Example: `--openai-api-base http://localhost:8000/v1`

See [Miner Configuration Considerations](#miner-configuration-considerations) for common areas miners should look to improve.

#### Miner Emissions

Miner emissions are composed of both MINER-HOSTED and FINETUNED SUBMISSION evaluation:
- 20% of the miner's score is determined by the persistent availability of miners and their responses to on-demand queries. This is the MINER-HOSTED evaluation.
- 80% is determined by bi-weekly challenges in which the miner submits their latest Hugging Face model and validators load the model on their machines to evaluate it. This is the FINETUNED SUBMISSION evaluation. This 80% portion serves as a delayed incentive mechanism, meaning it is always based on miner/model performance from the PREVIOUS competition.
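The split above amounts to a simple weighted sum. A minimal sketch, where the 20/80 weights come from this README but the function and variable names are illustrative only (not the subnet's actual implementation):

```python
# Weights from the README: 20% MINER-HOSTED, 80% FINETUNED SUBMISSION.
MINER_HOSTED_WEIGHT = 0.20
FINETUNED_WEIGHT = 0.80

def combined_miner_score(hosted_score: float, prev_finetuned_score: float) -> float:
    """Combine live MINER-HOSTED performance with the FINETUNED SUBMISSION
    score carried over from the PREVIOUS competition (delayed incentive)."""
    return MINER_HOSTED_WEIGHT * hosted_score + FINETUNED_WEIGHT * prev_finetuned_score

# A miner responding well live (0.9) but with a weaker prior submission (0.5):
print(round(combined_miner_score(0.9, 0.5), 2))  # 0.58
```

Note that the 80% term uses the previous competition's submission score, which is why registration timing matters (see Miner Registration Considerations).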

Both MINER-HOSTED and FINETUNED SUBMISSION tasks are evaluated against modifications of these datasets:
- Berkeley Function Calling tasks
- Glaive Function Calling tasks
- BitAgent Function Calling tasks

The bi-weekly challenge is to finetune an 8B (or smaller) model to perform well on the tool calling tasks and on the [BFCL Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html). Miners must publish their model to Hugging Face and update their `--miner-hf-model-name-to-submit` parameter when starting/restarting their miner - see [Default Miner](#default-miner).

#### Miner Registration Considerations

Due to the delayed incentive mechanism of finetuned model evaluation, it is not recommended that miners register in the middle of a competition. Miners registering mid-competition will not have a score from the prior competition, making them unable to benefit from the 80% incentive calculation.
- It is recommended that miners register on the day a competition ends (prior to the actual time of competition close). Competitions end at midnight (00:00) UTC between Monday and Tuesday every two weeks, starting from 2024-11-05.
- Registering on the competition end date (within 16 hours of the deadline) ensures that the miner's 16-hour immunity will be used for model submission grading and scoring in the current competition. Additionally, while the miner is still immune, the competition will roll over into the next cycle, and the miner's score will be finalized for the incentive calculation for the entire next competition cycle.
- On the day of a competition's end, registration slots are expected to be extremely competitive. Due to Substrate constraints, at most three miners can register per hour. If you're receiving error messages while registering, this is why; you will need to keep trying.
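Given the cadence above (every two weeks at 00:00 UTC from 2024-11-05), the next close date is easy to compute. A small sketch; the helper name is illustrative and not part of the subnet code:

```python
from datetime import datetime, timedelta, timezone

# Cadence from the README: competitions close every two weeks at 00:00 UTC,
# starting from 2024-11-05 (the midnight between Monday and Tuesday).
FIRST_CLOSE = datetime(2024, 11, 5, 0, 0, tzinfo=timezone.utc)
PERIOD = timedelta(days=14)

def next_competition_close(now: datetime) -> datetime:
    """Return the first competition close at or after `now`."""
    if now <= FIRST_CLOSE:
        return FIRST_CLOSE
    # ceil((now - FIRST_CLOSE) / PERIOD) via floor division of timedeltas
    n = -((FIRST_CLOSE - now) // PERIOD)
    return FIRST_CLOSE + n * PERIOD

print(next_competition_close(datetime(2024, 11, 10, tzinfo=timezone.utc)))  # 2024-11-19 00:00:00+00:00
```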

#### Miner Configuration Considerations
The default miner is all you need; just make sure you update the parameters described in [Default Miner](#default-miner).
For your consideration:
1) Use vLLM as a fast inference runner for your tool-calling LLM. Check [this](https://docs.vllm.ai/en/v0.6.0/getting_started/quickstart.html#openai-compatible-server) out to stand up an OpenAI-compatible vLLM instance.
2) Use pm2 to launch your miner for easy management and reconfiguration as needed.
3) We use [SGLang](https://sgl-project.github.io/start/install.html) to run your Hugging Face models, so please make sure your model loads with SGLang.
4) Don't make it obvious to other miners where your Hugging Face submission is; manage this discreetly.


#### Example Task
Here's an example task you can expect your model to see in FINETUNED SUBMISSION mode, as well as your local miner to see in MINER-HOSTED mode:

You'll receive messages like this:
```json
[{"content":"What is the discounted price of the jacket, given it was originally $200 and there is a 20% reduction?","role":"user"}]
```
and Tools like this:
```json
[{"arguments":{"discount_percentage":{"required":true,"type":"number","description":"The percentage discount to be applied"},
"original_price":{"description":"The original price of the item","required":true,"type":"number"}},
"description":"Calculate the discounted price of an item based on the original price and discount percentage","name":"calculate_discount"},
{"arguments":{"pod_name":{"description":"The name of the pod to be restarted","required":true,"type":"str"}},
"description":"A function to restart a given pod, useful for deployment and testing.","name":"restart_pod"},...]
```

In response, your model should return the function call like this:\
`calculate_discount(discount_percentage=..., original_price=...)`

The model is responsible for returning a function call like the above with the right function name and the correct argument names and values, being sure to set any required arguments appropriately.
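The name-and-arguments check described above can be sketched roughly as follows. The schema dict mirrors the example tools shown earlier, and `check_call` is a hypothetical helper, not the subnet's actual criteria code (the real checks live in `bitagent/criteria/tool_call_criteria.py` and also grade argument values):

```python
import ast

# Schema mirroring the README's example "calculate_discount" tool.
TOOL = {
    "name": "calculate_discount",
    "arguments": {
        "discount_percentage": {"required": True, "type": "number"},
        "original_price": {"required": True, "type": "number"},
    },
}

def check_call(response: str, tool: dict) -> bool:
    """True when the response is a single call to the expected tool with all
    required argument names supplied as keywords (hypothetical helper)."""
    try:
        node = ast.parse(response.strip(), mode="eval").body
    except SyntaxError:
        return False
    if not isinstance(node, ast.Call) or not isinstance(node.func, ast.Name):
        return False
    if node.func.id != tool["name"]:
        return False
    given = {kw.arg for kw in node.keywords}
    required = {k for k, v in tool["arguments"].items() if v.get("required")}
    # All required args present, no args outside the schema.
    return required <= given <= set(tool["arguments"])

print(check_call("calculate_discount(discount_percentage=20, original_price=200)", TOOL))  # True
print(check_call("restart_pod(pod_name='api')", TOOL))  # False (wrong tool)
```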

#### Miner Feedback
As a miner, you receive tasks and you get rewarded, but on most subnets you do not know what you're being graded on.
BitAgent (SN20) offers transparent feedback (in debug logging mode), so you know what you're up against.

Here's an example of a well-performed task:


Here's an example of a poorly performed task:


Additionally, we send all queries and results to WandB:
- WandB Testnet - https://wandb.ai/bitagentsn20/testnet
- WandB Mainnet - https://wandb.ai/bitagentsn20/mainnet

### Advanced
If you have a need to create and fund wallets for your own testing ...

After getting the [subtensor package started and a subnet up and running](./docs/running_on_staging.md) (for staging/local), you can use this [script](./scripts/setup_and_run.sh) to:
- create wallets (for owner, validators, miners),
- fund those wallets with the right amount of tao,
- register wallets on the local subnet,
- start miners and validators

```bash
./scripts/setup_and_run.sh
```
You can use several flags to configure:
- the number of miners or validators it sets up,
- whether it funds wallets,
- or if it registers wallets,
- or just launches a miner
```bash
bitagent_subnet$ ./scripts/setup_and_run.sh --help

Creates wallets for the subnet (owner, validators, miners), funds them, registers them, then starts them.

usage: ./scripts/setup_and_run.sh --num_validators num --num_miners num --subnet_prefix string

  --num_validators num    number of validators to launch
                          (default: 1)
  --num_miners num        number of miners to launch
                          (default: 2)
  --subnet_prefix string  the prefix of the subnet wallets
                          (default: local_subnet_testing_bitagent)
  --skip-wallet           skip wallet creation
                          (default: run wallet creation)
  --skip-faucet           skip wallet funding
                          (default: fund wallets)
  --skip-subnet           skip subnet creation
                          (default: create subnet)
  --skip-reg              skip all registration to the subnet
                          (default: register wallets)
  --skip-val-reg          skip validator registration to the subnet
                          (default: register validator)
  --skip-miner-reg        skip miner registration to the subnet
                          (default: register miner)
  --skip-launch           skip validator and miner launching on the subnet
                          (default: launch validators and miners)
  --skip-launch_v         skip validator launching on the subnet
                          (default: launch validators)
  --only-launch           skip everything but launching
                          (default: do everything)
  --test-net              do the same things, but for testnet
                          (default: false, local)
  --main-net              do the same things, but for mainnet
                          (default: false, local)
  --netuid                the netuid to work with
                          (default: 1 for local, change if main or test)

Example: ./scripts/setup_and_run.sh --only-launch
This will skip everything and just launch the already registered and funded validators and miners
```

---

## FAQ
**Q: How much GPU (VRAM) and RAM do I need to run a validator and/or miner?** \
A: Validators need a GPU with a minimum of 48GB of VRAM and a performant CPU. Miners are left to their own setup, but should be aware that the more capable tool-calling LLMs require a decent amount of VRAM; a common configuration is a 3090 (24GB VRAM), which is capable enough for the smaller (~8B-param) models we require.

**Q: Are there any required subscriptions or paid APIs?** \
A: No - no subscriptions and no external companies; in fact, we'd rather the community build amazing AI capabilities than rely on corporations.

**Q: What LLM should I use?** \
A: This is where the miner needs to experiment, testing and fine-tuning different LLMs to find what accomplishes the tasks most successfully. Have a look at models in the Salesforce xLAM family as good starting points.

**Q: Validators are running miner-submitted HF models, will validators require `trust_remote_code`?** \
A: No, we require that no setup scripts or any code be necessary for running the models.

**Q: I started my miner and I am not receiving any tasks.** \
A: There are a few things to check:
- Is your axon port, as reported on the metagraph, correct (you can check taostats or the metagraph)?
- Is your axon port open and reachable from a system out in the real world (like where the validators are)?
- Do you have Trace logging on to see the dendrite requests and Debug logging on to see the task results?
- Make sure your IsAlive() forward is returning True, and wait an hour for that to update in the validator's cache.
- Make sure there isn't a stale process preventing your new miner process from starting up on the intended port.

**Q: What about model copying?** \
A: https://discord.com/channels/799672011265015819/1194736998250975332/1302870011362279514

**Q: My model is not being evaluated OFFLINE for FINETUNED SUBMISSION and is receiving a score of 0.** \
A: There are a few things to check:
- Is your model licensed under the apache-2.0 license?
- Is your model smaller than 10B parameters? We are looking for models with 8B parameters or fewer.
- Is your model name properly set on Hugging Face?

**Q: I'm getting a wallet path error, like: `KeyFileError: Keyfile at: ${HOME}/~/.bittensor/wallets/...`** \
A: There is a bug in 8.2.0 that sets the wallet path incorrectly, so you may need to fix this by adding this parameter to your start command: \
`--wallet.path ~/.bittensor/wallets`

**Q: I have a complicated CUDA Device setup and need to use a specific GPU device as a validator running the FINETUNED models:** \
A: We provide two parameters for this: \
`--neuron.visible_devices`\
`--neuron.device`\
Example usage: To use the 2nd CUDA Device, you would add these to your parameters: \
`--neuron.visible_devices 1 --neuron.device cuda:0`

**Q: My validator is running out of GPU memory when loading OFFLINE models via sglang.** \
A: You can use the `--validator-hf-server-mem-fraction-static` parameter to increase or decrease the fraction of GPU VRAM to use.\
It defaults to 0.55, just over half of the VRAM.
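For a rough sense of what that fraction means in absolute terms, a back-of-the-envelope sketch (the 48GB figure is just the validator minimum mentioned in this FAQ, and the helper name is illustrative):

```python
# Approximate VRAM statically reserved for the model, given the
# --validator-hf-server-mem-fraction-static value (default 0.55 per the README).
def static_vram_gb(total_vram_gb: float, fraction: float = 0.55) -> float:
    return total_vram_gb * fraction

print(round(static_vram_gb(48), 1))  # 26.4 on a 48GB validator GPU
```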

**Q: My vLLM or other inference instance is not served on 8000, how do I change this?**\
A: We provide the `--openai-api-base` parameter.\
It defaults to `http://localhost:8000/v1`; update it as needed by passing `--openai-api-base` in your start command.

**Q: My vTrust is low and it looks like I'm not setting OFFLINE weights.**\
A: Please test your sglang setup - check [here](#sglang-setup-for-validators).

**Q: I'm validating and seeing errors like:**
- TimeoutError
- ClientConnectorError \

A: These responses most likely occur during the IsAlive() query; they just let you know that the miner is not responding or connecting in time.

**Q: My validator is hanging, just printing out "Validator running ..."**\
A: There are a few things to check:\
- Make sure your vLLM is running with the required LLM from [vLLM Setup](#vllm-setup-for-validators)
- You may not see much unless you turn on some logging; you can add this to your params to see more details:\
`--log_level trace --logging.trace --logging.debug`
- Check your storage, make sure you didn't run out:\
`df -h`
- If all else fails, [reach out](https://discord.com/channels/799672011265015819/1194736998250975332)

---

## License
This repository is licensed under the MIT License.
```text
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
bitagent_subnet-main/bitagent/__init__.py
ADDED
@@ -0,0 +1,11 @@
# not used for weight versioning
__version__ = "1.0.8"
version_split = __version__.split(".")
__spec_version__ = (
    (1000 * int(version_split[0]))
    + (10 * int(version_split[1]))
    + (1 * int(version_split[2]))
)

# Import all submodules.
from . import protocol
bitagent_subnet-main/bitagent/criteria/__init__.py
ADDED
@@ -0,0 +1 @@
from .criterion import *
bitagent_subnet-main/bitagent/criteria/criterion.py
ADDED
@@ -0,0 +1,95 @@
# The MIT License (MIT)
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import ast
import bittensor as bt
from pprint import pformat
from typing import Callable, List, Tuple
from bitagent.criteria.utils import bad_message
from bitagent.criteria.default_criteria import *
from bitagent.criteria.tool_call_criteria import *

# building block for the criteria used to evaluate the miner's response
class Criterion():
    name: str
    desc: str
    eval_fx: Callable

    def __init__(self, name: str, desc: str, eval_fx: Callable, eval_args=[]) -> None:
        self.name = name
        self.desc = desc
        self.eval_fx = eval_fx
        self.eval_args = eval_args

    def clean_response(self, response):
        # TODO check multiple functions for parallel, when the response is a list [fx1(), fx2()]
        response = response.strip()
        # strip a surrounding list wrapper; startswith/endswith also handles an empty response safely
        if response.startswith("[") and response.endswith("]"):
            response = response[1:-1]

        try:
            ast.parse(response.strip())
        except:
            # if it is not a parsable function, then it's potentially an irrelevance call
            response = ""

        return response.strip()

    def evaluate(self, task, validator, synapse: bt.Synapse) -> Tuple[float, float, str]:
        try:
            # make sure the tool response converts nicely to an ast
            synapse.response = self.clean_response(synapse.response)
            try:
                ast.parse(synapse.response)
            except:
                reward = -0.5
                max_reward = 1.0
                feedback = bad_message(f"Your response: {synapse.response} was not parsable")
                return reward, max_reward, feedback

            # actually do the evaluation
            reward, max_reward, feedback = self.eval_fx(task, validator, synapse, *self.eval_args)
        except Exception as e:
            #bt.logging.error(f"Exception was raised during criteria evaluation: {e}")
            reward = -0.5
            max_reward = 1.0
            feedback = bad_message(f"Exception while processing your response, please check format per protocol - {e}")
        feedback = f"[bold blue]{self.name}[/bold blue]\n" + feedback
        return reward, max_reward, feedback

    def __repr__(self):
        return pformat(vars(self), indent=4, width=1)

# Function Call
def tool_call_criteria(expected_response: dict) -> List[Criterion]:
    return [
        Criterion(name="Return correct function format", desc="", eval_fx=correct_tool_call_function_format),
        Criterion(name="Return correct function name", desc="", eval_fx=correct_tool_call_function_name, eval_args=[expected_response]),
        Criterion(name="Return function with correct argument names", desc="", eval_fx=correct_tool_argument_names, eval_args=[expected_response]),
        Criterion(name="Return function with correct argument values", desc="", eval_fx=correct_tool_argument_values, eval_args=[expected_response]),
    ]

def irrelevant_tool_call_criteria() -> List[Criterion]:
    return [
        Criterion(name="Return valid function call for irrelevant tool", desc="", eval_fx=correct_irrelevant_tool_call),
    ]

# simple, defaults
default_criteria = [
    Criterion(name="Does not error", desc="", eval_fx=does_not_error),
    Criterion(name="Does not take a long time", desc="", eval_fx=does_not_take_a_long_time),
]
bitagent_subnet-main/bitagent/criteria/default_criteria.py
ADDED
@@ -0,0 +1,59 @@
# The MIT License (MIT)
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import bittensor as bt
from typing import Tuple
from bitagent.criteria.utils import good_message, bad_message, received_reward_template

# CRITERION: successful call to miner
|
| 23 |
+
def does_not_error(task, validator, synapse: bt.Synapse) -> Tuple[float, float, str]:
|
| 24 |
+
max_reward = 0.25
|
| 25 |
+
a_status_code = synapse.axon.status_code
|
| 26 |
+
d_status_code = synapse.dendrite.status_code
|
| 27 |
+
reward = 0.0
|
| 28 |
+
if a_status_code == 200 and d_status_code == 200:
|
| 29 |
+
reward = max_reward
|
| 30 |
+
feedback = good_message("You successfully responded to the request.")
|
| 31 |
+
else:
|
| 32 |
+
feedback = bad_message("You failed to respond correctly to the request.")
|
| 33 |
+
if d_status_code == 408:
|
| 34 |
+
feedback += "You timed out and will fail the remainder of the criteria."
|
| 35 |
+
feedback += f" Status Code: {a_status_code}/{d_status_code}"
|
| 36 |
+
|
| 37 |
+
return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)
|
| 38 |
+
|
| 39 |
+
# CRITERION: reward speedy response
|
| 40 |
+
def does_not_take_a_long_time(task, validator, synapse: bt.Synapse) -> Tuple[float, float, str]:
|
| 41 |
+
max_reward = 0.5
|
| 42 |
+
process_time = synapse.dendrite.process_time
|
| 43 |
+
if not process_time:
|
| 44 |
+
feedback = f"You likely ran into an error processing this task and failed to respond appropriately."
|
| 45 |
+
reward = 0
|
| 46 |
+
return reward, max_reward, bad_message(feedback) + received_reward_template.format(reward,max_reward)
|
| 47 |
+
|
| 48 |
+
feedback = f"You responded to the request in {process_time}."
|
| 49 |
+
reward = 0.0
|
| 50 |
+
if process_time <= task.timeout/1.75:
|
| 51 |
+
reward = max_reward
|
| 52 |
+
return reward, max_reward, good_message(feedback) + received_reward_template.format(reward,max_reward)
|
| 53 |
+
if process_time <= task.timeout/1.25:
|
| 54 |
+
reward = max_reward/2
|
| 55 |
+
return reward, max_reward, good_message(feedback, color="yellow") + received_reward_template.format(reward,max_reward)
|
| 56 |
+
if process_time <= task.timeout:
|
| 57 |
+
reward = max_reward/5
|
| 58 |
+
return reward, max_reward, bad_message(feedback, color="yellow") + received_reward_template.format(reward,max_reward)
|
| 59 |
+
return reward, max_reward, bad_message(feedback) + received_reward_template.format(reward,max_reward)
|
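The timing criterion above pays the full reward below `timeout/1.75`, half below `timeout/1.25`, a fifth up to the timeout, and nothing past it. A minimal standalone sketch of that tiering (the `timing_reward` helper is hypothetical, not part of the repo):

```python
def timing_reward(process_time, timeout, max_reward=0.5):
    # Mirrors the tiers in does_not_take_a_long_time above:
    # full reward when fast, tapering to zero past the timeout.
    if not process_time:
        return 0.0
    if process_time <= timeout / 1.75:
        return max_reward
    if process_time <= timeout / 1.25:
        return max_reward / 2
    if process_time <= timeout:
        return max_reward / 5
    return 0.0

print(timing_reward(2.0, 10.0))   # fast: full reward
print(timing_reward(7.0, 10.0))   # moderate: half reward
print(timing_reward(9.5, 10.0))   # barely in time: a fifth
print(timing_reward(12.0, 10.0))  # timed out: nothing
```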
bitagent_subnet-main/bitagent/criteria/tool_call_criteria.py
ADDED
@@ -0,0 +1,422 @@
# The MIT License (MIT)
# Copyright 2024 RogueTensor
# Copyright 2024 TheIntern

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import ast
import unittest
import bittensor as bt
from typing import List, Tuple
from functools import lru_cache
from bitagent.criteria.utils import good_message, bad_message, received_reward_template
from bitagent.schemas.tool import Tool

# just checking if the function can be parsed by ast
def correct_tool_call_function_format(task, validator, synapse: bt.Synapse) -> Tuple[float, float, str]:
    max_reward = 1.0
    reward = 1.0

    try:
        ast.parse(synapse.response)
    except Exception as e:
        reward = -1.0
        feedback = bad_message(f"Your response was not in the correct format - {e}")
        return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)

    feedback = good_message("Your response was in the correct format.")
    return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)

@lru_cache(maxsize=128)
def extract_function_name_and_params(response: str):
    if response == "":
        return "", [], {}

    node = ast.parse(response, mode="eval")

    # Walk through the AST to extract the function name
    class FunctionNameExtractor(ast.NodeVisitor):
        def __init__(self):
            self.function_name = None

        def visit_Call(self, node):
            # Check if the node is a function call
            if isinstance(node.func, ast.Attribute):  # handles dot notation (e.g., module.function)
                parts = []
                current = node.func
                while isinstance(current, ast.Attribute):
                    parts.append(current.attr)
                    current = current.value
                if isinstance(current, ast.Name):
                    parts.append(current.id)
                # Join the parts in reverse to get the full function name
                self.function_name = '.'.join(reversed(parts))
            elif isinstance(node.func, ast.Name):  # handles simple function names (e.g., functionName)
                self.function_name = node.func.id
            self.generic_visit(node)

    extractor = FunctionNameExtractor()
    extractor.visit(node)
    function_name = extractor.function_name

    param_names = [kw.arg for kw in node.body.keywords]
    if param_names:
        param_values = [ast.literal_eval(kw.value) for kw in node.body.keywords]
    else:
        param_values = []

    param_values_dict = {}
    for i, param_name in enumerate(param_names):
        param_values_dict[param_name] = param_values[i]

    return function_name, param_names, param_values_dict

# just checking if the function name is correct
def correct_tool_call_function_name(task, validator, synapse: bt.Synapse, expected_response: dict) -> Tuple[float, float, str]:
    max_reward = 3.0
    reward = 3.0

    function_name, _, _ = extract_function_name_and_params(synapse.response)
    expected_function_name = expected_response['name']

    if function_name.strip() == expected_function_name.strip():
        feedback = good_message("Your function name matches the expected function name.")
        return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)
    else:
        reward = -0.5
        feedback = bad_message("Your function name does not match the expected function name.")
        return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)

# comparing just the argument names
# looking for required arguments and that they are present
def correct_tool_argument_names(task, validator, synapse: bt.Synapse, expected_response: dict) -> Tuple[float, float, str]:
    max_reward = 1.0
    reward = max_reward
    feedback_parts = []

    # MINER response
    function_name, function_args, _ = extract_function_name_and_params(synapse.response)
    expected_args = set(expected_response['arguments'].keys())
    function_args_set = set(function_args)

    # no args
    if not expected_args and not function_args and function_name:
        feedback_parts.append(good_message("Function has no arguments, good job"))
        return reward, max_reward, ''.join(feedback_parts) + received_reward_template.format(reward, max_reward)

    # arguments were passed to a call that expects none
    if not expected_args and function_args:
        reward = 0.0
        feedback_parts.append(bad_message("Your function expects no arguments, but you provided some."))
        return reward, max_reward, ''.join(feedback_parts) + received_reward_template.format(reward, max_reward)

    required_args = get_required_args(task, expected_response)

    for arg in required_args:
        if arg in function_args_set:
            feedback_parts.append(good_message(f"Your function has the required argument: {arg}"))
        else:
            reward -= max_reward / len(required_args)
            feedback_parts.append(bad_message(f"Your function is missing the required argument: {arg}"))

    return reward, max_reward, '\n'.join(feedback_parts) + received_reward_template.format(reward, max_reward)

def correct_tool_argument_values(task, validator, synapse: bt.Synapse, expected_response: dict) -> Tuple[float, float, str]:
    max_reward = 3.0
    reward = 0.0
    feedback = ""

    # MINER response
    function_name, function_args, function_values = extract_function_name_and_params(synapse.response)
    expected_args = set(expected_response['arguments'].keys())  # Convert to set for O(1) lookups
    function_args_set = set(function_args)  # Convert to set for O(1) lookups

    # no args
    if not expected_args and not function_args and function_name:
        reward = max_reward
        feedback = good_message("Function has no arguments, good job")
        return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)

    required_args = get_required_args(task, expected_response)
    # Check if this is a distance calculation with flipped arguments
    is_flipped = is_distance_calculation_with_flipped_args(function_name, task.synapse.messages)

    for arg in required_args:
        if arg in function_args_set:
            correct_values = get_correct_values_for_arg(arg, expected_response, is_flipped)

            if "is_ground_truth" in expected_response and function_values[arg] in correct_values:
                reward += max_reward / max(len(function_args), len(expected_args))
                feedback += good_message(f"Your function has the required value for argument: {arg}") + "\n"
            elif function_values[arg] == correct_values:
                reward += max_reward / max(len(function_args), len(expected_args))
                feedback += good_message(f"Your function has the required value for argument: {arg}") + "\n"
            else:
                # penalize an incorrect value proportionally, same as a correct value is rewarded
                reward -= max_reward / max(len(function_args), len(expected_args))
                feedback += bad_message(f"Your function has the incorrect value for argument: {arg}") + "\n"
        else:
            reward -= max_reward / len(required_args)
            feedback += bad_message(f"Your function is missing the required argument: {arg}") + "\n"

    return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)

def correct_irrelevant_tool_call(task, validator, synapse: bt.Synapse) -> Tuple[float, float, str]:
    max_reward = 3.0
    reward = 3.0

    if synapse.response.strip() != "":
        reward = -0.5
        feedback = bad_message(f"Your response (`{synapse.response}`) was not empty, expected an empty response to be returned.")
        return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)

    feedback = good_message("You responded with the expected response.")
    return reward, max_reward, feedback + received_reward_template.format(reward, max_reward)

def get_required_args(task, expected_response: dict) -> set:
    """Helper function to get required arguments."""
    if "is_ground_truth" in expected_response:
        return {arg for arg in expected_response['arguments'] if expected_response['arguments'][arg] != [""]}

    expected_tool = next((tool for tool in task.synapse.tools if tool.name == expected_response['name']), None)
    if not expected_tool:
        return set()
    return {k for k in expected_tool.arguments if expected_tool.arguments[k].get('required', False)}

def is_distance_calculation_with_flipped_args(function_name: str, messages) -> bool:
    """Helper function to check if this is a distance calculation with flipped arguments."""
    return function_name == "calculate_distance" and "flipped" in str(messages).lower()

def get_correct_values_for_arg(arg: str, expected_response: dict, is_flipped: bool) -> list:
    """Helper function to get correct values for an argument, handling flipped cases."""
    if not is_flipped:
        return expected_response['arguments'][arg]

    if arg == "origin":
        return expected_response['arguments'].get("destination", expected_response['arguments'][arg])
    elif arg == "destination":
        return expected_response['arguments'].get("origin", expected_response['arguments'][arg])
    return expected_response['arguments'][arg]


# Examples:
synapse_response1 = 'calculate_gpa(grades=["A", "B", "A", "C"], credit_hours=[3, 4, 3, 2])'
synapse_response2 = 'calculate_gpa(credit_hours=[3, 4, 3, 2], grades=["A", "B", "A", "C"])'
expected_response = {'name': 'calculate_gpa', 'arguments': {'grades': ['A', 'B', 'A', 'C'], 'credit_hours': [3, 4, 3, 2]}}

class MockSynapse:
    response: str
    messages: List[str] = []  # read by is_distance_calculation_with_flipped_args during value checks
    tools: List[Tool] = [Tool(name="calculate_gpa", description="Calculate the GPA of a student", arguments={"grades": {"type": "list", "required": True}, "credit_hours": {"type": "list", "required": True}})]

    def __init__(self, response: str):
        self.response = response

class MockTask:
    synapse: MockSynapse

    def __init__(self, synapse: MockSynapse):
        self.synapse = synapse

class TestToolCallCriteria(unittest.TestCase):

    def setUp(self):
        self.validator = ""

    def test_correct_tool_call_function_format(self):
        # Test valid function format
        synapse = MockSynapse(response="calculate_gpa(grades=['A'], credit_hours=[3])")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_tool_call_function_format(task, self.validator, synapse)
        self.assertEqual(reward, 1.0)
        self.assertEqual(max_reward, 1.0)
        self.assertTrue("was in the correct format" in feedback.lower())

        # Test invalid function format
        synapse = MockSynapse(response="invalid(function syntax")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_tool_call_function_format(task, self.validator, synapse)
        self.assertEqual(reward, -1.0)
        self.assertEqual(max_reward, 1.0)
        self.assertTrue("not in the correct format" in feedback.lower())

        # Test json response
        synapse = MockSynapse(response='{"name": "calculate_gpa", "arguments": {"grades": ["A"], "credit_hours": [3]}}')
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_tool_call_function_format(task, self.validator, synapse)
        self.assertEqual(reward, 1.0)
        self.assertEqual(max_reward, 1.0)
        self.assertTrue("was in the correct format" in feedback.lower())

    def test_extract_function_name_and_params(self):
        # Test basic function extraction
        response = "calculate_gpa(grades=['A'], credit_hours=[3])"
        name, params, values = extract_function_name_and_params(response)
        self.assertEqual(name, "calculate_gpa")
        self.assertEqual(params, ["grades", "credit_hours"])
        self.assertEqual(values, {"grades": ["A"], "credit_hours": [3]})

        # Test empty response
        name, params, values = extract_function_name_and_params("")
        self.assertEqual(name, "")
        self.assertEqual(params, [])
        self.assertEqual(values, {})

        # Test function with dot notation
        response = "math.sqrt(value=16)"
        name, params, values = extract_function_name_and_params(response)
        self.assertEqual(name, "math.sqrt")
        self.assertEqual(params, ["value"])
        self.assertEqual(values, {"value": 16})

        # Test function with dot notation and no value
        response = "math.sqrt()"
        name, params, values = extract_function_name_and_params(response)
        self.assertEqual(name, "math.sqrt")
        self.assertEqual(params, [])
        self.assertEqual(values, {})

    def test_correct_irrelevant_tool_call(self):
        # Test empty response (correct)
        synapse = MockSynapse(response="")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_irrelevant_tool_call(task, self.validator, synapse)
        self.assertEqual(reward, 3.0)
        self.assertEqual(max_reward, 3.0)
        self.assertTrue("expected response" in feedback.lower())

        # Test non-empty response (incorrect)
        synapse = MockSynapse(response="some_function()")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_irrelevant_tool_call(task, self.validator, synapse)
        self.assertEqual(reward, -0.5)
        self.assertEqual(max_reward, 3.0)
        self.assertTrue("not empty" in feedback.lower())

    def test_correct_tool_call_function_name(self):
        # Test correct function name
        synapse = MockSynapse(response="calculate_gpa(grades=['A'])")
        task = MockTask(synapse=synapse)
        expected = {"name": "calculate_gpa", "arguments": {"grades": ["A"]}}
        reward, max_reward, feedback = correct_tool_call_function_name(task, self.validator, synapse, expected)
        self.assertEqual(reward, 3.0)
        self.assertEqual(max_reward, 3.0)
        self.assertTrue("matches the expected function name" in feedback.lower())

        # Test incorrect function name
        synapse = MockSynapse(response="wrong_function(grades=['A'])")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_tool_call_function_name(task, self.validator, synapse, expected)
        self.assertEqual(reward, -0.5)
        self.assertEqual(max_reward, 3.0)
        self.assertTrue("not match" in feedback.lower())

    def test_correct_tool_argument_names(self):
        # Test no expected arguments
        synapse = MockSynapse(response="calculate_gpa()")
        task = MockTask(synapse=synapse)
        expected = {"name": "calculate_gpa", "arguments": {}}
        reward, max_reward, feedback = correct_tool_argument_names(task, self.validator, synapse, expected)
        self.assertEqual(reward, 1.0)
        self.assertEqual(max_reward, 1.0)
        self.assertTrue("no arguments, good job" in feedback.lower())

        # Test no expected arguments, but pass in arguments anyway
        synapse = MockSynapse(response="calculate_gpa(grades=['A'])")
        task = MockTask(synapse=synapse)
        expected = {"name": "calculate_gpa", "arguments": {}}
        reward, max_reward, feedback = correct_tool_argument_names(task, self.validator, synapse, expected)
        self.assertEqual(reward, 0.0)
        self.assertEqual(max_reward, 1.0)
        self.assertTrue("expects no arguments" in feedback.lower())

        # Test correct argument names
        synapse = MockSynapse(response="calculate_gpa(grades=['A'], credit_hours=[3])")
        task = MockTask(synapse=synapse)
        expected = {"name": "calculate_gpa", "arguments": {"grades": ["A"], "credit_hours": [3]}}
        reward, max_reward, feedback = correct_tool_argument_names(task, self.validator, synapse, expected)
        self.assertEqual(reward, 1.0)
        self.assertEqual(max_reward, 1.0)
        self.assertEqual(feedback.lower().count("has the required argument"), 2)

        # Test correct argument names plus an incorrect argument
        synapse = MockSynapse(response="calculate_gpa(grades=['A'], credit_hours=[3], extra_arg=1)")
        task = MockTask(synapse=synapse)
        expected = {"name": "calculate_gpa", "arguments": {"grades": ["A"], "credit_hours": [3]}}
        reward, max_reward, feedback = correct_tool_argument_names(task, self.validator, synapse, expected)
        self.assertEqual(reward, 1.0)
        self.assertEqual(max_reward, 1.0)
        self.assertEqual(feedback.lower().count("has the required argument"), 2)

        # Test correct argument names out of order
        synapse = MockSynapse(response="calculate_gpa(credit_hours=[3], grades=['A'])")
        task = MockTask(synapse=synapse)
        expected = {"name": "calculate_gpa", "arguments": {"grades": ["A"], "credit_hours": [3]}}
        reward, max_reward, feedback = correct_tool_argument_names(task, self.validator, synapse, expected)
        self.assertEqual(reward, 1.0)
        self.assertEqual(max_reward, 1.0)
        self.assertEqual(feedback.lower().count("has the required argument"), 2)

        # Test missing argument
        synapse = MockSynapse(response="calculate_gpa(grades=['A'])")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_tool_argument_names(task, self.validator, synapse, expected)
        self.assertEqual(reward, 0.0)
        self.assertEqual(max_reward, 1.0)
        self.assertEqual(feedback.lower().count("has the required argument"), 1)
        self.assertEqual(feedback.lower().count("missing the required argument"), 1)

    def test_correct_tool_argument_values(self):
        # Test correct argument values
        synapse = MockSynapse(response="calculate_gpa(grades=['A'], credit_hours=[3])")
        task = MockTask(synapse=synapse)
        expected = {"name": "calculate_gpa", "arguments": {"grades": ["A"], "credit_hours": [3]}}
        reward, max_reward, feedback = correct_tool_argument_values(task, self.validator, synapse, expected)
        self.assertEqual(reward, 3.0)
        self.assertEqual(max_reward, 3.0)
        self.assertEqual(feedback.lower().count("has the required value for argument"), 2)

        # Test correct argument values out of order
        synapse = MockSynapse(response="calculate_gpa(credit_hours=[3], grades=['A'])")
        task = MockTask(synapse=synapse)
        expected = {"name": "calculate_gpa", "arguments": {"grades": ["A"], "credit_hours": [3]}}
        reward, max_reward, feedback = correct_tool_argument_values(task, self.validator, synapse, expected)
        self.assertEqual(reward, 3.0)
        self.assertEqual(max_reward, 3.0)
        self.assertEqual(feedback.lower().count("has the required value for argument"), 2)

        # Test incorrect argument values
        synapse = MockSynapse(response="calculate_gpa(grades=['B'], credit_hours=[4])")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_tool_argument_values(task, self.validator, synapse, expected)
        self.assertEqual(reward, -3.0)
        self.assertEqual(max_reward, 3.0)
        self.assertEqual(feedback.lower().count("has the incorrect value for argument"), 2)

        # Test incorrect argument values out of order
        synapse = MockSynapse(response="calculate_gpa(credit_hours=[4], grades=['B'])")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_tool_argument_values(task, self.validator, synapse, expected)
        self.assertEqual(reward, -3.0)
        self.assertEqual(max_reward, 3.0)
        self.assertEqual(feedback.lower().count("has the incorrect value for argument"), 2)

        # Test incorrect value types
        synapse = MockSynapse(response="calculate_gpa(grades='A', credit_hours=3)")
        task = MockTask(synapse=synapse)
        reward, max_reward, feedback = correct_tool_argument_values(task, self.validator, synapse, expected)
        self.assertEqual(reward, -3.0)
        self.assertEqual(max_reward, 3.0)
        self.assertEqual(feedback.lower().count("has the incorrect value for argument"), 2)

if __name__ == '__main__':
    # Run all tests in this file
    # You can run this file directly with: python -m bitagent.criteria.tool_call_criteria
    # Or run all tests with: python -m pytest bitagent/criteria/tool_call_criteria.py
    unittest.main(verbosity=2)
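`extract_function_name_and_params` above parses the miner's response as a single Python expression (`ast.parse(..., mode="eval")`), walks the tree for the (possibly dotted) function name, and uses `ast.literal_eval` on each keyword argument. A compact standalone sketch of the same technique (`parse_call` is a hypothetical helper, not part of the repo):

```python
import ast

def parse_call(response: str):
    # Parse a single call expression and pull out its name and keyword
    # arguments, following the same ast-based approach as the module above.
    tree = ast.parse(response, mode="eval")
    call = tree.body
    if not isinstance(call, ast.Call):
        raise ValueError("expected a single function call")
    # Unwind dotted names (e.g., math.sqrt) into a full path
    parts, node = [], call.func
    while isinstance(node, ast.Attribute):
        parts.append(node.attr)
        node = node.value
    if isinstance(node, ast.Name):
        parts.append(node.id)
    name = ".".join(reversed(parts))
    # literal_eval safely converts constants, lists, dicts, etc.
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
    return name, kwargs

print(parse_call("calculate_gpa(grades=['A'], credit_hours=[3])"))
print(parse_call("math.sqrt(value=16)"))
```

Because only `ast.literal_eval` touches the argument values, arbitrary expressions in a miner's response are rejected rather than executed.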
bitagent_subnet-main/bitagent/criteria/utils.py
ADDED
@@ -0,0 +1,25 @@
# The MIT License (MIT)
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

received_reward_template = "\nYou received {} of {} reward."

def bad_message(text: str, color: str = "red") -> str:
    return f":cross_mark: [{color}]{text}[/{color}]"

def good_message(text: str, color: str = "green") -> str:
    return f":white_heavy_check_mark: [{color}]{text}[/{color}]"
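These helpers produce Rich-style console markup (an emoji shortcode plus a `[color]...[/color]` span), and every criterion appends `received_reward_template` to the feedback it returns. A small sketch of how a criterion composes its feedback string from them:

```python
received_reward_template = "\nYou received {} of {} reward."

def bad_message(text: str, color: str = "red") -> str:
    # Rich console markup: emoji shortcode + colored span
    return f":cross_mark: [{color}]{text}[/{color}]"

def good_message(text: str, color: str = "green") -> str:
    return f":white_heavy_check_mark: [{color}]{text}[/{color}]"

# How the criteria modules assemble miner feedback:
reward, max_reward = 3.0, 3.0
feedback = good_message("Your function name matches the expected function name.")
feedback += received_reward_template.format(reward, max_reward)
print(feedback)
```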
bitagent_subnet-main/bitagent/datasources/__init__.py
ADDED
@@ -0,0 +1 @@
from .tools import *
bitagent_subnet-main/bitagent/datasources/loaders.py
ADDED
@@ -0,0 +1,63 @@
| 1 |
+
import os
|
| 2 |
+
import pandas as pd
|
| 3 |
+
import bittensor as bt
|
| 4 |
+
from datasets import load_dataset, load_from_disk
|
| 5 |
+
from huggingface_hub import snapshot_download
|
| 6 |
+
|
| 7 |
+
class ShuffledJSONDatasetIterator:
|
| 8 |
+
def __init__(self):
|
| 9 |
+
dataframes = []
|
| 10 |
+
|
| 11 |
+
# TODO - other BFCL task types:
|
| 12 |
+
        # irrelevance and live_irrelevance - answer is NOTHING
        # exec_* (simple, multiple, parallel, parallel_multiple) - answer in the file itself
        # multi_turn_* - answer in the file itself
        # parallel* - answer in the file itself
        # rest - maybe later - calls to API that the validator would need to setup

        for filename in ["java", "javascript", "simple", "multiple", "sql", "live_simple", "live_multiple"]:
            bfcl_path = "bitagent.data/bfcl/BFCL_v3_{filename}.json"
            bfcl_answer_path = "bitagent.data/bfcl/possible_answer/BFCL_v3_{filename}.json"
            file_path = bfcl_path.format(filename=filename)
            answer_path = bfcl_answer_path.format(filename=filename)
            df_data = pd.read_json(file_path, lines=True)
            df_answer = pd.read_json(answer_path, lines=True)
            df_data['ground_truth'] = df_answer['ground_truth']
            dataframes.append(df_data[['id','question','function','ground_truth']])
        self.all_data = pd.concat(dataframes)
        self._shuffle_data()

    def _shuffle_data(self):
        self.shuffled_data = self.all_data.sample(frac=1).reset_index(drop=True)
        self.index = 0

    def __iter__(self):
        self.index = 0
        return self

    def __next__(self):
        if self.index < len(self.shuffled_data):
            row = self.shuffled_data.iloc[self.index]
            self.index += 1
            return row
        else:
            self._shuffle_data()  # Shuffle and reset index if end is reached
            return self.__next__()

def huggingface_loader(dataset_name, root_data_dir="bitagent.data", split="train", name=None):
    bt.logging.debug(f"Loading {dataset_name}")
    dataset_dir = f"{root_data_dir}/{dataset_name.replace('/','_')}"
    if os.path.exists(f"{dataset_dir}/state.json"):
        bt.logging.debug(f"Loading from disk ({dataset_dir}) ...")
        ds = load_from_disk(dataset_dir)
    else:
        bt.logging.debug("Loading from web ...")
        ds = load_dataset(dataset_name, split=split, name=name, token=os.getenv("HF_TOKEN", None))
        ds.save_to_disk(dataset_dir)
    bt.logging.debug("Loaded.")
    return ds

def load_bfcl_dataset(dataset_name, root_data_dir="bitagent.data", split="train", name=None):
    snapshot_download(repo_id=dataset_name, allow_patterns="*.json", repo_type="dataset", local_dir="bitagent.data/bfcl/")

    return ShuffledJSONDatasetIterator()
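The iterator above reshuffles and restarts whenever it exhausts its rows, so callers can draw samples indefinitely without ever hitting `StopIteration`. The same pattern can be sketched with the standard library alone, using a plain list in place of the DataFrame (the class name here is illustrative, not from the repo):

```python
import random

class EndlessShuffledIterator:
    """Yields items forever, reshuffling each time the pool is exhausted."""

    def __init__(self, items):
        self.items = list(items)
        self._reshuffle()

    def _reshuffle(self):
        # random.sample returns a new shuffled copy of the pool
        self.shuffled = random.sample(self.items, len(self.items))
        self.index = 0

    def __iter__(self):
        self.index = 0
        return self

    def __next__(self):
        if self.index >= len(self.shuffled):
            self._reshuffle()  # start a fresh pass instead of raising StopIteration
        row = self.shuffled[self.index]
        self.index += 1
        return row

it = EndlessShuffledIterator([1, 2, 3])
first_pass = [next(it) for _ in range(3)]  # one full pass, in shuffled order
fourth = next(it)                          # wraps around, never raises
```

Within each pass every item appears exactly once; only the order changes between passes.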
bitagent_subnet-main/bitagent/datasources/tools.py
ADDED
@@ -0,0 +1,194 @@
import re
import json
import random
import bittensor as bt
from pydantic import BaseModel
from typing import List, Dict, Any
from collections.abc import Iterator
from bitagent.schemas.tool import Tool
from bitagent.schemas.chat import ChatMessage, messages_from_list
from bitagent.datasources.loaders import huggingface_loader, load_bfcl_dataset
from bitagent.helpers.string_parse import parse_multiple_space_sep_json


def split_dialogue(text) -> List[ChatMessage]:
    # Define a pattern to match the roles and capture messages
    pattern = r"(USER|ASSISTANT|TOOL CALL|TOOL RESPONSE): (.*?)(?=\s*(USER|ASSISTANT|TOOL CALL|TOOL RESPONSE):|$)"

    # Find all matches in the text using the pattern
    matches = re.findall(pattern, text, re.DOTALL)

    # Create a list of dictionaries based on the matches
    dialogue_list = [{"role": role.lower(), "content": message.strip().replace('\'','')} for role, message, _ in matches]

    for message in dialogue_list:
        if not message['role']:
            raise ValueError("There is a message with no role.")

    return messages_from_list(dialogue_list)


def clean_text(text):
    text = text.replace("<|endoftext|>", "")
    text = text.replace("ASSISTANT: <functioncall>", "TOOL CALL: ")
    text = text.replace("FUNCTION RESPONSE", "TOOL RESPONSE")
    text = text.replace("  ", " ")
    return text.strip()

def custom_json_schema_to_pydantic_tool(schema: dict) -> Tool:
    tool_name = schema.get("name", "")
    tool_description = schema.get("description", "")

    schema_arguments = schema.get("arguments", {})
    parameters = {}
    for param_name, param_info in schema_arguments.items():
        parameters[param_name] = {
            "required": param_info.get("required", False),
            "type": param_info.get("type", ""),
            "description": param_info.get("description", ""),
        }

    return Tool(name=tool_name, description=tool_description, arguments=parameters)

def json_schema_to_pydantic_tool(schema: dict) -> Tool:
    tool_name = schema.get("name", "")
    tool_description = schema.get("description", "")

    schema_parameters = schema.get("parameters", {})
    if not schema_parameters:
        schema_parameters = schema.get("arguments", {})
    properties = schema_parameters.get("properties", {})
    required_params = schema_parameters.get("required", [])
    if isinstance(required_params, bool):
        required_params = list(properties.keys()) if required_params else []
    elif not isinstance(required_params, list):
        required_params = []
    parameters = {}
    for param_name, param_info in properties.items():
        if param_name == "required":
            continue
        parameters[param_name] = {
            "required": param_name in required_params,
            "type": param_info.get("type", ""),
            "description": param_info.get("description", ""),
        }
    return Tool(name=tool_name, description=tool_description, arguments=parameters)

class ToolCallData(BaseModel):
    messages: List[ChatMessage]
    tools: list[Tool]

TYPES = ["str", "int", "dict", "list", "float", "bool", "string", "integer", "number", "boolean", "dictionary", "object"]

def detect_type(value: Any) -> str:
    type_mapping = {
        int: 'integer',
        float: 'number',
        str: 'string',
        bool: 'boolean',
        list: 'array',
        dict: 'object'
    }
    return type_mapping.get(type(value), 'string')

def add_extra_arguments(tool_call: Dict[str, Any], tools: List[Tool]):
    # Find the tool in the list
    tool_name = tool_call['name']
    arguments = tool_call.get('arguments', {})

    for tool in tools:
        if tool.name == tool_name:
            for arg_name, arg_value in arguments.items():
                if arg_name not in tool.arguments:
                    # Detect the type of the argument
                    arg_type = detect_type(arg_value)
                    # Add the new argument to the tool's schema
                    tool.arguments[arg_name] = {
                        'required': False,  # assume false
                        'type': arg_type,
                        'description': arg_name
                    }
            break

class ToolDataset(Iterator):
    def __init__(self):
        super().__init__()
        seed = random.randint(0, 10000)
        glaive_ds = huggingface_loader("glaiveai/glaive-function-calling-v2")
        bitagent_ds = huggingface_loader("BitAgent/tool_calling")
        bfcl_ds = load_bfcl_dataset("gorilla-llm/Berkeley-Function-Calling-Leaderboard")

        self.datasets = {
            "glaive": iter(glaive_ds.shuffle(seed=seed)),
            "bitagent": iter(bitagent_ds.shuffle(seed=seed)),
            "bfcl": iter(bfcl_ds),
        }

    def __next__(self) -> ToolCallData:
        #bt.logging.debug("Retrieving function call data from dataset...")
        count = 0
        while count < 25:
            count += 1
            try:
                dname, ds = random.choices(list(self.datasets.items()), [5, 5, 10])[0]
                data = next(ds)
                if dname == "glaive":
                    system_prompt = data["system"].replace("SYSTEM: ", "")
                    if "following functions" not in system_prompt:
                        continue

                    chat_history = clean_text(data["chat"])
                    tools = parse_multiple_space_sep_json(
                        system_prompt.replace(
                            "You are a helpful assistant with access to the following functions. Use them if required - ",
                            "",
                        )
                    )
                    tools = [json_schema_to_pydantic_tool(tool) for tool in tools]
                    messages = split_dialogue(chat_history)

                    # Add arguments that weren't defined in the schema to the tool
                    for msg in messages:
                        if msg.role == "tool call":
                            tool_call = None
                            if isinstance(msg.content, str):
                                tool_call = json.loads(msg.content)
                            else:
                                tool_call = msg.content

                            add_extra_arguments(tool_call, tools)

                    return ToolCallData(messages=messages, tools=tools)
                elif dname == "bitagent":
                    for key, value in data.items():
                        if isinstance(value, str):
                            data[key] = json.loads(value)
                    messages = messages_from_list(data["conversation"])
                    if isinstance(data["tools"], str):
                        tools = [
                            json_schema_to_pydantic_tool(tool)
                            for tool in json.loads(data["tools"])
                        ]
                    elif isinstance(data["tools"], list):
                        tools = [Tool(**tool) for tool in data["tools"]]
                    else:
                        raise ValueError(f"Invalid format for tools: {data['tools']}")
                    for tool in tools:
                        for arg_name, arg_value in tool.arguments.items():
                            if arg_value["type"] not in TYPES:
                                raise ValueError(f"Invalid type used: {arg_value['type']}")
                    return ToolCallData(messages=messages, tools=tools)
                elif dname == "bfcl":
                    messages = messages_from_list(data["question"][0])
                    ground_truth = data['ground_truth'][0]
                    messages.append(ChatMessage(role="tool call",
                                                content={"is_ground_truth": True,
                                                         "name": list(ground_truth.keys())[0],
                                                         "arguments": list(ground_truth.values())[0]}))
                    tools = [json_schema_to_pydantic_tool(tool) for tool in data["function"]]
                    return ToolCallData(messages=messages, tools=tools)

            except Exception as e:
                #bt.logging.debug(f"Issue getting tool call from dataset ... {e}")
                pass
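`split_dialogue` relies on a lazy capture plus a lookahead so each message ends exactly where the next role marker begins. A minimal, self-contained demonstration of the same regex (without the repo's `ChatMessage` schema or the apostrophe-stripping step):

```python
import re

# Same pattern shape as split_dialogue: capture a role, then lazily capture
# the message until the next role marker (or end of string) via a lookahead.
PATTERN = r"(USER|ASSISTANT|TOOL CALL|TOOL RESPONSE): (.*?)(?=\s*(USER|ASSISTANT|TOOL CALL|TOOL RESPONSE):|$)"

def split_roles(text):
    matches = re.findall(PATTERN, text, re.DOTALL)
    # The third group is the lookahead's capture; only role and message matter
    return [{"role": role.lower(), "content": msg.strip()} for role, msg, _ in matches]

dialogue = split_roles("USER: What is 2+2? ASSISTANT: 4")
```

`re.DOTALL` lets a single message span multiple lines; the lookahead consumes nothing, so the next `findall` iteration starts right at the following role marker.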
bitagent_subnet-main/bitagent/helpers/dockers.py
ADDED
@@ -0,0 +1,44 @@
import os
import docker
import bittensor as bt

def create_container(container_name, model_name, docker_vllm_port):
    container_to_run = "docker.io/vllm/vllm-openai:latest"

    dclient = docker.from_env()

    # remove any stale container with the same name
    try:
        dclient.containers.get(container_name).remove(force=True)
    except Exception:
        pass

    # get home directory
    home_dir = os.path.expanduser('~')

    bt.logging.debug('starting container')
    dclient.containers.run(container_to_run,
                           f"--model {model_name} --max-model-len 8912 --gpu-memory-utilization 0.9",
                           name=container_name,
                           device_requests=[docker.types.DeviceRequest(count=1, capabilities=[["gpu"]])],
                           detach=True,
                           volumes={f'{home_dir}/.cache/huggingface': {'bind': '/root/.cache/huggingface', 'mode': 'rw'}},
                           ports={'8000/tcp': docker_vllm_port})
    bt.logging.debug('started container')

    return dclient.containers.get(container_name)

def wait_for_container(openai_client, model_name):
    bt.logging.debug('waiting for container')
    while True:
        try:
            openai_client.chat.completions.create(
                model=model_name,
                messages=[{"role": "user", "content": "Hello!"}]
            )
            break
        except Exception as e:
            #bt.logging.debug(e)
            import time
            time.sleep(1)
    bt.logging.debug('container ready')
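`wait_for_container` is a blocking poll: fire a probe request, sleep on failure, exit on the first success. A generic version of that loop, with a retry cap so it cannot hang forever (the probe callable here is a stand-in for the OpenAI client call):

```python
import time

def wait_until_ready(probe, interval=0.01, max_attempts=50):
    """Call probe() until it stops raising; return the attempt count that succeeded."""
    for attempt in range(1, max_attempts + 1):
        try:
            probe()
            return attempt
        except Exception:
            time.sleep(interval)  # back off briefly before retrying
    raise TimeoutError("service never became ready")

# Simulated service that only comes up on the third probe
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("not up yet")

attempts = wait_until_ready(flaky_probe)  # succeeds on the third try
```

An unbounded `while True` (as in the original) is fine for a long-lived validator process, but a cap makes the same loop safe to use in scripts and tests.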
bitagent_subnet-main/bitagent/helpers/llms.py
ADDED
@@ -0,0 +1,85 @@
# The MIT License (MIT)
# Copyright 2024 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import bittensor as bt
from openai import OpenAI

# specifically for the validator
def get_openai_llm(self, hugging_face=False):
    if "validator" in self.__class__.__name__.lower() and hugging_face and self.config.validator_hf_server_port:
        # stand up a vLLM server on this port for the OFFLINE HF model evals
        base_url = f'http://localhost:{self.config.validator_hf_server_port}/v1'
    else:
        base_url = self.config.openai_api_base

    return OpenAI(
        api_key=self.config.openai_api_key,
        base_url=base_url
    )

def system_prompt(tools):
    prompt = """You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the functions can be used, point it out. If the given question lacks the parameters required by the function, also point it out.
You should only return the function call in tools call sections.

For the calculate_distance function:
When asking for distance FROM A TO B and parameters are flipped:
- Set origin=B (the endpoint)
- Set destination=A (the starting point)
Example: For "distance from Los Angeles TO New York":
- Use origin="New York" (B/endpoint)
- Use destination="Los Angeles" (A/starting point)

If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1="params_string_value1", params_name2=params_value2...), func_name2(params)]
Notice that any values that are strings must be put in quotes like this: "params_string_value1"
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.\n{functions}\n
"""

    return prompt.format(functions=tools)


def llm(self, messages, tools, model_name, hugging_face=False, max_new_tokens=160, temperature=0.7):
    prompt = system_prompt(tools)

    try:
        #try:
        #    new_messages = [{"role":"system", "content":prompt}] + messages
        #    response = get_openai_llm(self, hugging_face).chat.completions.create(
        #        messages=new_messages,
        #        max_tokens=max_new_tokens,
        #        model=model_name,
        #        temperature=temperature
        #    )
        #except Exception as e:
        # errored b/c the model does not allow system prompts
        messages[0].content = prompt + "\n\n" + messages[0].content
        response = get_openai_llm(self, hugging_face).chat.completions.create(
            messages=messages,
            max_tokens=max_new_tokens,
            model=model_name,
            temperature=temperature
        )

    except Exception as e:
        bt.logging.error(f"Error calling to LLM: {e}")
        return ""

    if hugging_face:
        return response.choices[0].message.content.strip(), response.choices[0].finish_reason
    else:
        return response.choices[0].message.content.strip()
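`system_prompt` is plain `str.format` templating: the serialized tool list is injected into a single `{functions}` slot. A reduced sketch of the same mechanism (template text abbreviated; names illustrative, not the repo's):

```python
import json

# The only brace pair in the template is the {functions} slot,
# so str.format substitutes it without escaping concerns.
TEMPLATE = (
    "You are an expert in composing functions.\n"
    "Here is a list of functions in JSON format that you can invoke.\n{functions}\n"
)

def render_system_prompt(tools):
    return TEMPLATE.format(functions=json.dumps(tools))

prompt = render_system_prompt(
    [{"name": "get_weather", "parameters": {"city": {"type": "string"}}}]
)
```

If the template itself ever needs literal braces (e.g. a JSON example inline), they must be doubled (`{{`/`}}`) or `str.format` will raise a `KeyError`.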
bitagent_subnet-main/bitagent/helpers/logging.py
ADDED
@@ -0,0 +1,38 @@
import bittensor as bt
from contextlib import contextmanager

@contextmanager
def temporary_logging_state(new_state):
    """
    A context manager to temporarily set Bittensor's logging state.
    """
    # Cache the current logging state
    current_state = bt.logging.current_state
    bt.logging.info(f"OFFLINE: Caching current logging state: {current_state}")

    # Set the new logging state
    if new_state == 'Debug':
        bt.logging.set_debug()
    elif new_state == 'Trace':
        bt.logging.set_trace()
    elif new_state == 'Warning':
        bt.logging.set_warning()
    elif new_state == 'Info':
        bt.logging.set_info()
    else:
        bt.logging.set_default()

    try:
        yield
    finally:
        # Restore the original logging state
        if current_state.value == 'Debug':
            bt.logging.set_debug()
        elif current_state.value == 'Trace':
            bt.logging.set_trace()
        elif current_state.value == 'Warning':
            bt.logging.set_warning()
        elif current_state.value == 'Info':
            bt.logging.set_info()
        else:
            bt.logging.set_default()
        bt.logging.info(f"OFFLINE: Restored logging state to: {current_state}")
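The context manager above caches the current state, switches, and restores inside a `finally` block so the original state survives even if the body raises. The same save/switch/restore shape with the standard `logging` module (logger name illustrative):

```python
import logging
from contextlib import contextmanager

@contextmanager
def temporary_level(logger, new_level):
    """Temporarily set a logger's level, restoring the old one on exit."""
    old_level = logger.level
    logger.setLevel(new_level)
    try:
        yield
    finally:
        # Runs on normal exit AND on exceptions raised inside the with-block
        logger.setLevel(old_level)

log = logging.getLogger("demo")
log.setLevel(logging.INFO)
with temporary_level(log, logging.DEBUG):
    inside = log.level  # DEBUG while the block runs
after = log.level       # restored to INFO afterwards
```

The `finally` placement is the whole point: without it, an exception inside the block would leave the logger stuck at the temporary level.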
bitagent_subnet-main/bitagent/helpers/sbert.py
ADDED
@@ -0,0 +1,45 @@
from sentence_transformers import SentenceTransformer
import numpy as np
import torch

class CachedSentenceTransformer(SentenceTransformer):
    def __init__(self, model_name_or_path: str):
        super().__init__(model_name_or_path)
        self.cache = {}  # Initialize an empty cache

    def encode(self, sentences, convert_to_tensor=False, **kwargs):
        if isinstance(sentences, str):
            sentences = [sentences]

        results = []
        sentences_to_encode = []
        original_positions = []

        cache_key_suffix = "_tensor" if convert_to_tensor else "_array"

        for i, sentence in enumerate(sentences):
            cache_key = f"{sentence}" + cache_key_suffix
            if cache_key in self.cache:
                results.append(self.cache[cache_key])
            else:
                sentences_to_encode.append(sentence)
                original_positions.append(i)
                results.append(None)  # Placeholder

        if sentences_to_encode:
            encoded = super().encode(sentences_to_encode, convert_to_tensor=convert_to_tensor, **kwargs)
            if not isinstance(encoded, list):
                encoded = [encoded[i] for i in range(len(sentences_to_encode))]

            for original_pos, sentence, emb in zip(original_positions, sentences_to_encode, encoded):
                cache_key = sentence + cache_key_suffix
                self.cache[cache_key] = emb
                results[original_pos] = emb

        if len(results) == 1:
            return results[0]

        if convert_to_tensor:
            return torch.stack(results)
        else:
            return np.array(results)
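`CachedSentenceTransformer` memoizes per-sentence embeddings so repeated validation prompts are only encoded once; only cache misses are batched through the real model. The caching skeleton in isolation, independent of `sentence_transformers` (the stand-in encoder just counts how often it is actually called):

```python
class CachingEncoder:
    """Memoize an expensive encode function per input string."""

    def __init__(self, encode_fn):
        self.encode_fn = encode_fn
        self.cache = {}
        self.misses = 0  # how many times the expensive path actually ran

    def encode(self, sentence):
        if sentence not in self.cache:
            self.misses += 1
            self.cache[sentence] = self.encode_fn(sentence)
        return self.cache[sentence]

# Stand-in for the real model: "embedding" is just the string length
enc = CachingEncoder(lambda s: [float(len(s))])
a = enc.encode("hello")
b = enc.encode("hello")  # served from cache, encode_fn not called again
```

Note the original keys its cache on sentence plus an `_tensor`/`_array` suffix, since the same text yields differently-typed results depending on `convert_to_tensor`.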
bitagent_subnet-main/bitagent/helpers/string_parse.py
ADDED
@@ -0,0 +1,34 @@
import re
import json

def extract_text_inside_quotes(s):
    match = re.search(r'"(.*?)"', s)
    if match:
        return match.group(1)  # Returns the text inside the first pair of double quotes
    else:
        return s  # Returns the original string if no double quotes are found

def parse_multiple_space_sep_json(json_str):
    """
    Parses a string containing multiple JSON objects separated by whitespace.

    {} {} -> [{},{}]
    """
    results = []
    start = 0
    json_str = json_str.strip()  # Remove leading and trailing whitespace
    while start < len(json_str):
        # Find the start of a JSON object
        start = json_str.find('{', start)
        if start == -1:  # No more JSON objects
            break
        try:
            obj, index = json.JSONDecoder().raw_decode(json_str[start:])
            results.append(obj)
            start += index
            while start < len(json_str) and json_str[start] in ' \t\n\r':  # Skip whitespace
                start += 1
        except json.JSONDecodeError:
            # Move start forward and try again
            start += 1
    return results
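`parse_multiple_space_sep_json` leans on `json.JSONDecoder.raw_decode`, which parses one JSON value from the front of a string and reports how many characters it consumed, so the scan can resume right after each object. A stripped-down version (without the error-recovery skip) showing that mechanism:

```python
import json

def parse_concatenated_json(s):
    """Parse whitespace-separated JSON objects: '{} {}' -> [{}, {}]."""
    decoder = json.JSONDecoder()
    results, start = [], 0
    while True:
        start = s.find('{', start)
        if start == -1:  # no more objects
            break
        # raw_decode returns (parsed_object, chars_consumed)
        obj, consumed = decoder.raw_decode(s[start:])
        results.append(obj)
        start += consumed  # resume scanning just past this object
    return results

parsed = parse_concatenated_json('{"a": 1} {"b": [2, 3]}')
```

This is what plain `json.loads` cannot do: it rejects any trailing data, while `raw_decode` tolerates it and tells you where it stopped.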
bitagent_subnet-main/bitagent/helpers/tool_parsing.py
ADDED
@@ -0,0 +1,92 @@
import bittensor as bt
from typing import Dict, Any, List
from pydantic import ValidationError
from bitagent.schemas.tool import Tool, ToolCall
from bitagent.schemas.chat import ChatMessage, messages_to_list

# Mapping from type strings to Python types
type_mapping = {
    "str": str,
    "int": int,
    "dict": Dict,
    "list": List,
    "float": float,
    "bool": bool,
    "string": str,
    "integer": int,
    "number": (int, float),  # Allow both int and float for 'number'
    "boolean": bool,
    "array": List,
    "dictionary": Dict,
    "object": Dict,  # Handle nested objects as dictionaries
}

def validate_tool_call(tool: Tool, tool_call: Dict[str, Any]) -> bool:
    try:
        # Validate the tool call structure
        tool_call_validated = ToolCall(**tool_call)

        # Check if the tool call name matches the tool name
        if tool_call_validated.name != tool.name:
            #bt.logging.warning(f"Tool name mismatch: {tool_call_validated.name} != {tool.name}")
            return False

        # The call must supply at least every required argument and no more than the full argument set
        num_supplied = len(tool_call_validated.arguments.keys())
        num_required = len([argname for argname, argdict in tool.arguments.items() if argdict['required']])
        if num_supplied < num_required or num_supplied > len(tool.arguments):
            #bt.logging.warning("Argument length mismatch")
            return False

        # Check arguments
        for arg_name, arg_schema in tool.arguments.items():
            if arg_schema['required'] and arg_name not in tool_call_validated.arguments:
                #bt.logging.warning(f"Missing required argument: {arg_name}")
                return False
            if arg_name in tool_call_validated.arguments:
                expected_type = type_mapping.get(arg_schema['type'])
                if expected_type is None:
                    #bt.logging.warning(f"Unknown type for argument {arg_name}: {arg_schema['type']}")
                    return False

                # Handle nested objects
                if 'is_ground_truth' in list(tool_call.keys()) and tool_call['is_ground_truth']:
                    arg_value = tool_call_validated.arguments[arg_name][-1]
                else:
                    arg_value = tool_call_validated.arguments[arg_name]
                # convert int to float if expected type is float
                if expected_type == float and type(arg_value) == int:
                    arg_value = float(arg_value)
                # convert str to float if expected type is float
                if expected_type == float and type(arg_value) == str:
                    arg_value = float(arg_value)
                # convert str to int if expected type is int
                if expected_type == int and type(arg_value) == str:
                    arg_value = int(arg_value)

                if expected_type == dict:
                    if not isinstance(arg_value, dict):
                        #bt.logging.warning(f"""Argument {arg_name} has incorrect type.
                        #                      Expected {expected_type}, got {type(arg_value)}""")
                        return False
                else:
                    if not isinstance(arg_value, expected_type):
                        #bt.logging.warning(f"""Argument {arg_name} has incorrect type.
                        #                      Expected {expected_type}, got {type(arg_value)}""")
                        return False

        # All checks passed
        return True
    except ValidationError as e:
        #bt.logging.warning(f"Validation error: {e}")
        return False

def find_first_tool_call(messages: List[ChatMessage]):
    for msg in messages:
        if msg.role == 'tool call':
            return msg

def find_msgs_before_tool_call(messages: List[ChatMessage]):
    result = []
    for msg in messages:
        if msg.role == 'tool call':
            break
        result.append(msg)
    return result
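`validate_tool_call` checks argument types by mapping schema type strings to Python types; `isinstance` accepts a tuple of types, which is how `"number"` admits both `int` and `float`. The core check in isolation (a trimmed mapping, not the repo's full one):

```python
TYPE_MAPPING = {
    "string": str,
    "integer": int,
    "number": (int, float),  # a tuple makes isinstance accept either type
    "boolean": bool,
    "array": list,
    "object": dict,
}

def argument_type_ok(schema_type, value):
    expected = TYPE_MAPPING.get(schema_type)
    if expected is None:
        return False  # unknown schema type string
    return isinstance(value, expected)

checks = (
    argument_type_ok("number", 3),      # int is an acceptable "number"
    argument_type_ok("number", 3.5),    # so is float
    argument_type_ok("string", 3),      # wrong type
)
```

One Python subtlety carried by the real code too: `isinstance(True, int)` is `True`, so a boolean value will pass an `"integer"` check unless it is rejected explicitly.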
bitagent_subnet-main/bitagent/miners/__init__.py
ADDED
@@ -0,0 +1,9 @@
__version__ = "1.0.0"
version_split = __version__.split(".")
__spec_version__ = (
    (1000 * int(version_split[0]))
    + (10 * int(version_split[1]))
    + (1 * int(version_split[2]))
)
from . import mock_miner
from . import default_miner
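`__spec_version__` packs major/minor/patch into one monotonically increasing integer (major×1000 + minor×10 + patch), which is how Bittensor compares on-chain spec versions. A quick check of the encoding (helper name illustrative):

```python
def spec_version(version: str) -> int:
    """Encode 'major.minor.patch' as major*1000 + minor*10 + patch."""
    major, minor, patch = (int(p) for p in version.split("."))
    return 1000 * major + 10 * minor + patch

v = spec_version("1.0.0")   # 1000
w = spec_version("2.3.4")   # 2034
```

The weights cap minor at 99 and patch at 9 before versions collide (e.g. "1.0.10" and "1.1.0" both encode to 1010), so bumps must respect those ranges.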
bitagent_subnet-main/bitagent/miners/default_miner.py
ADDED
@@ -0,0 +1,30 @@
# The MIT License (MIT)
# Copyright © 2024 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import bitagent
from bitagent.helpers.llms import llm

def miner_init(self, config=None):
    self.model_name = self.config.hf_model_name_to_run
    self.llm = llm

def miner_process(self, synapse: bitagent.protocol.QueryTask) -> bitagent.protocol.QueryTask:
    llm_response = self.llm(self, synapse.messages, synapse.tools, self.model_name)
    synapse.response = llm_response
    synapse.hf_run_model_name = self.model_name

    return synapse
bitagent_subnet-main/bitagent/miners/mock_miner.py
ADDED
|
@@ -0,0 +1,32 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import bitagent

def miner_init(self, config=None):

    def llm():
        return {"role": "assistant", "content": "I'm the LLM response, b/c I'm mock miner - rahhh!"}

    self.llm = llm

def miner_process(self, synapse: bitagent.protocol.QueryTask) -> bitagent.protocol.QueryTask:
    llm_response = self.llm()
    synapse.response = llm_response

    return synapse
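The mock miner keeps the real miner's `miner_init`/`miner_process` shape but swaps the LLM call for a canned response, which makes the plumbing testable without bittensor or a model. A minimal stand-alone sketch of that pattern, using `SimpleNamespace` as a hypothetical stand-in for the miner neuron and the `QueryTask` synapse:

```python
from types import SimpleNamespace

# Hypothetical stand-ins for the real neuron and QueryTask synapse.
def miner_init(self):
    def llm():
        return {"role": "assistant", "content": "mock response"}
    self.llm = llm

def miner_process(self, synapse):
    # Same shape as the real miner: fill in the response and echo the synapse back.
    synapse.response = self.llm()
    return synapse

miner = SimpleNamespace()
miner_init(miner)
synapse = SimpleNamespace(response=None)
result = miner_process(miner, synapse)
print(result.response["content"])  # mock response
```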
bitagent_subnet-main/bitagent/protocol.py
ADDED
|
@@ -0,0 +1,66 @@
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

from typing import Optional, List
import bittensor as bt
from bitagent.schemas.chat import ChatMessage
from bitagent.schemas.tool import Tool

class QueryTask(bt.Synapse):
    """
    A simple BitAgent protocol representation that uses bt.Synapse as its base.
    This protocol handles validator-request and miner-response communication.

    Attributes:
    - messages: a list of ChatMessage (see bitagent/schemas) - used for every task except Tool Gen
    - tools: a list of tools ({name, description, arguments}) as a List of dicts
    - response: string (e.g., tool_name(arg1=value1, arg2=value2))
    - hf_run_model_name: string naming the HF model the miner is running
    """

    # Required request input, filled by the sending dendrite caller.
    tools: List[Tool] = []
    messages: List[ChatMessage] = []

    # Optional request output, filled by the receiving axon.
    response: str = ""
    hf_run_model_name: str = "N/A"
    competition_version: Optional[str] = None

class QueryResult(bt.Synapse):
    """
    Provides feedback on the last task request from the validator to inform the miner of its performance.
    This is a one-way request and does not require a response.
    Attributes:
    - results: string of results to be printed to the logs
    """
    results: str

class IsAlive(bt.Synapse):
    response: bool = False

# The validator calls this to get the name of the HF model this miner hosts on HF
class GetHFModelName(bt.Synapse):
    hf_model_name: Optional[str] = None

# The validator calls this to have the miner set the TOP HF model for this miner to run
class SetHFModelName(bt.Synapse):
    hf_model_name: str

class GetHFRunModelName(bt.Synapse):
    hf_run_model_name: Optional[str] = None
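The `response` field carries the miner's answer as a flat call string of the form `tool_name(arg1=value1, arg2=value2)`. A small illustrative helper (not part of the subnet code; the name and exact quoting are assumptions) that renders a tool-call dict into that shape:

```python
def format_tool_call(name: str, arguments: dict) -> str:
    """Render a tool call as tool_name(arg1=value1, ...) - illustrative only."""
    args = ", ".join(f"{k}={v!r}" for k, v in arguments.items())
    return f"{name}({args})"

print(format_tool_call("get_weather", {"city": "Paris", "days": 3}))
# get_weather(city='Paris', days=3)
```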
bitagent_subnet-main/bitagent/schemas/chat.py
ADDED
|
@@ -0,0 +1,39 @@
from strenum import StrEnum
from typing import Dict, List
from pydantic import BaseModel, Field

class ChatRole(StrEnum):
    """One of SYSTEM|ASSISTANT|USER|TOOL_CALL|TOOL_RESPONSE to identify who the message is coming from."""

    SYSTEM = "system"
    ASSISTANT = "assistant"
    USER = "user"
    TOOL_CALL = "tool call"
    TOOL_RESPONSE = "tool response"


class ChatMessage(BaseModel):
    """One of a list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's message."""

    role: ChatRole = Field(
        title="One of the ChatRoles to identify who the message is coming from.",
    )
    content: str | dict | list = Field(  # TODO: the dict/list was added to support json-loading the function calls; this should maybe be done inside a ToolMessage type
        title="Contents of the chat message.",
    )

    @classmethod
    def from_dict(cls, data: Dict[str, str]):
        """Create a ChatMessage object from a dictionary."""
        return cls(role=ChatRole(data['role']), content=data['content'])

    def to_dict(self) -> Dict[str, str]:
        return {"role": self.role.value, "content": self.content}


def messages_from_list(data_list: List[Dict[str, str]]):
    messages = [ChatMessage.from_dict(item) for item in data_list]
    return messages

def messages_to_list(messages: List[ChatMessage]):
    return [msg.to_dict() for msg in messages]
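The `from_dict`/`to_dict` pair gives a lossless round trip between `ChatMessage` objects and plain dicts. A dependency-free sketch of the same round trip using only the standard library (a str-valued `Enum` instead of `strenum.StrEnum`, a `dataclass` instead of pydantic):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict

class ChatRole(str, Enum):
    SYSTEM = "system"
    ASSISTANT = "assistant"
    USER = "user"
    TOOL_CALL = "tool call"
    TOOL_RESPONSE = "tool response"

@dataclass
class ChatMessage:
    role: ChatRole
    content: str

    @classmethod
    def from_dict(cls, data: Dict[str, str]) -> "ChatMessage":
        return cls(role=ChatRole(data["role"]), content=data["content"])

    def to_dict(self) -> Dict[str, str]:
        return {"role": self.role.value, "content": self.content}

data = [{"role": "user", "content": "hi"}, {"role": "tool call", "content": "{}"}]
messages = [ChatMessage.from_dict(d) for d in data]
assert [m.to_dict() for m in messages] == data  # lossless round trip
```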
bitagent_subnet-main/bitagent/schemas/tool.py
ADDED
|
@@ -0,0 +1,21 @@
from pydantic import BaseModel
from typing import Dict, Any, List

class Tool(BaseModel):
    """
    Attributes:
    - name: str
    - description: str
    - arguments: dict where the key is the name of the argument and the value is a dict containing the keys (required: bool, type: str, description: str)
    """
    name: str
    description: str
    arguments: Dict[str, Any]

    def to_dict(self):
        return self.dict()


class ToolCall(BaseModel):
    name: str
    arguments: Dict[str, Any]
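Each value in `arguments` is itself a small schema (`required`, `type`, `description`). A plain-dict sketch of such a tool, plus an illustrative check that a call supplies every required argument (the `missing_required` helper is an assumption for demonstration, not subnet code):

```python
# A tool described with the same argument schema the Tool model expects.
weather_tool = {
    "name": "get_weather",
    "description": "Fetch a weather forecast.",
    "arguments": {
        "city": {"required": True, "type": "str", "description": "City name"},
        "days": {"required": False, "type": "int", "description": "Forecast length"},
    },
}

def missing_required(tool: dict, call_args: dict) -> list:
    # Return the names of required arguments absent from the call.
    return [name for name, spec in tool["arguments"].items()
            if spec["required"] and name not in call_args]

print(missing_required(weather_tool, {"days": 2}))    # ['city']
print(missing_required(weather_tool, {"city": "Oslo"}))  # []
```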
bitagent_subnet-main/bitagent/tasks/__init__.py
ADDED
|
@@ -0,0 +1,3 @@
from .constants import *
from .task import *
from .tool_call_task import *
bitagent_subnet-main/bitagent/tasks/constants.py
ADDED
|
@@ -0,0 +1,7 @@
TASK_FREQUENCY = {
    "tool_call": 1,
}

TASK_WEIGHTS = {
    "tool_call": 0.05,
}
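`TASK_FREQUENCY` feeds `random.choices` in `get_random_task`: each key is a task name and each value its relative sampling weight. A minimal sketch of that selection (the second task name and the 3:1 weights are illustrative, not real subnet values):

```python
import random

TASK_FREQUENCY = {"tool_call": 3, "summarize": 1}  # illustrative weights

names = list(TASK_FREQUENCY.keys())
weights = list(TASK_FREQUENCY.values())

random.seed(0)
picks = [random.choices(names, weights=weights)[0] for _ in range(1000)]
# "tool_call" should be drawn roughly 3x as often as "summarize"
print(picks.count("tool_call") > picks.count("summarize"))  # True
```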
bitagent_subnet-main/bitagent/tasks/task.py
ADDED
|
@@ -0,0 +1,105 @@
# The MIT License (MIT)
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import random
import bittensor as bt
from pprint import pformat
from typing import List, Tuple
from bitagent.protocol import QueryTask
from bitagent.schemas.tool import Tool
from bitagent.tasks import TASK_FREQUENCY
from bitagent.criteria import Criterion, default_criteria
from bitagent.schemas.chat import ChatMessage, messages_to_list

# Task()
# combines criteria with the QueryTask (messages, tools) to form a task the miner is evaluated against
class Task():
    criteria: List[Criterion]
    synapse: QueryTask

    def __init__(self,
                 name: str,
                 weight: float = 0.05,
                 desc: str = "",
                 timeout: int = 12,
                 tools: List[Tool] = [],
                 messages: List[ChatMessage] = [],
                 criteria: List[Criterion] = default_criteria,
                 correct_answer: str = None
                 ) -> None:

        self.name = name
        self.mode = "online"
        self.weight = weight
        self.desc = desc
        self.timeout = timeout
        self.criteria = criteria
        self.messages = messages
        self.synapse = QueryTask(messages=messages, tools=tools)
        self.correct_answer = correct_answer

    def reward(self, validator, synapse: QueryTask) -> Tuple[float, float, List[str], str]:
        total_score = 0.0
        total_possible = 0.0
        results = []
        for criterion in self.criteria:
            score, max_score, result = criterion.evaluate(self, validator, synapse)
            total_score += score
            total_possible += max_score
            results.append(result)
        if self.correct_answer:
            correct_answer = self.correct_answer
        else:
            correct_answer = "N/A"
        return total_score, total_possible, results, correct_answer

    def __repr__(self):
        return pformat(vars(self), indent=4, width=1)

    def toJSON(self):
        return {
            "weight": self.weight,
            "name": self.name,
            "mode": self.mode,
            "desc": self.desc,
            "messages": messages_to_list(self.messages) if isinstance(self.messages, list) else [],
            "tools": [tool.to_dict() for tool in self.synapse.tools],
            "timeout": self.timeout,
        }

# evaluate task
def evaluate_task(validator, task: Task, synapse: bt.Synapse) -> Tuple[float, float, List[str], str]:
    return task.reward(validator, synapse)

# get a random task
def get_random_task(validator, offline=False) -> Task:
    from bitagent.tasks.tool_call_task import ToolCallTask
    task_names = list(TASK_FREQUENCY.keys())
    task_frequencies = list(TASK_FREQUENCY.values())
    choice = random.choices(task_names, weights=task_frequencies)[0]

    for _ in range(100):
        try:
            match choice:
                case "tool_call":
                    return ToolCallTask(validator=validator, name="Responds with correct function call", offline=offline)

        except Exception as e:
            #bt.logging.warning(f'Error getting task (name {choice}): ', e)
            #bt.logging.warning(traceback.format_exc())
            pass

    raise Exception("Failed to get task after 100 attempts")
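`Task.reward` simply folds each criterion's `(score, max_score, result)` triple into running totals. A stand-alone sketch of that aggregation, with two hypothetical criterion functions standing in for the real `Criterion.evaluate`:

```python
import json
from typing import Callable, List, Tuple

# Hypothetical criteria: each returns (score, max_score, result message).
def is_valid_json(response: str) -> Tuple[float, float, str]:
    try:
        json.loads(response)
        return 1.0, 1.0, "valid json"
    except ValueError:
        return 0.0, 1.0, "invalid json"

def is_short(response: str) -> Tuple[float, float, str]:
    ok = len(response) < 100
    return (1.0 if ok else 0.0), 1.0, "short" if ok else "too long"

def reward(criteria: List[Callable], response: str):
    total, possible, results = 0.0, 0.0, []
    for criterion in criteria:
        score, max_score, result = criterion(response)
        total += score
        possible += max_score
        results.append(result)
    return total, possible, results

total, possible, results = reward([is_valid_json, is_short], '{"name": "get_weather"}')
print(total, possible)  # 2.0 2.0
```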
bitagent_subnet-main/bitagent/tasks/tool_call_task.py
ADDED
|
@@ -0,0 +1,191 @@
# The MIT License (MIT)
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import ast
import json
import random
import bittensor as bt
from bitagent.protocol import QueryTask
from bitagent.tasks import Task
from bitagent.tasks import TASK_WEIGHTS
from bitagent.schemas.chat import messages_to_list
from bitagent.datasources.tools import ToolCallData
from bitagent.helpers.tool_parsing import validate_tool_call, find_msgs_before_tool_call, find_first_tool_call
from bitagent.criteria import default_criteria, tool_call_criteria, irrelevant_tool_call_criteria

REWRITE_TOOL_USER_PROMPT = """You rewrite questions to make sense when paired with a function call.
The rewritten question will need to be changed to match the argument parameters and values relative to the function name.
You should change the phrasing of the question to be different while keeping it aligned with the function name and arguments.
The capitalization of your user prompt rephrasals should match the exact case of what is expected in the function call.
Your response should be the rewritten question only.\n
Function call:\n`{tool_call}`\n
Question: {user}\n
Modified Question: """

class ToolCallTask(Task):
    def __init__(
        self,
        validator,
        name: str,
        desc: str = "",
        offline: bool = False,
    ):
        super().__init__(name=name, desc=desc)
        self.validator = validator
        self.timeout = 15.0
        self.name += " - Tool Call"
        self.weight = TASK_WEIGHTS["tool_call"]

        if offline:
            self.mode = "offline"
        messages = None
        for _ in range(10):
            try:
                messages, tools, data = self.generate_task_data()
                expected_messages = messages_to_list(data.messages)
                expected_tool_call_messages = [em for em in expected_messages if em['role'] == 'tool call']
                if messages[0].role == 'system':
                    # try again - skip tasks with system prompts
                    continue
                if len(expected_tool_call_messages) > 0:
                    expected_tool_call_message = expected_tool_call_messages[0]['content']
                else:
                    #bt.logging.debug(f"Skipping - no tool call message found in expected messages: {expected_messages}")
                    continue

                if type(expected_tool_call_message) == str:
                    expected_tool_call = json.loads(expected_tool_call_message)
                else:
                    expected_tool_call = expected_tool_call_message
                self.criteria = default_criteria + tool_call_criteria(expected_response=expected_tool_call)

                # 75% of the time do a tool call task with a relevant tool; the other times do a tool call with no valid tool option
                # irrelevant tool call
                if "is_ground_truth" not in expected_tool_call_message and bool(random.random() < 0.25) and len(tools) > 1:
                    # remove the real tool
                    expected_tool_call_message_json = json.loads(expected_tool_call_message)
                    if isinstance(expected_tool_call_message_json, str):
                        expected_tool_call_message_json = json.loads(expected_tool_call_message_json)
                    tools = [t for t in tools if t.name != expected_tool_call_message_json['name']]
                    self.criteria = default_criteria + irrelevant_tool_call_criteria()

                break

            except Exception as e:
                bt.logging.debug(f'Exception getting new task - {e} - you may need to CHECK YOUR vLLM docker instance')
                pass
        if not messages:
            raise Exception("Failed to generate task data 10 times")
        self.messages = messages
        self.synapse = QueryTask(messages=messages, tools=tools)

    def generate_task_data(self) -> ToolCallData:
        data: ToolCallData = next(self.validator.tool_dataset)

        tool_call = find_first_tool_call(data.messages)
        if not tool_call:
            # no tool call in the messages, so skip
            raise Exception(f"Skipping - no tool call in the messages: {data.messages}")

        # increase the number of tools
        for _ in range(random.randint(2, 4)):
            # filter out, by name, the tools that are already in data.tools
            new_tools = [t for t in next(self.validator.tool_dataset).tools if t.name not in [dt.name for dt in data.tools]]
            data.tools = data.tools + new_tools

        # remove all the messages after the first tool call, keeping the assistant
        # this reduces the number of messages needing rewording
        messages = data.messages
        filtered_msgs = []
        seen_tool_call = False
        for msg in messages:
            filtered_msgs.append(msg)
            if seen_tool_call:  # break after, so the assistant response is included
                break
            if msg.role == 'tool call':
                seen_tool_call = True
        data.messages = filtered_msgs

        user = data.messages[0].content

        count = 0
        while count < 10:
            count += 1
            if find_first_tool_call(data.messages):
                tool_call = find_first_tool_call(data.messages).content
                try:  # check that the tool call can be loaded, and that it's valid
                    try:
                        if isinstance(tool_call, str):
                            new_tool_call = json.dumps(json.loads(tool_call))
                            tool_call_dict = json.loads(new_tool_call)
                        elif isinstance(tool_call, dict):
                            new_tool_call = tool_call
                            tool_call_dict = tool_call
                        else:
                            raise Exception(f'tool call is not a string or dict: {tool_call}')

                    except Exception as e:
                        # this usually happens when the json is not valid (single vs double quotes)
                        new_tool_call = json.dumps(ast.literal_eval(tool_call))
                        tool_call_dict = ast.literal_eval(tool_call)
                    # check through all the tools that will be passed to the miner
                    # find the tool that is THE tool expected to be returned
                    # since it has been rewritten, validate that the tool call is still valid/comparable
                    #for tool in data.tools:
                    #    if tool.name == tool_call_dict['name']:
                    #        if not validate_tool_call(tool, tool_call_dict):
                    #            raise Exception('The rewritten tool call is not valid')
                    #bt.logging.debug(f'finished validating tool call: {tool_call_dict}')
                except Exception as e:
                    bt.logging.error(f'An error occurred while rewriting the tool call {e} - you may need to CHECK YOUR vLLM docker instance')
                    count = 11
                    continue

                rw_prompt = REWRITE_TOOL_USER_PROMPT.format(tool_call=new_tool_call, user=user)
                new_user = self.validator.llm([{"role": "user", "content": rw_prompt}], max_new_tokens=1000, temperature=1)
                if not self.check_rewrite_alignment(user, new_user):
                    raise Exception(f"User rewrite is not in alignment\nOriginal: {user}\n Rewrite: {new_user}")

                data.messages[0].content = new_user

                data = ToolCallData(messages=data.messages, tools=data.tools)
                messages_before_call = find_msgs_before_tool_call(data.messages)

            else:
                # no tool call in the messages, so skip
                raise Exception(f"Skipping - guess there was no tool call in the messages: {data.messages}")

            all_tools = data.tools
            random.shuffle(all_tools)
            return messages_before_call, all_tools, data

        raise Exception("Skipping - while loop ended without a tool call task")

    def check_rewrite_alignment(self, original: str, rewrite: str) -> bool:
        score = self.validator.measure_relevance_of_texts(original, rewrite)

        if score > 0.98:
            return False

        if score < 0.2:
            return False

        if len(rewrite) > 2 * len(original):
            return False

        if len(rewrite) < 0.25 * len(original):
            return False

        return True
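The nested try in `generate_task_data` tolerates tool calls serialized with single quotes, which are invalid JSON but valid Python literals, by falling back to `ast.literal_eval`. A self-contained sketch of that fallback (the `load_tool_call` helper name is illustrative):

```python
import ast
import json

def load_tool_call(raw: str) -> dict:
    """Parse a tool-call string, tolerating Python-style single quotes."""
    try:
        return json.loads(raw)
    except ValueError:
        # single-quoted dicts are not valid JSON but are valid Python literals
        return ast.literal_eval(raw)

print(load_tool_call('{"name": "get_weather", "arguments": {"city": "Paris"}}')["name"])
print(load_tool_call("{'name': 'get_weather', 'arguments': {'city': 'Paris'}}")["name"])
# both print: get_weather
```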
bitagent_subnet-main/bitagent/validator/__init__.py
ADDED
|
@@ -0,0 +1,10 @@
__version__ = "1.0.15"
version_split = __version__.split(".")
__spec_version__ = (
    (1000 * int(version_split[0]))
    + (10 * int(version_split[1]))
    + (1 * int(version_split[2]))
)
from . import reward
from .forward import forward
from .initiation import initiate_validator
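The spec version packs `major.minor.patch` into one integer as `1000*major + 10*minor + patch`, so `1.0.15` becomes `1015`. A quick check of that encoding:

```python
def spec_version(version: str) -> int:
    # 1000*major + 10*minor + 1*patch, mirroring __spec_version__ above
    major, minor, patch = (int(p) for p in version.split("."))
    return 1000 * major + 10 * minor + patch

print(spec_version("1.0.15"))  # 1015
```

Note that with a weight of only 10 on the minor field, the encoding can collide once the patch number reaches 10: `1.0.15` and `1.1.5` both map to 1015.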
bitagent_subnet-main/bitagent/validator/constants.py
ADDED
|
@@ -0,0 +1,5 @@
COMPETITION_PREVIOUS_PREFIX = 1  # the previous competition prefix, kept so old scores can be tracked when we swap to a new prefix
COMPETITION_PREFIX = 1
DEPLOYED_DATE = "2024-12-10"
COMPETITION_LENGTH_DAYS = 7
TESTNET_COMPETITION_LENGTH_DAYS = 1.0/24.0  # every hour
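With a fixed `DEPLOYED_DATE` and a 7-day window, the current competition number can be derived from the elapsed days. A sketch of that arithmetic (the `competition_index` helper is illustrative, not subnet code):

```python
from datetime import date

DEPLOYED_DATE = "2024-12-10"
COMPETITION_LENGTH_DAYS = 7

def competition_index(today: date) -> int:
    # how many full 7-day windows have elapsed since deployment
    deployed = date.fromisoformat(DEPLOYED_DATE)
    return (today - deployed).days // COMPETITION_LENGTH_DAYS

print(competition_index(date(2024, 12, 10)))  # 0 (first window)
print(competition_index(date(2024, 12, 24)))  # 2 (third window)
```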
bitagent_subnet-main/bitagent/validator/forward.py
ADDED
|
@@ -0,0 +1,129 @@
| 1 |
+
# The MIT License (MIT)
|
| 2 |
+
# Copyright © 2023 Yuma Rao
|
| 3 |
+
# Copyright © 2023 RogueTensor
|
| 4 |
+
|
| 5 |
+
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
|
| 6 |
+
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
|
| 7 |
+
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
|
| 8 |
+
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
|
| 9 |
+
|
| 10 |
+
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
|
| 11 |
+
# the Software.
|
| 12 |
+
|
| 13 |
+
# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
|
| 14 |
+
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
| 15 |
+
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
| 16 |
+
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
|
| 17 |
+
# DEALINGS IN THE SOFTWARE.
|
| 18 |
+
|
| 19 |
+
import asyncio
|
| 20 |
+
import numpy as np
|
| 21 |
+
import bittensor as bt
|
| 22 |
+
from bitagent.protocol import QueryTask
|
| 23 |
+
from common.utils.uids import get_alive_uids
|
| 24 |
+
from common.utils.uids import get_random_uids
|
| 25 |
+
from bitagent.tasks.task import get_random_task
|
| 26 |
+
from bitagent.validator.offline_task import offline_task
|
| 27 |
+
from bitagent.validator.reward import process_rewards_update_scores_and_send_feedback
|
| 28 |
+
|
| 29 |
+
async def forward(self, synapse: QueryTask=None) -> QueryTask:
|
| 30 |
+
"""
|
| 31 |
+
The forward function is called by the validator every time step.
|
| 32 |
+
It is responsible for querying the network and scoring the responses.
|
| 33 |
+
|
| 34 |
+
Args:
|
| 35 |
+
self (:obj:`bittensor.neuron.Neuron`): The neuron object which contains all the necessary state for the validator.
|
| 36 |
+
|
| 37 |
+
"""
|
| 38 |
+
# complete this first so it's cached for both ONLINE and OFFLINE
|
| 39 |
+
get_alive_uids(self)
|
| 40 |
+
|
| 41 |
+
# ###########################################################
|
| 42 |
+
# OFFLINE TASKING
|
| 43 |
+
# ###########################################################
|
| 44 |
+
# if all miners have been processed for this competition, then don't run offline mode
|
| 45 |
+
self.update_competition_numbers()
|
| 46 |
+
|
| 47 |
+
wandb_data = {
|
| 48 |
+
"task_name": "offline_model_check",
|
| 49 |
+
"task_mode": "offline",
|
| 50 |
+
"validator_uid": self.metagraph.hotkeys.index(self.wallet.hotkey.ss58_address),
|
| 51 |
+
"val_spec_version": self.spec_version,
|
| 52 |
+
"highest_score_for_miners_with_this_validator": self.scores.max(),
|
| 53 |
+
"median_score_for_miners_with_this_validator": np.median(self.scores),
|
| 54 |
+
"highest_offline_score_for_miners_with_this_validator": self.offline_scores[self.competition_version].max(),
|
| 55 |
+
"median_offline_score_for_miners_with_this_validator": np.median(self.offline_scores[self.competition_version]),
|
| 56 |
+
"average_offline_score_for_miners_with_this_validator": np.mean(self.offline_scores[self.competition_version]),
|
| 57 |
+
"prior_highest_offline_score_for_miners_with_this_validator": self.offline_scores[self.previous_competition_version].max(),
|
| 58 |
+
"prior_median_offline_score_for_miners_with_this_validator": np.median(self.offline_scores[self.previous_competition_version]),
|
| 59 |
+
"prior_average_offline_score_for_miners_with_this_validator": np.mean(self.offline_scores[self.previous_competition_version]),
|
| 60 |
+
"competition_version": self.competition_version,
|
| 61 |
+
}
|
| 62 |
+
|
| 63 |
+
if self.config.subtensor.network != "test":
|
| 64 |
+
if len(self.miners_left_to_score) == 0:
|
| 65 |
+
+        if self.offline_status != "complete":
+            self.offline_status = "complete"
+            wandb_data['offline_status'] = self.offline_status
+            wandb_data['num_miners_left_to_score'] = len(self.miners_left_to_score)
+            self.log_event(wandb_data)
+            wandb_data.pop('offline_status')
+            wandb_data.pop('num_miners_left_to_score')
+            self.running_offline_mode = False
+            #bt.logging.debug(f"OFFLINE: No miners left to score for competition {self.competition_version}")
+            pass
+        elif not self.running_offline_mode:
+            bt.logging.debug(f"OFFLINE: Starting offline mode for competition {self.competition_version}")
+            #bt.logging.debug(f"OFFLINE: Miners left to score: {self.miners_left_to_score}")
+            self.running_offline_mode = True
+            self.offline_status = "starting"
+            wandb_data['offline_status'] = self.offline_status
+            wandb_data['num_miners_left_to_score'] = len(self.miners_left_to_score)
+            wandb_data['miners_left_to_score'] = self.miners_left_to_score
+            self.log_event(wandb_data)
+            wandb_data.pop('num_miners_left_to_score')
+            wandb_data.pop('miners_left_to_score')
+            wandb_data.pop('offline_status')
+            asyncio.create_task(offline_task(self, wandb_data))
+            self.running_offline_mode = False
+        elif self.running_offline_mode:
+            #bt.logging.debug(f"OFFLINE: Already running offline mode for competition {self.competition_version}")
+            #bt.logging.debug(f"OFFLINE: Miners left to score: {self.miners_left_to_score}")
+            if self.offline_status != "running":
+                self.offline_status = "running"
+                wandb_data['offline_status'] = self.offline_status
+                wandb_data['num_miners_left_to_score'] = len(self.miners_left_to_score)
+                self.log_event(wandb_data)
+                wandb_data.pop('num_miners_left_to_score')
+                wandb_data.pop('offline_status')
+            pass
+    else:
+        bt.logging.debug("OFFLINE: Skipping offline for testnet")
+
+    # ###########################################################
+    # ONLINE TASKING
+    # ###########################################################
+    try:
+        bt.logging.debug(f"ONLINE: Starting online run")
+        # check a random sample of miners in online mode
+        bt.logging.debug(f"ONLINE: Getting random miner uids")
+        miner_uids = get_random_uids(self, min(self.config.neuron.sample_size, self.metagraph.n.item()))
+        bt.logging.debug(f"ONLINE: Getting random task")
+        task = get_random_task(self)
+        task.mode = "online"
+
+        # send the task to the miners
+        bt.logging.debug(f"ONLINE: Sending task to miners")
+        responses = self.dendrite.query(
+            axons=[self.metagraph.axons[uid] for uid in miner_uids],
+            synapse=task.synapse,
+            deserialize=False,
+            timeout=task.timeout,
+        )
+
+        bt.logging.debug(f"ONLINE: Evaluating responses")
+        await asyncio.create_task(process_rewards_update_scores_and_send_feedback(self, task=task, responses=responses, miner_uids=miner_uids))
+        bt.logging.debug(f"ONLINE: Evaluation complete")
+
+    except Exception as e:
+        bt.logging.debug(f"Error in forward: {e}")
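The online tasking block above fans a single task out to a random sample of miners with a shared timeout, then scores whatever came back. A minimal self-contained sketch of that fan-out-with-timeout pattern is below; the `query_miners` helper and the `fast`/`slow` coroutines are illustrative stand-ins, not part of this diff or of the bittensor dendrite API:

```python
import asyncio

async def query_miners(axon_calls, timeout):
    """Fan one task out to several miners concurrently.

    Each element of `axon_calls` is an async callable standing in for one
    miner's axon. A miner that exceeds `timeout` yields None instead of
    raising, mirroring best-effort response collection.
    """
    async def one(call):
        try:
            return await asyncio.wait_for(call(), timeout)
        except asyncio.TimeoutError:
            return None  # slow miner: treated as a non-response

    return await asyncio.gather(*(one(c) for c in axon_calls))

async def main():
    async def fast():
        return "tool_call"
    async def slow():
        await asyncio.sleep(0.2)  # exceeds the 0.05s budget below
        return "late"
    return await query_miners([fast, slow], timeout=0.05)

results = asyncio.run(main())
print(results)  # ['tool_call', None]
```

The key design point the real code shares: one timeout applies per miner, and a timed-out miner is recorded (and can be penalized) rather than aborting the whole round.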
bitagent_subnet-main/bitagent/validator/initiation.py
ADDED
@@ -0,0 +1,151 @@
+# The MIT License (MIT)
+# Copyright © 2023 RogueTensor
+
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
+# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
+# the Software.
+
+# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
+# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+# DEALINGS IN THE SOFTWARE.
+
+import os
+import copy
+import wandb
+import shutil
+import bittensor as bt
+from datetime import datetime
+from bitagent.datasources import ToolDataset
+from langchain_openai import ChatOpenAI
+from sentence_transformers import util
+from bitagent.helpers.sbert import CachedSentenceTransformer
+
+# setup validator with wandb
+# clear out the old wandb dirs if possible
+def initiate_validator(self):
+
+    def init_wandb(self, reinit=False):
+        uid = self.metagraph.hotkeys.index(self.wallet.hotkey.ss58_address)
+        spec_version = self.spec_version
+
+        """Starts a new wandb run."""
+        tags = [
+            self.wallet.hotkey.ss58_address,
+            str(spec_version),
+            f"netuid_{self.config.netuid}",
+        ]
+
+        wandb_config = {
+            key: copy.deepcopy(self.config.get(key, None))
+            for key in ("neuron", "reward", "netuid", "wandb")
+        }
+        wandb_config["neuron"].pop("full_path", None)
+        wandb_config["validator_uid"] = uid
+
+        if self.config.netuid == 20:
+            project_name = "mainnet"
+        elif self.config.netuid == 76:
+            project_name = "testnet" # for TN76
+        else:
+            self.wandb = "errored"
+            return # must be using a local netuid, no need to log to wandb
+
+        try:
+            self.wandb = wandb.init(
+                anonymous="allow",
+                reinit=reinit,
+                entity='bitagentsn20',
+                project=project_name,
+                config=wandb_config,
+                dir=self.config.neuron.full_path,
+                tags=tags,
+                resume='allow',
+                name=f"{uid}-{spec_version}-{datetime.today().strftime('%Y-%m-%d')}",
+            )
+            bt.logging.success(f"Started a new wandb run <blue> {self.wandb.name} </blue>")
+        except Exception as e:
+            self.wandb = "errored"
+            bt.logging.error("Could not connect to wandb ... continuing without it ...")
+            bt.logging.error(f"WANDB Error: {e}")
+
+    def log_event(event):
+        #bt.logging.debug("Writing to WandB ....")
+
+        if not self.config.wandb.on:
+            return
+
+        if not getattr(self, "wandb", None):
+            clear_wandb_dir(self)
+            init_wandb(self)
+
+        if self.wandb == "errored":
+            return
+
+        # Log the event to wandb.
+        self.wandb.log(event)
+        #bt.logging.debug("Logged event to WandB ....")
+
+    self.log_event = log_event
+
+    initiate_validator_local(self)
+
+def clear_wandb_dir(self):
+    wandb_path = os.path.join(self.config.neuron.full_path, "wandb")
+    if os.path.exists(wandb_path):
+        bt.logging.info(f"Clearing WandB directory of old runs")
+        for item in os.listdir(wandb_path):
+            item_path = os.path.join(wandb_path, item)
+            try:
+                if os.path.islink(item_path): # If it's a symbolic link
+                    os.unlink(item_path) # Remove the symlink
+                elif os.path.isfile(item_path): # If it's a regular file
+                    os.remove(item_path)
+                elif os.path.isdir(item_path): # If it's a directory
+                    shutil.rmtree(item_path)
+            except Exception as e:
+                bt.logging.warning(f"Failed to remove {item_path}: {e}")
+        bt.logging.info(f"Cleared WandB directory of old runs")
+
+# provide some capabilities to the task API (LLM, cossim)
+def initiate_validator_local(self):
+    #bt.logging.info("Initializing Validator - this may take a while (downloading data and models).")
+    self.tool_dataset = ToolDataset()
+    #bt.logging.debug("Initializing Validator - this may take a while (downloading data and models) - loading model ...")
+    self.sentence_transformer = CachedSentenceTransformer('BAAI/bge-small-en-v1.5')
+
+    def llm(messages, max_new_tokens = 160, temperature=0.7):
+        if isinstance(messages, str):
+            messages = [{"role":"user","content":messages}]
+        llm = ChatOpenAI(
+            openai_api_key=self.config.openai_api_key,
+            openai_api_base=self.config.openai_api_base,
+            model_name=self.config.validator_model_name,
+            max_tokens = max_new_tokens,
+            temperature = temperature,
+        )
+        return llm.invoke(messages).content.strip()
+
+    self.llm = llm
+
+    #bt.logging.debug("Initializing Validator - this may take a while (downloading data and models) - finished loading model")
+
+    # code to measure the relevance of the response to the question
+    def measure_relevance_of_texts(text1, text2):
+        # Encode the texts to get the embeddings
+        if type(text2) == list:
+            embeddings = self.sentence_transformer.encode([text1,*text2], convert_to_tensor=True, show_progress_bar=False)
+        else:
+            embeddings = self.sentence_transformer.encode([text1,text2], convert_to_tensor=True, show_progress_bar=False)
+        # Compute the cosine similarity between the embeddings
+        if type(text2) == list:
+            return util.pytorch_cos_sim(embeddings[0], embeddings[1:])[0]
+        else:
+            return float(util.pytorch_cos_sim(embeddings[0], embeddings[1:])[0][0])
+
+    self.measure_relevance_of_texts = measure_relevance_of_texts
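`measure_relevance_of_texts` above embeds the texts with a sentence transformer and scores them by cosine similarity, returning a tensor of scores when the second argument is a list and a single float otherwise. The same branching logic can be sketched in pure Python over pre-computed embedding vectors; the `cosine_sim` and `measure_relevance` names below are illustrative, and in the real code the vectors would come from `sentence_transformer.encode`:

```python
import math

def cosine_sim(u, v):
    # cos(u, v) = u·v / (|u| |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def measure_relevance(vec1, vec2):
    # list of candidate vectors -> list of scores; single vector -> one float,
    # mirroring the list/scalar branches in measure_relevance_of_texts
    if isinstance(vec2, list) and vec2 and isinstance(vec2[0], list):
        return [cosine_sim(vec1, v) for v in vec2]
    return cosine_sim(vec1, vec2)

print(measure_relevance([1.0, 0.0], [1.0, 0.0]))                  # 1.0 (identical direction)
print(measure_relevance([1.0, 0.0], [0.0, 1.0]))                  # 0.0 (orthogonal)
print(measure_relevance([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))    # [1.0, 0.0]
```

Returning a vector of scores for the list case is what lets the validator rank many candidate responses against one reference in a single encode call.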
bitagent_subnet-main/bitagent/validator/offline_task.py
ADDED
@@ -0,0 +1,425 @@
+import os
+import time
+import shutil
+import psutil
+import asyncio
+import requests
+import bittensor as bt
+
+from sglang.utils import terminate_process
+from bitagent.helpers.llms import llm
+from huggingface_hub import model_info
+from common.utils.uids import get_alive_uids
+from bitagent.protocol import GetHFModelName
+from bitagent.tasks.task import get_random_task
+from common.utils.shell import execute_shell_command
+from bitagent.helpers.logging import temporary_logging_state
+from bitagent.validator.reward import process_rewards_update_scores_for_many_tasks_and_many_miners
+
+# TODO overall for tracking, would be nice to track based on hotkey instead of UID
+# it's currently handled for uid and new hotkeys taking over a uid, but might be cleaner
+
+# Delete the model from the huggingface cache when we're done serving it so we don't run out of disk space
+def delete_model_from_hf_cache(self, model_name: str):
+    # Determine the cache directory
+    cache_dir = os.path.expanduser(self.config.validator_hf_cache_dir)
+
+    # Format the directory name based on the model name
+    model_cache_dir = os.path.join(cache_dir, f"models--{model_name.replace('/', '--')}")
+
+    # Check if the directory exists and delete it
+    if os.path.exists(model_cache_dir):
+        try:
+            shutil.rmtree(model_cache_dir)
+            bt.logging.debug(f"OFFLINE: Model has been removed from the HF cache.")
+        except Exception as e:
+            bt.logging.error(f"OFFLINE: Error deleting model from HF cache: {e}")
+    else:
+        bt.logging.debug(f"OFFLINE: Model not found in the cache, could not delete")
+
+# added our own wait for server to check the process itself
+# this will check to see if the sglang process crashes due to limited VRAM
+def wait_for_server(base_url: str, server_process, timeout: int = None) -> None:
+    """Wait for the server to be ready by polling the /v1/models endpoint.
+
+    Args:
+        base_url: The base URL of the server
+        server_process: The process to terminate if the server is ready
+        timeout: Maximum time to wait in seconds. None means wait forever.
+    """
+    start_time = time.time()
+    procutil = psutil.Process(int(server_process.pid))
+    while True:
+        try:
+            if timeout and time.time() - start_time > timeout:
+                bt.logging.error(f"OFFLINE: Server did not become ready within timeout period")
+                raise TimeoutError("Server did not become ready within timeout period")
+
+            # Use psutil to monitor the process
+            if not procutil.is_running(): # Check if process is still running
+                bt.logging.error(f"OFFLINE: Server process terminated unexpectedly, check VRAM usage")
+                raise Exception("Server process terminated unexpectedly, potentially VRAM usage issue")
+            if server_process.poll() is not None:
+                bt.logging.error(f"OFFLINE: Server process terminated with code {server_process.poll()}")
+                raise Exception(f"Server process terminated with code {server_process.poll()}")
+
+            response = requests.get(
+                f"{base_url}/v1/models",
+                headers={"Authorization": "Bearer None"},
+            )
+            if response.status_code == 200:
+                time.sleep(5)
+                break
+
+        except requests.exceptions.RequestException:
+            time.sleep(1)
+
+
+# ###########################################################
+# OFFLINE TASKING
+# ###########################################################
+
+# TODO also run the bfcl suite on the validator - but skip the API calls, don't use those at first
+# TODO store TOP score from last round and all-time in validator state
+
+async def offline_task(self, wandb_data):
+    bt.logging.debug("OFFLINE: Starting offline task")
+    self.running_offline_mode = True
+    wandb_data['event_name'] = "offline_task_started"
+    self.log_event(wandb_data)
+
+    # get all alive miner UIDs to compare against the top scores from the last round
+    miner_uids = self.miners_left_to_score
+
+    # TODO potentially fetch prompt template from miner too
+    # Grab all the models that the miners submitted
+    responses = await self.dendrite.forward(
+        axons=[self.metagraph.axons[miner_uid] for miner_uid in miner_uids],
+        synapse=GetHFModelName(),
+        deserialize=False,
+        timeout=15.0,
+    )
+
+    wandb_data['event_name'] = "GetHFModelName Responses Fetched"
+    self.log_event(wandb_data)
+
+    # get all the HF model names from the responses
+    miner_hf_model_names = [response.hf_model_name for response in responses]
+    bt.logging.debug(f"OFFLINE: Miner HF model names: {len(miner_hf_model_names)}")
+
+    try:
+        hf_model_name_to_miner_uids = {}
+        for i,miner_uid in enumerate(miner_uids):
+            self.offline_model_names[self.competition_version][miner_uid] = responses[i].hf_model_name
+            if responses[i].hf_model_name is not None:
+                if responses[i].hf_model_name not in hf_model_name_to_miner_uids:
+                    hf_model_name_to_miner_uids[responses[i].hf_model_name] = []
+                hf_model_name_to_miner_uids[responses[i].hf_model_name].append(int(miner_uid))
+
+        # Group all the models together uniquely and share the same inference server
+        unique_miner_hf_model_names = [m for m in list(set(miner_hf_model_names)) if m not in [None, ""]]
+        if len(unique_miner_hf_model_names) == 0:
+            bt.logging.debug(f"OFFLINE: No unique miner HF model names to evaluate in OFFLINE mode")
+            for miner_uid in miner_uids:
+                self.offline_scores[self.competition_version][miner_uid] = 0.0
+            wandb_data['event_name'] = "No Unique HF Models"
+            wandb_data['miners_left_to_score'] = miner_uids
+            self.log_event(wandb_data)
+            wandb_data.pop('miners_left_to_score')
+            self.running_offline_mode = False
+            return
+    except Exception as e:
+        bt.logging.error(f"OFFLINE: Error getting unique miner HF model names: {e}")
+        wandb_data['event_name'] = "Error Getting Unique HF Models"
+        wandb_data['error'] = f"{e}"
+        self.log_event(wandb_data)
+        wandb_data.pop('error')
+        self.running_offline_mode = False
+        return
+
+    bt.logging.debug(f"OFFLINE: Unique miner HF model names: {len(unique_miner_hf_model_names)}")
+    wandb_data['event_name'] = "Unique HF Model Fetched"
+    wandb_data['num_unique_hf_models'] = len(unique_miner_hf_model_names)
+    self.log_event(wandb_data)
+    wandb_data.pop('num_unique_hf_models')
+
+    # no need to regrade if score exists for the same model
+    models_to_skip = []
+
+    for hfmn in unique_miner_hf_model_names:
+        uids_with_same_model = []
+        scores_with_same_model = []
+        for k, model_name in self.offline_model_names[self.competition_version].items():
+            if model_name == hfmn:
+                uids_with_same_model.append(k)
+                scores_with_same_model.append(self.offline_scores[self.competition_version][k])
+
+        if len(uids_with_same_model) > 0:
+            max_score_for_model = max(scores_with_same_model) # Calculate max score once
+
+            if max_score_for_model <= 0:
+                # Skip adding to models_to_skip if max score is zero
+                continue
+
+            models_to_skip.append(hfmn) # Add only if max_score > 0
+
+            # Process the miners
+            the_uids = hf_model_name_to_miner_uids[hfmn]
+            bt.logging.debug(f"OFFLINE: Found miner with same model, using existing score")
+            for uid in the_uids:
+                self.offline_scores[self.competition_version][uid] = max_score_for_model
+            self.update_offline_scores([max_score_for_model] * len(the_uids), the_uids)
+
+    # skip the models we already have scores for
+    unique_miner_hf_model_names = [m for m in unique_miner_hf_model_names if m not in models_to_skip]
+
+    if len(unique_miner_hf_model_names) > 0:
+        bt.logging.debug(f"OFFLINE: Generating tasks")
+        # Generate a set of tasks to run on all the offline models
+        num_tasks = 1000
+        batch_size = 100
+        wandb_data['event_name'] = "Generating Tasks"
+        self.log_event(wandb_data)
+        tasks = []
+        for i,_ in enumerate(range(0, num_tasks, batch_size)):
+            #bt.logging.debug(f"OFFLINE: Generating tasks batch {i+1} of {num_tasks // batch_size}")
+            tasks.extend(await asyncio.gather(*[asyncio.to_thread(get_random_task, self, offline=True) for _ in range(batch_size)]))
+            #bt.logging.debug(f"OFFLINE: Generated tasks batch {i+1} of {num_tasks // batch_size}")
+        bt.logging.debug(f"OFFLINE: Generated {len(tasks)} tasks of {num_tasks} total")
+        wandb_data['event_name'] = "Generated Tasks"
+        wandb_data['num_tasks'] = len(tasks)
+        self.log_event(wandb_data)
+        wandb_data.pop('num_tasks')
+
+        for i,hf_model_name in enumerate(unique_miner_hf_model_names):
+            bt.logging.debug(f"OFFLINE: Running tasks for model {i+1} of {len(unique_miner_hf_model_names)}")
+            wandb_data['event_name'] = "Running HF Model"
+            wandb_data['num_hf_model'] = i
+            wandb_data['miner_uids'] = hf_model_name_to_miner_uids[hf_model_name]
+            self.log_event(wandb_data)
+            wandb_data.pop('miner_uids')
+
+            if hf_model_name is None or hf_model_name == "" or hf_model_name.lower() == "none":
+                bt.logging.debug(f"OFFLINE: Miner returned empty HF model name ... skipping")
+                for miner_uid in hf_model_name_to_miner_uids[hf_model_name]:
+                    self.offline_scores[self.competition_version][miner_uid] = 0.0
+                wandb_data['event_name'] = "Skipping Empty HF Model"
+                wandb_data['miner_uids'] = hf_model_name_to_miner_uids[hf_model_name]
+                self.log_event(wandb_data)
+                wandb_data.pop('miner_uids')
+                continue # skip this model
+
+            # Extract the model card data for the model from HF
+            # ensure logger doesn't print the model name publicly, so restrict to only HF warnings
+            # Temporarily set logging to WARNING within the context manager
+            with temporary_logging_state('Warning'):
+                info = model_info(hf_model_name)
+                total_size = info.safetensors.total
+                try:
+                    license = info.card_data['license']
+                except Exception:
+                    bt.logging.debug("OFFLINE: No license found for model")
+                    license = 'No license available'
+
+            # confirm model license is apache-2.0 or cc-by-nc-4.0 or mit
+            # TODO eventually ONLY accept apache-2.0
+            if license not in ["apache-2.0", "cc-by-nc-4.0", "mit"]:
+                bt.logging.debug(f"OFFLINE: Skipping model {i+1} of {len(unique_miner_hf_model_names)} due to license: {license}")
+                for miner_uid in hf_model_name_to_miner_uids[hf_model_name]:
+                    self.offline_scores[self.competition_version][miner_uid] = 0.0
+                wandb_data['event_name'] = "Skipping Model Due to License"
+                wandb_data['miner_uids'] = hf_model_name_to_miner_uids[hf_model_name]
+                self.log_event(wandb_data)
+                wandb_data.pop('miner_uids')
+                continue
+
+            # confirm model size is less than 10B params (want 8B or less models)
+            if total_size > 10000000000:
+                bt.logging.debug(f"OFFLINE: Skipping model {i+1} of {len(unique_miner_hf_model_names)} due to size: {total_size}")
+                for miner_uid in hf_model_name_to_miner_uids[hf_model_name]:
+                    self.offline_scores[self.competition_version][miner_uid] = 0.0
+                wandb_data['event_name'] = "Skipping Model Due to Size"
+                wandb_data['miner_uids'] = hf_model_name_to_miner_uids[hf_model_name]
+                self.log_event(wandb_data)
+                wandb_data.pop('miner_uids')
+                continue
+
+            bt.logging.debug(f"OFFLINE: Starting server for model {i+1} of {len(unique_miner_hf_model_names)}")
+            wandb_data['event_name'] = "HF Model Eval Server Starting"
+            self.log_event(wandb_data)
+
+            # see if we have a snapshot already in the cache
+            latest_snapshot = None
+
+            try:
+                # Start the server for the model
+                try:
+                    cache_dir = os.path.expanduser(self.config.validator_hf_cache_dir)
+                    snapshot_dir = f"{cache_dir}/models--{hf_model_name.replace('/', '--')}/snapshots/"
+
+                    # Get all snapshot directories
+                    snapshots = [os.path.join(snapshot_dir, d) for d in os.listdir(snapshot_dir) if os.path.isdir(os.path.join(snapshot_dir, d))]
+
+                    # Sort snapshots by creation time (os.path.getctime) or modification time (os.path.getmtime)
+                    latest_snapshot = max(snapshots, key=os.path.getctime)
+                    # TODO if the latest snapshot is older than a week, delete it and download a new one
+
+                except Exception as e:
+                    bt.logging.debug(f"OFFLINE: Error getting latest snapshot")
+                    latest_snapshot = None
+
+                # # either load an existing snapshot or download the model
+                # if os.path.exists(snapshot_dir) and latest_snapshot:
+                #     model_path = latest_snapshot
+                # else:
+                #     # need to download from hugging face
+                model_path = hf_model_name
+
+                server_process = await asyncio.to_thread(execute_shell_command,
+                    f"""
+{os.getcwd()}/.venvsglang/bin/python -m sglang.launch_server \
+    --model-path {model_path} \
+    --port {self.config.validator_hf_server_port} \
+    --host 0.0.0.0 \
+    --mem-fraction-static {self.config.validator_hf_server_mem_fraction_static} \
+    --context-length 25000
+""",
+                    hf_model_name
+                )
+
+                bt.logging.debug(f"OFFLINE: Started server for model {i+1} of {len(unique_miner_hf_model_names)}, waiting for it to start on port {self.config.validator_hf_server_port} (could take several minutes)")
+                try:
+                    await asyncio.wait_for(
+                        asyncio.to_thread(wait_for_server, f"http://localhost:{self.config.validator_hf_server_port}", server_process),
+                        timeout=60*15 # wait up to 15 minutes
+                    )
+                    bt.logging.debug(f"OFFLINE: Server for model {i+1} of {len(unique_miner_hf_model_names)} started")
+                    wandb_data['event_name'] = "HF Model Eval Server Started"
+                    self.log_event(wandb_data)
+                except asyncio.TimeoutError as e:
+                    # likely a validator error
+                    bt.logging.error(f"OFFLINE: Timeout waiting for server for model {i+1} of {len(unique_miner_hf_model_names)} to start, skipping")
+                    wandb_data['event_name'] = "Timeout Waiting for HF Model Eval Server"
+                    wandb_data['miner_uids'] = hf_model_name_to_miner_uids[hf_model_name]
+                    self.log_event(wandb_data)
+                    wandb_data.pop('miner_uids')
+                    wandb_data.pop('num_hf_model')
+                    # can't score this model, so skipping it for now, the miner will be tried again if this runs again
+                    continue
+                except Exception as e:
+                    bt.logging.error(f"OFFLINE: Error waiting for server: {e}, skipping")
+                    wandb_data['event_name'] = "Error Waiting for HF Model Eval Server"
+                    wandb_data['error'] = f"{e}"
+                    wandb_data['miner_uids'] = hf_model_name_to_miner_uids[hf_model_name]
+                    self.log_event(wandb_data)
+                    wandb_data.pop('error')
+                    wandb_data.pop('num_hf_model')
+                    wandb_data.pop('miner_uids')
+                    continue
+
+            except Exception as e:
+                # likely a validator error
+                bt.logging.error(f"OFFLINE: Error starting sglang server for model: {i+1} of {len(unique_miner_hf_model_names)}: {e}")
+                wandb_data['event_name'] = "Error Starting HF Model Eval Server"
+                wandb_data['error'] = f"{e}"
+                wandb_data['miner_uids'] = hf_model_name_to_miner_uids[hf_model_name]
+                self.log_event(wandb_data)
+                wandb_data.pop('error')
+                wandb_data.pop('num_hf_model')
+                wandb_data.pop('miner_uids')
+                # can't score this model, so skipping it for now, the miner will be tried again if this runs again
+                # could be an issue with model size
+                continue
+
+            # get LLM responses
+            bt.logging.debug(f"OFFLINE: Getting LLM responses for model {i+1} of {len(unique_miner_hf_model_names)}")
+            wandb_data['event_name'] = "Getting LLM Responses"
+            self.log_event(wandb_data)
+
+            # at most 5 LLM calls concurrently
+            sem = asyncio.Semaphore(5)
+
+            async def call_llm_with_semaphore(task):
+                async with sem:
+                    return await asyncio.to_thread(
+                        llm, self, task.synapse.messages, task.synapse.tools, hf_model_name, hugging_face=True
+                    )
+
+            llm_responses_and_finishes = await asyncio.gather(
+                *[call_llm_with_semaphore(task) for task in tasks]
+            )
+            try:
+                llm_responses = [r[0] for r in llm_responses_and_finishes]
+                llm_finishes = [r[1] for r in llm_responses_and_finishes]
+            except Exception as e:
+                bt.logging.error(f"OFFLINE: Error getting LLM responses: {e}, have to skip this model")
+                continue
+
+            # TODO actually use the finishes to provide more detail to the miners in wandb
+
+            bt.logging.debug(f"OFFLINE: Got {len(llm_responses)} LLM responses for model: {i+1} of {len(unique_miner_hf_model_names)}")
+            wandb_data['event_name'] = "Got LLM Responses"
+            self.log_event(wandb_data)
+
+            # terminate the server after getting all the responses
+            bt.logging.debug(f"OFFLINE: Terminating server for model: {i+1} of {len(unique_miner_hf_model_names)}")
+            wandb_data['event_name'] = "HF Model Eval Server Terminating"
+            self.log_event(wandb_data)
+            await asyncio.to_thread(terminate_process, server_process)
+            bt.logging.debug(f"OFFLINE: Terminated server for model: {i+1} of {len(unique_miner_hf_model_names)}")
+            wandb_data['event_name'] = "HF Model Eval Server Terminated"
+            self.log_event(wandb_data)
+
+            these_miner_uids = hf_model_name_to_miner_uids[hf_model_name]
+            responses = []
+            for j, llm_response in enumerate(llm_responses):
+                task = tasks[j]
+                response = task.synapse.model_copy()
+                response.response = llm_response.strip()
+                response.dendrite.process_time = 5.0 # TODO may be useful to test performance of the model itself
+                response.dendrite.status_code = 200
+                response.axon.status_code = 200
+                response.competition_version = self.competition_version
+                responses.append(response)
+
+            # evaluate, track score and add to wandb
+            # TODO need to see if this SCORE is higher than the all-time top score
+            # TODO if so, update the all-time top score and model name and reward TOP miners
+            # TODO if not, then temporal decay of scores
+            bt.logging.debug(f"OFFLINE: Processing rewards for model: {i+1} of {len(unique_miner_hf_model_names)}, for miners: {these_miner_uids}")
+            wandb_data['event_name'] = "Processing Rewards"
+            self.log_event(wandb_data)
+
+            # TODO This is blocking the main loop
+            # blocking due to await, attempting to remove await and create a task and move on
+
+            #await process_rewards_update_scores_for_many_tasks_and_many_miners(self, tasks=tasks, responses=responses, miner_uids=these_miner_uids, wandb_data=wandb_data)
+            asyncio.create_task(process_rewards_update_scores_for_many_tasks_and_many_miners(self, tasks=tasks, responses=responses, miner_uids=these_miner_uids, wandb_data=wandb_data))
+
+
+            # remove newly downloaded files from HF cache if were not already in cache
+            if not latest_snapshot:
+                bt.logging.debug(f"OFFLINE: Deleting model from HF cache: {i+1} of {len(unique_miner_hf_model_names)}")
+                wandb_data['event_name'] = "Deleting HF Model from Cache"
+                self.log_event(wandb_data)
+                await asyncio.to_thread(delete_model_from_hf_cache, self, hf_model_name)
+            else:
+                bt.logging.debug(f"OFFLINE: NOT Deleting model from HF cache: {i+1} of {len(unique_miner_hf_model_names)}")
+                wandb_data['event_name'] = "NOT Deleting HF Model from Cache - snapshot found, so no new download to revert"
|
| 410 |
+
self.log_event(wandb_data)
|
| 411 |
+
|
| 412 |
+
# TODO handle temporal decay of scores if no miners outperform all time top score
|
| 413 |
+
# TODO handle temporal decay of all scores depending on a) if no new TOP score and b) if new TOP score
|
| 414 |
+
wandb_data['event_name'] = "Finished Processing Rewards"
|
| 415 |
+
wandb_data['miner_uids'] = these_miner_uids
|
| 416 |
+
self.log_event(wandb_data)
|
| 417 |
+
wandb_data.pop('num_hf_model')
|
| 418 |
+
wandb_data.pop('miner_uids')
|
| 419 |
+
|
| 420 |
+
bt.logging.debug(f"OFFLINE: Finished processing offline tasks")
|
| 421 |
+
self.running_offline_mode = False
|
| 422 |
+
wandb_data['event_name'] = "Finished Processing Offline Tasks"
|
| 423 |
+
wandb_data['miner_uids'] = miner_uids
|
| 424 |
+
self.log_event(wandb_data)
|
| 425 |
+
wandb_data.pop('miner_uids')
|
bitagent_subnet-main/bitagent/validator/reward.py
ADDED
@@ -0,0 +1,266 @@
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import asyncio
import numpy as np
import bittensor as bt
from typing import List, Any
from rich.console import Console
from bitagent.tasks.task import Task
from bitagent.protocol import QueryResult
from common.base.validator import BaseValidatorNeuron

rich_console = Console()

async def send_results_to_miner(validator, result, miner_axon):
    # extra transparent details for miners

    # For generated/evaluated tasks, we send the results back to the miner so they know how they did and why.
    # The dendrite client queries the network to send feedback to the miner.
    _ = validator.dendrite.query(
        # Send the query to the selected miner axon in the network.
        axons=[miner_axon],
        # Construct a query.
        synapse=QueryResult(results=result),
        # All responses have the deserialize function called on them before returning.
        # You are encouraged to define your own deserialization function.
        deserialize=False,
        timeout=5.0  # quick b/c we are not awaiting a response
    )

async def evaluate_task(validator, task, response):
    try:
        return [task.reward(validator, response)]
    except Exception as e:
        bt.logging.warning(f"An exception calling task.reward: {e}")

async def return_results(validator, task, miner_uid, reward, response):
    # means we got all of the information we need to score the miner and update wandb
    if len(reward) == 4:
        score, max_possible_score, task_results, correct_answer = reward
        # make sure the score is not None
        if score and max_possible_score:
            normalized_score = score/max_possible_score

            result = f"""
[bold]Task: {task.name}[/bold]
[bold]Messages:[/bold] {task.synapse.messages}
[bold]Tools:[/bold] {[t.name for t in task.synapse.tools]}
[bold]Response:[/bold] `{response.response}`
\n[bold]Results:[/bold]\n
=====================\n""" + "\n".join(task_results) + f"""
[bold]Total reward:[/bold] {score}
[bold]Total possible reward:[/bold] {max_possible_score}
[bold]Normalized reward:[/bold] {normalized_score}
---
Stats with this validator:
Your Average Score: {validator.scores[miner_uid]}
Highest Score across all miners: {validator.scores.max()}
Median Score across all miners: {np.median(validator.scores)}
Your Offline Model Score for Competition {validator.previous_competition_version}: {validator.offline_scores[validator.previous_competition_version][miner_uid]}
Your Offline Model Score for Competition {validator.competition_version}: {validator.offline_scores[validator.competition_version][miner_uid]}"""
            # TODO need to add BFCL scores when we do them
            # send results
            if task.mode == "online":
                await send_results_to_miner(validator, result, validator.metagraph.axons[miner_uid])
            else:
                # useful if validators want to see progress or results of offline tasks
                # rich_console.print("this is a non-online task")
                # rich_console.print(result)
                pass
            return task_results
        return None
    elif len(reward) == 2:  # skip it
        #bt.logging.debug(f"Skipping results for this task b/c Task API seems to have rebooted: {reward[1]}")
        #time.sleep(25)
        return None
    else:
        #bt.logging.debug(f"Skipping results for this task b/c not enough information")
        #time.sleep(25)
        return None

async def write_to_wandb(validator: BaseValidatorNeuron, task: Task, responses: List[Any], miner_uids: List[int], rewards: List[List[float]], results: List[List[str]]) -> None:
    # common wandb setup
    try:
        messages = task.synapse.messages
        tools = task.synapse.tools
        task_name = task.name
        task_mode = task.mode
    except Exception as e:
        bt.logging.error("Could not setup common data - ", e)

    for i in range(len(responses)):
        response = responses[i]
        miner_uid = miner_uids[i]
        score, max_possible_score, _, correct_answer = rewards[i][0]
        normalized_score = score/max_possible_score

        resp = "None"
        try:
            resp = response.response
            run_model = response.hf_run_model_name
        except:
            pass

        try:
            data = {
                "task_name": task_name,
                "task_mode": task_mode,
                "messages": [{'role': m.role, 'content': m.content} for m in messages],
                "tools": [{'name': t.name, 'description': t.description, 'arguments': t.arguments} for t in tools],
                "miners_count": len(miner_uids),
                "messages_count": len(messages),
                "tools_count": len(tools),
                "response": resp,
                "miner_uid": miner_uids[i],
                "score": score,
                "normalized_score": normalized_score,
                "average_score_for_miner_with_this_validator": validator.scores[miner_uid],
                "stake": validator.metagraph.S[miner_uid],
                "trust": validator.metagraph.T[miner_uid],
                "incentive": validator.metagraph.I[miner_uid],
                "consensus": validator.metagraph.C[miner_uid],
                "dividends": validator.metagraph.D[miner_uid],
                "results": "\n".join(str(item) for item in results[i]) if results[i] else "None",
                "dendrite_process_time": response.dendrite.process_time,
                "dendrite_status_code": response.dendrite.status_code,
                "axon_status_code": response.axon.status_code,
                "validator_uid": validator.metagraph.hotkeys.index(validator.wallet.hotkey.ss58_address),
                "val_spec_version": validator.spec_version,
                "highest_score_for_miners_with_this_validator": validator.scores.max(),
                "median_score_for_miners_with_this_validator": np.median(validator.scores),
                "offline_score_for_miner_with_this_validator": validator.offline_scores[validator.competition_version][miner_uid],
                "highest_offline_score_for_miners_with_this_validator": validator.offline_scores[validator.competition_version].max(),
                "median_offline_score_for_miners_with_this_validator": np.median(validator.offline_scores[validator.competition_version]),
                "average_offline_score_for_miners_with_this_validator": np.mean(validator.offline_scores[validator.competition_version]),
                "prior_highest_offline_score_for_miners_with_this_validator": validator.offline_scores[validator.previous_competition_version].max(),
                "prior_median_offline_score_for_miners_with_this_validator": np.median(validator.offline_scores[validator.previous_competition_version]),
                "prior_average_offline_score_for_miners_with_this_validator": np.mean(validator.offline_scores[validator.previous_competition_version]),
                "competition_version": validator.competition_version,
                # TODO add BFCL scores
                #"correct_answer": correct_answer, # TODO best way to send this without lookup attack?
            }

            try:
                validator.log_event(data)
            except Exception as e:
                bt.logging.warning("WandB failed to log, moving on ... exception: {}".format(e))

        except Exception as e:
            bt.logging.warning("Exception in logging to WandB: {}".format(e))

# all of these miners are scored the same way with the same tasks b/c this is scoring offline models
async def process_rewards_update_scores_for_many_tasks_and_many_miners(
    validator: BaseValidatorNeuron, tasks: List[Task], responses: List[Any],
    miner_uids: List[int], wandb_data: dict
) -> None:
    # Gather rewards in parallel
    rewards = await asyncio.gather(*[
        evaluate_task(validator, tasks[i], responses[i]) for i in range(len(responses))
    ])

    try:
        scores = []
        miner_tasks = []  # Collect tasks to execute in parallel for each miner
        for i, reward in enumerate(rewards):
            if len(reward[0]) == 4 and reward[0][0] is not None and reward[0][1] is not None:
                scores.append(reward[0][0] / reward[0][1])

                # Create a coroutine chain for each miner_uid
                for miner_uid in miner_uids:
                    async def process_miner_task(task_idx, miner_uid, reward, response):
                        # Get the result for this miner
                        result = await return_results(validator, tasks[task_idx], miner_uid, reward[0], response)
                        # Write the result to wandb
                        await write_to_wandb(validator, tasks[task_idx], [response], [miner_uid], rewards, result)

                    # Append the task for execution
                    miner_tasks.append(process_miner_task(i, miner_uid, reward, responses[i]))
            else:
                # Bad reward, so 0 score
                scores.append(0.0)

        # Await all miner-specific tasks concurrently
        await asyncio.gather(*miner_tasks)

    except Exception as e:
        bt.logging.warning(f"OFFLINE: Error logging reward data: {e}")
        wandb_data['event_name'] = "Processing Rewards - Error"
        wandb_data['miner_uids'] = miner_uids
        wandb_data['error'] = e
        validator.log_event(wandb_data)
        wandb_data.pop('error')
        wandb_data.pop('miner_uids')

    # Compute and log the mean score
    score = np.mean(scores)
    wandb_data['event_name'] = "Processing Rewards - Score"
    wandb_data['score'] = score
    wandb_data['miner_uids'] = miner_uids
    validator.log_event(wandb_data)
    wandb_data.pop('score')
    wandb_data.pop('miner_uids')

    # Update scores
    validator.update_offline_scores([score] * len(miner_uids), miner_uids)

    return score

async def process_rewards_update_scores_and_send_feedback(validator: BaseValidatorNeuron, task: Task, responses: List[Any],
                                                          miner_uids: List[int]) -> None:
    """
    Returns a tensor of rewards for the given query and responses.

    Args:
    - task (Task): The task sent to the miner.
    - responses (List[float]): A list of responses from the miner.
    - miner_uids (List[int]): A list of miner UIDs. The miner at a particular index has a response in responses at the same index.
    """
    # run these in parallel but wait for the results b/c we need them downstream
    rewards = await asyncio.gather(*[evaluate_task(validator, task, response) for response in responses])
    try:
        # track which miner uids are scored for updating the scores
        #temp_miner_uids = [miner_uids[i] for i, reward in enumerate(rewards) if len(reward[0]) == 4 and reward[0][0] is not None and reward[0][1] is not None]
        scores = []
        results = []
        for i, reward in enumerate(rewards):
            if len(reward[0]) == 4 and reward[0][0] is not None and reward[0][1] is not None:
                scores.append(reward[0][0]/reward[0][1])
                results.append(await return_results(validator, task, miner_uids[i], reward[0], responses[i]))
            else:
                # bad reward, so 0 score
                scores.append(0.0)
                results.append(None)

        await write_to_wandb(validator, task, responses, miner_uids, rewards, results)

    except Exception as e:
        bt.logging.warning(f"ONLINE: Error logging reward data: {e}")

    # Update the scores based on the rewards. You may want to define your own update_scores function for custom behavior.
    #miner_uids = temp_miner_uids
    validator.update_scores(scores, miner_uids, alpha=task.weight)

    return scores
bitagent_subnet-main/common/__init__.py
ADDED
@@ -0,0 +1,29 @@
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

# Define the version of the common module.
__version__ = "1.0.15"
version_split = __version__.split(".")
__spec_version__ = (
    (1000 * int(version_split[0]))
    + (10 * int(version_split[1]))
    + (1 * int(version_split[2]))
)

# Import all submodules.
from . import base
bitagent_subnet-main/common/base/__init__.py
ADDED
File without changes

bitagent_subnet-main/common/base/miner.py
ADDED
@@ -0,0 +1,266 @@
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import time
import asyncio
import uvicorn
import threading
import traceback

import bittensor as bt
from bittensor.core.axon import FastAPIThreadedServer  # type: ignore

#from collections import Counter
from common.base.neuron import BaseNeuron

class BaseMinerNeuron(BaseNeuron):
    """
    Base class for Bittensor miners.
    """

    neuron_type: str = "MinerNeuron"

    def __init__(self, config=None):
        super().__init__(config=config)

        # Warn if allowing incoming requests from anyone.
        if not self.config.blacklist.force_validator_permit:
            bt.logging.warning(
                "You are allowing non-validators to send requests to your miner. This is a security risk."
            )
        if self.config.blacklist.allow_non_registered:
            bt.logging.warning(
                "You are allowing non-registered entities to send requests to your miner. This is a security risk."
            )

        # The axon handles request processing, allowing validators to send this miner requests.
        self.axon = bt.axon(
            wallet=self.wallet,
            config=self.config,
            port=self.config.axon.port,
            ip=self.config.axon.ip,
            external_ip=self.config.axon.external_ip,
            external_port=self.config.axon.external_port,
            max_workers=self.config.axon.max_workers
        )
        fast_config = uvicorn.Config(
            self.axon.app, host="0.0.0.0", port=self.config.axon.port, log_level="trace", loop="asyncio"
        )
        self.axon.fast_server = FastAPIThreadedServer(config=fast_config)

        # Attach the functions that are called when servicing a request.
        bt.logging.info(f"Attaching forward function to miner axon.")
        # NOTE - only big change made to common miner - RogueTensor
        for forward_capability in self.forward_capabilities:
            forward_fn = forward_capability['forward']
            blacklist_fn = forward_capability['blacklist']
            priority_fn = forward_capability['priority']
            self.axon.attach(
                forward_fn=forward_fn,
                blacklist_fn=blacklist_fn,
                priority_fn=priority_fn,
            )

        bt.logging.info(f"Axon created: {self.axon}")

        # Instantiate runners
        self.should_exit: bool = False
        self.is_running: bool = False
        self.thread: threading.Thread = None
        self.lock = asyncio.Lock()

    def run(self):
        """
        Initiates and manages the main loop for the miner on the Bittensor network. The main loop handles graceful shutdown on keyboard interrupts and logs unforeseen errors.

        This function performs the following primary tasks:
        1. Checks for registration on the Bittensor network.
        2. Starts the miner's axon, making it active on the network.
        3. Periodically resynchronizes with the chain, updating the metagraph with the latest network state and setting weights.

        The miner continues its operations until `should_exit` is set to True or an external interruption occurs.
        During each epoch of its operation, the miner waits for new blocks on the Bittensor network, updates its
        knowledge of the network (metagraph), and sets its weights. This process ensures the miner remains active
        and up-to-date with the network's latest state.

        Note:
            - The function leverages the global configurations set during the initialization of the miner.
            - The miner's axon serves as its interface to the Bittensor network, handling incoming and outgoing requests.

        Raises:
            KeyboardInterrupt: If the miner is stopped by a manual interruption.
            Exception: For unforeseen errors during the miner's operation, which are logged for diagnosis.
        """

        # Check that the miner is registered on the network.
        try:
            self.sync()
        except Exception as e:
            bt.logging.error(f"Could not sync, will try again later. Error: {e}")

        # Serve passes the axon information to the network + netuid we are hosting on.
        # This will auto-update if the axon port or external ip has changed.
        bt.logging.info(
            f"Serving miner axon {self.axon} on network: {self.config.subtensor.chain_endpoint} with netuid: {self.config.netuid}"
        )
        try:
            self.axon.serve(netuid=self.config.netuid, subtensor=self.subtensor)

            # Start the miner's axon, making it active on the network.
            self.axon.start()

            bt.logging.info(f"Miner starting at block: {self.block}")
        except Exception as e:
            bt.logging.error(f"Could not start miner, errored: {e}")

        # This loop maintains the miner's operations until intentionally stopped.
        try:
            while not self.should_exit:
                while self.block - self.last_block_sync < self.config.neuron.epoch_length:
                    # Wait before checking again.
                    time.sleep(1)

                    # Check if we should exit.
                    if self.should_exit:
                        break

                # Sync metagraph
                self.sync()
                self.step += 1

                time.sleep(1)

        # If someone intentionally stops the miner, it'll safely terminate operations.
        except KeyboardInterrupt:
            self.axon.stop()
            bt.logging.success("Miner killed by keyboard interrupt.")
            exit()

        # In case of unforeseen errors, the miner will log the error and continue operations.
        except Exception as e:
            bt.logging.error(traceback.format_exc())

    def run_in_background_thread(self):
        """
        Starts the miner's operations in a separate background thread.
        This is useful for non-blocking operations.
        """
        if not self.is_running:
            bt.logging.debug("Starting miner in background thread.")
            self.should_exit = False
            self.thread = threading.Thread(target=self.run, daemon=True)
            self.thread.start()
            self.is_running = True
            bt.logging.debug("Started")

    def stop_run_thread(self):
        """
        Stops the miner's operations that are running in the background thread.
        """
        if self.is_running:
            bt.logging.debug("Stopping miner in background thread.")
            self.should_exit = True
            self.thread.join(5)
            self.is_running = False
            bt.logging.debug("Stopped")

    def __enter__(self):
        """
        Starts the miner's operations in a background thread upon entering the context.
        This method facilitates the use of the miner in a 'with' statement.
        """
        self.run_in_background_thread()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        """
        Stops the miner's background operations upon exiting the context.
        This method facilitates the use of the miner in a 'with' statement.

        Args:
            exc_type: The type of the exception that caused the context to be exited.
                      None if the context was exited without an exception.
            exc_value: The instance of the exception that caused the context to be exited.
                       None if the context was exited without an exception.
            traceback: A traceback object encoding the stack trace.
                       None if the context was exited without an exception.
        """
        self.stop_run_thread()

    def resync_metagraph(self):
        """Resyncs the metagraph and updates the hotkeys and moving averages based on the new metagraph."""
        bt.logging.info("resync_metagraph()")

        # Sync the metagraph.
        try:
            self.metagraph.sync(subtensor=self.subtensor)
            self.last_block_sync = self.block
        except Exception as e:
            bt.logging.error(f"Could not sync with metagraph right now, will try later. Error: {e}")

    # each validator sends their top miner's HF model name to each miner
    def get_top_miner_HF_model_name(self):
        # miner can specify a HF model name to run
        if self.config.hf_model_name_to_run != "none":
            return self.config.hf_model_name_to_run

        return "Salesforce/xLAM-7b-r"

        ## TODO might consider selecting the TOP model that each validator votes for
        ## if no specific model name is specified, the miner will use the top model name from the validators' votes
        #if not self.hf_top_model_names or len(list(self.hf_top_model_names.keys())) == 0:
        #    return self.config.hf_model_name_to_run
        #else:
        #    # get the most common model name
        #    most_common_model_name = Counter(self.hf_top_model_names).most_common(1)[0][0]
        #    return most_common_model_name


    # we might do this in the future, not doing for now
    ## validator sends the miner the top model name to run from HF
    ## store it
    #def save_top_model_from_validator(self, top_hf_model_name, validator_uid):
    #    # save off the top model from this validator
    #    bt.logging.debug(f"Saving top HF model name from validator {validator_uid} state - {self.config.neuron.full_path}/miner_state.npz.")
|
| 240 |
+
#
|
| 241 |
+
# self.hf_top_model_names[validator_uid] = top_hf_model_name
|
| 242 |
+
#
|
| 243 |
+
# # Save the state of the miner to file.
|
| 244 |
+
# np.savez(
|
| 245 |
+
# self.config.neuron.full_path + "/miner_state.npz",
|
| 246 |
+
# hf_top_model_names=self.hf_top_model_names,
|
| 247 |
+
# )
|
| 248 |
+
#
|
| 249 |
+
## load that top model and run with it
|
| 250 |
+
#def load_state(self):
|
| 251 |
+
# """Loads the state of the miner from a file."""
|
| 252 |
+
# bt.logging.info("Loading miner state.")
|
| 253 |
+
# if os.path.exists(self.config.neuron.full_path + "/miner_state.npz"):
|
| 254 |
+
# state = np.load(self.config.neuron.full_path + "/miner_state.npz", allow_pickle=True)
|
| 255 |
+
# else:
|
| 256 |
+
# np.savez(
|
| 257 |
+
# self.config.neuron.full_path + "/miner_state.npz",
|
| 258 |
+
# hf_top_model_names={},
|
| 259 |
+
# )
|
| 260 |
+
# state = np.load(self.config.neuron.full_path + "/miner_state.npz", allow_pickle=True)
|
| 261 |
+
#
|
| 262 |
+
# if 'hf_top_model_names' in state:
|
| 263 |
+
# loaded_hf_top_model_names = state["hf_top_model_names"]
|
| 264 |
+
# self.hf_top_model_names = loaded_hf_top_model_names
|
| 265 |
+
# else:
|
| 266 |
+
# self.hf_top_model_names = {}
|
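The thread-lifecycle methods above (`run_in_background_thread`, `stop_run_thread`, `__enter__`, `__exit__`) follow a common pattern: a daemon thread driven by a `should_exit` flag, wrapped in a context manager so a `with` block starts and stops it. A minimal standalone sketch of the same pattern; the `Runner` class and its loop body are illustrative, not part of the subnet code:

```python
import threading
import time

class Runner:
    """Minimal sketch of the miner's background-thread lifecycle."""

    def __init__(self):
        self.should_exit = False
        self.is_running = False
        self.thread = None
        self.steps = 0

    def run(self):
        # Stand-in for the miner's main loop: do work until asked to exit.
        while not self.should_exit:
            self.steps += 1
            time.sleep(0.01)

    def __enter__(self):
        # Mirrors run_in_background_thread(): start a daemon thread once.
        if not self.is_running:
            self.should_exit = False
            self.thread = threading.Thread(target=self.run, daemon=True)
            self.thread.start()
            self.is_running = True
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Mirrors stop_run_thread(): signal the loop, then wait up to 5s.
        if self.is_running:
            self.should_exit = True
            self.thread.join(5)
            self.is_running = False

with Runner() as r:
    while r.steps == 0:      # wait until the loop has done some work
        time.sleep(0.005)
print(r.is_running)  # False -- stopped cleanly on context exit
```

The daemon flag means the thread cannot keep the process alive on its own; the bounded `join(5)` keeps shutdown from hanging if the loop is stuck mid-iteration.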
bitagent_subnet-main/common/base/neuron.py
ADDED
@@ -0,0 +1,187 @@
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import copy
import bittensor as bt
from abc import ABC, abstractmethod

# Sync calls set weights and also resyncs the metagraph.
from common.utils.config import check_config, add_args, config
from common.utils.misc import ttl_get_block
from common import __spec_version__ as spec_version


class BaseNeuron(ABC):
    """
    Base class for Bittensor miners. This class is abstract and should be inherited by a subclass. It contains the core logic for all neurons; validators and miners.

    In addition to creating a wallet, subtensor, and metagraph, this class also handles the synchronization of the network state via a basic checkpointing mechanism based on epoch length.
    """

    neuron_type: str = "BaseNeuron"

    @classmethod
    def check_config(cls, config: "bt.Config"):
        check_config(cls, config)

    @classmethod
    def add_args(cls, parser):
        add_args(cls, parser)

    @classmethod
    def config(cls):
        return config(cls)

    subtensor: "bt.subtensor"
    wallet: "bt.wallet"
    metagraph: "bt.metagraph"  # type: ignore
    spec_version: int = spec_version

    @property
    def block(self):
        return ttl_get_block(self)

    def __init__(self, config=None):
        base_config = copy.deepcopy(config or BaseNeuron.config())
        self.config = self.config()
        self.config.merge(base_config)
        self.check_config(self.config)

        # Set up logging with the provided configuration and directory.
        bt.logging(config=self.config, logging_dir=self.config.full_path)

        # If a gpu is required, set the device to cuda:N (e.g. cuda:0)
        self.device = self.config.neuron.device

        # Log the configuration for reference.
        bt.logging.info(self.config)

        # Build Bittensor objects
        # These are core Bittensor classes to interact with the network.
        bt.logging.info("Setting up bittensor objects.")

        # The wallet holds the cryptographic key pairs for the miner.
        self.wallet = bt.wallet(config=self.config)
        bt.logging.info(f"Wallet: {self.wallet}")

        # The subtensor is our connection to the Bittensor blockchain.
        try:
            while True:
                try:
                    self.subtensor = bt.subtensor(config=self.config)
                    bt.logging.info(f"Subtensor: {self.subtensor}")

                    # The metagraph holds the state of the network, letting us know about other validators and miners.
                    self.metagraph = self.subtensor.metagraph(self.config.netuid)
                    bt.logging.info(f"Metagraph: {self.metagraph}")
                    break
                except Exception as e:
                    bt.logging.error(f"Error trying to connect to subtensor: {e}")

            # Check if the miner is registered on the Bittensor network before proceeding further.
            self.check_registered()

            # Each miner gets a unique identity (UID) in the network for differentiation.
            self.uid = self.metagraph.hotkeys.index(
                self.wallet.hotkey.ss58_address
            )
            bt.logging.info(
                f"Running neuron on subnet: {self.config.netuid} with uid {self.uid} using network: {self.subtensor.chain_endpoint}"
            )
            self.last_block_sync = self.block
            self.step = 0
        except Exception as e:
            bt.logging.error(f"Error trying to connect to subtensor: {e}")

    @abstractmethod
    async def forward(self, synapse: bt.Synapse) -> bt.Synapse:
        ...

    @abstractmethod
    def run(self):
        ...

    def sync(self, save_state=True):
        """
        Wrapper for synchronizing the state of the network for the given miner or validator.
        """
        try:
            # Ensure miner or validator hotkey is still registered on the network.
            self.check_registered()

            if self.should_sync_metagraph():
                self.resync_metagraph()

            if self.should_set_weights():
                self.set_weights()

            # Always save state unless during reinitiation.
            if save_state:
                self.save_state()
        except Exception as e:
            # Reconnect to subtensor if there is an error.
            self.subtensor = bt.subtensor(config=self.config)
            bt.logging.error(f"Error trying to sync, reconnected subtensor and skipping this round: {e}")

    def check_registered(self):
        # --- Check for registration.
        if not self.subtensor.is_hotkey_registered(
            netuid=self.config.netuid,
            hotkey_ss58=self.wallet.hotkey.ss58_address,
        ):
            bt.logging.error(
                f"Wallet: {self.wallet} is not registered on netuid {self.config.netuid}."
                f" Please register the hotkey using `btcli subnets register` before trying again"
            )
            exit()

    def should_sync_metagraph(self):
        """
        Check if enough epoch blocks have elapsed since the last checkpoint to sync.
        """
        bt.logging.debug(f"Checking if ready to resync the metagraph at block {self.block}, last sync at {self.last_block_sync} and given epoch size of {self.config.neuron.epoch_length}: ", self.block - self.last_block_sync > self.config.neuron.epoch_length)
        return self.block - self.last_block_sync > self.config.neuron.epoch_length

    def should_set_weights(self) -> bool:
        # Don't set weights for miners
        if self.config.neuron.type == "miner":
            return False

        # Don't set weights on initialization.
        if self.step == 0:
            return False

        # Don't set weights if weight setting has been disabled.
        if self.config.neuron.disable_set_weights:
            return False

        # Check if enough epoch blocks have elapsed since the last weight update.
        return (
            (self.block - self.metagraph.last_update[self.uid])
            > self.config.neuron.epoch_length
            and self.neuron_type != "MinerNeuron"
        )  # don't set weights if you're a miner

    def save_state(self):
        bt.logging.warning(
            "save_state() not implemented for this neuron. You can implement this function to save model checkpoints or other useful data."
        )

    def load_state(self):
        bt.logging.warning(
            "load_state() not implemented for this neuron. You can implement this function to load model checkpoints or other useful data."
        )
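Both `should_sync_metagraph` and the epoch check in `should_set_weights` reduce to the same block-delta comparison: act only once strictly more than `epoch_length` blocks have elapsed since the last checkpoint. A standalone sketch of that gate; the function name and values are illustrative:

```python
def should_sync(current_block: int, last_sync_block: int, epoch_length: int) -> bool:
    # Resync only when strictly more than `epoch_length` blocks have
    # elapsed since the last checkpoint, matching BaseNeuron's check.
    return current_block - last_sync_block > epoch_length

# With an epoch length of 100 blocks and a last sync at block 100:
print(should_sync(250, 100, 100))  # 150 > 100 -> True
print(should_sync(199, 100, 100))  # 99 > 100  -> False
print(should_sync(200, 100, 100))  # 100 > 100 -> False (strictly greater)
```

Note the strict inequality: a sync happens on the 101st elapsed block, not the 100th, which keeps a neuron from syncing twice at an epoch boundary.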
bitagent_subnet-main/common/base/validator.py
ADDED
@@ -0,0 +1,576 @@
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 RogueTensor

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import os
import copy
import asyncio
import threading
import numpy as np
import bittensor as bt
from datetime import datetime, timezone
from scoring_utils import score_spreading
from common.utils.uids import get_alive_uids
from bitagent.validator.constants import DEPLOYED_DATE, COMPETITION_LENGTH_DAYS, TESTNET_COMPETITION_LENGTH_DAYS, COMPETITION_PREFIX, COMPETITION_PREVIOUS_PREFIX
from common.utils.weight_utils import (
    process_weights_for_netuid,
    convert_weights_and_uids_for_emit,
)
from typing import List
from traceback import print_exception

from common.base.neuron import BaseNeuron
from common.utils.uids import check_uid_availability

class BaseValidatorNeuron(BaseNeuron):
    """
    Base class for Bittensor validators. Your validator should inherit from this class.
    """

    neuron_type: str = "ValidatorNeuron"

    def __init__(self, config=None):
        super().__init__(config=config)

        # Save a copy of the hotkeys to local memory.
        self.hotkeys = copy.deepcopy(self.metagraph.hotkeys)

        # Dendrite lets us send messages to other nodes (axons) in the network.
        self.dendrite = bt.dendrite(wallet=self.wallet)
        bt.logging.info(f"Dendrite: {self.dendrite}")

        # Set up initial scoring weights for validation
        bt.logging.info("Building validation weights.")
        self.scores = np.zeros(self.metagraph.n, dtype=np.float32)
        self.offline_scores = {}
        self.offline_miners_scored = {}
        self.offline_model_names = {}
        self.running_offline_mode = False
        self.offline_status = None
        self.regrade_version = 1025
        self.update_competition_numbers()
        self.max_div = 0.0006
        self.min_div = 0.0002
        self.state_file_name = "ft_state.npz"

        # Init sync with the network. Updates the metagraph.
        if os.path.exists(self.config.neuron.full_path + f"/{self.state_file_name}"):
            # if we are booting up and have this file, then we'll want to load it
            # otherwise, if we save state, it will overwrite from the sync
            self.sync(save_state=False)
        else:
            # if no state file then we'll create one on init
            self.sync()
        # Serve axon to enable external connections.
        if not self.config.neuron.axon_off:
            #self.serve_axon()
            pass
        else:
            bt.logging.warning("axon off, not serving ip to chain.")

        # Create asyncio event loop to manage async tasks.
        self.loop = asyncio.get_event_loop()

        # Instantiate runners
        self.should_exit: bool = False
        self.is_running: bool = False
        self.thread: threading.Thread = None
        self.lock = asyncio.Lock()

    def serve_axon(self):
        """Serve axon to enable external connections."""

        bt.logging.info("serving ip to chain...")
        try:
            self.axon = bt.axon(wallet=self.wallet, config=self.config)

            self.axon.attach(
                forward_fn=self.forward_fn,
                blacklist_fn=self.blacklist_fn,
                priority_fn=self.priority_fn,
            )

            try:
                self.axon.serve(netuid=self.config.netuid, subtensor=self.subtensor)
                self.axon.start()
            except Exception as e:
                bt.logging.error(f"Failed to serve Axon with exception: {e}")
                pass

        except Exception as e:
            bt.logging.error(
                f"Failed to create and initialize Axon with exception: {e}"
            )
            pass

    async def concurrent_forward(self):
        coroutines = [
            self.forward()
            for _ in range(self.config.neuron.num_concurrent_forwards)
        ]
        await asyncio.gather(*coroutines, return_exceptions=True)

    def run(self):
        """
        Initiates and manages the main loop for the validator on the Bittensor network. The main loop handles graceful shutdown on keyboard interrupts and logs unforeseen errors.

        This function performs the following primary tasks:
        1. Check for registration on the Bittensor network.
        2. Continuously forwards queries to the miners on the network, rewarding their responses and updating the scores accordingly.
        3. Periodically resynchronizes with the chain; updating the metagraph with the latest network state and setting weights.

        The essence of the validator's operations is in the forward function, which is called every step. The forward function is responsible for querying the network and scoring the responses.

        Note:
            - The function leverages the global configurations set during the initialization of the validator.
            - The validator's axon serves as its interface to the Bittensor network, handling incoming and outgoing requests.

        Raises:
            KeyboardInterrupt: If the validator is stopped by a manual interruption.
            Exception: For unforeseen errors during the validator's operation, which are logged for diagnosis.
        """

        # Check that validator is registered on the network.
        self.sync()
        bt.logging.info(
            f"Running validator on network: {self.config.subtensor.chain_endpoint} with netuid: {self.config.netuid}"
        )

        bt.logging.info(f"Validator starting at block: {self.block}")
        # This loop maintains the validator's operations until intentionally stopped.
        try:
            while True:
                try:
                    bt.logging.info(f"step({self.step}) block({self.block})")
                except Exception as e:
                    bt.logging.error(f"Error logging step and block, likely socket issue, will update next round: {e}")
                    #if "Broken pipe" in str(e):
                    #    print("======= Exiting due to a broken pipe ========")
                    #    self.axon.stop()
                    #    self.should_exit = True
                    #    exit()

                # Run multiple forwards concurrently.
                self.loop.run_until_complete(self.concurrent_forward())

                # Check if we should exit.
                if self.should_exit:
                    break

                # Sync metagraph and potentially set weights.
                try:
                    self.sync()
                except Exception as e:
                    bt.logging.error(f"Error syncing metagraph during run loop: {e}")

                self.step += 1
        except Exception as e:
            bt.logging.error(f"Unexpected error during run: {e}")

        # If someone intentionally stops the validator, it'll safely terminate operations.
        except KeyboardInterrupt:
            self.axon.stop()
            bt.logging.success("Validator killed by keyboard interrupt.")
            exit()

        # In case of unforeseen errors, the validator will log the error and continue operations.
        except Exception as err:
            bt.logging.error("Error during validation", str(err))
            bt.logging.debug(
                print_exception(type(err), err, err.__traceback__)
            )

    def run_in_background_thread(self):
        """
        Starts the validator's operations in a background thread upon entering the context.
        This method facilitates the use of the validator in a 'with' statement.
        """
        if not self.is_running:
            bt.logging.debug("Starting validator in background thread.")
            self.should_exit = False
            self.thread = threading.Thread(target=self.run, daemon=True)
            self.thread.start()
            self.is_running = True
            bt.logging.debug("Started")

    def stop_run_thread(self):
        """
        Stops the validator's operations that are running in the background thread.
        """
        if self.is_running:
            bt.logging.debug("Stopping validator in background thread.")
            self.should_exit = True
            self.thread.join(5)
            self.is_running = False
            bt.logging.debug("Stopped")

    def __enter__(self):
        self.run_in_background_thread()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        """
        Stops the validator's background operations upon exiting the context.
        This method facilitates the use of the validator in a 'with' statement.

        Args:
            exc_type: The type of the exception that caused the context to be exited.
                None if the context was exited without an exception.
            exc_value: The instance of the exception that caused the context to be exited.
                None if the context was exited without an exception.
            traceback: A traceback object encoding the stack trace.
                None if the context was exited without an exception.
        """
        if self.is_running:
            bt.logging.debug("Stopping validator in background thread.")
            self.should_exit = True
            self.thread.join(5)
            self.is_running = False
            bt.logging.debug("Stopped")

    def set_weights(self):
        """
        Sets the validator weights to the metagraph hotkeys based on the scores it has received from the miners. The weights determine the trust and incentive level the validator assigns to miner nodes on the network.
        """
        bt.logging.debug(f"set_weights()")
        if self.config.subtensor.network == "test":
            return  # Don't set weights on testnet.

        self.divisions = int(np.floor(self.block / 1000))
        current_odds = (0.2 * self.scores) + (0.8 * self.offline_scores[self.previous_competition_version])

        # Check if self.scores contains any NaN values and log a warning if it does.
        if np.isnan(self.scores).any():
            bt.logging.warning(
                f"Scores contain NaN values. This may be due to a lack of responses from miners, or a bug in your reward functions."
            )
        # correct validator scores to be 0
        for uid, hotkey in enumerate(self.hotkeys):
            if not check_uid_availability(self.metagraph, uid, self.config.neuron.vpermit_tao_limit):
                # if validator, set validators scores to 0
                self.scores[uid] = 0
                self.offline_scores[self.previous_competition_version][uid] = 0
                self.offline_scores[self.competition_version][uid] = 0
                self.offline_miners_scored[self.competition_version][self.regrade_version].append(uid)
                self.offline_model_names[self.competition_version][uid] = ""

        # always fit scores to weighted curve
        weighted_scores = score_spreading(current_odds, self.divisions, self.min_div, self.max_div)

        #bt.logging.info(f"weighted_scores: {weighted_scores}")

        # L1-normalize the weighted scores into raw weights.
        # If the norm is zero or NaN, divide by 1 instead.
        norm = np.linalg.norm(weighted_scores, ord=1, axis=0, keepdims=True)
        if np.any(norm == 0) or np.isnan(norm).any():
            norm = np.ones_like(norm)
        raw_weights = weighted_scores / norm

        # bt.logging.debug("raw_weights: ")
        # bt.logging.debug(raw_weights)
        # bt.logging.debug("raw_weight_uids: ")
        # bt.logging.debug(self.metagraph.uids)

        # Process the raw weights to final_weights via subtensor limitations.
        (
            processed_weight_uids,
            processed_weights,
        ) = process_weights_for_netuid(
            uids=self.metagraph.uids,
            weights=raw_weights,
            netuid=self.config.netuid,
            subtensor=self.subtensor,
            metagraph=self.metagraph,
        )
        # bt.logging.debug("processed_weights: ")
        # bt.logging.debug(processed_weights)
        # bt.logging.debug("processed_weight_uids: ")
        # bt.logging.debug(processed_weight_uids)

        # Convert to uint16 weights and uids.
        (
            uint_uids,
            uint_weights,
        ) = convert_weights_and_uids_for_emit(
            uids=processed_weight_uids, weights=processed_weights
        )

        # Set the weights on chain via our subtensor connection.
        result, msg = self.subtensor.set_weights(
            wallet=self.wallet,
            netuid=self.config.netuid,
            uids=uint_uids,
            weights=uint_weights,
            wait_for_finalization=False,
            wait_for_inclusion=False,
            version_key=self.spec_version,
        )
        if result is True:
            bt.logging.info(f"set_weights on chain for version: {self.spec_version} successfully!")
        else:
            bt.logging.error(f"set_weights failed: {msg}")
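The normalization step in `set_weights` guards against a zero or NaN L1 norm (e.g. when every score is zero) before dividing, so raw weights never become NaN or infinite. The same guard in isolation; the function name is illustrative:

```python
import numpy as np

def l1_normalize(scores: np.ndarray) -> np.ndarray:
    # L1-normalize so the weights sum to 1; if the norm is zero or NaN,
    # divide by 1 instead of producing NaN/inf values.
    norm = np.linalg.norm(scores, ord=1, axis=0, keepdims=True)
    if np.any(norm == 0) or np.isnan(norm).any():
        norm = np.ones_like(norm)
    return scores / norm

print(l1_normalize(np.array([1.0, 3.0])))  # [0.25 0.75]
print(l1_normalize(np.array([0.0, 0.0])))  # [0. 0.] -- no division by zero
```

Without the guard, an all-zero score vector would yield `0/0 = nan` weights, which the on-chain weight-setting call would then reject or misrepresent.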
def get_weighted_scores(self):
|
| 329 |
+
# scores are largely based on PREVIOUS competition scores
|
| 330 |
+
scaled_scores = ((0.2 * self.scores) + (0.8 * self.offline_scores[self.previous_competition_version])) * 5
|
| 331 |
+
exp_scores = np.exp(scaled_scores)
|
| 332 |
+
return exp_scores / np.sum(exp_scores)
|
| 333 |
+
|
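`get_weighted_scores` is a softmax over a 0.2/0.8 blend of online and previous-competition offline scores, scaled by 5 (a temperature-like factor). The key properties — output sums to 1 and higher blended scores receive disproportionately more weight — can be checked standalone; the parameter names below are for illustration:

```python
import numpy as np

def weighted_softmax(online, offline_prev, blend=0.8, temp=5.0):
    """Blend online/offline scores, then softmax.
    blend and temp restate the 0.8 / 5 constants used above as parameters."""
    scaled = ((1 - blend) * online + blend * offline_prev) * temp
    exp = np.exp(scaled)
    return exp / exp.sum()

# miner 1 has the better previous-competition score, so it dominates
w = weighted_softmax(np.array([0.1, 0.2]), np.array([0.0, 1.0]))
```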
    def resync_metagraph(self):
        """Resyncs the metagraph and updates the hotkeys and moving averages based on the new metagraph."""
        bt.logging.info("resync_metagraph()")

        # Copies state of metagraph before syncing.
        previous_metagraph = copy.deepcopy(self.metagraph)

        # Sync the metagraph.
        try:
            self.metagraph.sync(subtensor=self.subtensor)
            self.last_block_sync = self.block

            # Check if the metagraph axon info has changed.
            if previous_metagraph.axons == self.metagraph.axons:
                bt.logging.debug("Metagraph axons are the same, skipping resync")
                return

            bt.logging.info("Metagraph updated, resyncing hotkeys, dendrite pool and moving averages")
            # Normalize all hotkeys that have been replaced, and zero out all hotkeys that are no longer available
            for uid, hotkey in enumerate(self.hotkeys):
                if hotkey != self.metagraph.hotkeys[uid]:
                    bt.logging.debug(f"RESYNC: hotkey changed for uid: {uid}")
                    self.scores[uid] = np.median(self.scores)
                    self.offline_scores[self.previous_competition_version][uid] = 0
                    self.offline_scores[self.competition_version][uid] = 0
                    if uid in self.offline_miners_scored[self.competition_version][self.regrade_version]:
                        self.offline_miners_scored[self.competition_version][self.regrade_version].remove(uid)
                    self.offline_model_names[self.competition_version][uid] = ""
                    self.offline_model_names[self.previous_competition_version][uid] = ""

            # Check to see if the metagraph has changed size.
            # If so, we need to add new hotkeys and moving averages.
            if len(self.hotkeys) < len(self.metagraph.hotkeys):
                # Update the size of the moving average scores.
                new_moving_average = np.zeros((self.metagraph.n))
                min_len = min(len(self.hotkeys), len(self.scores))
                new_moving_average[:min_len] = self.scores[:min_len]
                self.scores = new_moving_average

                # previous offline scores
                new_moving_average = np.zeros((self.metagraph.n))
                min_len = min(len(self.hotkeys), len(self.offline_scores[self.previous_competition_version]))
                new_moving_average[:min_len] = self.offline_scores[self.previous_competition_version][:min_len]
                self.offline_scores[self.previous_competition_version] = new_moving_average

                # current offline scores
                new_moving_average = np.zeros((self.metagraph.n))
                min_len = min(len(self.hotkeys), len(self.offline_scores[self.competition_version]))
                new_moving_average[:min_len] = self.offline_scores[self.competition_version][:min_len]
                self.offline_scores[self.competition_version] = new_moving_average

            # Update the hotkeys.
            self.hotkeys = copy.deepcopy(self.metagraph.hotkeys)
        except Exception as e:
            bt.logging.error(f"Could not resync with metagraph right now, will try later. Error: {e}")

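When the metagraph grows, each score array is padded with zeros for the new uids while existing entries stay at their original positions. A simplified sketch of that resize pattern (the real code additionally clamps the copied prefix to `min(len(hotkeys), len(scores))`):

```python
import numpy as np

def grow_scores(scores, new_n):
    """Return a zero-padded copy of `scores` sized for `new_n` uids,
    preserving old values at their original positions."""
    grown = np.zeros(new_n, dtype=scores.dtype)
    grown[: len(scores)] = scores
    return grown

grown = grow_scores(np.array([0.5, 0.7]), 4)
```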
    def update_offline_scores(self, rewards: np.ndarray, uids: List[int]):
        """Replaces the offline scores at the given uids with the rewards received from the miners (no moving average)."""
        if np.isnan(rewards).any():
            #bt.logging.debug(f"NaN values detected in rewards: {rewards}")
            # Replace any NaN values in rewards with 0.
            rewards = np.nan_to_num(rewards, nan=0)

        if isinstance(uids, np.ndarray):
            uids_array = uids.copy()
        else:
            uids_array = np.array(uids)

        scattered_rewards: np.ndarray = self.offline_scores[self.competition_version].copy()
        scattered_rewards[uids_array] = rewards

        bt.logging.debug(f"OFFLINE Scattered rewards: {rewards}")

        self.offline_scores[self.competition_version]: np.ndarray = scattered_rewards  # type: ignore
        self.offline_miners_scored[self.competition_version][self.regrade_version].extend([int(x) for x in uids_array])
        bt.logging.debug(f"Updated OFFLINE scores for Competition {self.competition_version}: {self.offline_scores[self.competition_version]}")
        self.save_state()

    def update_scores(self, rewards: np.ndarray, uids: List[int], alpha=None):
        """Performs an exponential moving average update on the scores based on the rewards received from the miners."""
        if np.isnan(rewards).any():
            #bt.logging.debug(f"NaN values detected in rewards: {rewards}")
            # Replace any NaN values in rewards with 0.
            rewards = np.nan_to_num(rewards, nan=0)

        if isinstance(uids, np.ndarray):
            uids_array = uids.copy()
        else:
            uids_array = np.array(uids)

        scattered_rewards: np.ndarray = self.scores.copy()
        scattered_rewards[uids_array] = rewards

        bt.logging.debug(f"ONLINE Scattered rewards: {rewards}")

        # Update scores with rewards produced by this step.
        # shape: [ metagraph.n ]
        if alpha is None:  # use `is None` so an explicit alpha of 0 is respected
            alpha: float = self.config.neuron.moving_average_alpha
        self.scores: np.ndarray = alpha * scattered_rewards + (1 - alpha) * self.scores
        bt.logging.debug(f"Updated moving avg ONLINE scores: {self.scores}")

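The scatter-then-EMA update above can be sketched standalone: rewards for the queried uids are scattered into a copy of the full score vector, then blended as `alpha * scattered + (1 - alpha) * old`. Illustrative only, not part of the repo:

```python
import numpy as np

def ema_update(scores, rewards, uids, alpha=0.05):
    """Scatter this step's rewards into a copy of `scores`, then apply an
    exponential moving average. NaN rewards are treated as 0."""
    scattered = scores.copy()
    scattered[np.asarray(uids)] = np.nan_to_num(np.asarray(rewards, dtype=float), nan=0)
    return alpha * scattered + (1 - alpha) * scores

# uid 1 got reward 1.0; uid 3 returned NaN (treated as 0); uids 0 and 2 were not queried
out = ema_update(np.zeros(4), rewards=[1.0, float("nan")], uids=[1, 3], alpha=0.1)
```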
    def save_state(self):
        """Saves the state of the validator to a file."""
        bt.logging.debug(f"Saving validator state - {self.state_file_name}.")

        # Save the state of the validator to file.
        # Note: np.savez has no allow_pickle parameter (keyword arguments are
        # saved as arrays, and object arrays are pickled automatically), so it
        # is not passed here.
        try:
            np.savez(
                self.config.neuron.full_path + f"/{self.state_file_name}",
                step=self.step,
                scores=self.scores,
                offline_scores=self.offline_scores,
                offline_miners_scored=np.array(list(self.offline_miners_scored.items()), dtype=object),
                offline_model_names=self.offline_model_names,
                hotkeys=self.hotkeys,
            )
        except Exception as e:
            bt.logging.error(f"OFFLINE: Error saving validator state: {e}")

    def load_state(self):
        """Loads the state of the validator from a file."""
        bt.logging.info("Loading validator state.")
        state = np.load(self.config.neuron.full_path + f"/{self.state_file_name}", allow_pickle=True)
        bt.logging.debug(f"OFFLINE: LOADING STATE: {state}")

        self.step = state["step"]
        if 'hotkeys' in state:
            self.hotkeys = state["hotkeys"]

        if 'scores' in state:
            loaded_scores = state["scores"]
            self.scores[:len(loaded_scores)] = loaded_scores

        if 'offline_scores' in state:
            loaded_offline_scores = state["offline_scores"]
            if isinstance(loaded_offline_scores, dict):
                self.offline_scores = loaded_offline_scores
            elif isinstance(loaded_offline_scores, np.ndarray):
                self.offline_scores = loaded_offline_scores.item()
            else:
                bt.logging.error(f"OFFLINE: loaded_offline_scores is not a dict or array, type: {type(loaded_offline_scores)}")

        if self.offline_scores.get(self.previous_competition_version) is None:
            self.offline_scores[self.previous_competition_version] = np.zeros(self.metagraph.n, dtype=np.float32)
        #for uid in self.metagraph.uids:
        #    if uid not in self.offline_scores[self.previous_competition_version]:
        #        self.offline_scores[self.previous_competition_version][uid] = 0
        if 'offline_miners_scored' in state:
            loaded_offline_miners_scored = state["offline_miners_scored"]
            self.offline_miners_scored = dict(loaded_offline_miners_scored)

        if 'offline_model_names' in state:
            loaded_offline_model_names = state["offline_model_names"]
            if isinstance(loaded_offline_model_names, dict):
                self.offline_model_names = loaded_offline_model_names
            elif isinstance(loaded_offline_model_names, np.ndarray):
                self.offline_model_names = loaded_offline_model_names.item()
            else:
                bt.logging.error(f"OFFLINE: loaded_offline_model_names is not a dict or array, type: {type(loaded_offline_model_names)}")

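`save_state` writes a Python dict (`offline_scores`) through `np.savez`, which is why `load_state` has to branch: the dict comes back from `np.load` wrapped in a 0-d object ndarray and needs `.item()` to unwrap. A round-trip sketch (the `"comp-3"` key is a placeholder, not the subnet's real competition naming):

```python
import os
import tempfile
import numpy as np

# Toy state; "comp-3" is a hypothetical competition key for illustration.
offline_scores = {"comp-3": np.array([0.1, 0.9])}

path = os.path.join(tempfile.mkdtemp(), "state.npz")
np.savez(path, step=3, offline_scores=offline_scores)

# Pickled object arrays require allow_pickle=True on load.
state = np.load(path, allow_pickle=True)
loaded = state["offline_scores"]
# The dict round-trips as a 0-d object ndarray; .item() recovers the dict,
# matching the isinstance(..., np.ndarray) branch in load_state.
restored = loaded.item()
```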
    def update_competition_numbers(self):
        try:
            # get competition details
            competition_start_date = datetime.strptime(DEPLOYED_DATE, "%Y-%m-%d").replace(tzinfo=timezone.utc)
            delta = datetime.now(timezone.utc) - competition_start_date
            number_of_days_since_start = delta.days + (delta.seconds / (24*3600))
            number_of_competitions_since_start = int(number_of_days_since_start / COMPETITION_LENGTH_DAYS)
            if self.config.subtensor.network == "test":
                bt.logging.debug(f"OFFLINE TESTNET: using {TESTNET_COMPETITION_LENGTH_DAYS} days per competition")
                number_of_competitions_since_start = int(number_of_days_since_start / TESTNET_COMPETITION_LENGTH_DAYS)

            #bt.logging.debug(f"OFFLINE: number_of_competitions_since_start: {number_of_competitions_since_start}")

            if number_of_competitions_since_start < 1:
                # we have not completed any competitions with this prefix, so the previous competition number is the last one we completed with the old prefix
                largest_previous_competition_number = 0
                # search through all the previous competition numbers to find the largest (most recent) one
                for k, _ in self.offline_scores.items():
                    if k.startswith(f"{COMPETITION_PREVIOUS_PREFIX}-"):
                        if int(k.split("-")[1]) > largest_previous_competition_number:
                            largest_previous_competition_number = int(k.split("-")[1])
                self.previous_competition_version = f"{COMPETITION_PREVIOUS_PREFIX}-{largest_previous_competition_number}"
            else:
                # we have completed at least one competition with this prefix, so the previous competition number is the last one we completed
                self.previous_competition_version = f"{COMPETITION_PREFIX}-{int(number_of_competitions_since_start-1)}"

            if self.offline_scores.get(self.previous_competition_version) is None:
                self.offline_scores[self.previous_competition_version] = np.zeros(self.metagraph.n, dtype=np.float32)

            self.competition_version = f"{COMPETITION_PREFIX}-{int(number_of_competitions_since_start)}"

            if self.offline_scores.get(self.competition_version) is None:
                self.offline_scores[self.competition_version] = np.zeros(self.metagraph.n, dtype=np.float32)

            # SETUP OFFLINE MINERS SCORED
            if self.offline_miners_scored.get(self.competition_version) is None:
                self.offline_miners_scored[self.competition_version] = {}

            if not isinstance(self.offline_miners_scored[self.competition_version], dict):
                self.offline_miners_scored[self.competition_version] = {}

            if self.offline_miners_scored[self.competition_version].get(self.regrade_version) is None:
                self.offline_miners_scored[self.competition_version][self.regrade_version] = []

            # SETUP OFFLINE MODEL NAMES
            if self.offline_model_names.get(self.competition_version) is None:
                self.offline_model_names[self.competition_version] = {}

            self.miners_left_to_score = []

            # if an offline_score is 0 (we should try again), we need to add the miner to the list of miners left to score,
            # so clear out the offline_miners_scored for this competition for those miners
            # (iterate over a copy, since we remove from the list in place)
            for uid in list(self.offline_miners_scored[self.competition_version][self.regrade_version]):
                if self.offline_scores[self.competition_version][uid] <= 0.01:  # little wiggle room
                    #bt.logging.debug(f"OFFLINE: removing miner {uid} from offline_miners_scored for competition {self.competition_version} because score is less than 0.01")
                    self.offline_miners_scored[self.competition_version][self.regrade_version].remove(uid)

            # add all miners that are alive and not already scored to the list of miners left to score
            for uid in get_alive_uids(self):
                if uid not in [int(x) for x in self.offline_miners_scored[self.competition_version][self.regrade_version]]:
                    self.miners_left_to_score.append(int(uid))

            # if a regrade has been set for the comp, then reset the scores for the miners
            #bt.logging.debug(f"OFFLINE: regrade version: {self.regrade_version}")
            #bt.logging.debug(f"OFFLINE: regrade check - offline miners scored: {self.offline_miners_scored[self.competition_version][self.regrade_version]}")
            #bt.logging.debug(f"OFFLINE: regrade check - offline scores: {self.offline_scores[self.competition_version]}")
            for uid, score in enumerate(self.offline_scores[self.competition_version]):
                #bt.logging.debug(f"OFFLINE: regrade check for uid: {uid}")
                if score > 0.0 and uid not in [int(x) for x in self.offline_miners_scored[self.competition_version][self.regrade_version]]:
                    #bt.logging.debug(f"OFFLINE: resetting miner {uid}'s score for competition {self.competition_version} for regrade")
                    self.offline_scores[self.competition_version][uid] = 0.0
                #bt.logging.debug(f"OFFLINE: regrade check for uid done: {uid}")

            # if the number of keys in offline_scores grows too large, we may need to delete the oldest one
            # if len(self.offline_scores.keys()) > 6:
            #     oldest_key = list(self.offline_scores.keys())[0]
            #     del self.offline_scores[oldest_key]
            #     del self.offline_miners_scored[oldest_key]
            #     del self.offline_model_names[oldest_key]
        except Exception as e:
            bt.logging.error(f"Error updating competition numbers: {e}")
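The competition index computed above is just the number of elapsed days since the deployment date divided by the competition length, truncated. A standalone sketch of that arithmetic (the constants below are hypothetical stand-ins for the values in the validator's constants module):

```python
from datetime import datetime, timezone

# Hypothetical constants for illustration; the real values live in the
# repo's constants module.
DEPLOYED_DATE = "2024-01-01"
COMPETITION_LENGTH_DAYS = 7
COMPETITION_PREFIX = "comp"

def competition_version(now: datetime) -> str:
    """Derive the current competition key from elapsed time, as in
    update_competition_numbers."""
    start = datetime.strptime(DEPLOYED_DATE, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    delta = now - start
    days = delta.days + delta.seconds / (24 * 3600)
    return f"{COMPETITION_PREFIX}-{int(days / COMPETITION_LENGTH_DAYS)}"

# 15 days after deployment, with 7-day competitions, is competition 2
v = competition_version(datetime(2024, 1, 16, tzinfo=timezone.utc))
```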
bitagent_subnet-main/common/utils/__init__.py
ADDED
@@ -0,0 +1,4 @@
from . import config
from . import misc
from . import uids
from . import shell
bitagent_subnet-main/common/utils/config.py
ADDED
|
@@ -0,0 +1,284 @@
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 Opentensor Foundation

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import os
import argparse
import subprocess
import bittensor as bt

def is_cuda_available():
    try:
        output = subprocess.check_output(["nvidia-smi", "-L"], stderr=subprocess.STDOUT)
        if "NVIDIA" in output.decode("utf-8"):
            return "cuda"
    except Exception:
        pass
    try:
        output = subprocess.check_output(["nvcc", "--version"]).decode("utf-8")
        if "release" in output:
            return "cuda"
    except Exception:
        pass
    return "cpu"

def check_config(cls, config: "bt.Config"):
    r"""Checks/validates the config namespace object."""
    bt.logging.check_config(config)

    full_path = os.path.expanduser(
        "{}/{}/{}/netuid{}/{}".format(
            config.logging.logging_dir,  # TODO: change from ~/.bittensor/miners to ~/.bittensor/neurons
            config.wallet.name,
            config.wallet.hotkey,
            config.netuid,
            config.neuron.name,
        )
    )
    print("full path:", full_path)
    config.neuron.full_path = os.path.expanduser(full_path)
    if not os.path.exists(config.neuron.full_path):
        os.makedirs(config.neuron.full_path, exist_ok=True)

    #if not config.neuron.dont_save_events:
    #    # Add custom event logger for the events.
    #    logger.level("EVENTS", no=38, icon="📝")
    #    logger.add(
    #        os.path.join(config.neuron.full_path, "events.log"),
    #        rotation=config.neuron.events_retention_size,
    #        serialize=True,
    #        enqueue=True,
    #        backtrace=False,
    #        diagnose=False,
    #        level="EVENTS",
    #        format="{time:YYYY-MM-DD at HH:mm:ss} | {level} | {message}",
    #    )


def add_args(cls, parser):
    """
    Adds relevant arguments to the parser for operation.
    """
    # Netuid Arg: The netuid of the subnet to connect to.
    parser.add_argument("--netuid", type=int, help="Subnet netuid", default=1)
    neuron_type = (
        "validator" if "miner" not in cls.__name__.lower() else "miner"
    )

    parser.add_argument(
        "--openai-api-key",
        type=str,
        default="EMPTY",
        help="the OpenAI API key - defaults to EMPTY",
    )
    parser.add_argument(
        "--openai-api-base",
        type=str,
        default="http://localhost:8000/v1",
        help="the OpenAI API base url - defaults to a local LLM server (like VLLM)",
    )
    parser.add_argument(
        "--neuron.name",
        type=str,
        help="Trials for this neuron go in neuron.root / (wallet_cold - wallet_hot) / neuron.name.",
        default=neuron_type,
    )

    parser.add_argument(
        "--neuron.visible_devices",
        type=str,
        help="Comma separated list of visible cuda devices.",
        default="",
    )

    parser.add_argument(
        "--neuron.device",
        type=str,
        help="Device to run on.",
        default=is_cuda_available(),
    )

    parser.add_argument(
        "--neuron.epoch_length",
        type=int,
        help="The default epoch length (how often we set weights, measured in 12 second blocks).",
        default=100,
    )

    parser.add_argument(
        "--neuron.events_retention_size",
        type=str,
        help="Events retention size.",
        default="2 GB",
    )

    parser.add_argument(
        "--neuron.dont_save_events",
        action="store_true",
        help="If set, we don't save events to a log file.",
        default=False,
    )

    parser.add_argument(
        "--log_level",
        type=str,
        choices=["trace", "debug", "info"],  # Add more levels if needed
        help="Logging level to use",
        default="info",
    )

    if neuron_type == "validator":

        parser.add_argument(
            "--validator-hf-cache-dir",
            type=str,
            default="~/.cache/huggingface/hub",
            help="the directory where the HF models are stored on your system - this is where we delete the models from after we're done serving them",
        )
        parser.add_argument(
            "--validator-hf-server-port",
            type=int,
            default=8028,
            help="the port of the docker container to run the offline HF model check",
        )
        parser.add_argument(
            "--validator-hf-server-mem-fraction-static",
            type=float,
            default=0.40,
            help="the fraction of the GPU memory to use for the HF server",
        )

        parser.add_argument(
            "--validator-model-name",
            type=str,
            default="thesven/Mistral-7B-Instruct-v0.3-GPTQ",
            help="the OpenAI model name - defaults to thesven/Mistral-7B-Instruct-v0.3-GPTQ, the model that the validator uses to rewrite user queries",
        )

        parser.add_argument(
            "--neuron.num_concurrent_forwards",
            type=int,
            help="The number of concurrent forwards running at any time.",
            default=1,
        )

        parser.add_argument(
            "--neuron.sample_size",
            type=int,
            help="The number of miners to query in a single step.",
            default=10,
        )

        parser.add_argument(
            "--neuron.disable_set_weights",
            action="store_true",
            help="Disables setting weights.",
            default=False,
        )

        parser.add_argument(
            "--neuron.moving_average_alpha",
            type=float,
            help="Moving average alpha parameter, how much to add of the new observation.",
            default=0.05,
        )

        parser.add_argument(
            "--wandb.on",
            type=bool,
            default=True,
            help="Enable wandb logging.",
        )

        parser.add_argument(
            "--neuron.axon_off",
            "--axon_off",
            action="store_true",
            # Note: the validator needs to serve an Axon with their IP or they may
            # be blacklisted by the firewall of serving peers on the network.
            help="Set this flag to not attempt to serve an Axon.",
            default=False,
        )

        parser.add_argument(
            "--neuron.vpermit_tao_limit",
            type=int,
            help="The maximum number of TAO allowed to query a validator with a vpermit.",
            default=4096,
        )

    else:
        # grab the command line arguments to find the netuid and set the default accordingly
        # for mainnet (SN20), we'll set to blacklist any validator without a validator permit
        # for testnet (SN76), or any other netuid besides 20, we'll allow validators without a permit
        # this is b/c our testnet validator does not have enough stake to have a "permit"
        args, _ = parser.parse_known_args()
        parser.add_argument(
            "--blacklist.force_validator_permit",
            action="store_true",
            help="If set, we will force incoming requests to have a permit.",
            default=(args.netuid == 20),
        )

        parser.add_argument(
            "--blacklist.allow_non_registered",
            action="store_true",
            help="If set, miners will accept queries from non registered entities. (Dangerous!)",
            default=False,
        )

        parser.add_argument(
            "--hf-model-name-to-run",
            type=str,
            default="Salesforce/xLAM-7b-r",
            help="the HF model name to run - defaults to Salesforce/xLAM-7b-r",
        )

        parser.add_argument(
            "--miner",
            type=str,
            default="default",
            help="Miner to load. Default choices are 'default' and 'mock'. Pass your custom miner name as appropriate.",
        )

        parser.add_argument(
            "--miner-hf-model-name-to-submit",
            type=str,
            default="Salesforce/xLAM-7b-r",
            help="the HF model name that you've uploaded to the HF hub to be evaluated, will be returned when the validator asks for your model to evaluate.",
        )

def config(cls):
    """
    Returns the configuration object specific to this miner or validator after adding relevant arguments.
    """
    parser = argparse.ArgumentParser()
    bt.wallet.add_args(parser)
    bt.subtensor.add_args(parser)
    bt.logging.add_args(parser)
    bt.axon.add_args(parser)
    cls.add_args(parser)
    args = parser.parse_args()

    # Conditional logging based on the argument
    logging_level = args.log_level
    if logging_level == "trace":
        bt.trace()
    elif logging_level == "debug":
        bt.debug()

    return bt.config(parser)
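The miner branch of `add_args` uses a two-pass trick: `parse_known_args` peeks at `--netuid` before `--blacklist.force_validator_permit` is registered, so that flag's default can depend on another flag's value. A minimal standalone sketch of the pattern (flag names here are illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--netuid", type=int, default=1)

# First pass: read only the flags registered so far; unknown flags are
# returned as extras instead of raising an error.
args, _ = parser.parse_known_args(["--netuid", "20", "--other", "x"])

# Now register a flag whose default depends on the value we just peeked at.
parser.add_argument(
    "--force-validator-permit",
    action="store_true",
    default=(args.netuid == 20),
)

final, _ = parser.parse_known_args(["--netuid", "20", "--other", "x"])
```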
bitagent_subnet-main/common/utils/misc.py
ADDED
|
@@ -0,0 +1,112 @@
# The MIT License (MIT)
# Copyright © 2023 Yuma Rao
# Copyright © 2023 Opentensor Foundation

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import time
import math
import hashlib as rpccheckhealth
from math import floor
from typing import Callable, Any
from functools import lru_cache, update_wrapper


# LRU Cache with TTL
def ttl_cache(maxsize: int = 128, typed: bool = False, ttl: int = -1):
    """
    Decorator that creates a cache of the most recently used function calls with a time-to-live (TTL) feature.
    The cache evicts the least recently used entries if the cache exceeds the `maxsize` or if an entry has
    been in the cache longer than the `ttl` period.

    Args:
        maxsize (int): Maximum size of the cache. Once the cache grows to this size, subsequent entries
            replace the least recently used ones. Defaults to 128.
        typed (bool): If set to True, arguments of different types will be cached separately. For example,
            f(3) and f(3.0) will be treated as distinct calls with distinct results. Defaults to False.
        ttl (int): The time-to-live for each cache entry, measured in seconds. If set to a non-positive value,
            the TTL is set to a very large number, effectively making the cache entries permanent. Defaults to -1.

    Returns:
        Callable: A decorator that can be applied to functions to cache their return values.

    The decorator is useful for caching results of functions that are expensive to compute and are called
    with the same arguments frequently within short periods of time. The TTL feature helps in ensuring
    that the cached values are not stale.

    Example:
        @ttl_cache(ttl=10)
        def get_data(param):
            # Expensive data retrieval operation
            return data
    """
    if ttl <= 0:
        ttl = 65536
    hash_gen = _ttl_hash_gen(ttl)

    def wrapper(func: Callable) -> Callable:
        @lru_cache(maxsize, typed)
        def ttl_func(ttl_hash, *args, **kwargs):
            return func(*args, **kwargs)

        def wrapped(*args, **kwargs) -> Any:
            th = next(hash_gen)
            return ttl_func(th, *args, **kwargs)

        return update_wrapper(wrapped, func)

    return wrapper


def _ttl_hash_gen(seconds: int):
    """
    Internal generator function used by the `ttl_cache` decorator to generate a new hash value at regular
    time intervals specified by `seconds`.

    Args:
        seconds (int): The number of seconds after which a new hash value will be generated.

    Yields:
        int: A hash value that represents the current time interval.

    This generator is used to create time-based hash values that enable the `ttl_cache` to determine
    whether cached entries are still valid or if they have expired and should be recalculated.
    """
    start_time = time.time()
    while True:
        yield floor((time.time() - start_time) / seconds)

| 92 |
+
# 12 seconds updating block.
|
| 93 |
+
@ttl_cache(maxsize=1, ttl=12)
|
| 94 |
+
def ttl_get_block(self) -> int:
|
| 95 |
+
"""
|
| 96 |
+
Retrieves the current block number from the blockchain. This method is cached with a time-to-live (TTL)
|
| 97 |
+
of 12 seconds, meaning that it will only refresh the block number from the blockchain at most every 12 seconds,
|
| 98 |
+
reducing the number of calls to the underlying blockchain interface.
|
| 99 |
+
|
| 100 |
+
Returns:
|
| 101 |
+
int: The current block number on the blockchain.
|
| 102 |
+
|
| 103 |
+
This method is useful for applications that need to access the current block number frequently and can
|
| 104 |
+
tolerate a delay of up to 12 seconds for the latest information. By using a cache with TTL, the method
|
| 105 |
+
efficiently reduces the workload on the blockchain interface.
|
| 106 |
+
|
| 107 |
+
Example:
|
| 108 |
+
current_block = ttl_get_block(self)
|
| 109 |
+
|
| 110 |
+
Note: self here is the miner or validator instance
|
| 111 |
+
"""
|
| 112 |
+
return self.subtensor.get_current_block()
|
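For illustration, here is a standalone sketch of the TTL behavior above. The decorator body is copied from the code so it runs without the rest of the module; the 1-second TTL and the `expensive` function are invented for the demo:

```python
import time
from functools import lru_cache, update_wrapper
from math import floor
from typing import Any, Callable

# Standalone copies of the helpers above, for a self-contained demo.
def _ttl_hash_gen(seconds: int):
    start_time = time.time()
    while True:
        yield floor((time.time() - start_time) / seconds)

def ttl_cache(maxsize: int = 128, typed: bool = False, ttl: int = -1):
    if ttl <= 0:
        ttl = 65536
    hash_gen = _ttl_hash_gen(ttl)

    def wrapper(func: Callable) -> Callable:
        @lru_cache(maxsize, typed)
        def ttl_func(ttl_hash, *args, **kwargs):
            return func(*args, **kwargs)

        def wrapped(*args, **kwargs) -> Any:
            # The current time bucket becomes part of the lru_cache key,
            # so a new bucket forces a recomputation.
            return ttl_func(next(hash_gen), *args, **kwargs)

        return update_wrapper(wrapped, func)

    return wrapper

calls = []

@ttl_cache(maxsize=1, ttl=1)
def expensive(x):
    calls.append(x)  # side effect lets us count real invocations
    return x * 2

assert expensive(3) == 6
assert expensive(3) == 6   # served from cache, no new call
assert len(calls) == 1
time.sleep(1.1)            # let the 1-second TTL bucket roll over
assert expensive(3) == 6   # recomputed after expiry
assert len(calls) == 2
```

Note that entries expire at bucket boundaries rather than exactly `ttl` seconds after insertion, which is an acceptable approximation for coarse caches like the 12-second block cache.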
bitagent_subnet-main/common/utils/shell.py
ADDED
|
@@ -0,0 +1,47 @@
import shlex
import subprocess
import bittensor as bt
from threading import Thread

def execute_shell_command(command: str, model_name: str) -> subprocess.Popen:
    """
    Execute a shell command and stream the output to the caller in real-time.

    Args:
        command: Shell command as a string (can include \\ line continuations)
        model_name: Model name to redact from logged output
    Returns:
        subprocess.Popen: The process handle for further interaction.
    """
    # Replace backslash-newline continuations with spaces and split using shlex
    command = command.replace("\\\n", " ").replace("\\", " ")
    parts = shlex.split(command)  # Handles quoted strings correctly

    try:
        # Run the process
        process = subprocess.Popen(
            parts, text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE
        )

        def stream_output(stream, stream_name):
            for line in iter(stream.readline, ''):
                line = line.rstrip('\n')
                if stream_name == "STDERR":
                    # log everything except for token generation metrics
                    if "#new-token" not in line and "Decode batch." not in line:
                        redacted_line = line.replace(model_name, "[REDACTED]")
                        bt.logging.debug(f"{stream_name}: {redacted_line}")

                # Uncomment this if you want STDOUT logging as well:
                # else:
                #     print(f"{stream_name}: {line}")

            stream.close()

        # Stream both stdout and stderr
        Thread(target=stream_output, args=(process.stdout, "STDOUT")).start()
        Thread(target=stream_output, args=(process.stderr, "STDERR")).start()

        return process
    except Exception as e:
        print(f"Error executing command: {command}. Exception: {e}")
        raise
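A small illustration of the command preprocessing step above. The multi-line `command` string is a made-up example, not a real launch command; it shows how backslash continuations are collapsed before `shlex.split` tokenizes the string, preserving quoted arguments:

```python
import shlex

# Hypothetical multi-line command of the kind passed to execute_shell_command.
command = """python -m server \\
    --model "my model" \\
    --port 8000"""

# Same preprocessing as in execute_shell_command: collapse backslash-newline
# continuations (and any stray backslashes) to spaces, then tokenize.
flattened = command.replace("\\\n", " ").replace("\\", " ")
parts = shlex.split(flattened)
print(parts)
# ['python', '-m', 'server', '--model', 'my model', '--port', '8000']
```

Note the quoted `"my model"` survives as a single argv entry, which a naive `str.split()` would have broken in two.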
bitagent_subnet-main/common/utils/uids.py
ADDED
|
@@ -0,0 +1,113 @@
import random
import numpy as np
import bittensor as bt
from typing import List
from bitagent.protocol import IsAlive
from cachetools import cached, TTLCache

def check_uid_availability(
    metagraph: "bt.metagraph.Metagraph", uid: int, vpermit_tao_limit: int  # type: ignore
) -> bool:
    """Check if uid is available. The UID is considered available if it is serving and has less than vpermit_tao_limit stake
    Args:
        metagraph (:obj: bt.metagraph.Metagraph): Metagraph object
        uid (int): uid to be checked
        vpermit_tao_limit (int): Validator permit tao limit
    Returns:
        bool: True if uid is available, False otherwise
    """
    # Filter non-serving axons.
    #if not metagraph.axons[uid].is_serving:
    #    return False
    # don't hit the subnet owner
    if uid == 0:
        return False
    # Filter out validators holding more than vpermit_tao_limit stake.
    if metagraph.validator_permit[uid]:
        if metagraph.S[uid] > vpermit_tao_limit:
            return False
    # any miner receiving incentive should be queried
    if metagraph.I[uid] > 0:
        return True
    # Available otherwise.
    return True

# Create a cache with a maximum size of 256 items and a TTL of 1 hour (3600 seconds)
cache = TTLCache(maxsize=256, ttl=3600)

@cached(cache)
def get_alive_uids(self):
    start = 0
    finish = start + 10
    results = []
    # query 10 axons at a time
    while start < len(self.metagraph.axons):
        result = self.dendrite.query(
            axons=self.metagraph.axons[start:finish], synapse=IsAlive(), deserialize=False, timeout=5.0
        )
        results.extend(result)
        start = finish
        finish = start + 10
        if finish > len(self.metagraph.axons):
            finish = len(self.metagraph.axons)
    alive_uids = [uid for uid, response in zip(range(self.metagraph.n.item()), results) if response.response and response.dendrite.status_code == 200]

    # if not alive for querying, they won't get tasks for an hour, so set their score to -0.5
    for uid in self.metagraph.uids:
        if uid not in alive_uids:
            self.offline_scores[self.competition_version][uid] = -0.5
            self.scores[uid] = -0.5
    #bt.logging.debug(f"Found {len(alive_uids)} alive UIDs, caching for 1 hour")
    return alive_uids

def get_random_uids(
    self, k: int, exclude: List[int] = None
) -> np.ndarray:
    """Returns k available random uids from the metagraph.
    Args:
        k (int): Number of uids to return.
        exclude (List[int]): List of uids to exclude from the random sampling.
    Returns:
        uids (np.ndarray): Randomly sampled available uids.
    Notes:
        If `k` is larger than the number of available `uids`, set `k` to the number of available `uids`.
    """
    candidate_uids = []
    avail_uids = []

    for uid in get_alive_uids(self):
        uid_is_available = check_uid_availability(
            self.metagraph, uid, self.config.neuron.vpermit_tao_limit
        )
        uid_is_not_excluded = exclude is None or uid not in exclude

        if uid_is_available:
            avail_uids.append(uid)
            if uid_is_not_excluded:
                candidate_uids.append(uid)

    # Check if candidate_uids contain enough for querying; if not, grab all available uids
    available_uids = candidate_uids
    while True:
        try:
            if len(candidate_uids) < k:
                available_uids += random.sample(
                    [uid for uid in avail_uids if uid not in candidate_uids],
                    k - len(candidate_uids),
                )
            uids = random.sample(available_uids, k)
            return uids
        except Exception:
            # Not enough UIDs for the requested sample; reduce k and retry.
            #bt.logging.debug(f"Reduced sample size from {k} to {k-1} and trying again.")
            k -= 1

def get_uid_rank(self, uid: int) -> int:
    """Returns the rank of the uid in the metagraph.
    Args:
        uid (int): uid to get the rank of.
    Returns:
        rank (int): Rank of the uid in the metagraph.
    """
    # Rank is the uid's position when sorted by descending incentive.
    rank = (-self.metagraph.I).argsort().tolist().index(uid)
    return rank
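A toy illustration of the ranking logic in `get_uid_rank`. The incentive vector `I` is invented for the demo; in the real code it comes from `self.metagraph.I`:

```python
import numpy as np

# Toy incentive vector standing in for metagraph.I.
I = np.array([0.1, 0.5, 0.0, 0.3])

def rank_of(uid: int) -> int:
    # Negating I makes argsort order uids by descending incentive;
    # the uid's position in that ordering is its rank.
    return (-I).argsort().tolist().index(uid)

assert rank_of(1) == 0  # highest incentive -> rank 0
assert rank_of(3) == 1
assert rank_of(0) == 2
assert rank_of(2) == 3  # zero incentive -> last
```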
bitagent_subnet-main/common/utils/weight_utils.py
ADDED
|
@@ -0,0 +1,216 @@
import numpy as np
from typing import Tuple, List, Union, Any
import bittensor
from numpy import ndarray, dtype, floating, complexfloating

U32_MAX = 4294967295
U16_MAX = 65535


def normalize_max_weight(
    x: np.ndarray, limit: float = 0.1
) -> np.ndarray:
    r"""Normalizes the numpy array x so that sum(x) = 1 and the max value is not greater than the limit.
    Args:
        x (:obj:`np.ndarray`):
            Array to be max_value normalized.
        limit: float:
            Max value after normalization.
    Returns:
        y (:obj:`np.ndarray`):
            Normalized x array.
    """
    epsilon = 1e-7  # For numerical stability after normalization

    weights = x.copy()
    values = np.sort(weights)

    if x.sum() == 0 or len(x) * limit <= 1:
        return np.ones_like(x) / x.size
    else:
        estimation = values / values.sum()

        if estimation.max() <= limit:
            return weights / weights.sum()

        # Find the cumulative sum of the sorted array
        cumsum = np.cumsum(estimation, 0)

        # Determine the index of the cutoff
        estimation_sum = np.array(
            [(len(values) - i - 1) * estimation[i] for i in range(len(values))]
        )
        n_values = (estimation / (estimation_sum + cumsum + epsilon) < limit).sum()

        # Determine the cutoff based on the index
        cutoff_scale = (limit * cumsum[n_values - 1] - epsilon) / (
            1 - (limit * (len(estimation) - n_values))
        )
        cutoff = cutoff_scale * values.sum()

        # Apply the cutoff
        weights[weights > cutoff] = cutoff

        y = weights / weights.sum()

        return y


def convert_weights_and_uids_for_emit(
    uids: np.ndarray, weights: np.ndarray
) -> Tuple[List[int], List[int]]:
    r"""Converts weights into integer u16 representation that sum to MAX_INT_WEIGHT.
    Args:
        uids (:obj:`np.ndarray`):
            Array of uids as destinations for passed weights.
        weights (:obj:`np.ndarray`):
            Array of weights.
    Returns:
        weight_uids (List[int]):
            Uids as a list.
        weight_vals (List[int]):
            Weights as a list.
    """
    # Checks.
    uids = np.asarray(uids)
    weights = np.asarray(weights)

    # Get non-zero weights and corresponding uids
    non_zero_weights = weights[weights > 0]
    non_zero_weight_uids = uids[weights > 0]

    # Debugging information
    bittensor.logging.debug(f"weights: {weights}")
    bittensor.logging.debug(f"non_zero_weights: {non_zero_weights}")
    bittensor.logging.debug(f"uids: {uids}")
    bittensor.logging.debug(f"non_zero_weight_uids: {non_zero_weight_uids}")

    if np.min(weights) < 0:
        raise ValueError(
            "Passed weight is negative, cannot exist on chain: {}".format(weights)
        )
    if np.min(uids) < 0:
        raise ValueError("Passed uid is negative, cannot exist on chain: {}".format(uids))
    if len(uids) != len(weights):
        raise ValueError(
            "Passed weights and uids must have the same length, got {} and {}".format(
                len(uids), len(weights)
            )
        )
    if np.sum(weights) == 0:
        bittensor.logging.debug("nothing to set on chain")
        return [], []  # Nothing to set on chain.
    else:
        max_weight = float(np.max(weights))
        weights = [
            float(value) / max_weight for value in weights
        ]  # max-upscale values (max_weight = 1).
        bittensor.logging.debug(f"setting on chain max: {max_weight} and weights: {weights}")

    weight_vals = []
    weight_uids = []
    for i, (weight_i, uid_i) in enumerate(list(zip(weights, uids))):
        uint16_val = round(
            float(weight_i) * int(U16_MAX)
        )  # convert to int representation.

        # Filter zeros
        if uint16_val != 0:
            weight_vals.append(uint16_val)
            weight_uids.append(uid_i)
    bittensor.logging.debug(f"final params: {weight_uids} : {weight_vals}")
    return weight_uids, weight_vals


def process_weights_for_netuid(
    uids,
    weights: np.ndarray,
    netuid: int,
    subtensor: "bittensor.subtensor",
    metagraph: "bittensor.metagraph" = None,
    exclude_quantile: int = 0,
) -> Union[tuple[ndarray[Any, dtype[Any]], Union[
        Union[ndarray[Any, dtype[floating[Any]]], ndarray[Any, dtype[complexfloating[Any, Any]]]], Any]], tuple[
        ndarray[Any, dtype[Any]], ndarray], tuple[Any, ndarray]]:
    bittensor.logging.debug("process_weights_for_netuid()")
    bittensor.logging.debug("weights: ")
    bittensor.logging.debug(weights)
    bittensor.logging.debug("netuid: ")
    bittensor.logging.debug(netuid)
    bittensor.logging.debug("subtensor: ")
    bittensor.logging.debug(subtensor)
    bittensor.logging.debug("metagraph: ")
    bittensor.logging.debug(metagraph)

    # Get the latest metagraph from chain if metagraph is None.
    if metagraph is None:
        metagraph = subtensor.metagraph(netuid)

    # Cast weights to floats.
    if not isinstance(weights, np.ndarray) or weights.dtype != np.float32:
        weights = weights.astype(np.float32)

    # Network configuration parameters from the subtensor.
    # These parameters determine the range of acceptable weights for each neuron.
    quantile = exclude_quantile / U16_MAX
    min_allowed_weights = subtensor.min_allowed_weights(netuid=netuid)
    max_weight_limit = subtensor.max_weight_limit(netuid=netuid)
    bittensor.logging.debug("quantile", quantile)
    bittensor.logging.debug("min_allowed_weights", min_allowed_weights)
    bittensor.logging.debug("max_weight_limit", max_weight_limit)

    # Find all non-zero weights.
    non_zero_weight_idx = np.argwhere(weights > 0).squeeze()
    non_zero_weight_uids = uids[non_zero_weight_idx]
    non_zero_weights = weights[non_zero_weight_idx]
    if non_zero_weights.size == 0 or metagraph.n < min_allowed_weights:
        bittensor.logging.warning("No non-zero weights, returning uniform weights.")
        final_weights = np.ones(metagraph.n) / metagraph.n
        bittensor.logging.debug("final_weights", final_weights)
        return np.arange(len(final_weights)), final_weights

    elif non_zero_weights.size < min_allowed_weights:
        bittensor.logging.warning(
            "Fewer non-zero weights than the minimum allowed, padding with minimal weights."
        )
        weights = (
            np.ones(metagraph.n) * 1e-5
        )  # creating minimum even non-zero weights
        weights[non_zero_weight_idx] += non_zero_weights
        bittensor.logging.debug("final_weights", weights)
        normalized_weights = normalize_max_weight(
            x=weights, limit=max_weight_limit
        )
        return np.arange(len(normalized_weights)), normalized_weights

    bittensor.logging.debug("non_zero_weights: ")
    bittensor.logging.debug(non_zero_weights)

    # Compute the exclude quantile and find the weights in the lowest quantile
    max_exclude = max(0, len(non_zero_weights) - min_allowed_weights) / len(
        non_zero_weights
    )
    exclude_quantile = min([quantile, max_exclude])
    lowest_quantile = np.quantile(non_zero_weights, exclude_quantile)
    bittensor.logging.debug("max_exclude", max_exclude)
    bittensor.logging.debug("exclude_quantile: ")
    bittensor.logging.debug(exclude_quantile)
    bittensor.logging.debug("lowest_quantile: ")
    bittensor.logging.debug(lowest_quantile)

    # Exclude all weights below the allowed quantile.
    non_zero_weight_uids = non_zero_weight_uids[lowest_quantile <= non_zero_weights]
    non_zero_weights = non_zero_weights[lowest_quantile <= non_zero_weights]
    bittensor.logging.debug("non_zero_weight_uids: ")
    bittensor.logging.debug(non_zero_weight_uids)
    bittensor.logging.debug("non_zero_weights: ")
    bittensor.logging.debug(non_zero_weights)

    # Normalize weights and return.
    normalized_weights = normalize_max_weight(
        x=non_zero_weights, limit=max_weight_limit
    )
    bittensor.logging.debug("final_weights: ")
    bittensor.logging.debug(normalized_weights)

    return non_zero_weight_uids, normalized_weights
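A minimal re-statement of the quantization step in `convert_weights_and_uids_for_emit`, with made-up `uids` and `weights`: the weights are max-upscaled so the largest becomes 1.0, then rounded to uint16 values with zeros dropped:

```python
import numpy as np

U16_MAX = 65535

# Made-up inputs for the demo.
uids = np.array([0, 1, 2, 3])
weights = np.array([0.0, 0.2, 0.4, 0.4])

# Max-upscale so the largest weight maps to exactly 1.0 (and thus U16_MAX).
max_w = float(np.max(weights))
scaled = [float(w) / max_w for w in weights]

weight_uids, weight_vals = [], []
for uid, w in zip(uids, scaled):
    v = round(w * U16_MAX)  # convert to uint16 representation
    if v != 0:              # drop zero weights, as the chain call expects
        weight_vals.append(v)
        weight_uids.append(int(uid))

print(weight_uids, weight_vals)
# [1, 2, 3] [32768, 65535, 65535]
```

The uid with zero weight is filtered out entirely, and the remaining integer values preserve the relative proportions of the original floats up to rounding.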
bitagent_subnet-main/contrib/CODE_REVIEW_DOCS.md
ADDED
|
@@ -0,0 +1,72 @@
# Code Review
### Conceptual Review

A review can be a conceptual review, where the reviewer leaves a comment:
* `Concept (N)ACK`, meaning "I do (not) agree with the general goal of this pull
  request",
* `Approach (N)ACK`, meaning `Concept ACK`, but "I do (not) agree with the
  approach of this change".

A `NACK` needs to include a rationale for why the change is not worthwhile.
NACKs without accompanying reasoning may be disregarded.
After conceptual agreement on the change, code review can be provided. A review
begins with `ACK BRANCH_COMMIT`, where `BRANCH_COMMIT` is the top of the PR
branch, followed by a description of how the reviewer did the review. The
following language is used within pull request comments:

- "I have tested the code", involving change-specific manual testing in
  addition to running the unit, functional, or fuzz tests, and in case it is
  not obvious how the manual testing was done, it should be described;
- "I have not tested the code, but I have reviewed it and it looks
  OK, I agree it can be merged";
- A "nit" refers to a trivial, often non-blocking issue.

### Code Review
Project maintainers reserve the right to weigh the opinions of peer reviewers
using common sense judgement and may also weigh based on merit. Reviewers that
have demonstrated a deeper commitment and understanding of the project over time
or who have clear domain expertise may naturally have more weight, as one would
expect in all walks of life.

Where a patch set affects consensus-critical code, the bar will be much
higher in terms of discussion and peer review requirements, keeping in mind that
mistakes could be very costly to the wider community. This includes refactoring
of consensus-critical code.

Where a patch set proposes to change the Bittensor consensus, it must have been
discussed extensively on the discord server and other channels, be accompanied by a widely
discussed BIP and have a generally widely perceived technical consensus of being
a worthwhile change based on the judgement of the maintainers.

### Finding Reviewers

As most reviewers are themselves developers with their own projects, the review
process can be quite lengthy, and some amount of patience is required. If you find
that you've been waiting for a pull request to be given attention for several
months, there may be a number of reasons for this, some of which you can do something
about:

- It may be because of a feature freeze due to an upcoming release. During this time,
  only bug fixes are taken into consideration. If your pull request is a new feature,
  it will not be prioritized until after the release. Wait for the release.
- It may be because the changes you are suggesting do not appeal to people. Rather than
  nits and critique, which require effort and mean they care enough to spend time on your
  contribution, thundering silence is a good sign of widespread (mild) dislike of a given change
  (because people don't assume *others* won't actually like the proposal). Don't take
  that personally, though! Instead, take another critical look at what you are suggesting
  and see if it: changes too much, is too broad, doesn't adhere to the
  [developer notes](DEVELOPMENT_WORKFLOW.md), is dangerous or insecure, is messily written, etc.
  Identify and address any of the issues you find. Then ask e.g. on IRC if someone could give
  their opinion on the concept itself.
- It may be because your code is too complex for all but a few people, and those people
  may not have realized your pull request even exists. A great way to find people who
  are qualified and care about the code you are touching is the
  [Git Blame feature](https://docs.github.com/en/github/managing-files-in-a-repository/managing-files-on-github/tracking-changes-in-a-file). Simply
  look up who last modified the code you are changing and see if you can find
  them and give them a nudge. Don't be incessant about the nudging, though.
- Finally, if all else fails, ask on IRC or elsewhere for someone to give your pull request
  a look. If you think you've been waiting for an unreasonably long time (say,
  more than a month) for no particular reason (a few lines changed, etc.),
  this is totally fine. Try to return the favor when someone else is asking
  for feedback on their code, and the universe balances out.
- Remember that the best thing you can do while waiting is give review to others!
bitagent_subnet-main/contrib/CONTRIBUTING.md
ADDED
|
@@ -0,0 +1,213 @@
# Contributing to Bittensor Subnet Development

The following is a set of guidelines for contributing to the Bittensor ecosystem. These are **HIGHLY RECOMMENDED** guidelines, but not hard-and-fast rules. Use your best judgment, and feel free to propose changes to this document in a pull request.

## Table Of Contents
1. [How Can I Contribute?](#how-can-i-contribute)
   1. [Communication Channels](#communication-channels)
   1. [Code Contribution General Guidelines](#code-contribution-general-guidelines)
      1. [Pull Request Philosophy](#pull-request-philosophy)
      1. [Pull Request Process](#pull-request-process)
      1. [Addressing Feedback](#addressing-feedback)
      1. [Squashing Commits](#squashing-commits)
      1. [Refactoring](#refactoring)
      1. [Peer Review](#peer-review)
   1. [Suggesting Features](#suggesting-enhancements-and-features)


## How Can I Contribute?
TODO(developer): Define your desired contribution procedure.

## Communication Channels
TODO(developer): Place your communication channels here

> Please follow the Bittensor Subnet [style guide](./STYLE.md) regardless of your contribution type.

Here is a high-level summary:
- Code consistency is crucial; adhere to established programming language conventions.
- Use `black` to format your Python code; it ensures readability and consistency.
- Write concise Git commit messages; summarize changes in ~50 characters.
- Follow these six commit rules:
  - Atomic Commits: Focus on one task or fix per commit.
  - Subject and Body Separation: Use a blank line to separate the subject from the body.
  - Subject Line Length: Keep it under 50 characters for readability.
  - Imperative Mood: Write the subject line as if giving a command or instruction.
  - Body Text Width: Wrap text manually at 72 characters.
  - Body Content: Explain what changed and why, not how.
- Make use of your commit messages to simplify project understanding and maintenance.

> For clear examples of each of the commit rules, see the style guide's [rules](./STYLE.md#the-six-rules-of-a-great-commit) section.

### Code Contribution General Guidelines

> Review the Bittensor Subnet [style guide](./STYLE.md) and [development workflow](./DEVELOPMENT_WORKFLOW.md) before contributing.


#### Pull Request Philosophy

Patchsets and enhancements should always be focused. A pull request could add a feature, fix a bug, or refactor code, but it should not contain a mixture of these. Please also avoid 'super' pull requests which attempt to do too much, are overly large, or overly complex, as this makes review difficult.

Specifically, pull requests must adhere to the following criteria:
- Contain fewer than 50 files. PRs with more than 50 files will be closed.
- If a PR introduces a new feature, it *must* include corresponding tests.
- Other PRs (bug fixes, refactoring, etc.) should ideally also have tests, as they provide proof of concept and prevent regressions.
- Categorize your PR properly by using GitHub labels. This aids the review process by informing reviewers about the type of change at a glance.
- Make sure your code includes adequate comments. These should explain why certain decisions were made and how your changes work.
- If your changes are extensive, consider breaking your PR into smaller, related PRs. This makes your contributions easier to understand and review.
- Be active in the discussion about your PR. Respond promptly to comments and questions to help reviewers understand your changes and speed up the acceptance process.

Generally, all pull requests must:

- Have a clear use case, fix a demonstrable bug, or serve the greater good of the project (e.g. refactoring for modularisation).
- Be well peer-reviewed.
- Follow code style guidelines.
- Not break the existing test suite.
- Where bugs are fixed, where possible, there should be unit tests demonstrating the bug and also proving the fix.
- Change relevant comments and documentation when the behaviour of code changes.

#### Pull Request Process

Please follow these steps to have your contribution considered by the maintainers:

*Before* creating the PR:
1. Read the [development workflow](./DEVELOPMENT_WORKFLOW.md) defined for this repository to understand our workflow.
2. Ensure your PR meets the criteria stated in the 'Pull Request Philosophy' section.
3. Include relevant tests for any fixed bugs or new features as stated in the [testing guide](./TESTING.md).
4. Ensure your commit messages are clear and concise. Include the issue number if applicable.
5. If you have multiple commits, rebase them into a single commit using `git rebase -i`.
6. Explain what your changes do and why you think they should be merged in the PR description, consistent with the [style guide](./STYLE.md).

*After* creating the PR:
1. Verify that all [status checks](https://help.github.com/articles/about-status-checks/) are passing after you submit your pull request.
2. Label your PR using GitHub's labeling feature. The labels help categorize the PR and streamline the review process.
3. Document your code with comments that provide a clear understanding of your changes. Explain any non-obvious parts of your code or design decisions you've made.
4. If your PR has extensive changes, consider splitting it into smaller, related PRs. This reduces the cognitive load on the reviewers and speeds up the review process.

Please be responsive and participate in the discussion on your PR! This aids in clarifying any confusion or concerns and leads to quicker resolution and merging of your PR.
|
| 87 |
+
|
| 88 |
+
> Note: If your changes are not ready for merge but you want feedback, create a draft pull request.
|
| 89 |
+
|
| 90 |
+
Following these criteria will aid in quicker review and potential merging of your PR.
|
| 91 |
+
While the prerequisites above must be satisfied prior to having your pull request reviewed, the reviewer(s) may ask you to complete additional design work, tests, or other changes before your pull request can be ultimately accepted.
|
| 92 |
+
|
| 93 |
+
When you are ready to submit your changes, create a pull request:
|
| 94 |
+
|
| 95 |
+
> **Always** follow the [style guide](./STYLE.md) and [development workflow](./DEVELOPMENT_WORKFLOW.md) before submitting pull requests.
|
| 96 |
+
|
| 97 |
+
After you submit a pull request, it will be reviewed by the maintainers. They may ask you to make changes. Please respond to any comments and push your changes as a new commit.
|
| 98 |
+
|
| 99 |
+
> Note: Be sure to merge the latest from "upstream" before making a pull request:
|
| 100 |
+
|
| 101 |
+
```bash
git remote add upstream https://github.com/opentensor/bittensor.git # TODO(developer): replace with your repo URL
git fetch upstream
git merge upstream/<your-branch-name>
git push origin <your-branch-name>
```
|
| 107 |
+
|
| 108 |
+
#### Addressing Feedback
|
| 109 |
+
|
| 110 |
+
After submitting your pull request, expect comments and reviews from other contributors. You can add more commits to your pull request by committing them locally and pushing to your fork.
|
| 111 |
+
|
| 112 |
+
You are expected to reply to any review comments before your pull request is merged. You may update the code or reject the feedback if you do not agree with it, but you should express so in a reply. If there is outstanding feedback and you are not actively working on it, your pull request may be closed.
|
| 113 |
+
|
| 114 |
+
#### Squashing Commits
|
| 115 |
+
|
| 116 |
+
If your pull request contains fixup commits (commits that change the same line of code repeatedly) or too fine-grained commits, you may be asked to [squash](https://git-scm.com/docs/git-rebase#_interactive_mode) your commits before it will be reviewed. The basic squashing workflow is shown below.
|
| 117 |
+
|
| 118 |
+
```bash
git checkout your_branch_name
git rebase -i HEAD~n
# n is normally the number of commits in the pull request.
# Set commits (except the one in the first line) from 'pick' to 'squash', save and quit.
# On the next screen, edit/refine commit messages.
# Save and quit.
git push -f # (force push to GitHub)
```
|
| 125 |
+
|
| 126 |
+
Please update the resulting commit message, if needed. It should read as a coherent message. In most cases, this means not just listing the interim commits.
|
| 127 |
+
|
| 128 |
+
If your change contains a merge commit, the above workflow may not work and you will need to remove the merge commit first. See the next section for details on how to rebase.
|
| 129 |
+
|
| 130 |
+
Please refrain from creating several pull requests for the same change. Use the pull request that is already open (or was created earlier) to amend changes. This preserves the discussion and review that happened earlier for the respective change set.
|
| 131 |
+
|
| 132 |
+
The length of time required for peer review is unpredictable and will vary from pull request to pull request.
|
| 133 |
+
|
| 134 |
+
#### Refactoring
|
| 135 |
+
|
| 136 |
+
Refactoring is a necessary part of any software project's evolution. The following guidelines cover refactoring pull requests for the project.
|
| 137 |
+
|
| 138 |
+
There are three categories of refactoring: code-only moves, code style fixes, and code refactoring. In general, refactoring pull requests should not mix these three kinds of activities in order to make refactoring pull requests easy to review and uncontroversial. In all cases, refactoring PRs must not change the behaviour of code within the pull request (bugs must be preserved as is).
|
| 139 |
+
|
| 140 |
+
Project maintainers aim for a quick turnaround on refactoring pull requests, so where possible keep them short, simple, and easy to verify.
|
| 141 |
+
|
| 142 |
+
Pull requests that refactor the code should not be made by new contributors. It requires a certain level of experience to know where the code belongs and to understand the full ramifications (including the rebase effort for open pull requests). Trivial pull requests or pull requests that refactor the code with no clear benefits may be immediately closed by the maintainers to reduce unnecessary workload on reviewing.
|
| 143 |
+
|
| 144 |
+
#### Peer Review
|
| 145 |
+
|
| 146 |
+
Anyone may participate in peer review which is expressed by comments in the pull request. Typically reviewers will review the code for obvious errors, as well as test out the patch set and opine on the technical merits of the patch. Project maintainers take into account the peer review when determining if there is consensus to merge a pull request (remember that discussions may have taken place elsewhere, not just on GitHub). The following language is used within pull-request comments:
|
| 147 |
+
|
| 148 |
+
- ACK means "I have tested the code and I agree it should be merged";
|
| 149 |
+
- NACK means "I disagree this should be merged", and must be accompanied by sound technical justification. NACKs without accompanying reasoning may be disregarded;
|
| 150 |
+
- utACK means "I have not tested the code, but I have reviewed it and it looks OK, I agree it can be merged";
|
| 151 |
+
- Concept ACK means "I agree in the general principle of this pull request";
|
| 152 |
+
- Nit refers to trivial, often non-blocking issues.
|
| 153 |
+
|
| 154 |
+
Reviewers should include the commit(s) they have reviewed in their comments. This can be done by copying the commit SHA1 hash.
|
| 155 |
+
|
| 156 |
+
A pull request that changes consensus-critical code is considerably more involved than a pull request that adds a feature to the wallet, for example. Such patches must be reviewed and thoroughly tested by several reviewers who are knowledgeable about the changed subsystems. Where new features are proposed, it is helpful for reviewers to try out the patch set on a test network and indicate that they have done so in their review. Project maintainers will take this into consideration when merging changes.
|
| 157 |
+
|
| 158 |
+
For a more detailed description of the review process, see the [Code Review Guidelines](CODE_REVIEW_DOCS.md).
|
| 159 |
+
|
| 160 |
+
> **Note:** If you find a **Closed** issue that seems like it is the same thing that you're experiencing, open a new issue and include a link to the original issue in the body of your new one.
|
| 161 |
+
|
| 162 |
+
#### How Do I Submit A (Good) Bug Report?
|
| 163 |
+
|
| 164 |
+
Please track bugs as GitHub issues.
|
| 165 |
+
|
| 166 |
+
Explain the problem and include additional details to help maintainers reproduce the problem:
|
| 167 |
+
|
| 168 |
+
* **Use a clear and descriptive title** for the issue to identify the problem.
|
| 169 |
+
* **Describe the exact steps which reproduce the problem** in as many details as possible. For example, start by explaining how you started the application, e.g. which command exactly you used in the terminal, or how you started Bittensor otherwise. When listing steps, **don't just say what you did, but explain how you did it**. For example, if you ran with a set of custom configs, explain if you used a config file or command line arguments.
|
| 170 |
+
* **Provide specific examples to demonstrate the steps**. Include links to files or GitHub projects, or copy/pasteable snippets, which you use in those examples. If you're providing snippets in the issue, use [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
|
| 171 |
+
* **Describe the behavior you observed after following the steps** and point out what exactly is the problem with that behavior.
|
| 172 |
+
* **Explain which behavior you expected to see instead and why.**
|
| 173 |
+
* **Include screenshots and animated GIFs** which show you following the described steps and clearly demonstrate the problem. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
|
| 174 |
+
* **If you're reporting that Bittensor crashed**, include a crash report with a stack trace from the operating system. On macOS, the crash report will be available in `Console.app` under "Diagnostic and usage information" > "User diagnostic reports". Include the crash report in the issue in a [code block](https://help.github.com/articles/markdown-basics/#multiple-lines), a [file attachment](https://help.github.com/articles/file-attachments-on-issues-and-pull-requests/), or put it in a [gist](https://gist.github.com/) and provide link to that gist.
|
| 175 |
+
* **If the problem is related to performance or memory**, include a CPU profile capture with your report; if you're using a GPU, include a GPU profile capture as well. Look into the [PyTorch Profiler](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html) to examine the memory usage of your model.
|
| 176 |
+
* **If the problem wasn't triggered by a specific action**, describe what you were doing before the problem happened and share more information using the guidelines below.
|
| 177 |
+
|
| 178 |
+
Provide more context by answering these questions:
|
| 179 |
+
|
| 180 |
+
* **Did the problem start happening recently** (e.g. after updating to a new version) or was this always a problem?
|
| 181 |
+
* If the problem started happening recently, **can you reproduce the problem in an older version of Bittensor?**
|
| 182 |
+
* **Can you reliably reproduce the issue?** If not, provide details about how often the problem happens and under which conditions it normally happens.
|
| 183 |
+
|
| 184 |
+
Include details about your configuration and environment:
|
| 185 |
+
|
| 186 |
+
* **Which version of Bittensor Subnet are you using?**
|
| 187 |
+
* **What commit hash are you on?** You can get the exact commit hash by checking `git log` and pasting the full commit hash.
|
| 188 |
+
* **What's the name and version of the OS you're using**?
|
| 189 |
+
* **Are you running Bittensor Subnet in a virtual machine?** If so, which VM software are you using and which operating systems and versions are used for the host and the guest?
|
| 190 |
+
* **Are you running Bittensor Subnet in a dockerized container?** If so, have you made sure that your docker container contains your latest changes and is up to date with Master branch?
|
| 191 |
+
|
| 192 |
+
### Suggesting Enhancements and Features
|
| 193 |
+
|
| 194 |
+
This section guides you through submitting an enhancement suggestion, including completely new features and minor improvements to existing functionality. Following these guidelines helps maintainers and the community understand your suggestion :pencil: and find related suggestions :mag_right:.
|
| 195 |
+
|
| 196 |
+
When you are creating an enhancement suggestion, please [include as many details as possible](#how-do-i-submit-a-good-enhancement-suggestion). Fill in [the template](https://bit.ly/atom-behavior-pr), including the steps that you imagine you would take if the feature you're requesting existed.
|
| 197 |
+
|
| 198 |
+
#### Before Submitting An Enhancement Suggestion
|
| 199 |
+
|
| 200 |
+
* **Check the [debugging guide](./DEBUGGING.md)** for tips — you might discover that the enhancement is already available. Most importantly, check if you're using the latest version of the project first.
|
| 201 |
+
|
| 202 |
+
#### How Do I Submit A (Good) Feature Suggestion?
|
| 203 |
+
|
| 204 |
+
* **Use a clear and descriptive title** for the issue to identify the problem.
|
| 205 |
+
* **Provide a step-by-step description of the suggested enhancement** in as many details as possible.
|
| 206 |
+
* **Provide specific examples to demonstrate the steps**. Include copy/pasteable snippets which you use in those examples, as [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines).
|
| 207 |
+
* **Describe the current behavior** and **explain which behavior you expected to see instead** and why.
|
| 208 |
+
* **Include screenshots and animated GIFs** which help you demonstrate the steps or point out the part of the project which the suggestion is related to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux.
|
| 209 |
+
* **Explain why this enhancement would be useful** to most users.
|
| 210 |
+
* **List other projects or applications where this enhancement exists.**
|
| 211 |
+
* **Specify the name and version of the OS you're using.**
|
| 212 |
+
|
| 213 |
+
Thank you for considering contributing to Bittensor! Any help is greatly appreciated along this journey to incentivize open and permissionless intelligence.
|
bitagent_subnet-main/contrib/DEVELOPMENT_WORKFLOW.md
ADDED
|
@@ -0,0 +1,165 @@
|
| 1 |
+
# Bittensor Subnet Development Workflow
|
| 2 |
+
|
| 3 |
+
Following this workflow is highly advisable: it keeps your subnet project organized and makes contributing easier.
|
| 4 |
+
|
| 5 |
+
## Table of contents
|
| 6 |
+
|
| 7 |
+
- [Bittensor Subnet Development Workflow](#bittensor-subnet-development-workflow)
|
| 8 |
+
- [Main Branches](#main-branches)
|
| 9 |
+
- [Development Model](#development-model)
|
| 10 |
+
- [Feature Branches](#feature-branches)
|
| 11 |
+
- [Release Branches](#release-branches)
|
| 12 |
+
- [Hotfix Branches](#hotfix-branches)
|
| 13 |
+
- [Git Operations](#git-operations)
|
| 14 |
+
- [Creating a Feature Branch](#creating-a-feature-branch)
|
| 15 |
+
- [Merging Feature Branch into Staging](#merging-feature-branch-into-staging)
|
| 16 |
+
- [Creating a Release Branch](#creating-a-release-branch)
|
| 17 |
+
- [Finishing a Release Branch](#finishing-a-release-branch)
|
| 18 |
+
- [Creating a Hotfix Branch](#creating-a-hotfix-branch)
|
| 19 |
+
- [Finishing a Hotfix Branch](#finishing-a-hotfix-branch)
|
| 20 |
+
- [Continuous Integration (CI) and Continuous Deployment (CD)](#continuous-integration-ci-and-continuous-deployment-cd)
|
| 21 |
+
- [Versioning and Release Notes](#versioning-and-release-notes)
|
| 22 |
+
- [Pending Tasks](#pending-tasks)
|
| 23 |
+
|
| 24 |
+
## Main Branches
|
| 25 |
+
|
| 26 |
+
Bittensor's codebase consists of two main branches: **main** and **staging**.
|
| 27 |
+
|
| 28 |
+
**main**
|
| 29 |
+
- This is Bittensor's live production branch, which should only be updated by the core development team. This branch is protected, so refrain from pushing or merging into it unless authorized.
|
| 30 |
+
|
| 31 |
+
**staging**
|
| 32 |
+
- This branch is continuously updated and is where you propose and merge changes. It's essentially Bittensor's active development branch.
|
| 33 |
+
|
| 34 |
+
## Development Model
|
| 35 |
+
|
| 36 |
+
### Feature Branches
|
| 37 |
+
|
| 38 |
+
- Branch off from: `staging`
|
| 39 |
+
- Merge back into: `staging`
|
| 40 |
+
- Naming convention: `feature/<ticket>/<descriptive-sentence>`
|
| 41 |
+
|
| 42 |
+
Feature branches are used to develop new features for upcoming or future releases. They exist as long as the feature is in development, but will eventually be merged into `staging` or discarded. Always delete your feature branch after merging to avoid unnecessary clutter.
|
| 43 |
+
|
| 44 |
+
### Release Branches
|
| 45 |
+
|
| 46 |
+
- Branch off from: `staging`
|
| 47 |
+
- Merge back into: `staging` and then `main`
|
| 48 |
+
- Naming convention: `release/<version>/<descriptive-message>/<creator's-name>`
|
| 49 |
+
|
| 50 |
+
Release branches support the preparation of a new production release, allowing for minor bug fixes and preparation of metadata (version number, configuration, etc). All new features should be merged into `staging` and wait for the next big release.
|
| 51 |
+
|
| 52 |
+
### Hotfix Branches
|
| 53 |
+
|
| 54 |
+
General workflow:
|
| 55 |
+
|
| 56 |
+
- Branch off from: `main` or `staging`
|
| 57 |
+
- Merge back into: `staging` then `main`
|
| 58 |
+
- Naming convention: `hotfix/<version>/<descriptive-message>/<creator's-name>`
|
| 59 |
+
|
| 60 |
+
Hotfix branches are meant for quick fixes in the production environment. When a critical bug in a production version must be resolved immediately, a hotfix branch is created.
|
| 61 |
+
|
| 62 |
+
## Git Operations
|
| 63 |
+
|
| 64 |
+
#### Create a feature branch
|
| 65 |
+
|
| 66 |
+
1. Branch from the **staging** branch.
|
| 67 |
+
2. Command: `git checkout -b feature/my-feature staging`
|
| 68 |
+
|
| 69 |
+
> Rebase frequently with the updated staging branch so you do not face big conflicts before submitting your pull request. Remember, syncing your changes with other developers could also help you avoid big conflicts.
|
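As a concrete sketch, the rebase cycle above can be exercised in a throwaway repository (the file names, branch names, and commit messages here are invented for illustration):

```bash
# Throwaway repo: staging moves ahead while a feature branch is in flight,
# then the feature branch is rebased on top of the updated staging.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git checkout -qb staging
echo base > file.txt; git add .; git commit -qm "base"
git checkout -qb feature/my-feature
echo feature > feature.txt; git add .; git commit -qm "add feature"
git checkout -q staging
echo more >> file.txt; git commit -qam "staging moves ahead"
git checkout -q feature/my-feature
git rebase -q staging   # replay the feature commits on top of the latest staging
git log --oneline
```

After the rebase, the tip of `staging` is an ancestor of `feature/my-feature`, so a later merge into `staging` will be conflict-free with respect to these changes.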
| 70 |
+
|
| 71 |
+
#### Merge feature branch into staging
|
| 72 |
+
|
| 73 |
+
In other words, integrate your changes into a branch that will be tested and prepared for release.
|
| 74 |
+
|
| 75 |
+
1. Switch branch to staging: `git checkout staging`
|
| 76 |
+
2. Merging feature branch into staging: `git merge --no-ff feature/my-feature`
|
| 77 |
+
3. Pushing changes to staging: `git push origin staging`
|
| 78 |
+
4. Delete feature branch: `git branch -d feature/my-feature` (alternatively, this can be navigated on the GitHub web UI)
|
| 79 |
+
|
| 80 |
+
GitHub performs this operation automatically when merging a PR.
|
| 81 |
+
|
| 82 |
+
So, what you have to keep in mind is:
|
| 83 |
+
- Open the PR against the `staging` branch.
|
| 84 |
+
- After merging a PR you should delete your feature branch. This will be strictly enforced.
|
| 85 |
+
|
| 86 |
+
#### Creating a release branch
|
| 87 |
+
|
| 88 |
+
1. Create branch from staging: `git checkout -b release/3.4.0/descriptive-message/creator's_name staging`
|
| 89 |
+
2. Updating version with major or minor: `./scripts/update_version.sh major|minor`
|
| 90 |
+
3. Commit file changes with new version: `git commit -a -m "Updated version to 3.4.0"`
|
| 91 |
+
|
| 92 |
+
|
| 93 |
+
#### Finishing a Release Branch
|
| 94 |
+
|
| 95 |
+
This involves releasing stable code and generating a new version for Bittensor.
|
| 96 |
+
|
| 97 |
+
1. Switch branch to main: `git checkout main`
|
| 98 |
+
2. Merge release branch into main: `git merge --no-ff release/3.4.0/optional-descriptive-message`
|
| 99 |
+
3. Tag changeset: `git tag -a v3.4.0 -m "Releasing v3.4.0: some comment about it"`
|
| 100 |
+
4. Push changes to main: `git push origin main`
|
| 101 |
+
5. Push tags to origin: `git push origin --tags`
|
| 102 |
+
|
| 103 |
+
To keep the changes made in the __release__ branch, we need to merge those back into `staging`:
|
| 104 |
+
|
| 105 |
+
- Switch branch to staging: `git checkout staging`.
|
| 106 |
+
- Merging release branch into staging: `git merge --no-ff release/3.4.0/optional-descriptive-message`
|
| 107 |
+
|
| 108 |
+
This step may well lead to a merge conflict (quite probably, since we have changed the version number). If so, fix it and commit.
|
| 109 |
+
|
| 110 |
+
|
| 111 |
+
#### Creating a hotfix branch
|
| 112 |
+
1. Create branch from main: `git checkout -b hotfix/3.3.4/descriptive-message/creator's-name main`
|
| 113 |
+
2. Update patch version: `./scripts/update_version.sh patch`
|
| 114 |
+
3. Commit file changes with new version: `git commit -a -m "Updated version to 3.3.4"`
|
| 115 |
+
4. Fix the bug and commit the fix: `git commit -m "Fixed critical production issue X"`
|
| 116 |
+
|
| 117 |
+
#### Finishing a Hotfix Branch
|
| 118 |
+
|
| 119 |
+
Finishing a hotfix branch involves merging the bugfix into both `main` and `staging`.
|
| 120 |
+
|
| 121 |
+
1. Switch branch to main: `git checkout main`
|
| 122 |
+
2. Merge hotfix into main: `git merge --no-ff hotfix/3.3.4/optional-descriptive-message`
|
| 123 |
+
3. Tag new version: `git tag -a v3.3.4 -m "Releasing v3.3.4: descriptive comment about the hotfix"`
|
| 124 |
+
4. Push changes to main: `git push origin main`
|
| 125 |
+
5. Push tags to origin: `git push origin --tags`
|
| 126 |
+
6. Switch branch to staging: `git checkout staging`
|
| 127 |
+
7. Merge hotfix into staging: `git merge --no-ff hotfix/3.3.4/descriptive-message/creator's-name`
|
| 128 |
+
8. Push changes to origin/staging: `git push origin staging`
|
| 129 |
+
9. Delete hotfix branch: `git branch -d hotfix/3.3.4/optional-descriptive-message`
|
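The branching, merging, and tagging steps above can be sketched end-to-end in a throwaway repository (the branch name, file, and messages are illustrative; this sandbox omits the pushes and the back-merge into `staging`):

```bash
# Throwaway repo: cut a hotfix branch from main, merge it back with
# --no-ff, tag the new patch version, and delete the hotfix branch.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git checkout -qb main
echo v1 > app.txt; git add .; git commit -qm "initial release"
git checkout -qb hotfix/3.3.4/fix-crash
echo v1-fixed > app.txt; git commit -qam "Fixed critical production issue X"
git checkout -q main
git merge -q --no-ff -m "Merge hotfix/3.3.4/fix-crash" hotfix/3.3.4/fix-crash
git tag -a v3.3.4 -m "Releasing v3.3.4: crash fix"
git branch -d hotfix/3.3.4/fix-crash
git tag --list
```

The `--no-ff` flag forces a merge commit even when a fast-forward is possible, so the history keeps a record that the hotfix branch existed.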
| 130 |
+
|
| 131 |
+
The one exception to the rule here is that, **when a release branch currently exists, the hotfix changes need to be merged into that release branch, instead of** `staging`. Back-merging the bugfix into the **release** branch will eventually result in the bugfix being merged into `staging` too, when the release branch is finished. (If work in `staging` immediately requires this bugfix and cannot wait for the release branch to be finished, you may safely merge the bugfix into `staging` now as well.)
|
| 132 |
+
|
| 133 |
+
Finally, we remove the temporary branch:
|
| 134 |
+
|
| 135 |
+
- `git branch -d hotfix/3.3.4/optional-descriptive-message`
|
| 136 |
+
## Continuous Integration (CI) and Continuous Deployment (CD)
|
| 137 |
+
|
| 138 |
+
Continuous Integration (CI) is a software development practice where members of a team integrate their work frequently. Each integration is verified by an automated build and test process to detect integration errors as quickly as possible.
|
| 139 |
+
|
| 140 |
+
Continuous Deployment (CD) is a software engineering approach in which software functionalities are delivered frequently through automated deployments.
|
| 141 |
+
|
| 142 |
+
- **CircleCI job**: Create jobs in CircleCI to automate the merging of staging into main and release version (needed to release code) and building and testing Bittensor (needed to merge PRs).
|
| 143 |
+
|
| 144 |
+
> It is highly recommended to set up your own CircleCI pipeline for your subnet.
|
| 145 |
+
|
| 146 |
+
## Versioning and Release Notes
|
| 147 |
+
|
| 148 |
+
Semantic versioning helps keep track of the different versions of the software. When code is merged into main, generate a new version.
|
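As an illustration of semantic versioning, a minimal version-bump helper might look like the following (a hypothetical sketch, not the repository's actual `scripts/update_version.sh`):

```bash
# Bump a MAJOR.MINOR.PATCH version string according to SemVer rules:
# major resets minor and patch, minor resets patch, patch only increments.
bump() {
  ver=$2
  major=${ver%%.*}; rest=${ver#*.}
  minor=${rest%%.*}; patch=${rest#*.}
  case "$1" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
  esac
}
bump minor 3.4.0   # -> 3.5.0
bump patch 3.3.3   # -> 3.3.4
```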
| 149 |
+
|
| 150 |
+
Release notes provide documentation for each version released to the users, highlighting the new features, improvements, and bug fixes. When merged into main, generate GitHub release and release notes.
|
| 151 |
+
|
| 152 |
+
## Pending Tasks
|
| 153 |
+
|
| 154 |
+
Follow these steps when you are contributing to a Bittensor subnet:
|
| 155 |
+
|
| 156 |
+
- Determine if main and staging are different
|
| 157 |
+
- Determine what is in staging that is not merged yet
|
| 158 |
+
- Document not released developments
|
| 159 |
+
- When merged into staging, generate information about what's merged into staging but not released.
|
| 160 |
+
- When merged into main, generate GitHub release and release notes.
|
| 161 |
+
- CircleCI jobs
|
| 162 |
+
- Merge staging into main and release version (needed to release code)
|
| 163 |
+
- Build and Test Bittensor (needed to merge PRs)
|
| 164 |
+
|
| 165 |
+
This document can be improved as the Bittensor project continues to develop and change.
|
bitagent_subnet-main/contrib/STYLE.md
ADDED
|
@@ -0,0 +1,348 @@
|
| 1 |
+
# Style Guide
|
| 2 |
+
|
| 3 |
+
A project’s long-term success rests (among other things) on its maintainability, and a maintainer has few tools more powerful than his or her project’s log. It’s worth taking the time to learn how to care for one properly. What may be a hassle at first soon becomes habit, and eventually a source of pride and productivity for all involved.
|
| 4 |
+
|
| 5 |
+
Most programming languages have well-established conventions as to what constitutes idiomatic style, i.e. naming, formatting and so on. There are variations on these conventions, of course, but most developers agree that picking one and sticking to it is far better than the chaos that ensues when everybody does their own thing.
|
| 6 |
+
|
| 7 |
+
# Table of Contents
|
| 8 |
+
1. [Code Style](#code-style)
|
| 9 |
+
2. [Naming Conventions](#naming-conventions)
|
| 10 |
+
3. [Git Commit Style](#git-commit-style)
|
| 11 |
+
4. [The Six Rules of a Great Commit](#the-six-rules-of-a-great-commit)
|
| 12 |
+
- [1. Atomic Commits](#1-atomic-commits)
|
| 13 |
+
- [2. Separate Subject from Body with a Blank Line](#2-separate-subject-from-body-with-a-blank-line)
|
| 14 |
+
- [3. Limit the Subject Line to 50 Characters](#3-limit-the-subject-line-to-50-characters)
|
| 15 |
+
- [4. Use the Imperative Mood in the Subject Line](#4-use-the-imperative-mood-in-the-subject-line)
|
| 16 |
+
- [5. Wrap the Body at 72 Characters](#5-wrap-the-body-at-72-characters)
|
| 17 |
+
- [6. Use the Body to Explain What and Why vs. How](#6-use-the-body-to-explain-what-and-why-vs-how)
|
| 18 |
+
5. [Tools Worth Mentioning](#tools-worth-mentioning)
|
| 19 |
+
- [Using `--fixup`](#using---fixup)
|
| 20 |
+
- [Interactive Rebase](#interactive-rebase)
|
| 21 |
+
6. [Pull Request and Squashing Commits Caveats](#pull-request-and-squashing-commits-caveats)
|
| 22 |
+
|
| 23 |
+
|
| 24 |
+
### Code style
|
| 25 |
+
|
| 26 |
+
#### General Style
|
| 27 |
+
Python's official style guide is PEP 8, which provides conventions for writing code for the main Python distribution. Here are some key points:
|
| 28 |
+
|
| 29 |
+
- `Indentation:` Use 4 spaces per indentation level.
|
| 30 |
+
|
| 31 |
+
- `Line Length:` Limit all lines to a maximum of 79 characters.
|
| 32 |
+
|
| 33 |
+
- `Blank Lines:` Surround top-level function and class definitions with two blank lines. Method definitions inside a class are surrounded by a single blank line.
|
| 34 |
+
|
| 35 |
+
- `Imports:` Imports should usually be on separate lines and should be grouped in the following order:
|
| 36 |
+
|
| 37 |
+
- Standard library imports.
|
| 38 |
+
- Related third party imports.
|
| 39 |
+
- Local application/library specific imports.
|
| 40 |
+
- `Whitespace:` Avoid extraneous whitespace in the following situations:
|
| 41 |
+
|
| 42 |
+
- Immediately inside parentheses, brackets or braces.
|
| 43 |
+
- Immediately before a comma, semicolon, or colon.
|
| 44 |
+
- Immediately before the open parenthesis that starts the argument list of a function call.
|
| 45 |
+
- `Comments:` Comments should be complete sentences. Use them to clarify code; they are not a substitute for poorly written code.
|
| 46 |
+
|
| 47 |
+
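The conventions above can be seen together in a small, hypothetical module (the names `greet` and `Greeter` are invented for illustration):

```python
"""A small example module laid out according to PEP 8."""

# Imports are grouped: standard library first, then third party,
# then local modules (only standard library imports are used here).
import os
import sys


def greet(name):
    """Return a greeting; 4-space indentation, no stray whitespace."""
    return "Hello, " + name


class Greeter:
    """Top-level definitions are separated by two blank lines."""

    def greet(self, name):
        # Methods inside a class are separated by one blank line.
        return greet(name)

    def farewell(self, name):
        return "Goodbye, " + name
```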
#### For Python

- `List Comprehensions:` Use list comprehensions for concise and readable creation of lists.

- `Generators:` Use generators when dealing with large amounts of data to save memory.

- `Context Managers:` Use context managers (`with` statement) for resource management.

- `String Formatting:` Use f-strings for formatting strings in Python 3.6 and above.

- `Error Handling:` Use exceptions for error handling whenever possible.

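A short sketch of these idioms together (the values and the `parse_port` helper are made up for illustration):

```python
import tempfile

# List comprehension: concise, readable list creation.
squares = [n * n for n in range(5)]

# Generator expression: values are produced lazily, so the full
# sequence of squares is never held in memory at once.
total = sum(n * n for n in range(1000))

# Context manager: the `with` statement guarantees the resource is
# released (here, the temporary file is closed) even if an error occurs.
with tempfile.TemporaryFile(mode="w+") as handle:
    handle.write("hello")
    handle.seek(0)
    contents = handle.read()

# f-strings (Python 3.6+) for string formatting.
name = "contributor"
message = f"Thanks, {name}!"


# Error handling: raise exceptions rather than returning sentinel values.
def parse_port(value):
    try:
        return int(value)
    except ValueError:
        raise ValueError(f"invalid port: {value!r}") from None
```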
#### More details

Use `black` to format your Python code before committing, for consistency across such a large pool of contributors. Black's code [style](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#code-style) ensures consistent and opinionated code formatting. It automatically formats your Python code according to the Black style guide, enhancing code readability and maintainability.

Key features of Black:

- `Consistency:` Black enforces a single, consistent coding style across your project, eliminating style debates and allowing developers to focus on code logic.

- `Readability:` By applying a standard formatting style, Black improves code readability, making it easier to understand and collaborate on projects.

- `Automation:` Black automates the code formatting process, saving time and effort. It eliminates the need for manual formatting and reduces the likelihood of inconsistencies.

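As a rough illustration (the exact output can vary between Black versions), Black turns inconsistently formatted code like the commented-out lines below into the formatted version that follows:

```python
# Before formatting: inconsistent quoting, spacing, and blank lines.
#
# def add( a,b ):
#     return a+b
# config = { 'debug':True,'retries': 3 }

# After running `black`: normalized spacing, double quotes, and two
# blank lines between top-level definitions.
def add(a, b):
    return a + b


config = {"debug": True, "retries": 3}
```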
### Naming Conventions

- `Classes:` Class names should normally use the CapWords Convention.

- `Functions and Variables:` Function names should be lowercase, with words separated by underscores as necessary to improve readability. Variable names follow the same convention as function names.

- `Constants:` Constants are usually defined on a module level and written in all capital letters with underscores separating words.

- `Non-public Methods and Instance Variables:` Use a single leading underscore (_). This is a weak "internal use" indicator.

- `Strongly "private" methods and variables:` Use a double leading underscore (__). This triggers name mangling in Python.

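A hypothetical module that follows these conventions (all names are invented for illustration):

```python
MAX_RETRIES = 3  # Constant: module level, ALL_CAPS with underscores.


def fetch_user_name(user_id):
    """Functions and variables: lowercase_with_underscores."""
    return f"user-{user_id}"


class ConnectionPool:
    """Classes: CapWords convention."""

    def __init__(self):
        self._idle = []        # Single leading underscore: internal use.
        self.__secret = "key"  # Double leading underscore: name-mangled
                               # to _ConnectionPool__secret.

    def _reap_idle(self):
        # Non-public helper method.
        self._idle.clear()
```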
### Git commit style

Here’s a model Git commit message when contributing:

```
Summarize changes in around 50 characters or less

More detailed explanatory text, if necessary. Wrap it to about 72
characters or so. In some contexts, the first line is treated as the
subject of the commit and the rest of the text as the body. The
blank line separating the summary from the body is critical (unless
you omit the body entirely); various tools like `log`, `shortlog`
and `rebase` can get confused if you run the two together.

Explain the problem that this commit is solving. Focus on why you
are making this change as opposed to how (the code explains that).
Are there side effects or other unintuitive consequences of this
change? Here's the place to explain them.

Further paragraphs come after blank lines.

- Bullet points are okay, too

- Typically a hyphen or asterisk is used for the bullet, preceded
  by a single space, with blank lines in between, but conventions
  vary here

If you use an issue tracker, put references to them at the bottom,
like this:

Resolves: #123
See also: #456, #789
```

## The six rules of a great commit

#### 1. Atomic Commits

An “atomic” change revolves around one task or one fix.

Atomic Approach:

- Commit each fix or task as a separate change
- Only commit when a block of work is complete
- Commit each layout change separately
- Joint commit for layout file, code-behind file, and additional resources

Benefits:

- Easy to roll back without affecting other changes
- Easy to make other changes on the fly
- Easy to merge features to other branches

#### Avoid trivial commit messages

Commit messages like "fix", "fix2", or "fix3" don't provide any context or clear understanding of what changes the commit introduces. Here are some examples of good vs. bad commit messages:

**Bad Commit Message:**

```bash
$ git commit -m "fix"
```

**Good Commit Message:**

```bash
$ git commit -m "Fix typo in README file"
```

> **Caveat**: When working with new features, an atomic commit will often consist of multiple files, since a layout file, code-behind file, and additional resources may have been added/modified. You don’t want to commit all of these separately, because if you had to roll back the application to a state before the feature was added, it would involve multiple commit entries, and that can get confusing.

#### 2. Separate subject from body with a blank line

Not every commit requires both a subject and a body. Sometimes a single line is fine, especially when the change is so simple that no further context is necessary.

For example:

```
Fix typo in introduction to user guide
```

Nothing more need be said; if the reader wonders what the typo was, she can simply take a look at the change itself, i.e. use `git show` or `git diff` or `git log -p`.

If you’re committing something like this at the command line, it’s easy to use the `-m` option to `git commit`:

```bash
$ git commit -m "Fix typo in introduction to user guide"
```

However, when a commit merits a bit of explanation and context, you need to write a body. For example:

```
Derezz the master control program

MCP turned out to be evil and had become intent on world domination.
This commit throws Tron's disc into MCP (causing its deresolution)
and turns it back into a chess game.
```

Commit messages with bodies are not so easy to write with the `-m` option. You’re better off writing the message in a proper text editor. [See Pro Git](https://git-scm.com/book/en/v2/Customizing-Git-Git-Configuration).

In any case, the separation of subject from body pays off when browsing the log. Here’s the full log entry:

```
$ git log
commit 42e769bdf4894310333942ffc5a15151222a87be
Author: Kevin Flynn <kevin@flynnsarcade.com>
Date:   Fri Jan 01 00:00:00 1982 -0200

    Derezz the master control program

    MCP turned out to be evil and had become intent on world domination.
    This commit throws Tron's disc into MCP (causing its deresolution)
    and turns it back into a chess game.
```

#### 3. Limit the subject line to 50 characters

50 characters is not a hard limit, just a rule of thumb. Keeping subject lines at this length ensures that they are readable, and forces the author to think for a moment about the most concise way to explain what’s going on.

GitHub’s UI is fully aware of these conventions. It will warn you if you go past the 50-character limit, and it will truncate any subject line longer than 72 characters with an ellipsis, so keeping subjects to 50 characters is best practice.

#### 4. Use the imperative mood in the subject line

Imperative mood just means “spoken or written as if giving a command or instruction”. A few examples:

```
Clean your room
Close the door
Take out the trash
```

Each of the six rules you’re reading about right now is written in the imperative (“Wrap the body at 72 characters”, etc.).

The imperative can sound a little rude; that’s why we don’t often use it. But it’s perfect for Git commit subject lines. One reason for this is that Git itself uses the imperative whenever it creates a commit on your behalf.

For example, the default message created when using `git merge` reads:

```
Merge branch 'myfeature'
```

And when using `git revert`:

```
Revert "Add the thing with the stuff"

This reverts commit cc87791524aedd593cff5a74532befe7ab69ce9d.
```

Or when clicking the “Merge” button on a GitHub pull request:

```
Merge pull request #123 from someuser/somebranch
```

So when you write your commit messages in the imperative, you’re following Git’s own built-in conventions. For example:

```
Refactor subsystem X for readability
Update getting started documentation
Remove deprecated methods
Release version 1.0.0
```

Writing this way can be a little awkward at first. We’re more used to speaking in the indicative mood, which is all about reporting facts. That’s why commit messages often end up reading like this:

```
Fixed bug with Y
Changing behavior of X
```

And sometimes commit messages get written as a description of their contents:

```
More fixes for broken stuff
Sweet new API methods
```

To remove any confusion, here’s a simple rule to get it right every time.

**A properly formed Git commit subject line should always be able to complete the following sentence:**

```
If applied, this commit will <your subject line here>
```

For example:

```
If applied, this commit will refactor subsystem X for readability
If applied, this commit will update getting started documentation
If applied, this commit will remove deprecated methods
If applied, this commit will release version 1.0.0
If applied, this commit will merge pull request #123 from user/branch
```

#### 5. Wrap the body at 72 characters

Git never wraps text automatically. When you write the body of a commit message, you must mind its right margin, and wrap text manually.

The recommendation is to do this at 72 characters, so that Git has plenty of room to indent text while still keeping everything under 80 characters overall.

A good text editor can help here. It’s easy to configure Vim, for example, to wrap text at 72 characters when you’re writing a Git commit.

#### 6. Use the body to explain what and why vs. how

This [commit](https://github.com/bitcoin/bitcoin/commit/eb0b56b19017ab5c16c745e6da39c53126924ed6) from Bitcoin Core is a great example of explaining what changed and why:

```
commit eb0b56b19017ab5c16c745e6da39c53126924ed6
Author: Pieter Wuille <pieter.wuille@gmail.com>
Date:   Fri Aug 1 22:57:55 2014 +0200

    Simplify serialize.h's exception handling

    Remove the 'state' and 'exceptmask' from serialize.h's stream
    implementations, as well as related methods.

    As exceptmask always included 'failbit', and setstate was always
    called with bits = failbit, all it did was immediately raise an
    exception. Get rid of those variables, and replace the setstate
    with direct exception throwing (which also removes some dead
    code).

    As a result, good() is never reached after a failure (there are
    only 2 calls, one of which is in tests), and can just be replaced
    by !eof().

    fail(), clear(n) and exceptions() are just never called. Delete
    them.
```

Take a look at the [full diff](https://github.com/bitcoin/bitcoin/commit/eb0b56b19017ab5c16c745e6da39c53126924ed6) and just think how much time the author is saving fellow and future committers by taking the time to provide this context here and now. If he didn’t, it would probably be lost forever.

In most cases, you can leave out details about how a change has been made. Code is generally self-explanatory in this regard (and if the code is so complex that it needs to be explained in prose, that’s what source comments are for). Just focus on making clear the reasons why you made the change in the first place—the way things worked before the change (and what was wrong with that), the way they work now, and why you decided to solve it the way you did.

The future maintainer that thanks you may be yourself!

#### Tools worth mentioning

##### Using `--fixup`

If you've made a commit and then realize you've missed something or made a minor mistake, you can use the `--fixup` option.

For example, suppose you've made a commit with a hash `9fceb02`. Later, you realize you've left a debug statement in your code. Instead of making a new commit titled "remove debug statement" or "fix", you can do the following:

```bash
$ git commit --fixup 9fceb02
```

This will create a new commit to fix the issue, with a message like "fixup! The original commit message".

##### Interactive Rebase

Interactive rebase, or `rebase -i`, can be used to squash these fixup commits into the original commits they're fixing, which cleans up your commit history. You can use the `--autosquash` option to automatically squash any commits marked as "fixup" into their target commits.

For example:

```bash
$ git rebase -i --autosquash HEAD~5
```

This command starts an interactive rebase for the last 5 commits (`HEAD~5`). Any commits marked as "fixup" will be automatically moved to squash with their target commits.

The benefit of using `--fixup` and interactive rebase is that it keeps your commit history clean and readable. It groups fixes with the commits they are related to, rather than having a separate "fix" commit that might not make sense to other developers (or even to you) in the future.

---

#### Pull Request and Squashing Commits Caveats

While atomic commits are great for development and for understanding the changes within the branch, the commit history can get messy when merging to the main branch. To keep a cleaner and more understandable commit history in our main branch, we encourage squashing all the commits of a PR into one when merging.

This single commit should provide an overview of the changes that the PR introduced. It should follow the guidelines for atomic commits (an atomic commit is complete, self-contained, and understandable) but on the scale of the entire feature, task, or fix that the PR addresses. This approach combines the benefits of atomic commits during development with a clean commit history in our main branch.

Here is how you can squash commits:

```bash
git rebase -i HEAD~n
```

where `n` is the number of commits to squash. After running the command, replace `pick` with `squash` for the commits you want to squash into the previous commit. This will combine the commits and allow you to write a new commit message.

In this context, an atomic commit message could look like:

```
Add feature X

This commit introduces feature X which does A, B, and C. It adds
new files for layout, updates the code-behind file, and introduces
new resources. This change is important because it allows users to
perform task Y more efficiently.

It includes:
- Creation of new layout file
- Updates in the code-behind file
- Addition of new resources

Resolves: #123
```

In your PRs, remember to detail what the PR is introducing or fixing. This will be helpful for reviewers to understand the context and the reason behind the changes.