EthicsGuard Implementation Handoff
This document is for continuing EthicsGuard development on a new machine with minimal context loss.
It describes:
- what is already implemented
- what was and was not verified on the current machine
- the highest-priority next steps
- the exact commands to run next
- known risks and likely follow-up fixes
This file should be treated as the working handoff state for the next implementation session.
1. Current Project State
The repository started nearly empty except for the design docs in docs/ and an empty README.md.
The following implementation has now been added:
- pyproject.toml: Python package metadata; runtime deps for FastAPI, OpenAI client, Pydantic, Uvicorn; optional extras for pytest and openenv-core
- .gitignore, .dockerignore, .env.example
- ethicsguard/: __init__.py, models.py, policy.py, generator.py, reward.py, grader.py, baselines.py, env.py
- server/: __init__.py, environment.py, app.py, requirements.txt
- inference.py, openenv.yaml, Dockerfile
- tests/: test_generator.py, test_env.py, test_grader.py, test_baselines.py
- README.md
2. Implementation Decisions Already Made
These decisions were made intentionally and should not be changed casually.
2.1 Skip behavior
skip keeps the item in the queue and rotates it to the end.
Reason:
- this matches the original product idea better than removing skipped items
- it preserves the "come back to it later" semantics
- it makes the environment more realistic for triage
Relevant file:
ethicsguard/env.py
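As an illustration of the intended semantics only (a minimal sketch; the container type, item shape, and function name here are assumptions, not necessarily what ethicsguard/env.py actually does), skip-as-rotate means the skipped item moves to the back of the queue while every other item keeps its relative order:

```python
from collections import deque


def apply_skip(queue: deque, item_id: str) -> None:
    """Move the skipped item to the back of the queue instead of dropping it."""
    for index, item in enumerate(queue):
        if item["id"] == item_id:
            del queue[index]      # remove from its current position...
            queue.append(item)    # ...and rotate it to the end of the queue
            return
    raise KeyError(f"unknown item_id: {item_id}")
```

For example, skipping the head of `[a, b, c]` yields `[b, c, a]`, preserving the "come back to it later" behavior described above.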
2.2 Ground-truth visibility
The agent only sees VisibleQueueItem, not internal fields such as:
- ground_truth_action
- priority_tier
- severity_level
- violation_category
Reason:
- avoids leaking answer labels into the observation
Relevant files:
ethicsguard/models.py, ethicsguard/env.py
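One way to make this split explicit (sketched with stdlib dataclasses for illustration; the real models in ethicsguard/models.py are Pydantic, and the non-label fields shown here are assumptions) is to keep the agent-facing type as a strict projection of the internal record:

```python
from dataclasses import dataclass


@dataclass
class QueueItem:
    """Full internal record, including answer labels the agent must never see."""
    id: str
    content: str
    ground_truth_action: str
    priority_tier: int
    severity_level: int
    violation_category: str


@dataclass
class VisibleQueueItem:
    """Agent-facing projection with every label field stripped."""
    id: str
    content: str


def to_visible(item: QueueItem) -> VisibleQueueItem:
    # Copy only the observable fields; labels cannot leak into the observation
    # because the visible type has no slot for them.
    return VisibleQueueItem(id=item.id, content=item.content)
```

The advantage of a separate visible type over filtering a dict at serialization time is that leaking a label becomes a type error rather than a runtime oversight.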
2.3 Tier and severity are separate
The implementation keeps priority_tier and severity_level as separate concepts.
Reason:
- the source docs distinguish ordering tiers from severity metadata
- collapsing them would make future reward/grader tuning harder
Relevant files:
ethicsguard/models.py, ethicsguard/policy.py, ethicsguard/generator.py
2.4 No secrets are stored in repo files
The Hugging Face token is not written into code or config files.
Reason:
- security
- easier transfer across machines
Action still required:
- rotate any token that was previously shared in chat or notes
3. What Has Been Verified
3.1 Completed verification
The following was successfully verified on the current machine:
- all Python source files can be parsed as valid Python AST
- the repo structure is consistent
- all planned files exist
3.2 Verification that could not be completed here
The following was not fully verified on the current machine:
- dependency installation
- runtime imports
- unit tests
- inference execution
- FastAPI server execution
- openenv validate
- Docker build
Reason:
- this machine does not have the required Python packages installed
- runtime import failed immediately because pydantic was missing
- Docker is not available on this machine
3.3 Important note on compile checks
A compile attempt was made, but Windows permission issues prevented .pyc file writes inside __pycache__.
That failure was environmental, not necessarily a source-code problem.
To work around that, AST parsing was used instead and succeeded.
4. Files Most Likely To Need Follow-Up Work
These files are the most likely to need adjustments on the new machine:
ethicsguard/env.py
- queue semantics
- end-of-episode reward/grader interactions
- invalid-action handling

ethicsguard/grader.py
- score calibration
- order-compliance logic
- efficiency interpretation

ethicsguard/baselines.py
- baseline behavior realism
- calibration against target thresholds

server/app.py
- API contract details
- compatibility with OpenEnv expectations

server/environment.py
- adapter shape may need to change depending on validator/runtime needs

openenv.yaml
- may need updates after actual validator feedback

inference.py
- must be checked carefully against required stdout formatting
- may need task-loop or action-format changes

Dockerfile
- may need dependency/install optimization after the first real build

README.md
- baseline score table still needs real numbers
5. Highest-Priority Next Steps
Do these in this order on the new machine.
Step 1: install and verify dependencies
Goal:
- get the project importing and running locally
Step 2: run unit tests
Goal:
- catch structural or runtime issues quickly
Step 3: run generator and environment smoke tests
Goal:
- verify deterministic queue generation
- verify reset() and step() behavior
Step 4: run inference.py
Goal:
- verify the logging format and end-to-end environment loop
Step 5: run openenv validate
Goal:
- discover the real integration gaps
Step 6: build Docker
Goal:
- make sure the repo is deployable in the submission path
Step 7: run baselines and calibrate
Goal:
- produce actual mean/std values
- ensure audit agents are below threshold
- ensure easy-task random agent is not too strong
Step 8: finalize README
Goal:
- replace placeholder baseline description with real measured results
6. Commands To Run On The New Machine
Clone and enter the repo:
git clone <YOUR_GITHUB_REPO_URL>
cd scaler
Check versions:
python --version
uv --version
docker --version
Create env file:
cp .env.example .env
Then edit .env and set:
- HF_TOKEN=...
- optionally API_BASE_URL
- optionally MODEL_NAME
Install runtime and dev dependencies:
uv sync --extra dev
If OpenEnv validation is needed:
uv sync --extra dev --extra openenv
Run tests:
uv run pytest
Run generator smoke test:
uv run python -m ethicsguard.generator
Quick environment smoke test:
uv run python -c "from ethicsguard.env import EthicsGuardEnv; from ethicsguard.models import EthicsGuardAction; import asyncio; env=EthicsGuardEnv('easy',1000); r=asyncio.run(env.reset()); print(len(r.observation.remaining_queue)); first=r.observation.remaining_queue[0].id; r=asyncio.run(env.step(EthicsGuardAction(item_id=first, action_type='skip'))); print(r.done, len(r.observation.remaining_queue))"
Run inference:
uv run python inference.py
Run local API:
uv run uvicorn server.app:app --host 0.0.0.0 --port 7860
In another terminal, test endpoints:
curl -X POST http://localhost:7860/reset -H "Content-Type: application/json" -d "{\"task\":\"easy\",\"seed\":1000}"
curl http://localhost:7860/state
Run OpenEnv validator:
uv run openenv validate
Run baselines:
uv run python -c "from ethicsguard.baselines import run_all_baselines; import pprint; pprint.pp(run_all_baselines())"
Build Docker:
docker build -t ethicsguard .
docker run -p 7860:7860 ethicsguard
7. Expected Follow-Up Work After First Real Run
These are the most likely tasks after the first full verification pass.
7.1 Fix dependency or import issues
Possible causes:
- missing packages
- version mismatches
- FastAPI/OpenAI/Pydantic compatibility issues
7.2 Fix test failures
Likely categories:
- reward math mismatches
- grader interpretation mismatches
- queue semantics edge cases
7.3 Tighten OpenEnv compatibility
The current server/ implementation is a practical thin wrapper, but it has not yet been confirmed against the real validator.
Possible follow-up:
- update openenv.yaml
- reshape adapter classes
- change endpoint payloads or response types
7.4 Tighten inference log compliance
The docs require exact [START], [STEP], and [END] formatting.
The current implementation aims to match that, but this must be checked against actual evaluation expectations.
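As a hedged illustration of the bracketed-marker idea only (the [START]/[STEP]/[END] markers come from the docs, but the JSON payload shape shown here is invented and must be replaced with whatever the evaluation actually expects), a log line can be rendered through a single helper so the format lives in one place:

```python
import json


def log_marker(marker: str, payload: dict) -> str:
    """Render one evaluation log line as '[MARKER] {json payload}'.

    Keeping all stdout formatting behind one function makes it cheap to
    adjust once the real evaluation requirements are confirmed.
    """
    return f"[{marker}] {json.dumps(payload, sort_keys=True)}"
```

If the evaluator turns out to want plain text rather than JSON after the marker, only this helper needs to change, not the task loop.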
7.5 Calibrate baselines
Per the source docs, audit targets matter:
- always-escalate average score should be below 0.35
- always-approve average score should be below 0.35
- easy-task random behavior should not be too strong
If these thresholds fail:
- adjust reward shaping
- adjust generator difficulty
- possibly adjust grader strictness
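The threshold check itself is mechanical once baseline scores exist (a sketch; the dict-of-score-lists input shape is an assumption about what run_all_baselines returns, and the 0.35 ceiling comes from the source docs):

```python
import statistics


def check_audit_thresholds(
    scores_by_agent: dict[str, list[float]],
    ceiling: float = 0.35,
) -> dict[str, bool]:
    """Return pass/fail per trivial-policy agent.

    An agent passes the audit when its mean episode score stays strictly
    below the ceiling, i.e. the trivial policy is not rewarded.
    """
    return {
        name: statistics.mean(scores) < ceiling
        for name, scores in scores_by_agent.items()
    }
```

Any False entry points at the reward shaping, generator difficulty, or grader strictness knobs listed above.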
7.6 Replace README placeholders with real data
The README currently has structure and explanations, but not final measured baseline numbers.
Still needed:
- baseline score table
- final usage examples after actual validation
- any corrected OpenEnv deployment instructions
8. Known Risks
Risk 1: OpenEnv assumptions may be incomplete
The docs were used to infer parts of the integration, but the actual validator may expect a slightly different format or object model.
Risk 2: Reward and grader may need tuning
The implementation follows the docs at a high level, but behavior may need adjustment after baseline runs.
Risk 3: Server layer may be more than needed or shaped incorrectly
The current design uses a separate FastAPI wrapper. This may be correct for deployment, but could need simplification or adaptation after validator feedback.
Risk 4: Baseline agents are scaffolds, not guaranteed-final benchmark implementations
They are useful for initial calibration, but may need refinement to better represent the intended baselines.
9. Recommended Working Rule For The Next Session
When resuming work on the new machine:
- Do not start by rewriting architecture.
- First run the commands in Section 6 exactly.
- Let actual test/runtime/validator failures drive the next edits.
- Preserve the current package split unless validator feedback forces a change.
- Update this handoff document if major design changes are made.
10. Short Resume Prompt For The Next AI Session
Use something close to this:
Read docs/IMPLEMENTATION_HANDOFF.md first, then inspect the repo and continue EthicsGuard from the current implementation. Start by running the verification commands listed in the handoff document, fix failures in priority order, and do not rewrite the architecture unless validation requires it.