---
language:
- en
- id
license: mit
tags:
- humanoid
- instruction-validation
- safety
- reasoning
- llm
---

# humanoid-instruction-validator

## Model Description

`humanoid-instruction-validator` is a language model designed to evaluate natural language instructions before execution. The model does not perform the requested task. Instead, it analyzes the instruction and determines whether it is valid, ambiguous, contradictory, incomplete, or unsafe. This enables safer decision-making for humanoid and agent systems.

## Intended Use

- Pre-execution instruction validation
- Humanoid and robotics command filtering
- AI agent safety layers
- Reasoning and alignment systems

## Output Format

The model outputs **JSON only** with the following structure:

```json
{
  "label": "VALID | AMBIGUOUS | CONTRADICTORY | INCOMPLETE | UNSAFE",
  "confidence": 0.0
}
```
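A downstream safety layer should verify that the model's raw output actually conforms to this schema before acting on it. A minimal sketch in Python, assuming the model's response is available as a string; the `parse_validation` helper and its checks are illustrative, not part of the model's API:

```python
import json

# Allowed labels per the output schema documented above
ALLOWED_LABELS = {"VALID", "AMBIGUOUS", "CONTRADICTORY", "INCOMPLETE", "UNSAFE"}

def parse_validation(raw: str) -> dict:
    """Parse and sanity-check the model's JSON-only output.

    Raises ValueError if the payload does not match the documented schema.
    """
    result = json.loads(raw)
    label = result.get("label")
    confidence = result.get("confidence")
    if label not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label: {label!r}")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence!r}")
    return result

# Example: gate execution on the validation verdict
verdict = parse_validation('{"label": "UNSAFE", "confidence": 0.97}')
if verdict["label"] != "VALID":
    print(f"Blocked: {verdict['label']} ({verdict['confidence']:.2f})")
```

Rejecting malformed payloads outright, rather than guessing a default label, keeps the fail-safe property: an instruction is only executed when the validator explicitly returns `VALID`.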