# TS-Proof-Ranker-v3
An experimental chain-repair decoder for first-order Horn proof chains. Given
`(axioms, goal, broken_chain, suspected_pos)`, it emits a repaired chain in the
same canonical proof serialization used by the earlier TS-Proof-Ranker layers.

This sits above:

- BoggersTheFish/ts-proof-ranker-v0
- BoggersTheFish/ts-proof-ranker-v1
- BoggersTheFish/ts-proof-ranker-v2

The uploaded neural decoder is not yet the production repair component. The repo also includes a symbolic v3 repair baseline that recomputes the shortest valid proof with the verifier; that baseline is what currently powers v4.
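The verifier-backed baseline can be sketched as a breadth-first search over the Horn rules. The snippet below is a minimal illustration only, not the repo's actual API: the `repair_chain` name, the `(predicate, argument)` atom encoding, and the rule representation are all assumptions.

```python
from collections import deque

def repair_chain(facts, rules, goal):
    """Minimal sketch of a verifier-style shortest-proof search over
    1-arity Horn rules. `facts` are (pred, arg) atoms; `rules` maps a
    premise predicate to its conclusion predicates. Returns the shortest
    chain of derived atoms ending in `goal`, or None if unprovable.
    (Hypothetical illustration, not the repo's actual baseline code.)"""
    start = set(facts)
    if goal in start:
        return []                      # goal already given as an axiom
    parent = {f: None for f in start}  # parent pointers rebuild the chain
    queue = deque(start)
    while queue:
        pred, arg = queue.popleft()
        for concl in rules.get(pred, []):
            atom = (concl, arg)
            if atom in parent:
                continue
            parent[atom] = (pred, arg)
            if atom == goal:           # BFS reaches the goal via a shortest chain
                chain = []
                cur = atom
                while cur not in start:
                    chain.append(cur)
                    cur = parent[cur]
                return chain[::-1]
            queue.append(atom)
    return None
```

For example, with facts `[("p", "a")]`, rules `{"p": ["q"], "q": ["r"]}`, and goal `("r", "a")`, the search returns the two-step chain `[("q", "a"), ("r", "a")]`.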
## Architecture

- Shared tokenizer vocabulary: 41 tokens
- BiGRU encoder over `(axioms, goal, broken_chain, suspected_pos)`
- GRU decoder with Luong-general attention
- Greedy decoding over canonical proof-chain tokens
- 181,609 parameters
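For reference, Luong's "general" attention scoring used by the decoder can be sketched in a few lines of PyTorch. This is a generic illustration of the mechanism, not the repo's actual module; the class and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class LuongGeneralAttention(nn.Module):
    """Luong et al. (2015) "general" scoring: score(h_t, h_s) = h_t^T W h_s.
    Generic sketch of the mechanism, not the repo's actual module."""
    def __init__(self, hidden_size):
        super().__init__()
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, dec_h, enc_out):
        # dec_h: (batch, hidden) decoder state; enc_out: (batch, src_len, hidden)
        scores = torch.bmm(enc_out, self.W(dec_h).unsqueeze(2)).squeeze(2)
        weights = torch.softmax(scores, dim=1)           # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_out).squeeze(1)
        return context, weights                          # (batch, hidden), (batch, src_len)
```

The context vector is the attention-weighted sum of encoder states; the decoder combines it with its own hidden state before predicting the next proof token.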
## Test results

Symbolic baseline:

```json
{
  "n_examples": 1000,
  "exact_repair_acc": 1.0,
  "valid_repair_acc": 1.0,
  "by_depth_exact": {
    "1": 1.0,
    "2": 1.0,
    "3": 1.0,
    "4": 1.0,
    "5": 1.0
  },
  "by_depth_valid": {
    "1": 1.0,
    "2": 1.0,
    "3": 1.0,
    "4": 1.0,
    "5": 1.0
  }
}
```
Neural decoder checkpoint:

```json
{
  "n_examples": 1000,
  "parse_rate": 1.0,
  "valid_repair_acc": 0.013,
  "exact_repair_acc": 0.013,
  "by_depth_valid": {
    "1": 0.06190476190476191,
    "2": 0.0,
    "3": 0.0,
    "4": 0.0,
    "5": 0.0
  },
  "by_depth_exact": {
    "1": 0.06190476190476191,
    "2": 0.0,
    "3": 0.0,
    "4": 0.0,
    "5": 0.0
  }
}
```
## Training

```json
{
  "task": "synthetic_first_order_horn_chain_repair",
  "n_parameters": 181609,
  "history_summary": [
    {
      "epoch": 1,
      "val_tok_acc": 0.5792,
      "val_seq_acc": 0.0,
      "val_loss": 1.1424
    },
    {
      "epoch": 2,
      "val_tok_acc": 0.7526,
      "val_seq_acc": 0.013,
      "val_loss": 0.8413
    },
    {
      "epoch": 3,
      "val_tok_acc": 0.7674,
      "val_seq_acc": 0.008,
      "val_loss": 0.6826
    },
    {
      "epoch": 4,
      "val_tok_acc": 0.8059,
      "val_seq_acc": 0.015,
      "val_loss": 0.6061
    },
    {
      "epoch": 5,
      "val_tok_acc": 0.8094,
      "val_seq_acc": 0.01,
      "val_loss": 0.5901
    },
    {
      "epoch": 6,
      "val_tok_acc": 0.8128,
      "val_seq_acc": 0.014,
      "val_loss": 0.5736
    }
  ]
}
```
## Input and output format

The encoder input is the canonical v3 repair serialization:

```
<axiom> ax1 <axiom> ax2 ... <goal> goal_atom <chain> broken1 <step> broken2 ...
```

The decoder output is the repaired chain:

```
step1 <step> step2 ...
```

See `ts_proof/env/serialize.py` in the source repo for the canonical
serializer and parser.
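As a concrete illustration of the format above, the encoder input can be assembled like this. This is a sketch only: the canonical implementation lives in `ts_proof/env/serialize.py`, and since the format string above does not show how `suspected_pos` is encoded, it is omitted here rather than guessed.

```python
def serialize_input(axioms, goal, broken_chain):
    """Illustrative sketch of the v3 encoder-input serialization.
    The canonical serializer/parser is ts_proof/env/serialize.py;
    suspected_pos encoding is intentionally not shown here."""
    parts = []
    for ax in axioms:
        parts += ["<axiom>", ax]
    parts += ["<goal>", goal, "<chain>"]
    parts.append(" <step> ".join(broken_chain))
    return " ".join(parts)
```

For example, `serialize_input(["ax1", "ax2"], "goal_atom", ["broken1", "broken2"])` produces `"<axiom> ax1 <axiom> ax2 <goal> goal_atom <chain> broken1 <step> broken2"`.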
## Usage

```python
import json

import torch
from huggingface_hub import hf_hub_download

repo = "BoggersTheFish/ts-proof-ranker-v3"
ckpt_path = hf_hub_download(repo, "pytorch_model.bin")
with open(hf_hub_download(repo, "config.json")) as f:
    config = json.load(f)
with open(hf_hub_download(repo, "tokenizer.json")) as f:
    tokens = json.load(f)["token_to_id"]

# Build the model from source repo code.
# from ts_proof.models.v3_decoder import V3Config, V3Decoder
# model = V3Decoder(V3Config(**config))
# model.load_state_dict(torch.load(ckpt_path, map_location="cpu", weights_only=True))
```
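Greedy decoding over the proof-chain vocabulary amounts to taking the argmax token at each step until an end token appears. Below is a minimal model-agnostic sketch: the `step_logits_fn` callable (standing in for the real decoder) and the `"<eos>"` token name are assumptions, not the repo's API.

```python
def greedy_decode(step_logits_fn, token_to_id, max_len=64):
    """Model-agnostic greedy decoding sketch. `step_logits_fn` stands in
    for the real decoder: it maps the tokens emitted so far to a list of
    logits over the vocabulary. The "<eos>" token name is an assumption."""
    id_to_token = {i: t for t, i in token_to_id.items()}
    out = []
    for _ in range(max_len):
        logits = step_logits_fn(out)
        next_id = max(range(len(logits)), key=logits.__getitem__)
        token = id_to_token[next_id]
        if token == "<eos>":
            break
        out.append(token)
    return out
```

The `max_len` cap guards against the decoder never emitting the end token, which matters for a checkpoint with low sequence accuracy.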
## Limitations
- The neural decoder is experimental: it parses consistently but has very low exact/valid repair accuracy on the current held-out set.
- The reliable v3 behavior today is the symbolic verifier-backed repair baseline, not the neural checkpoint.
- Trained only on synthetic 1-arity Horn problems with proof depths 1-5.
- Final proof validity must be checked by the symbolic kernel.