# lmprobe: Linear Probe on bitnet-b1.58-2B-4T
A truth probe for "The city of X is in Y" statements. Exploratory: the signal is weak (81.7% accuracy), suggesting that semantic/factual knowledge partially degrades under ternary quantization.
## Classes
- 0: false_statement
- 1: true_statement
## Usage

```python
from lmprobe import LinearProbe

probe = LinearProbe.from_hub("latent-lab/cities-truth-bitnet-2b", trust_classifier=True)
predictions = probe.predict(["your text here"])
```
## Probe Details

- Base model: microsoft/bitnet-b1.58-2B-4T
- Model revision: 04c3b9ad9361b824064a1f25ea60a8be9599b127
- Layers: all (0–29, 30 layers)
- Pooling: last_token
- Classifier: logistic_regression
- Task: classification
- Random state: 42
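The recipe above amounts to fitting a logistic regression on last-token hidden states. A minimal sketch of that training loop, using random vectors as stand-in activations (the real probe extracts them from bitnet-b1.58-2B-4T; the array shapes here are illustrative, not the model's actual hidden size):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in activations: the real probe uses last-token hidden states from
# microsoft/bitnet-b1.58-2B-4T (598 positive + 598 negative statements);
# random vectors here just illustrate the training recipe.
rng = np.random.default_rng(42)        # matches the card's random state
X = rng.normal(size=(200, 64))         # (n_statements, hidden_size)
y = np.repeat([1, 0], 100)             # 1 = true_statement, 0 = false_statement

clf = LogisticRegression(max_iter=1000, random_state=42)
clf.fit(X, y)
print(clf.predict(X[:2]))
```

With real activations, one probe per layer (or one over all 30 layers, as this card configures) is trained the same way; only the feature extraction step differs.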
## Evaluation
| Metric | Value |
|---|---|
| accuracy | 0.8167 |
| auroc | 0.8928 |
| f1 | 0.8084 |
| precision | 0.8467 |
| recall | 0.7733 |
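As a quick sanity check, the reported F1 is consistent with the precision and recall above, since F1 is their harmonic mean:

```python
# F1 = 2 * P * R / (P + R), using the rounded values from the table
precision, recall = 0.8467, 0.7733
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.4f}")  # close to the reported 0.8084 (tiny gap from rounding P and R)
```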
## Training Data

- Positive examples: 598
- Negative examples: 598
- Positive hash: sha256:00bd1dc0c50a7e5209ed3a15f9ddb152a2e1cf1b3be21d3d018b5504dc0c27a7
- Negative hash: sha256:2d38fa4550a9e737d60e7bcf2158329f5461ccd6a9ef3f8b64e4976f5f7863e7
- Evaluation samples: 300
- Evaluation hash: sha256:3f0b47b96cdd9a79ff3d5513c02802ac1bf174cea00f4921e15613ecfdb15121
## Reproducibility
- lmprobe version: 0.5.8
- Python: 3.12.3
- PyTorch: 2.10.0+cu128
- scikit-learn: 1.8.0
- transformers: 5.3.0