---
datasets:
- AbstractPhil/geometric-vocab
pipeline_tag: zero-shot-classification
---

# Better testing methodology development

I'm reading up on papers about how various companies and research institutions tested their ViTs. My testing methodology isn't accurate enough, because accuracy reflects not just the logit alignments but also the feature representations generated by the internal layers.

I've been leaning heavily on logit alignment instead of also managing feature-alignment testing, which is likely cutting heavily into my results.

I'm currently building a notebook with better feature-testing capabilities so that features are tested correctly.
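As a rough illustration of the difference between the two kinds of checks, here is a minimal sketch using NumPy. The helper names (`logit_accuracy`, `feature_alignment`) are my own, not from the notebook: the first scores only the classifier head, while the second compares intermediate features between two models via cosine similarity.

```python
import numpy as np

def logit_accuracy(logits, labels):
    """Top-1 accuracy from the classifier head alone.

    logits: (batch, num_classes), labels: (batch,)
    """
    return float((logits.argmax(axis=-1) == labels).mean())

def feature_alignment(feats_a, feats_b):
    """Mean cosine similarity between two models' intermediate
    features (one row per sample). Values near 1.0 mean the
    internal representations agree, not just the final logits.
    """
    a = feats_a / np.linalg.norm(feats_a, axis=-1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=-1, keepdims=True)
    return float((a * b).sum(axis=-1).mean())
```

Two models can score identically on `logit_accuracy` while diverging badly on `feature_alignment`, which is exactly the gap the new notebook is meant to catch.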

It's possible these ViTs could be MUCH MORE or MUCH LESS accurate than advertised, and I apologise for the inconvenience this has caused any onlookers. I'll be updating with additional inference code very soon.

# Tinkerbell

128d, 128 heads, 4.0 MLP ratio, depth 4, geometric attention only...