AbstractPhil committed
Commit 4aaad59 · verified · 1 Parent(s): 5087eb5

Update README.md

Files changed (1): README.md +14 -0

README.md CHANGED
@@ -4,6 +4,20 @@ datasets:
  - AbstractPhil/geometric-vocab
  pipeline_tag: zero-shot-classification
  ---
+
+ # All losses modified heavily; the originals did not work with this structure at all.
+
+ Pushing HEAVILY into losses based on the WORKING high-entropy, high-learning-rate classification heads and forcing this thing into cohesion INSTANTLY.
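For concreteness, one common construction that matches the phrase "high-entropy" loss is sketched below. This is an assumption about the idea, not this repo's actual formula: standard cross-entropy minus a small bonus on the predictive distribution's entropy, which discourages the head from collapsing into premature overconfidence.

```python
# Hypothetical reading of a "high-entropy" classification loss (NOT the
# repo's formula): cross-entropy on the labels minus beta times the mean
# entropy of the predicted distribution.
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def high_entropy_ce(logits, labels, beta=0.1):
    """Cross-entropy minus beta * mean predictive entropy."""
    p = softmax(logits)
    n = len(labels)
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    return ce - beta * entropy

# Tiny smoke check: the entropy bonus lowers the loss relative to plain CE.
logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.0]])
labels = np.array([0, 1])
loss = high_entropy_ce(logits, labels)
plain = high_entropy_ce(logits, labels, beta=0.0)
```

The sign convention here rewards entropy; flipping the sign of `beta` would instead sharpen predictions, so the choice depends on which failure mode the heads actually hit.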
+
+ That's the play. No more 200 epochs. These heads should be ready in 10-20 epochs at most, and they should reach 80%+ accuracy or they fail. Those are the only two outcomes here.
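The 10-20 epoch, 80%+ criterion amounts to a simple pass/fail gate around the training loop. A minimal sketch, assuming only that some `train_epoch` callable reports validation accuracy per epoch (the callable and its toy accuracy curve are invented for illustration):

```python
# Pass/fail training gate from the note above: at most 20 epochs, and the
# run fails unless validation accuracy reaches 80%. `train_epoch` is a
# hypothetical stand-in for one epoch of real training.
MAX_EPOCHS = 20
TARGET_ACC = 0.80

def train_with_gate(train_epoch, max_epochs=MAX_EPOCHS, target=TARGET_ACC):
    acc = 0.0
    for epoch in range(1, max_epochs + 1):
        acc = train_epoch(epoch)
        if acc >= target:
            return {"passed": True, "epoch": epoch, "accuracy": acc}
    return {"passed": False, "epoch": max_epochs, "accuracy": acc}

# Toy curve: accuracy climbs 6 points per epoch from a 30% start.
result = train_with_gate(lambda e: min(0.3 + 0.06 * e, 1.0))
print(result)
```

Runs that never reach the target are reported as failures after the epoch budget, which keeps the "ready fast or fail" contract explicit.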
+
+ With correct logit and probe assessment, the substructure should yield a profoundly more efficient and easily analyzable series of similarity-based charts for assessing capability. None of this guesswork based on "what works with other models." We KNOW what works, and I should never have second-guessed the formulas.
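One way to make similarity-based probe assessment concrete is sketched below. This is a hypothetical illustration; the anchors, dimensions, and data are invented stand-ins, not this model's embeddings: score each sample embedding against per-class anchor vectors by cosine similarity, and read accuracy off the argmax.

```python
# Illustrative similarity-based probe assessment (toy data, not this repo's
# embeddings): each row of the similarity matrix is one sample's "chart"
# of cosine similarities to the class anchors.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a, b):
    """Cosine similarity between (n, d) samples and (k, d) anchors -> (n, k)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Toy stand-ins: 3 classes, 8-dim embeddings, 30 samples drawn near anchors.
anchors = rng.normal(size=(3, 8))
labels = rng.integers(0, 3, size=30)
samples = anchors[labels] + 0.1 * rng.normal(size=(30, 8))

sims = cosine_similarity(samples, anchors)  # the per-sample similarity chart
pred = sims.argmax(axis=1)                  # probe prediction per sample
accuracy = (pred == labels).mean()
```

Plotting `sims` directly (samples on one axis, classes on the other) gives the kind of at-a-glance capability chart the paragraph describes.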
+
+ I have implemented all of the most crucial and most powerful formulas from the others; now let's see whether the universe makes a fool of me or not.
+
+ If it does, SO BE IT! Let's build an empire from there.
+
  # Better testing methodology development

  I'm reading up on papers about how various companies and research institutions tested their ViTs. My testing methodology isn't accurate enough, because accuracy doesn't reflect only the logit alignments but also the internal ML-layer feature generations.