AbstractPhil committed on
Commit ad97899 · verified · 1 Parent(s): fd95e88

Update README.md
Files changed (1): README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ base_model:
 
 
 # Newest: Prepping 12m conceptual-captions bert extractions aka 36m extractions * 5 models
-So around, 180,000,000 total samples.
+So around 180,000,000 total samples, which is fundamentally different from a single task repeated over 200k or 500k samples like I've been doing.
 
 The dataset is going to be in pt chunks because they load directly to vram nearly instantly in colab, and the system operates on them quicker than dataloaders.
 
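The pt-chunk loading described in the diff can be sketched roughly as follows. This is a minimal illustration, not the repo's actual loader: the chunk filename and `load_chunk` helper are hypothetical, and the demo runs on CPU so it works anywhere (in Colab you would pass `device="cuda"` to land the whole chunk in VRAM in one call, skipping per-sample DataLoader overhead).

```python
import torch

def load_chunk(path: str, device: str = "cpu") -> torch.Tensor:
    # map_location deserializes the saved tensors directly onto the
    # target device, so one torch.load moves the entire chunk at once.
    return torch.load(path, map_location=device)

# Demo round trip with a small stand-in for one extraction chunk
# (filename hypothetical; real chunks would hold many embeddings).
chunk = torch.randn(4, 8)
torch.save(chunk, "/tmp/chunk_000.pt")
loaded = load_chunk("/tmp/chunk_000.pt", device="cpu")
assert torch.equal(chunk, loaded)
```

The design trade-off: a single bulk tensor load amortizes I/O and deserialization over thousands of samples, whereas a DataLoader pays Python-side overhead per batch.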