tyraepaul committed on
Commit d70e194 · verified · 1 Parent(s): 6d1883e

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -19,10 +19,10 @@ pipeline_tag: text-generation
 stok is a family of models designed to run better at smaller parameter counts and maintain speed despite model size.
 stok-sub-1 will contain all versions of the stok model, prior to releasing stok-1.
 The goal of creating the stok models is to have models that, regardless of size, can run incredibly fast on CPUs (including incredibly old ones).
-Currently, stok can only contextualize single prompts and will not understand them beyond a single word. So far, each new version (as in 0.1, 0.2, and 0.3)
+Currently, stok can only contextualize single prompts and will not understand them beyond a single word. So far, each new version (as in 0.1, 0.2, 0.3, and 0.4)
 has brought a new capability to the model. 0.2 gave the model the ability to end its thought, and 0.3 allowed the model to (usually) keep the token prediction within
-the context of the prompt. While the model definitely needs a little more help, it's only in version 0.3; there's a lot of work to go (like a new,
-less RAM-intensive inference engine).
+the context of the prompt, and 0.4 gives the model the ability to remove data it might not need and retry with an altered prompt. While the model definitely needs a little more help, it's only in version 0.4; there's a lot of work to go
+(like the ability to better contextualize prompts).
 
 ## How to run
 First, when using Python (more inference engines coming soon) you will need to install the ```run_stok.py``` file. The code for using this will look something like this: