david-ar committed
Commit f6735f2 · verified · Parent(s): a5a8779

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +7 -3
README.md CHANGED

````diff
@@ -35,9 +35,12 @@ The architecture is a single-layer associative network trained via Hebbian learning
 ## Quick Start
 
 ```python
-from transformers import AutoModelForCausalLM
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("david-ar/20q", trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained("david-ar/20q", trust_remote_code=True)
-model.play()
+model.set_vocab(tokenizer.questions, tokenizer.targets)
+model.play()  # interactive CLI game
 ```
 
 ## Pipeline Usage
@@ -49,6 +52,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
 
 tokenizer = AutoTokenizer.from_pretrained("david-ar/20q", trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained("david-ar/20q", trust_remote_code=True)
+model.set_vocab(tokenizer.questions, tokenizer.targets)
 pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
 
 messages = [
@@ -82,7 +86,7 @@ The model is a weight matrix mapping 156 features (questions) to 1,200 output classes
 
 ## Why This Exists
 
-Mostly to see if it could be done. A 252KB model that plays a conversational guessing game, loaded through `from_pretrained`, running through `pipeline("text-generation")` with chat templates. Every bit of it works the same as models a million times its size.
+Mostly to see if it could be done. A 214KB model that plays a conversational guessing game, loaded through `from_pretrained`, running through `pipeline("text-generation")` with chat templates. Every bit of it works the same as models a million times its size.
 
 Also: 2-bit quantization was cool before it was cool.
 
````
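The hunk headers quote the README's description of the model: a single-layer associative network trained via Hebbian learning, i.e. a weight matrix mapping 156 question features to 1,200 output classes. A minimal sketch of that idea — hypothetical names and shapes for illustration, not the repository's actual implementation:

```python
import numpy as np

# Illustrative sketch only -- not the repo's actual code. The README
# describes a single-layer associative network trained via Hebbian
# learning: a weight matrix from 156 question features to 1,200 targets.
N_FEATURES, N_TARGETS = 156, 1200

W = np.zeros((N_TARGETS, N_FEATURES))

def hebbian_update(W, features, target, lr=1.0):
    """Strengthen the association between active features and the seen target."""
    W[target] += lr * features

def score(W, features):
    """Rank every target by its associative overlap with the answers so far."""
    return W @ features

# One synthetic training pair: three "yes" answers pointing at target 42.
answers = np.zeros(N_FEATURES)
answers[[3, 17, 60]] = 1.0
hebbian_update(W, answers, target=42)
print(int(np.argmax(score(W, answers))))  # 42
```

Hebbian association suits a 20-questions game well: each answered question simply reinforces rows of the weight matrix, and inference is a single matrix-vector product, which is how a model this size can stay in the hundreds of kilobytes.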