dnakov committed
Commit 120dae8 · verified · 1 Parent(s): c5e3db2

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +11 -2
  2. tiktoken/tokenizer.model +3 -0
  3. tokenizer.json +0 -0
README.md CHANGED
@@ -23,13 +23,13 @@ Nanochat is a small language model from Andrej Karpathy, converted to HuggingFac
 
 ## Usage
 
-This model uses a custom tokenizer. You must use `trust_remote_code=True` when loading.
+### With Transformers (PyTorch)
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 model = AutoModelForCausalLM.from_pretrained("<model-path>", trust_remote_code=True)
-tokenizer = AutoTokenizer.from_pretrained("<model-path>", trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained("<model-path>")
 
 prompt = "Once upon a time"
 inputs = tokenizer(prompt, return_tensors="pt")
@@ -37,6 +37,15 @@ outputs = model.generate(**inputs, max_new_tokens=50)
 print(tokenizer.decode(outputs[0]))
 ```
 
+### Converting to MLX
+
+To use with Apple's MLX framework:
+
+```bash
+mlx_lm.convert --hf-path <model-path> --mlx-path nanochat-mlx --trust-remote-code
+mlx_lm.generate --model nanochat-mlx --prompt "Once upon a time"
+```
+
 ## Citation
 
 Original model by Andrej Karpathy: https://github.com/karpathy/nanochat
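The `@@ -23,13 +23,13 @@` and `@@ -37,6 +37,15 @@` lines above are standard unified-diff hunk headers: each gives the starting line and line count of the hunk in the old and new file, and the second hunk grows from 6 to 15 lines because the MLX section adds 9 lines. As a quick illustration of that notation (the `parse_hunk_header` helper is hypothetical, not part of this repo or any tool used here):

```python
import re

def parse_hunk_header(header):
    """Parse a unified-diff hunk header like '@@ -23,13 +23,13 @@'.

    Returns (old_start, old_count, new_start, new_count); any text
    after the closing '@@' is just surrounding context shown for
    orientation and is ignored here.
    """
    m = re.match(r"@@ -(\d+),(\d+) \+(\d+),(\d+) @@", header)
    if not m:
        raise ValueError("not a hunk header: %r" % header)
    return tuple(int(g) for g in m.groups())

# The second hunk above: old lines 37-42 (6 lines) become 15 lines.
print(parse_hunk_header("@@ -37,6 +37,15 @@"))  # (37, 6, 37, 15)
```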
tiktoken/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22b70a034f8864b798f8349672ee348cbc519406fc71af52ee54f0eb83b1eece
+size 1139659
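The new `tiktoken/tokenizer.model` is checked in as a Git LFS pointer rather than the blob itself: a short text file giving the spec version, the sha256 object id of the real file, and its size in bytes. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of any tool here):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The exact pointer contents from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:22b70a034f8864b798f8349672ee348cbc519406fc71af52ee54f0eb83b1eece
size 1139659
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 1139659 -- the real tokenizer.model is about 1.1 MB
```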
tokenizer.json ADDED
The diff for this file is too large to render.