Commit 9a297a5 by ereniko · verified · Parent(s): bb90bd8

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -49,12 +49,12 @@ import torch
 
 # Load model
 model = AutoModelForCausalLM.from_pretrained(
-    "ereniko/LaaLM-exp-v1",
+    "LaaLM/LaaLM-exp-v1",
     torch_dtype=torch.bfloat16,
     device_map="auto"
 )
 tokenizer = AutoTokenizer.from_pretrained(
-    "ereniko/LaaLM-exp-v1",
+    "LaaLM/LaaLM-exp-v1",
     fix_mistral_regex=True  # Important for proper tokenization
 )
 model.eval()
@@ -149,7 +149,7 @@ print(run_command("ls"))  # backup.txt
 
 GGUF quantizations are available for CPU inference and lower memory usage:
 
-**[LaaLM-exp-v1-GGUF](https://huggingface.co/ereniko/LaaLM-exp-v1-GGUF)**
+**[LaaLM-exp-v1-GGUF](https://huggingface.co/LaaLM/LaaLM-exp-v1-GGUF)**
 
 Includes Q2_K through fp16 quantizations (1.27GB - 6.18GB) for use with:
 - llama.cpp
@@ -254,7 +254,7 @@ Each test consists of:
 
 Part of the LaaLM (Linux as a Language Model) project:
 
-- [**LaaLM-v1**](https://huggingface.co/ereniko/LaaLM-v1) - State-based approach with external filesystem tracking (T5-base, 80k examples)
+- [**LaaLM-v1**](https://huggingface.co/LaaLM/LaaLM-v1) - State-based approach with internal filesystem tracking (T5-base, 80k examples)
 - **LaaLM-exp-v1** - Conversation-based approach with internal state tracking (Qwen 3B, 800k messages) (current)
 - **LaaLM-v2** - Planned with bash scripting, pipes, and expanded command set
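For context, the README's diff context line references a `run_command()` helper (`print(run_command("ls"))  # backup.txt`) without showing its body. Below is a minimal sketch of what such a helper could look like, using the updated `LaaLM/LaaLM-exp-v1` repo id from this commit; the chat message format and generation settings are assumptions, not taken from the model card.

```python
# Sketch of a run_command() helper like the one the README references.
# The single-turn chat format and greedy decoding here are assumptions.

def build_messages(command: str) -> list:
    """Wrap one shell command as a single-turn chat message (format assumed)."""
    return [{"role": "user", "content": command}]

def run_command(model, tokenizer, command: str, max_new_tokens: int = 128) -> str:
    """Return the model's simulated terminal output for one command."""
    inputs = tokenizer.apply_chat_template(
        build_messages(command),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(
        "LaaLM/LaaLM-exp-v1", torch_dtype=torch.bfloat16, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained("LaaLM/LaaLM-exp-v1")
    print(run_command(model, tokenizer, "ls"))
```

The heavy model load is kept under the `__main__` guard so the helper itself stays importable without downloading weights.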