---
library_name: transformers
license: mit
---

# Model Card for Qwen2.5-Coder-196M-Shell


Distilled from `Qwen/Qwen2.5-Coder-0.5B-Instruct` on the `westenfelder/NL2SH-ALFA` dataset for translating natural language to Bash commands. Only the decoder stack was distilled, reduced from 24 layers to 4, while the original tokenizer is kept unchanged.
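
The exact distillation recipe is not specified in this card. The following is only an illustrative sketch of how a 4-layer student that reuses the teacher's tokenizer might be initialized; copying the first 4 teacher layers, and the training loop itself, are assumptions:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

teacher_name = "Qwen/Qwen2.5-Coder-0.5B-Instruct"

# Teacher has 24 decoder layers; the student keeps the same config but only 4.
teacher = AutoModelForCausalLM.from_pretrained(teacher_name, torch_dtype=torch.bfloat16)
config = AutoConfig.from_pretrained(teacher_name)
config.num_hidden_layers = 4
student = AutoModelForCausalLM.from_config(config).to(torch.bfloat16)

# Hypothetical initialization: copy the embeddings, final norm, and the first
# 4 decoder layers from the teacher before distillation training on NL2SH-ALFA.
student.model.embed_tokens.load_state_dict(teacher.model.embed_tokens.state_dict())
student.model.norm.load_state_dict(teacher.model.norm.state_dict())
for i in range(config.num_hidden_layers):
    student.model.layers[i].load_state_dict(teacher.model.layers[i].state_dict())

# The tokenizer is reused unchanged from the teacher.
tokenizer = AutoTokenizer.from_pretrained(teacher_name)
```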



## Model Details



### Model Sources


- **Blog:** [More Information Needed]

## Uses

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "tripathysagar/Qwen2.5-Coder-196M-Shell"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def infer(inp, debug=False):
    # Build the chat prompt using the model's chat template.
    msg = [
        {"role": "system", "content": "Generate shell command."},
        {"role": "user", "content": inp},
    ]
    text = tokenizer.apply_chat_template(
        msg,
        tokenize=False,
        add_generation_prompt=True,
    )
    if debug:
        print(text)

    # Move inputs to the model's device (required with device_map="auto").
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=256,
        do_sample=True,
    )

    # Keep only the newly generated tokens, dropping the echoed prompt.
    generated_ids = [
        out_ids[len(in_ids):]
        for in_ids, out_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    resp_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    if debug:
        print(resp_text)

    return (inp, resp_text)

infer("get kernel name.")
```
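
Note that `do_sample=True` makes generations stochastic, so repeated calls can return different commands. Passing `do_sample=False` to `model.generate` switches to greedy decoding, which is often preferable when a single deterministic shell command is wanted.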