ogflash committed · verified
Commit 7421411 · Parent(s): 13c0e9f

Update README.md

Files changed (1)
  1. README.md +8 -1
README.md CHANGED
@@ -1,3 +1,10 @@
+---
+license: unknown
+language:
+- en
+base_model:
+- mistralai/Mistral-7B-Instruct-v0.2
+---
 # Mistral LoRA - BitNet 1.58 Q&A Expert
 
 This is a LoRA fine-tuned adapter for [`mistralai/Mistral-7B-Instruct-v0.2`] on a custom Q&A dataset derived from the paper **"The Era of 1-bit LLMs" (BitNet b1.58)**.
@@ -36,4 +43,4 @@ tokenizer = AutoTokenizer.from_pretrained("ogflash/mistral-lora-qa-1bit")
 prompt = "### Instruction:\nwhat is 1 bit llm\n\n### Response:\n"
 inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
 outputs = model.generate(**inputs, max_new_tokens=100)
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
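The generation snippet in the diff relies on an Alpaca-style prompt template. As a minimal sketch of how that string is assembled (the `build_prompt` helper is hypothetical, added here for illustration; the README inlines the string directly):

```python
def build_prompt(instruction: str) -> str:
    """Assemble the Alpaca-style prompt used in the README's example.

    Illustrative helper only; not part of the repository.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("what is 1 bit llm")
print(prompt)
```

Keeping the template in one place like this makes it easier to ensure the inference-time prompt matches the format used during fine-tuning.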