inumulaisk committed on
Commit 7909065 · verified · 1 Parent(s): 724e573

Update README.md

Files changed (1)
  1. README.md +31 -8
README.md CHANGED
@@ -1,19 +1,42 @@
  ---
-
  license: apache-2.0
  datasets:
- - inumulaisk/test_archa_dataset
  language:
- - en
  metrics:
- - accuracy
- - bleu
  base_model:
- - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
  new_version: inumulaisk/eval_model
  pipeline_tag: question-answering
  library_name: adapter-transformers
  tags:
- - cloud
- - architecture
  ---
 
  ---
  license: apache-2.0
  datasets:
+ - inumulaisk/test_archa_dataset
  language:
+ - en
  metrics:
+ - accuracy
+ - bleu
  base_model:
+ - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
  new_version: inumulaisk/eval_model
  pipeline_tag: question-answering
  library_name: adapter-transformers
  tags:
+ - cloud
+ - architecture
+ - DeepSeek Eval Model
+ Section Overview:
+ - inumulaisk/eval_model
+ - This model is fine-tuned from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
+ using the LoRA adapter technique. The dataset used is a Cloud Architecture
+ Framework dataset.
+ Table of Contents:
+ - Model Description: This model is derived from the DeepSeek-R1-Distill base
+ model and fine-tuned with LoRA adapters. It is the first fine-tuned version
+ of that base model, with 20% of its parameters trainable. The model is
+ released under the Apache 2.0 license.
+ Developed by: inumulaisk
+ Funded by: inumulaisk
+ Model type:
+ - Supervised Learning Method
+ "Language(s) [NLP]":
+ - English
+ License:
+ - Apache 2.0
+ Finetuned From Model:
+ - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
+ Model Sources (optional):
+ - Repository: inumulaisk/eval_model
+
  ---