Commit 58a5ba7 (verified) by johnhandleyd · 1 parent: 45a3ac8

Update README.md

Files changed (1): README.md (+15 −15)

README.md CHANGED
@@ -5,29 +5,29 @@ tags:
 - trl
 - sft
 - generated_from_trainer
+- peft
+- gptq
 model-index:
-- name: thesa_2
+- name: thesa
   results: []
+language:
+- en
+datasets:
+- loaiabdalslam/counselchat
+pipeline_tag: text-generation
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# Thesa
 
-# thesa_2
-
-This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
+Thesa is an experimental therapy chatbot trained on mental-health data, fine-tuned from the Zephyr GPTQ model, which uses quantization to reduce computational and storage costs.
 
 ## Model description
 
-More information needed
+- Fine-tuned from [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ)
 
 ## Intended uses & limitations
 
-More information needed
-
-## Training and evaluation data
-
-More information needed
+The intended use is experimental.
 
 ## Training procedure
 
@@ -43,9 +43,6 @@ The following hyperparameters were used during training:
 - training_steps: 250
 - mixed_precision_training: Native AMP
 
-### Training results
-
-
 
 ### Framework versions
 
@@ -53,3 +50,6 @@ The following hyperparameters were used during training:
 - Pytorch 2.1.0+cu121
 - Datasets 2.16.1
 - Tokenizers 0.15.1
+
+## More info
+More info at https://github.com/johnhandleyd/thesa
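
The updated card names a Zephyr-7B GPTQ base and a PEFT adapter but gives no inference example. The sketch below is a rough illustration only: the Zephyr chat-template format is the one the base model family was trained with, and the Hub adapter id `johnhandleyd/thesa` is an assumption inferred from the GitHub link, not confirmed by the card.

```python
# Hedged sketch for prompting Thesa. Assumptions: the Zephyr chat template
# applies to this fine-tune, and "johnhandleyd/thesa" is the adapter repo id.

def build_zephyr_prompt(system: str, user: str) -> str:
    """Format a single turn in the Zephyr chat template."""
    return f"<|system|>\n{system}</s>\n<|user|>\n{user}</s>\n<|assistant|>\n"

prompt = build_zephyr_prompt(
    "You are Thesa, a supportive therapy chatbot.",
    "I have been feeling anxious lately.",
)

# Loading the quantized base plus the fine-tuned adapter would look roughly
# like this (requires a GPU and the GPTQ dependencies, so it is left as a
# commented sketch):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   from peft import PeftModel
#   base = AutoModelForCausalLM.from_pretrained(
#       "TheBloke/zephyr-7B-alpha-GPTQ", device_map="auto")
#   model = PeftModel.from_pretrained(base, "johnhandleyd/thesa")  # assumed id
```

Formatting the prompt this way matters because an instruction-tuned chat model typically degrades noticeably when queried with raw text instead of its training template.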