ccore committed
Commit 906b214 · 1 Parent(s): 1a2a470

Update README.md

Files changed (1):
  1. README.md +18 -29

README.md CHANGED
@@ -12,47 +12,36 @@ model-index:
   <!-- This model card has been generated automatically according to the information the Trainer had access to. You
   should probably proofread and complete it, then remove this comment. -->

- # core2
-
- This model is a fine-tuned version of [./core2](https://huggingface.co/./core2) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 2.7608
- - Accuracy: 0.4077
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 32
- - total_train_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 1.0
-
- ### Training results
-
- ### Framework versions
-
- - Transformers 4.34.0.dev0
- - Pytorch 2.0.1+cu117
- - Datasets 2.14.5
- - Tokenizers 0.13.3
+ # Model Card: LLama 2 - Version 7b (Embedding + Output + 1 Hidden Layer)
+
+ ## Overview
+
+ - **Link to Training Progress:** [WandB Training Progress](https://wandb.ai/inteligenciaartificialcursos/huggingface/runs/wjh9m9x1/workspace?workspace=user-inteligenciaartificialcursos)
+ - **Model Name:** LLama 2 - Version 7b
+ - **Total Parameters:** 446 million
+
+ ## Training Data
+
+ The model has been trained with the following sequence of datasets:
+
+ 1. **GPT-2 Data (Done):** The initial training phase used GPT-2 data and is complete.
+ 2. **Wikipedia QA in Markdown (In Progress):** Training is continuing with Wikipedia question-answering data in Markdown format.
+ 3. **QA with Rhetoric (Future Stages):** The model will be further fine-tuned on question-answering data generated by various LLama models, incorporating rhetorical elements.
+
+ ## Model Description
+
+ The LLama 2 - Version 7b model is a language model with a total of 446 million parameters. It consists of an embedding layer, a single hidden layer, and an output layer, and is applied to a range of natural language processing tasks. Training is conducted in multiple stages, each focused on a different dataset and objective.
+
+ ## Disclaimer
+
+ This model card provides an overview of the LLama 2 - Version 7b model, its training data, and intended use cases. Keep in mind that the model's performance may vary depending on the specific task or dataset. Users are encouraged to evaluate the model's suitability for their applications and exercise caution when using it in real-world scenarios.
+
+ For any further inquiries or issues related to this model, please contact the model developers through the provided training progress link.
+
+ ---
+
+ Feel free to customize this Model Card further if you have additional details or specific use cases you'd like to highlight.
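
For reference, the training hyperparameters listed in the removed card map directly onto `transformers.TrainingArguments`. Below is a minimal sketch, assuming the Hugging Face Trainer was used (the old card says it was auto-generated by the Trainer); the `output_dir` value is an assumption taken from the old card's `./core2` link.

```python
# Sketch: the removed card's hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./core2",            # assumption, from the old card's model link
    learning_rate=1e-4,              # learning_rate: 0.0001
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=32,  # total_train_batch_size: 1 * 32 = 32
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon: 1e-08
)
```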
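
The architecture the new card describes (an embedding layer, one hidden layer, and an output head) can be approximated as a truncated Llama-style model. The sketch below is an assumption: the card states only the 446-million total parameter count, so the widths shown are standard Llama-2-7b values and the resulting count will only land in the rough vicinity of the card's figure.

```python
# Sketch of a one-hidden-layer Llama-style model per the card's description.
# All dimensions are assumptions (Llama-2-7b defaults), not confirmed values.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    num_hidden_layers=1,      # the single hidden layer named in the card
    hidden_size=4096,         # assumption: Llama-2-7b hidden width
    intermediate_size=11008,  # assumption: Llama-2-7b FFN width
    num_attention_heads=32,   # assumption: Llama-2-7b head count
    vocab_size=32000,         # assumption: Llama 2 tokenizer vocabulary
)
model = LlamaForCausalLM(config)  # randomly initialized; for size inspection only
print(f"~{model.num_parameters() / 1e6:.0f}M parameters")
```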