oaishi committed on
Commit 57de309 · verified · 1 Parent(s): 6542e01

Update README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -20,7 +20,7 @@ metrics:
 library_name: transformers
 ---
 
-# Model Card for CowCorpus/UserGroup3_final_fixed_llava
+# Model Card for CowCorpus/Cluster3-Takeover-Llava
 
 <!-- Provide a quick summary of what the model is/does. -->
 This model is a **specialized fine-tune** of the general [CowCorpus-Llava](https://huggingface.co/CowCorpus/CowCorpus-llama3-llava-next-8b) model.
@@ -56,14 +56,12 @@ The model is trained on a rich, multimodal state representation:
 
 For inference code, prompt templates, and setup instructions, please refer to our [GitHub Repository](https://github.com/oaishi/CowCorpus).
 
-## Training Details
-
 ### Training Data
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 The model underwent a two-stage training process:
 1. **Stage 1 (General Adaptation):** Fine-tuned on the complete CowCorpus dataset.
-2. **Stage 2 (User Personalization):** Further fine-tuned on the **User Cluster 3 subset** of CowCorpus, consists of 26 trajectories and 131 steps. (P10, P13, P18)
+2. **Stage 2 (User Personalization):** Further fine-tuned on the **User Cluster 3 subset** of CowCorpus, consisting of 26 trajectories and 131 steps.
 
 **User Cluster 3 Characteristics:**
 * **Data Source:** A subset of the collaborative trajectories specific to User Group 3.