---
library_name: transformers
license: mit
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- text-generation
- conversational
- immigration-law
- legal-assistant
- qwen
- lora
---

# Model Card for DoloresAI-Merged

## Model Summary

**DoloresAI-Merged** is a fine-tuned conversational assistant specialized in U.S. immigration law. It is a merged version of a LoRA adapter trained on the base model `Qwen/Qwen2.5-3B-Instruct`, and is designed to provide context-aware responses to immigration-related questions and to assist with form completion, case management, and legal guidance.

## Model Details

### Model Description

DoloresAI-Merged combines the base Qwen2.5-3B-Instruct model with fine-tuned LoRA weights. It has been trained to understand and respond to immigration-law queries and USCIS form questions, and to provide assistance to immigrants navigating the U.S. immigration system.

- **Developed by:** JustiGuide
- **Model type:** Causal language model (decoder-only)
- **Language(s):** English (primary), with support for multilingual queries
- **License:** MIT
- **Finetuned from:** `Qwen/Qwen2.5-3B-Instruct`
- **Merged from:** `JustiGuide/DoloresAI` (LoRA adapter)

### Model Architecture

- **Base Model:** Qwen/Qwen2.5-3B-Instruct (3B parameters)
- **Architecture:** Transformer-based decoder
- **Context Length:** 32,768 tokens
- **Model Format:** Merged (LoRA weights integrated into the base model)

### Model Sources

- **Repository:** https://huggingface.co/JustiGuide/DoloresAI-Merged
- **Base Model:** https://huggingface.co/Qwen/Qwen2.5-3B-Instruct
- **Original LoRA Adapter:** https://huggingface.co/JustiGuide/DoloresAI

## Uses

### Direct Use

This model is intended for use as an immigration-law assistant that can:

- Answer questions about U.S. immigration law and procedures
- Assist with USCIS form completion (I-130, I-765, I-589, I-129, N-400)
- Provide guidance on immigration processes and requirements
- Help users understand legal terminology and requirements
- Support case management and document preparation

### Intended Use Cases

1. **Immigration Form Assistance:** Help users complete USCIS forms accurately
2. **Legal Q&A:** Answer questions about immigration law, processes, and requirements
3. **Case Management:** Assist with tracking immigration cases and deadlines
4. **Educational Support:** Provide explanations of immigration concepts and procedures

### Out-of-Scope Use

This model should NOT be used for:

- Providing definitive legal advice (users should consult licensed attorneys)
- Making final legal decisions
- Replacing professional legal counsel
- Handling emergency legal situations
- Providing advice on non-U.S. immigration systems

## Bias, Risks, and Limitations

### Limitations

1. **Not Legal Advice:** This model provides information and assistance but does not constitute legal advice. Users should consult licensed immigration attorneys for legal representation.
2. **Training Data Limitations:** The model's knowledge is based on its training data and may not reflect the most recent changes in immigration law or policy.
3. **Context Window:** Limited to 32,768 tokens, which may not capture all relevant context for complex cases.
4. **Language:** Primarily trained on English; performance may vary for other languages.
5. **Accuracy:** While trained on immigration-law data, responses should be verified against official sources and with legal professionals.

### Recommendations

- Always verify information with official USCIS sources
- Consult licensed immigration attorneys for legal representation
- Use this model as a tool to assist, not replace, professional legal services
- Keep in mind that immigration law changes frequently


## How to Get Started with the Model

### Using Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "JustiGuide/DoloresAI-Merged"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Format the conversation with the Qwen2.5 chat template
messages = [
    {"role": "system", "content": "You are Dolores, an immigration law assistant."},
    {"role": "user", "content": "What is an H-1B visa?"}
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Generate a response (sampling must be enabled for temperature to apply)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Using a Hugging Face Inference Endpoint

```python
import requests

endpoint_url = "YOUR_ENDPOINT_URL"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# For best results, build the prompt with the same Qwen2.5 chat template
# shown above rather than a plain "User:/Assistant:" transcript.
payload = {
    "inputs": "User: What is an H-1B visa?\nAssistant:",
    "parameters": {
        "max_new_tokens": 256,
        "temperature": 0.7,
        "top_p": 0.9
    }
}

response = requests.post(endpoint_url, json=payload, headers=headers)
print(response.json())
```

## Training Details

### Training Data

The model was fine-tuned on a custom dataset of:

- Immigration law questions and answers
- USCIS form instructions and examples
- Legal terminology and definitions
- Case management scenarios
- Immigration process documentation

**Dataset Size:** 338+ training examples (as of training)

### Training Procedure

#### Preprocessing

- Training data was formatted using the Qwen2.5 chat template
- System prompts included role definitions and instructions
- Context and examples were included in the training format

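As a rough illustration of this preprocessing, a single training example rendered in the ChatML-style layout that Qwen2.5 chat templates use might look like the sketch below. The conversation content is invented for illustration; the actual training data is not shown in this card.

```python
# Hypothetical sketch: render one training example in the ChatML-style
# layout used by Qwen2.5 chat templates. The conversation content is
# illustrative, not taken from the actual training set.
example = [
    {"role": "system", "content": "You are Dolores, an immigration law assistant."},
    {"role": "user", "content": "Which form starts a family-based petition?"},
    {"role": "assistant", "content": "Form I-130, Petition for Alien Relative."},
]

def render_chatml(messages):
    """Wrap each message in <|im_start|>role ... <|im_end|> markers."""
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

text = render_chatml(example)
print(text.count("<|im_start|>"))  # 3
```

In practice `tokenizer.apply_chat_template` produces this layout directly; a hand-rolled renderer like this is only useful for inspecting what the model sees.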
#### Training Hyperparameters

- **Training Type:** LoRA (Low-Rank Adaptation)
- **Base Model:** Qwen/Qwen2.5-3B-Instruct
- **LoRA Rank (r):** 16
- **LoRA Alpha:** 32
- **LoRA Dropout:** 0.1
- **Target Modules:** q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj
- **Learning Rate:** 2e-4
- **Batch Size:** 4
- **Gradient Accumulation Steps:** 4
- **Effective Batch Size:** 16
- **Epochs:** 3
- **Max Sequence Length:** 1024 tokens
- **Warmup Steps:** 50
- **Mixed Precision:** FP16

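For reference, the hyperparameters above can be collected into a configuration sketch. The field names mirror common PEFT/`TrainingArguments` settings and are illustrative, not the exact AutoTrain config:

```python
# Hyperparameters from the list above, gathered into plain dicts.
lora_config = {
    "r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.1,
    "target_modules": ["q_proj", "v_proj", "k_proj", "o_proj",
                       "gate_proj", "up_proj", "down_proj"],
}
training_config = {
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 4,
    "num_train_epochs": 3,
    "max_seq_length": 1024,
    "warmup_steps": 50,
    "fp16": True,
}

# Effective batch size = per-device batch size x gradient accumulation steps.
effective_batch_size = (training_config["per_device_train_batch_size"]
                        * training_config["gradient_accumulation_steps"])
print(effective_batch_size)  # 16
```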
#### Training Infrastructure

- **Platform:** Hugging Face AutoTrain
- **Hardware:** GPU (specific GPU type may vary)
- **Training Time:** 1-3 hours (depending on hardware)

### Model Merging

The LoRA adapter was merged with the base model to create a single, unified model. This process:

- Integrates the LoRA weights into the base model
- Creates a standalone model that does not require adapter loading
- Improves inference stability and reduces CUDA errors
- Maintains all fine-tuning benefits

## Evaluation

### Testing Data

The model was evaluated on:

- Immigration law Q&A accuracy
- Form completion assistance quality
- Legal terminology understanding
- Response coherence and relevance

### Metrics

- **Training Loss:** Decreased from ~2.5 to ~1.5-2.0
- **Response Quality:** Improved context awareness and accuracy
- **Form Assistance:** Accurate guidance on USCIS forms

### Results

The merged model maintains the fine-tuning benefits while providing:

- ✅ Stable inference (no CUDA errors from adapter loading)
- ✅ Faster inference (single model file)
- ✅ Better compatibility with inference endpoints
- ✅ Preserved training quality

## Technical Specifications

### Model Architecture

- **Architecture:** Transformer decoder
- **Parameters:** ~3B
- **Layers:** Based on the Qwen2.5-3B-Instruct architecture
- **Attention Mechanism:** Multi-head self-attention
- **Activation:** SwiGLU

### Compute Infrastructure

#### Hardware

- **Training:** GPU (via Hugging Face AutoTrain)
- **Inference:** GPU recommended (T4, A10G, or A100)
- **Minimum:** CPU inference is possible but slower

#### Software

- **Framework:** PyTorch
- **Transformers:** Hugging Face Transformers library
- **LoRA:** PEFT (Parameter-Efficient Fine-Tuning)

## Citation

If you use this model, please cite:

```bibtex
@misc{doloresai_merged_2025,
  title  = {DoloresAI-Merged: Immigration Law Assistant},
  author = {JustiGuide},
  year   = {2025},
  url    = {https://huggingface.co/JustiGuide/DoloresAI-Merged},
  note   = {Fine-tuned from Qwen/Qwen2.5-3B-Instruct}
}
```

## Model Card Contact

For questions or issues, please contact:

- **Organization:** JustiGuide
- **Model Repository:** https://huggingface.co/JustiGuide/DoloresAI-Merged

## Acknowledgments

- Base model: Qwen/Qwen2.5-3B-Instruct by Alibaba Cloud
- Training platform: Hugging Face AutoTrain
- Frameworks: Hugging Face Transformers and PEFT

## Version History

- **v1.0 (Merged):** Initial merged model release
  - Merged the LoRA adapter with the base model
  - Optimized for inference endpoints
  - Fixed CUDA compatibility issues