update readme

README.md CHANGED
@@ -117,7 +117,7 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 - **Provider:** UMass Gypsum HPC
 - **Carbon Estimate:** <1 kg CO₂ (low academic footprint)

-## Technical Specifications
+## Technical Specifications:

 ### Model Architecture and Objective

@@ -131,7 +131,7 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

 ## Glossary

-- **LoRA (Low-Rank Adaptation):** A parameter-efficient fine-tuning method where only small adapter matrices are trained, while the large base model remains frozen. This drastically reduces compute and storage costs
+- **LoRA (Low-Rank Adaptation):** A parameter-efficient fine-tuning method where only small adapter matrices are trained, while the large base model remains frozen. This drastically reduces compute and storage costs.
 - **4-bit Quantization:** A compression technique that reduces model weights from 16/32-bit floating-point numbers to 4-bit representations. This allows large models (like Saul-7B) to fit and run on smaller GPUs with minimal accuracy loss.
 - **ToS (Terms of Service) Anomaly Detection:** The task of identifying clauses in service agreements that are potentially unfair, unusual, or restrictive for consumers (e.g., sudden account termination, hidden fees).
 - **PEFT (Parameter-Efficient Fine-Tuning):** A family of methods (like LoRA) that fine-tune large models by updating only a small subset of parameters instead of the entire model.
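
The glossary terms above correspond to a few lines of `transformers`/`peft` code. The following is a minimal sketch of loading the base model in 4-bit and attaching the LoRA adapter for ToS clause screening; the base-model and adapter repo ids are placeholders (assumptions, not taken from this card), and the prompt format is illustrative rather than the card's actual template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "Equall/Saul-7B-Instruct-v1"  # assumed base model; check the card metadata
ADAPTER = "your-org/saul-7b-tos-lora"      # hypothetical adapter repo id

# 4-bit quantization: base weights load as 4-bit NF4 while compute runs in
# fp16, so the 7B model fits on a single smaller GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)

# LoRA/PEFT: only the small trained adapter matrices are loaded on top of the
# frozen, quantized base model.
model = PeftModel.from_pretrained(base, ADAPTER)

# ToS anomaly detection framed as generation: ask the model to flag a clause.
clause = "We may terminate your account at any time, for any reason, without notice."
prompt = (
    "Is the following Terms of Service clause potentially unfair or unusual "
    f"for consumers? Answer and explain briefly.\n\nClause: {clause}\n\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```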
@@ -154,7 +154,7 @@ For questions, feedback, or collaborations, please reach out:
 ### Framework Versions
 - **Transformers:** 4.51.3
 - **PEFT:** 0.15.2
-- **PyTorch:** 2.2.2
+- **PyTorch:** 2.2.2
 - **Datasets:** 2.21.0

 ### Framework versions
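
For reproducibility, the pins listed above can be checked at runtime; a minimal sketch using only the version strings from this card:

```python
# Verify the environment matches the versions this card was built with.
import datasets
import peft
import torch
import transformers

expected = {
    "transformers": "4.51.3",
    "peft": "0.15.2",
    "torch": "2.2.2",
    "datasets": "2.21.0",
}
installed = {
    "transformers": transformers.__version__,
    "peft": peft.__version__,
    "torch": torch.__version__,  # may carry a local suffix, e.g. "2.2.2+cu121"
    "datasets": datasets.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "OK" if have.startswith(want) else f"MISMATCH (card pins {want})"
    print(f"{name}: {have} -> {status}")
```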