- **Context length** - Limited to 1024 tokens
- **Tokenizer quirks** - Some punctuation (like `?`) may display oddly
- **Knowledge cutoff** - Limited to training data, no real-time information
- **No identity fine-tuning** - This release is the base model only, not fine-tuned for self-awareness
- **No safety alignment** - Model has not undergone RLHF, DPO, or other safety training
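The 1024-token cap means long prompts must be clipped before generation. A minimal sketch of left-truncation (keeping the most recent tokens), assuming you already have token ids from the tokenizer; the helper name here is ours, not part of this release:

```python
MAX_CONTEXT = 1024  # Opus 1.5's context window, per the limitation above

def truncate_to_context(token_ids, max_len=MAX_CONTEXT):
    """Keep only the most recent tokens so the prompt fits the context window."""
    return token_ids[-max_len:]

prompt_ids = list(range(1500))  # stand-in for real token ids
clipped = truncate_to_context(prompt_ids)
print(len(clipped))  # 1024
```

Left-truncation keeps the most recent part of a conversation, which is usually what you want when generating a continuation.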

---

## ⚠️ Safety Notice

**This model has NO safety alignment.** It has not been fine-tuned with:

- RLHF (Reinforcement Learning from Human Feedback)
- DPO (Direct Preference Optimization)
- Constitutional AI
- Content filtering

**Users must implement their own safety mechanisms** if deploying this model. The model may generate:

- Incorrect or misleading information
- Biased content reflecting training data
- Inappropriate responses

We strongly recommend human oversight for all outputs.
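Because no filtering ships with the model, a moderation gate has to sit in front of any deployment. A deliberately minimal sketch of such a mechanism (the blocklist terms and function name are illustrative placeholders; a production system should use a real moderation model or service, not a keyword list):

```python
# Illustrative placeholder only: swap in a proper moderation model or
# service before any public deployment.
BLOCKLIST = {"example-banned-term", "another-banned-term"}

def passes_moderation(text: str) -> bool:
    """Reject generations containing any blocklisted term (case-insensitive)."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

print(passes_moderation("A harmless completion."))         # True
print(passes_moderation("Contains example-banned-term."))  # False
```

Generations that fail the check can be dropped, regenerated, or routed to a human reviewer.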

---

## Ethical Considerations

- The model may generate biased or incorrect content
- It was trained on internet data, which contains biases
- It should not be used to generate harmful content
- Human oversight is recommended for all outputs
- **Implement your own content moderation** before any public deployment

---