Update README.md
README.md CHANGED

````diff
@@ -40,7 +40,7 @@ The model uses a hierarchical approach:
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 
 # Load model and tokenizer
-model = AutoModelForSequenceClassification.from_pretrained("chungpt2123/
+model = AutoModelForSequenceClassification.from_pretrained("chungpt2123/esg-subfactor-classifier", trust_remote_code=True)
 tokenizer = AutoTokenizer.from_pretrained("Alibaba-NLP/gte-multilingual-base")
 
 # Example usage
@@ -69,10 +69,6 @@ The model achieves strong performance on ESG classification tasks with hierarchi
 - Performance may vary on domain-specific or technical ESG content
 - Best performance on texts similar to training data distribution
 
-## Citation
-
-If you use this model, please cite:
-
 ```bibtex
 @misc{esg_hierarchical_model,
   title={ESG Hierarchical Multi-Task Learning Model},
````
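For context on the `# Example usage` placeholder in the changed README: once the model and tokenizer are loaded, inference is ordinary sequence classification — tokenize the text, run a forward pass, take the argmax over the logits, and map the winning index through `model.config.id2label`. The decoding step can be sketched self-contained; the logits and subfactor labels below are made-up stand-ins for a real forward pass, not the model's actual label set:

```python
def decode_prediction(logits, id2label):
    """Map the highest-scoring logit index to its label string."""
    pred_id = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[pred_id]

# Stand-in values; in real use:
#   logits = model(**tokenizer(text, return_tensors="pt")).logits[0].tolist()
#   id2label = model.config.id2label
fake_logits = [0.1, 2.7, -0.3]
fake_id2label = {0: "emissions", 1: "labor_rights", 2: "board_structure"}  # illustrative subfactors
print(decode_prediction(fake_logits, fake_id2label))  # -> labor_rights
```

Because the tokenizer is pulled from `Alibaba-NLP/gte-multilingual-base` rather than the classifier repo, keeping `trust_remote_code=True` on the model load (as the diff adds) matters when the checkpoint ships custom modeling code.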