---
license: mit
language:
- en
tags:
- finance
- nlp
- lora
- llama
- sentiment-analysis
- named-entity-recognition
- xbrl
- financial-analysis
pipeline_tag: text-generation
---

# FinLoRA: Financial Large Language Models with LoRA Adaptation

## Overview

FinLoRA is a framework for fine-tuning large language models on financial tasks with Low-Rank Adaptation (LoRA). This repository contains trained LoRA adapters for a range of financial NLP tasks: sentiment analysis, named entity recognition (NER), headline classification, XBRL processing, and CFA knowledge integration.

## Model Architecture
|
| 24 |
-
|
| 25 |
-
- **Base Model**: Meta-Llama-3.1-8B-Instruct
|
| 26 |
-
- **Adaptation Method**: LoRA (Low-Rank Adaptation)
|
| 27 |
-
- **Quantization**: 8-bit quantization for efficient inference
|
| 28 |
-
- **Tasks**: Financial sentiment analysis, NER, classification, XBRL processing, CFA knowledge integration
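
The `_r8` suffix in the adapter names below indicates LoRA rank 8. As a rough sketch of the corresponding PEFT configuration (the target modules, scaling factor, and dropout here are illustrative assumptions, not the recorded training settings):

```python
# Sketch of a LoRA configuration consistent with the "_r8" adapter naming.
# Only the rank is implied by the names; alpha, dropout, and target modules
# are assumed values for illustration.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # rank, per the "_r8" suffix
    lora_alpha=16,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # common choice for Llama; assumed here
    task_type="CAUSAL_LM",
)
```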

## Available Models

### Core Financial Models

|
| 33 |
-
- `sentiment_llama_3_1_8b_8bits_r8` - Financial sentiment analysis
|
| 34 |
-
- `ner_llama_3_1_8b_8bits_r8` - Named entity recognition
|
| 35 |
-
- `headline_llama_3_1_8b_8bits_r8` - Financial headline classification
|
| 36 |
-
- `xbrl_extract_llama_3_1_8b_8bits_r8` - XBRL tag extraction
|
| 37 |
-
- `financebench_llama_3_1_8b_8bits_r8` - Comprehensive financial benchmark
|
| 38 |
-
- `finer_llama_3_1_8b_8bits_r8` - Financial NER
|
| 39 |
-
- `formula_llama_3_1_8b_8bits_r8` - Financial formula processing

## Quick Start

### 1. Install Dependencies

```bash
pip install transformers torch peft bitsandbytes accelerate datasets
```
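
Note that `meta-llama/Llama-3.1-8B-Instruct` is a gated model on the Hugging Face Hub, so you may need to request access on its model page and authenticate before downloading:

```bash
huggingface-cli login
```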

### 2. Load and Use a Model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Load the tokenizer and make sure a pad token is set
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Load the base model with 8-bit quantization
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Apply a LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "models/sentiment_llama_3_1_8b_8bits_r8")
model.eval()

# Generate a prediction
def predict(text):
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=50,
            do_sample=True,   # sampling must be enabled for temperature to apply
            temperature=0.7,
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Test
result = predict("Classify sentiment: 'The stock market is performing well today.'")
print(result)
```
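
The same pattern applies to every adapter listed above; only the adapter path changes. A minimal example, assuming the other adapters sit alongside the sentiment one under `models/`:

```python
# Load the NER adapter instead of the sentiment one (same quantized base model).
# The "models/ner_llama_3_1_8b_8bits_r8" path mirrors the sentiment example
# above and is an assumption about the repository layout.
ner_model = PeftModel.from_pretrained(base_model, "models/ner_llama_3_1_8b_8bits_r8")
```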

### 3. Run Evaluation

```bash
python comprehensive_evaluation.py
```

### 4. Run Inference

```bash
python inference.py
```

## Performance

The models have been evaluated on multiple financial datasets:

| Dataset | F1 | Accuracy |
|---|---|---|
| Financial Phrasebank | 0.333 | 0.500 |
| NER Classification | 0.889 | 0.800 |
| Headline Classification | 0.697 | 0.700 |
| XBRL Tag Extraction | n/a | 0.200 |
| FIQA Sentiment Analysis | 0.727 | 0.700 |
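
For context on how figures like these are produced: F1 and accuracy are computed by parsing each generation into a label and comparing it with the gold label. A minimal sketch (illustrative only, not the internals of `comprehensive_evaluation.py`; requires `scikit-learn`):

```python
# Illustrative metric computation over parsed label strings; not the actual
# comprehensive_evaluation.py logic. Requires scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

gold = ["positive", "negative", "neutral", "positive"]   # toy gold labels
pred = ["positive", "negative", "positive", "positive"]  # toy parsed outputs

print("Accuracy:", accuracy_score(gold, pred))                    # 0.75
print("Weighted F1:", f1_score(gold, pred, average="weighted"))
```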

## Citation

```bibtex
@article{finlora2024,
  title={FinLoRA: Financial Large Language Models with LoRA Adaptation},
  author={Your Name},
  journal={Financial AI Conference},
  year={2024}
}
```

## License

This project is licensed under the MIT License.