---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- transformers
- trading
- finance
- adversarial-critic
license: apache-2.0
---

# MiniCrit-1.5B: Adversarial Trading Signal Critic

An adversarial critic model designed to validate AI-generated trading rationales and reduce false positives in algorithmic trading systems.

## Model Details

### Model Description

MiniCrit-1.5B is a specialized language model fine-tuned to act as an adversarial critic for quantitative trading signals. It challenges trading rationales generated by larger LLMs before execution, helping to filter out false positives and improve overall trading system performance. The model operates as part of a multi-layer validation framework that combines traditional machine learning (XGBoost), multiple specialized LLMs, and this critic layer.

The core innovation is an AI layer that explicitly challenges and validates trading rationales before execution, reducing false positives through adversarial evaluation.

- **Developed by:** WAO
- **Model type:** Causal Language Model (fine-tuned with LoRA)
- **Language(s):** English (Financial/Trading Domain)
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen2-0.5B-Instruct
- **Parameter count:** 1.5B

### Model Sources

- **Repository:** https://github.com/wmaousley/MiniCrit-1.5B
- **Paper:** []

## Uses

### Direct Use

MiniCrit-1.5B is designed to evaluate trading rationales by:
- Analyzing signal strength and reasoning quality
- Identifying logical fallacies or weak arguments in trade justifications
- Scoring confidence levels for proposed trades
- Flagging potential false positives before execution
- Acting as a validation layer in multi-agent trading systems

The model accepts trading rationales as input and outputs critical analysis with confidence scores.

### Downstream Use

MiniCrit can be integrated into:
- Algorithmic trading systems as a validation layer (a gating sketch follows below)
- Multi-agent trading frameworks with specialized LLMs
- Paper trading systems for strategy testing
- Risk management and pre-execution validation pipelines
- Quantitative research platforms

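As an illustration of the validation-layer pattern, here is a minimal gating sketch. The `REJECT` marker and the helper names are assumptions for illustration, not a published output contract, and the adapter repo id is the same placeholder used in the quickstart below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model plus the MiniCrit LoRA adapter
# ("your-username/MiniCrit-1.5B" is a placeholder repo id).
base = "Qwen/Qwen2-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base), "your-username/MiniCrit-1.5B"
)

def critique(rationale: str) -> str:
    """Run the critic on a trading rationale and return its critique text."""
    inputs = tokenizer(rationale, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

def should_execute(rationale: str) -> bool:
    """Gate a signal on the critic's verdict before it reaches execution.

    Scanning for a 'REJECT' marker is an assumed convention; adapt it to
    the output format your fine-tune actually produces.
    """
    return "REJECT" not in critique(rationale).upper()
```
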
### Out-of-Scope Use

This model is **not** suitable for:
- Direct trading decisions without human oversight
- Financial advice to retail investors
- Real-time high-frequency trading (response time constraints)
- Markets or instruments outside its training domain (currently focused on US equities)
- Regulatory compliance or legal analysis

## Bias, Risks, and Limitations

**Limitations:**
- Trained on rationales from specific LLMs (Llama 70B, DeepSeek, QwQ 32B, Qwen 14B), which may introduce bias
- Limited to market conditions and patterns present in the training data (primarily 2024 market conditions)
- May not generalize well to unprecedented market events or black swan scenarios
- 1.5B parameter size limits reasoning depth compared to larger models
- Training dataset limited to 50 US equities across multiple sectors

**Known Risks:**
- Should never be used as the sole decision-maker for real capital deployment
- Performance may degrade outside the training distribution
- False negatives (rejecting valid signals) can result in missed opportunities
- May exhibit recency bias based on the training data collection period
- Not designed to handle extreme market volatility or circuit-breaker events

### Recommendations

Users should:
- Always use in paper trading mode first, with comprehensive validation
- Combine with human oversight and traditional risk controls
- Implement regular retraining as market conditions evolve
- Monitor both false positive *and* false negative rates
- Never risk capital you cannot afford to lose
- Maintain stop-loss and position-sizing discipline
- Conduct thorough backtesting before live deployment

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer
base_model = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Load LoRA adapter
model = PeftModel.from_pretrained(model, "your-username/MiniCrit-1.5B")

# Example usage
rationale = """
Trading Signal: BUY AAPL
Strategy: Breakout
Rationale: AAPL has broken above its 50-day moving average with strong volume...
"""

inputs = tokenizer(rationale, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
critique = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(critique)
```

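Since the base model is instruction-tuned, wrapping the rationale in the Qwen2 chat template may yield better-formed critiques. Whether the adapter was trained on chat-formatted inputs is not stated here, so treat this as an optional variant to try; the system prompt is an illustrative assumption.

```python
# Optional variant: format the rationale with the Qwen2 chat template.
# Reuses `model`, `tokenizer`, and `rationale` from the snippet above.
messages = [
    {"role": "system", "content": "You are an adversarial critic of trading signals."},
    {"role": "user", "content": rationale},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids=input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
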
## Training Details

### Training Data

Trained on 1,000+ trading rationales collected from a production trading system.

**Data Sources:**
- 5 institutional trading strategies: pairs trading, mean reversion, smart money concepts, breakout patterns, earnings momentum
- XGBoost ML validation layer achieving an 88% accuracy baseline
- Multiple specialized LLMs via Ollama (Llama 70B, DeepSeek Coder, QwQ 32B, Qwen 14B)
- Real-time market data from the Polygon.io API and yfinance
- 50 monitored stocks across the technology, finance, healthcare, energy, and consumer sectors

**Collection Process:**
- 300+ rationales per day from an automated scanning system
- 6 daily scans via a macOS LaunchAgent
- SQLite database storage with comprehensive metadata (a schema sketch follows below)
- Balanced dataset of validated true/false positives from backtested signals

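The production schema is not published; purely as a rough sketch of the kind of metadata such a store might carry, a minimal table could look like this (column names are illustrative assumptions):

```python
import sqlite3

# Hypothetical schema for collected rationales; not the production system's
# actual layout, just an illustration of the metadata described above.
conn = sqlite3.connect("rationales.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS rationales (
    id             INTEGER PRIMARY KEY AUTOINCREMENT,
    scanned_at     TEXT NOT NULL,   -- ISO timestamp of the scan
    ticker         TEXT NOT NULL,   -- one of the 50 monitored symbols
    strategy       TEXT NOT NULL,   -- e.g. 'pairs', 'mean_reversion', 'breakout'
    source_llm     TEXT NOT NULL,   -- which LLM generated the rationale
    rationale      TEXT NOT NULL,   -- full signal justification
    backtest_label INTEGER          -- 1 = true positive, 0 = false positive
)
""")
conn.commit()
```
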
### Training Procedure

**Approach:**
- LoRA (Low-Rank Adaptation) fine-tuning on the Qwen2-0.5B-Instruct base model
- Adversarial training methodology: the model learns to challenge weak trading rationales
- Supervised fine-tuning on labeled critique examples
- Dataset includes both successful and failed trading signals for balanced learning

#### Training Hyperparameters

- **Training regime:** bf16 mixed precision
- **LoRA rank:** 8
- **LoRA alpha:** 16
- **LoRA dropout:** 0.05
- **Learning rate:** 2e-4
- **Batch size:** 4 (with gradient accumulation)
- **Optimizer:** AdamW
- **Warmup steps:** 100
- **Max sequence length:** 2048 tokens

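As a sketch of how the hyperparameters above map onto a PEFT setup: `target_modules`, `gradient_accumulation_steps`, and the output path are assumptions, since the card does not specify them.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# LoRA settings as listed above; target_modules is an assumed typical
# choice for Qwen2-style attention projections (not stated in the card).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct"), lora_config
)

# Optimization settings as listed above; gradient_accumulation_steps and
# output_dir are illustrative. The 2048-token limit is applied during
# tokenization/collation, not here.
training_args = TrainingArguments(
    output_dir="minicrit-lora",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    warmup_steps=100,
    bf16=True,              # bf16 mixed precision
    optim="adamw_torch",    # AdamW
)
```
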
#### Speeds, Sizes, Times

- **Model size:** ~1.5B parameters (base) + ~10M parameters (LoRA adapter)
- **Training time:** [Update with actual training duration]
- **Inference time:** ~50-200ms per critique (Mac Studio M2 Ultra)
- **Training hardware:** Mac Studio M2 Ultra (64GB RAM)

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

- Held-out validation set of 200+ trading rationales
- Out-of-sample backtesting on Q4 2024 market data
- Paper trading validation in live market conditions

#### Factors

Evaluation is disaggregated by:
- Trading strategy type (pairs, mean reversion, breakout, etc.)
- Market sector (tech, finance, healthcare, energy, consumer)
- Market volatility conditions (low, medium, high VIX)
- Signal confidence levels

#### Metrics

**Primary metric:**
- False Positive Rate (FPR): the percentage of incorrect signals approved by the critic
  - Target: ≤6% FPR
  - Rationale: minimizing bad trades is critical for profitability

**Secondary metrics:**
- Sharpe ratio: risk-adjusted return
  - Target: 0.8 (vs. a 0.3 baseline)
- Precision/recall: balance between filtering bad signals and keeping good ones
- F1 score: harmonic mean of precision and recall
- Critique quality: human evaluation of reasoning depth and accuracy

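For concreteness, a minimal sketch of how these metrics fall out of labeled backtest outcomes; the arrays below are toy data, not reported results.

```python
import numpy as np

# Toy labels: 1 = signal was actually profitable, 0 = false signal.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# Critic verdicts: 1 = approved, 0 = rejected.
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))

fpr = fp / (fp + tn)        # primary metric: share of bad signals approved
precision = tp / (tp + fp)
recall = tp / (tp + fn)     # low recall = missed good trades (false negatives)
f1 = 2 * precision * recall / (precision + recall)

# Annualized Sharpe ratio from daily strategy returns (252 trading days).
daily_returns = np.array([0.002, -0.001, 0.003, 0.000, -0.002])
sharpe = daily_returns.mean() / daily_returns.std(ddof=1) * np.sqrt(252)
print(f"FPR={fpr:.2%}  F1={f1:.2f}  Sharpe={sharpe:.2f}")
```
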
### Results

**Current Performance (MiniCrit-1.5B):**
- Demonstrates proof-of-concept capability for adversarial critique
- Successfully identifies common reasoning fallacies in trading rationales
- Achieves a measurable reduction in false positives vs. uncritical acceptance
- [Add specific metrics when available]

**Planned Improvements:**
- Scaling to 70B parameters (MiniCrit-70B) for production deployment
- Target: ≤6% false positive rate
- Target: Sharpe ratio improvement to 0.8
- Nightly retraining pipeline for market adaptation

## Model Architecture and Objective

**Base Architecture:** Qwen2-0.5B-Instruct
- Transformer decoder architecture
- 24 layers, 1536 hidden dimensions
- 12 attention heads

**Fine-tuning Objective:**
- Adversarial critique generation
- Binary classification capability (approve/reject signal)
- Confidence scoring for trade recommendations
- Natural language reasoning and explanation

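The card does not pin down an output format. Assuming the critique ends with lines such as `Decision: REJECT` and `Confidence: 0.82` (a hypothetical convention), a downstream consumer could extract the approve/reject decision and confidence score like this:

```python
import re
from typing import Optional, Tuple

def parse_critique(text: str) -> Tuple[Optional[str], Optional[float]]:
    """Extract an approve/reject decision and confidence score from a critique.

    Assumes lines such as 'Decision: REJECT' and 'Confidence: 0.82';
    this format is hypothetical, not guaranteed by the model.
    """
    decision_match = re.search(r"Decision:\s*(APPROVE|REJECT)", text, re.IGNORECASE)
    confidence_match = re.search(r"Confidence:\s*([01](?:\.\d+)?)", text)
    decision = decision_match.group(1).upper() if decision_match else None
    confidence = float(confidence_match.group(1)) if confidence_match else None
    return decision, confidence

print(parse_critique("Weak volume confirmation.\nDecision: REJECT\nConfidence: 0.82"))
# -> ('REJECT', 0.82)
```
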
## Compute Infrastructure

### Hardware

**Development Environment:**
- Mac Studio M2 Ultra (64GB unified memory)
- MacBook Air (development/testing)

**Production Training (Planned):**
- Lambda Labs GPU infrastructure
- 8×A100 GPUs for 70B model training
- Target: <4 hour training cycles for nightly retraining

### Software

- **Framework:** PyTorch with the Transformers library
- **Fine-tuning:** PEFT (Parameter-Efficient Fine-Tuning) with LoRA
- **LLM Inference:** Ollama
- **ML Pipeline:** XGBoost, scikit-learn
- **Data Processing:** Polars, pandas
- **Market Data:** Polygon.io API, yfinance
- **Database:** SQLite
- **Orchestration:** macOS LaunchAgent for automation

## Model Roadmap

### Current Stage: MiniCrit-1.5B (Proof of Concept)
- Validates the adversarial critic approach
- Demonstrates measurable false positive reduction
- Open-source release for community feedback

### Next Stage: MiniCrit-70B (Production Scale)
- 70B-parameter critic model on Lambda Labs infrastructure
- Nightly retraining pipeline with fresh market data
- Expanded stock universe beyond the current 50 securities
- Enhanced strategy coverage and market condition handling
- Target production deployment after extensive paper trading validation

### Long-term Vision
- Multi-model ensemble of critics
- Real-time adaptive learning from execution results
- Cross-asset-class expansion (options, futures, forex)
- Community contributions and collaborative improvement

## Environmental Impact

Training was conducted on efficient consumer hardware (Apple Silicon) to minimize environmental impact during the proof-of-concept phase. Future large-scale training will be conducted on optimized GPU infrastructure.

- **Hardware Type:** Apple M2 Ultra (development), Lambda Labs A100 GPUs (planned production)
- **Estimated CO2 emissions:** Minimal for 1.5B LoRA training; will monitor for 70B production training

## Citation

If you use MiniCrit in your research or trading systems, please cite:

```bibtex
@misc{minicrit2024,
  author       = {WAO},
  title        = {MiniCrit: Adversarial Critic for Algorithmic Trading Signal Validation},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/[your-username]/MiniCrit-1.5B}}
}
```

## More Information

This model is part of a larger research initiative exploring adversarial validation in algorithmic trading systems. The approach combines:
- Traditional quantitative strategies
- Machine learning ensemble methods (XGBoost)
- Multiple specialized LLMs for signal generation
- An adversarial critic layer (MiniCrit) for validation
- A comprehensive risk management and execution framework

The goal is to demonstrate that AI systems can effectively critique and validate their own outputs, reducing the "hallucination" problem in high-stakes financial applications.

## Disclaimer

⚠️ **IMPORTANT:** This model is for research and educational purposes only.

- Past performance does not guarantee future results
- No financial advice is provided or implied
- Always conduct thorough testing in paper trading before any real capital deployment
- Algorithmic trading carries significant risk of loss
- This model should be one component of a comprehensive risk management system
- The developers assume no liability for trading losses
- Consult with qualified financial advisors before making investment decisions

## Model Card Contact

- **GitHub:** https://github.com/wmaousley
- **Issues:** [GitHub issues](https://github.com/wmaousley/MiniCrit-1.5B/issues)
- **Email:** []

## Framework Versions

- PEFT 0.17.1
- Transformers 4.46.0
- PyTorch 2.0+