---
license: apache-2.0
base_model: allura-forge/Llama-3.3-8B-Instruct
datasets:
- TeichAI/claude-4.5-opus-high-reasoning-250x
language:
- en
tags:
- thinking
- reasoning
- instruct
- economics
- finance
- analysis
- llama3.3
- unsloth
- finetune
- bfloat16
- 128k context
pipeline_tag: text-generation
library_name: transformers
model_type: llama
---
# AEGIS Conduct - Economic Analysis Model
## Model Overview
This repository contains a Llama 3.3 8B Instruct model with thinking capabilities, fine-tuned for economic and financial analysis on the TeichAI Claude 4.5 Opus high-reasoning dataset.
**Key Features:**
- **Thinking Mode**: Automatic activation for complex reasoning
- **Economic Focus**: Specialized for financial analysis and market insights
- **128k Context**: Extended context window for comprehensive analysis
- **Optimized**: Fine-tuned with Unsloth for efficient inference
## Model Details
- **Base Model**: allura-forge/Llama-3.3-8B-Instruct
- **Fine-tuning Dataset**: TeichAI/claude-4.5-opus-high-reasoning-250x
- **Context Length**: 128k tokens
- **Training Method**: Unsloth (3 epochs)
- **Format**: SafeTensors
- **Precision**: bfloat16
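At bfloat16 (2 bytes per parameter), the weights alone come to roughly 16 GB spread across the four safetensors shards. A back-of-the-envelope check, assuming the ~8 billion parameters implied by the model name:

```python
# Rough memory footprint of the bf16 weights (excludes KV cache and activations).
# The 8-billion-parameter count is read off the model name and is approximate.
params = 8_000_000_000
bytes_per_param = 2  # bfloat16 = 16 bits
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # → ~16 GB of weights
```

Plan for additional headroom beyond this figure for the KV cache, which grows with context length.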
## Repository Structure
All model files are located in the repository root, so the model loads directly without specifying a subfolder:
```
├── config.json                      # Model configuration
├── generation_config.json           # Generation parameters
├── tokenizer.json                   # Tokenizer vocabulary
├── tokenizer_config.json            # Tokenizer configuration
├── special_tokens_map.json          # Special tokens mapping
├── chat_template.jinja              # Chat template
├── model.safetensors.index.json     # Model index
├── model-00001-of-00004.safetensors # Model weights (part 1)
├── model-00002-of-00004.safetensors # Model weights (part 2)
├── model-00003-of-00004.safetensors # Model weights (part 3)
├── model-00004-of-00004.safetensors # Model weights (part 4)
├── reco.py                          # Model utilities
├── matrix-neo-reloaded-fight.gif    # Visual asset
└── README.md                        # This file
```
## Usage
### Quick Start with Transformers
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer directly from the repository root (no subfolder needed);
# device_map="auto" requires the accelerate package
tokenizer = AutoTokenizer.from_pretrained("Gaston895/aegisconduct")
model = AutoModelForCausalLM.from_pretrained(
    "Gaston895/aegisconduct", torch_dtype=torch.bfloat16, device_map="auto"
)

# Generate a response (do_sample=True is required for temperature to take effect)
inputs = tokenizer(
    "Analyze the economic impact of inflation on consumer spending:",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Thinking Mode Activation
The model automatically activates thinking mode for complex reasoning:
```python
# These prompts will trigger thinking mode
prompts = [
    "Think deeply: Analyze the economic implications of rising interest rates",
    "Explain the financial impact of supply chain disruptions",
    "Think through: What are the long-term effects of quantitative easing?",
]
```
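The first and third examples use explicit cue phrases ("Think deeply:", "Think through:"). A minimal helper to check whether a prompt carries one of these cues; the cue list is illustrative only, and the model may also engage thinking mode on complex prompts without a cue (as in the second example above):

```python
# Illustrative heuristic mirroring the documented cue phrases.
# The model itself decides when to emit a thinking trace; this only
# flags prompts that contain an explicit cue.
THINKING_CUES = ("think deeply:", "think through:")

def uses_thinking_cue(prompt: str) -> bool:
    lowered = prompt.lower()
    return any(cue in lowered for cue in THINKING_CUES)

print(uses_thinking_cue("Think deeply: Analyze rising interest rates"))  # True
print(uses_thinking_cue("List three major world currencies"))            # False
```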
### Recommended Settings
- **Temperature**: 0.7
- **Repetition Penalty**: 1.05
- **Top-p**: 0.95
- **Min-p**: 0.05
- **Top-k**: 40
- **Context Window**: 4k minimum, 8k+ recommended
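The sampling settings above map directly onto `model.generate` keyword arguments in transformers (note that `min_p` support requires a reasonably recent transformers release):

```python
# The recommended sampling settings above, as generate() keyword arguments.
# max_new_tokens is an illustrative cap, not part of the recommendations.
generation_kwargs = {
    "do_sample": True,           # required for temperature/top_p/top_k to apply
    "temperature": 0.7,
    "repetition_penalty": 1.05,
    "top_p": 0.95,
    "min_p": 0.05,
    "top_k": 40,
    "max_new_tokens": 1024,
}

# Usage: outputs = model.generate(**inputs, **generation_kwargs)
```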
## Capabilities
This model excels at:
- **Economic Analysis**: Market trends, policy impacts, forecasting
- **Financial Planning**: Investment strategies, risk assessment
- **Data Interpretation**: Economic indicators, statistical analysis
- **Policy Analysis**: Regulatory impacts, fiscal policy effects
- **Global Economics**: International trade, currency analysis
- **Research**: Academic-level economic reasoning and explanation
## Example Outputs
The model provides detailed, step-by-step reasoning for complex economic questions, often showing its "thinking" process before delivering final answers.
## Technical Notes
- All model files are in the root directory for direct loading
- Supports both instruct and thinking modes
- No system prompt required (thinking tags self-generate)
- Compatible with quantization (Q4_K_S recommended minimum; IQ3_M as a smaller fallback)
- Optimized for inference with various backends (transformers, llama.cpp, etc.)
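Since the thinking tags self-generate, downstream code may want to separate the reasoning trace from the final answer. A sketch assuming `<think>…</think>` delimiters; the actual tag names are an assumption, so verify them against `chat_template.jinja` before relying on this:

```python
import re

def split_thinking(text):
    """Return (thinking, answer); thinking is None when no trace is present.

    Assumes the trace is wrapped in <think>...</think> tags, which is an
    assumption about this model's output format, not confirmed by this card.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return None, text.strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return match.group(1).strip(), answer

thinking, answer = split_thinking(
    "<think>rates up, credit tightens</think>Higher rates curb borrowing."
)
print(answer)  # Higher rates curb borrowing.
```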
## License
Apache 2.0 (inherited from base model)
## Credits
- **Base Model**: [allura-forge/Llama-3.3-8B-Instruct](https://huggingface.co/allura-forge/Llama-3.3-8B-Instruct)
- **Dataset**: [TeichAI/claude-4.5-opus-high-reasoning-250x](https://huggingface.co/datasets/TeichAI/claude-4.5-opus-high-reasoning-250x)
- **Training Framework**: [Unsloth](https://github.com/unslothai/unsloth)