πŸͺ™ CoinReason-7B (Proof of Concept)

⚠️ Prototype Warning: This adapter was trained on a synthetic "Gold Standard" prototype dataset to demonstrate an end-to-end MLOps pipeline. It is intended to showcase the fine-tuning architecture (Unsloth, QLoRA, Hugging Face integration), not to provide financial advice, and it may overfit to specific training examples.

Model Overview

CoinReason-7B is a specialized Low-Rank Adapter (LoRA) for the Mistral-7B large language model. It is designed to analyze cryptocurrency social media text and output structured financial reasoning.

Unlike standard sentiment models that output simple "Positive/Negative" labels, CoinReason attempts to generate:

  1. Sentiment: (Bullish/Bearish)
  2. Explanation: The logical reasoning behind the sentiment.
  3. Market Implication: A short-term predictive outlook for price action.
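Because the model emits these three labeled fields as plain text, downstream code can recover them with simple pattern matching. A minimal sketch (the `parse_analysis` helper and its regexes are illustrative conveniences, not part of the released adapter):

```python
import re

def parse_analysis(text: str) -> dict:
    """Extract the three labeled fields from a model completion.

    Assumes the model followed the prompted output format; any field
    that is missing comes back as None.
    """
    fields = {}
    for key in ("Sentiment", "Explanation", "Market Implication"):
        match = re.search(rf"{key}:\s*(.+)", text)
        fields[key] = match.group(1).strip() if match else None
    return fields

completion = (
    "Sentiment: Bearish\n"
    "Explanation: Exchange inflows suggest selling intent.\n"
    "Market Implication: Price may retest support."
)
print(parse_analysis(completion))
```

Parsing into a dict like this makes the output easy to log or feed into a downstream pipeline stage.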

Technical Specifications

  • Base Model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
  • Fine-Tuning Technique: QLoRA (Quantized Low-Rank Adaptation)
  • Quantization: 4-bit (NF4) for efficient inference on commodity hardware (e.g., a single 16 GB T4 GPU)
  • Framework: Unsloth (advertised ~2× faster training) + Hugging Face Transformers
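A back-of-envelope calculation shows why 4-bit loading matters here: weights alone at 16 bits need roughly 14 GB for a 7B model, while NF4 needs roughly 3.5 GB (rough figures that ignore quantization constants, activations, the KV cache, and the adapter itself):

```python
params = 7_000_000_000      # approximate Mistral-7B parameter count

fp16_gb = params * 2 / 1e9  # 2 bytes per weight at 16-bit precision
nf4_gb = params * 0.5 / 1e9  # 4 bits = 0.5 bytes per weight

print(f"fp16: ~{fp16_gb:.1f} GB, NF4: ~{nf4_gb:.1f} GB")
# fp16: ~14.0 GB, NF4: ~3.5 GB
```

The NF4 footprint leaves comfortable headroom on a 16 GB T4 for activations and generation.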

How to Use

You can load this model using the unsloth library for fast inference.

from unsloth import FastLanguageModel

# 1. Load the model and adapters
model, tokenizer = FastLanguageModel.from_pretrained(
    "sarfras/coinreason-7b-lora",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)

# 2. Define the prompt (the tokenizer adds the <s> BOS token itself, so it is omitted here)
tweet = "Bitcoin volume is dying and we are stuck at resistance. I think we go down."

prompt = f"""[INST] Analyze the following Bitcoin market text for sentiment and short-horizon implication.

Text: {tweet}

Provide output in this exact format:
Sentiment: [Bullish/Bearish]
Explanation: [reasoning]
Market Implication: [brief BTC price direction outlook][/INST]"""

# 3. Generate
inputs = tokenizer([prompt], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
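When analyzing many texts, the instruction template can be factored into a small helper so every input is formatted identically. A convenience sketch (`build_prompt` is hypothetical and not shipped with the adapter; the BOS token is left to the tokenizer):

```python
# Mistral-instruct style template; the tokenizer adds the <s> BOS token.
TEMPLATE = """[INST] Analyze the following Bitcoin market text for sentiment and short-horizon implication.

Text: {text}

Provide output in this exact format:
Sentiment: [Bullish/Bearish]
Explanation: [reasoning]
Market Implication: [brief BTC price direction outlook][/INST]"""

def build_prompt(text: str) -> str:
    """Fill the instruction template with one input text."""
    return TEMPLATE.format(text=text)

print(build_prompt("Funding rates just flipped negative."))
```

The returned string can be passed to the tokenizer exactly as in the snippet above, or batched for multiple tweets.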

Training Details

  • Dataset: Synthetic Financial Reasoning Dataset (Prototype v1)
  • Objective: Instruction Fine-Tuning (SFT)
  • LoRA Rank (r): 16
  • LoRA Alpha: 16
  • Optimizer: AdamW 8-bit
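To gauge the adapter's size: a rank-r LoRA on a weight matrix of shape (d_out, d_in) adds r × (d_in + d_out) trainable parameters. A back-of-envelope count, assuming (hypothetically) that the rank-16 adapter targets only the four attention projections across Mistral-7B's 32 decoder layers; if the MLP projections are also targeted, the count grows accordingly:

```python
r = 16        # LoRA rank from the training config
layers = 32   # Mistral-7B decoder layers
shapes = {    # (d_in, d_out) of the targeted attention projections
    "q_proj": (4096, 4096),
    "k_proj": (4096, 1024),  # grouped-query attention: 8 KV heads
    "v_proj": (4096, 1024),
    "o_proj": (4096, 4096),
}

per_layer = sum(r * (d_in + d_out) for d_in, d_out in shapes.values())
total = per_layer * layers
print(f"~{total / 1e6:.1f}M trainable parameters")
# ~13.6M trainable parameters
```

Even under these assumptions the adapter trains well under 1% of the base model's 7B weights, which is what makes QLoRA feasible on a single GPU.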

Example Output

Input: "Whales are dumping BTC heavily on Binance, price dropping fast below support."

Model Prediction:

Sentiment: Bearish

Explanation: Large inflows of BTC to exchanges (whale movement) typically signal an intent to sell, increasing sell-side pressure.

Market Implication: Price likely to test the $60k support; a breakdown could trigger a flush to lower levels.


Disclaimer

This model is a proof of concept and should NOT be used for actual financial decision-making. Always conduct your own research and consult with qualified financial advisors before making investment decisions.


Created by Sarfras as part of an end-to-end LLM engineering portfolio.
