How to use with llama.cpp
Install with Homebrew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf deagentai/lobe3:Q4_0
# Run inference directly in the terminal:
llama-cli -hf deagentai/lobe3:Q4_0
Install with WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf deagentai/lobe3:Q4_0
# Run inference directly in the terminal:
llama-cli -hf deagentai/lobe3:Q4_0
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf deagentai/lobe3:Q4_0
# Run inference directly in the terminal:
./llama-cli -hf deagentai/lobe3:Q4_0
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf deagentai/lobe3:Q4_0
# Run inference directly in the terminal:
./build/bin/llama-cli -hf deagentai/lobe3:Q4_0
Use Docker
docker model run hf.co/deagentai/lobe3:Q4_0
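However it is launched, llama-server exposes an OpenAI-compatible HTTP API, by default on port 8080. A minimal sketch of querying it from Python, assuming the default local host and port (the prompt text is just an example):

```python
import json
from urllib import request


def build_chat_request(prompt: str, model: str = "deagentai/lobe3:Q4_0") -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask(prompt: str, base_url: str = "http://127.0.0.1:8080") -> str:
    """POST the payload to llama-server's /v1/chat/completions endpoint."""
    payload = build_chat_request(prompt)
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires a running llama-server):
# print(ask("Explain impermanent loss in one paragraph."))
```

The same endpoint also works with any OpenAI client library by pointing its base URL at the local server.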
Crypto-Expert LLM

Model Description

This model is a fine-tuned version of Qwen-7B, specifically optimized for cryptocurrency, Web3, and DeFi-related tasks. It is designed to provide specialized knowledge and decision-making ability while maintaining computational efficiency.

Supported Tasks

  • Cryptocurrency and DeFi concepts explanation
  • Smart contract analysis and auditing guidance
  • Trading strategy discussions
  • AMM (Automated Market Maker) mechanics
  • MEV (Maximal Extractable Value) analysis
  • Web3 development assistance (Rust, Python)
  • Decentralized infrastructure (IPFS, libp2p)
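As a concrete illustration of the AMM mechanics listed above, a minimal constant-product (x · y = k) swap calculation with a Uniswap-v2-style 0.3% fee; this is illustrative background, not output of the model:

```python
def constant_product_swap(reserve_in: float, reserve_out: float,
                          amount_in: float, fee: float = 0.003) -> float:
    """Output amount for a swap against an x*y=k pool with a proportional fee."""
    amount_in_after_fee = amount_in * (1.0 - fee)
    # Solve (reserve_in + dx) * (reserve_out - dy) = reserve_in * reserve_out for dy
    return reserve_out * amount_in_after_fee / (reserve_in + amount_in_after_fee)


# Example: swap 100 units of token A into a 1000/1000 pool
out = constant_product_swap(1000.0, 1000.0, 100.0)
```

Because the fee stays in the pool, the invariant k grows slightly with every swap.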

Training Data

The model is fine-tuned on specialized crypto and Web3 prompts. Training data is expected in the ./json/ directory in JSONL format.
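A sketch of what one JSONL training record under ./json/ might look like; the prompt/completion field names here are an assumption for illustration, not a documented schema:

```python
import json

# Hypothetical record layout for one fine-tuning example (field names assumed)
record = {
    "prompt": "What is impermanent loss in an AMM liquidity pool?",
    "completion": "Impermanent loss is the value difference between providing "
                  "tokens to a pool and simply holding them in a wallet.",
}

# JSONL: one JSON object per line, no embedded newlines
line = json.dumps(record)
parsed = json.loads(line)
```

A full dataset is just one such line per example, appended to a .jsonl file.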

Performance and Limitations

The model is optimized for:

  • Reduced resource consumption compared to larger models
  • Domain-specific accuracy in crypto/Web3 contexts
  • Cost-effective deployment

Note: Specific performance metrics will be added after training evaluation.

Intended Use

This model is designed for:

  • Developers working on Web3 projects
  • DeFi researchers and analysts
  • Cryptocurrency protocol designers
  • Smart contract developers
  • Blockchain infrastructure engineers

Ethical Considerations

Users should:

  • Verify all financial advice independently
  • Be aware of data privacy when using the model
  • Understand the model's limitations in real-time market analysis
  • Follow appropriate licensing requirements for training data
Safetensors · Model size: 8B params · Tensor type: F16