---
license: apache-2.0
tags:
- fortune-telling
- chinese
- lora
- unsloth
- deepseek
- llama
- instruction-tuned
- local-model
language:
- zh
---

# deepseek-fortune

This model is a Chinese fortune-telling question-answering LLM fine-tuned from [`unsloth/DeepSeek-R1-Distill-Llama-8B`](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B). It is intended for traditional Chinese divination dialogue tasks such as BaZi (Eight Characters) prediction and analysis of fortune, marriage, career, and health.

## 🔧 Training Details

- Base model: `unsloth/DeepSeek-R1-Distill-Llama-8B` (4-bit quantized)
- Fine-tuning method: LoRA (r=16, lora_alpha=16, dropout=0)
- Tooling: [Unsloth](https://github.com/unslothai/unsloth)
- Dataset: [Conard/fortune-telling](https://huggingface.co/datasets/Conard/fortune-telling)
- Training environment: Windows / CUDA 12.8 / PyTorch 2.7.1
- Objective: improve answer accuracy and contextual understanding for Chinese fortune-telling Q&A
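For intuition on how lightweight the r=16 adapter above is: LoRA learns a low-rank update B·A for each adapted weight matrix, so a `d_out × d_in` projection gains only `r · (d_in + d_out)` trainable parameters. A back-of-the-envelope sketch (the 4096 hidden size is an assumption based on the Llama-8B architecture; the actual per-layer shapes depend on which modules were adapted):

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one weight matrix:
    A has shape (r, d_in) and B has shape (d_out, r),
    so the adapter holds r * (d_in + d_out) values."""
    return r * (d_in + d_out)

# One 4096x4096 attention projection with r=16
# (hidden size assumed from the Llama-8B architecture):
adapter = lora_trainable_params(4096, 4096, r=16)
frozen = 4096 * 4096

print(adapter)           # 131072
print(adapter / frozen)  # 0.0078125, i.e. under 1% of the frozen matrix
```

This is why the released artifact can be a small LoRA adapter rather than a full 8B-parameter checkpoint.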

## 🧠 Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained("NewBreaker/deepseek-fortune", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("NewBreaker/deepseek-fortune")

# "I was born at 9 a.m. on the third day of the third lunar month in 1995;
#  what are my marriage prospects?"
prompt = "我出生在1995年农历三月初三上午九点,请问我的姻缘如何?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
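DeepSeek-R1 distill models typically emit their chain of thought between `<think>` and `</think>` tags before the final answer. If you only want the answer text, a small post-processing helper (hypothetical; not part of this repository) can strip the reasoning block:

```python
import re

def strip_reasoning(text: str) -> str:
    """Remove a <think>...</think> reasoning block from generated
    text, returning only the final answer."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Example: drop the reasoning, keep the answer.
answer = strip_reasoning("<think>推理过程……</think>你的姻缘运势不错。")
print(answer)  # -> 你的姻缘运势不错。
```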
