---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B
language:
- en
- zh
tags:
- agent
- intent-recognition
- sentiment-analysis
- conversion-prediction
- script-generation
- quality-assurance
library_name: transformers
---
# Selling-Assistant-V1
<p align="center">
  <img src="selling_assistant.png" width="75%"/>
</p>
## Overview
Selling-Assistant-V1 is an intelligent sales assistant model trained primarily on Chinese sales dialogues and service logs. It understands customer intent, analyzes sentiment, predicts purchase propensity, generates persuasive sales scripts, and audits conversations for compliance and quality. The model is packaged for local inference with the Transformers ecosystem and can be embedded into CRM systems, chat widgets, and customer service tools.
- Model repository: `digitalassistant-ai/Selling-Assistant-V1`
- Training focus: Chinese language, sales and support domain
- Use cases: pre-sales consultation, lead nurturing, e-commerce guidance, customer service QA, and content marketing
## Key Capabilities
1. Intent Recognition: Predicts and classifies sales intent to guide conversation strategy.
2. Sentiment Analysis: Detects user emotions and adjusts tone and response style accordingly.
3. Conversion Prediction: Estimates purchase inclination and highlights key influencing factors.
4. Sales Script Generation: Produces tailored scripts and product recommendations based on inferred needs.
5. Quality Assurance: Evaluates compliance and interaction quality, providing self-learning optimizations.
The overall workflow is illustrated below:
<p align="center">
  <img src="workflow.png" width="75%"/>
</p>
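As an illustration of the intent-recognition step, generation can be constrained to a fixed label set and the reply parsed with lightweight string matching. The label names and prompt wording below are hypothetical examples for demonstration, not part of the shipped model:

```python
# Sketch: constrain the model to a fixed label set, then parse its reply.
# The labels and prompt wording here are illustrative assumptions.
INTENT_LABELS = ["price_inquiry", "product_comparison", "after_sales", "purchase"]

def build_intent_prompt(utterance: str) -> str:
    """Ask the model to answer with exactly one label."""
    labels = ", ".join(INTENT_LABELS)
    return (
        f"Classify the customer's sales intent as one of: {labels}.\n"
        f"Customer: {utterance}\n"
        "Intent:"
    )

def parse_intent(generated: str) -> str:
    """Return the first known label found in the model output, else 'unknown'."""
    text = generated.lower()
    for label in INTENT_LABELS:
        if label in text:
            return label
    return "unknown"

# Example, parsing a hypothetical model completion (no model call):
print(parse_intent("Intent: price_inquiry, the user asks about discounts"))  # price_inquiry
```

In practice the prompt would be sent through the text-generation pipeline from the quickstart, and the parsed label would drive the conversation strategy.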
## Quickstart
Install dependencies:
```bash
pip install transformers accelerate torch --upgrade
```
Minimal inference (Hub repo id or local path):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
MODEL_PATH = "digitalassistant-ai/Selling-Assistant-V1"  # Hub repo id, or a local directory path
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto", torch_dtype="auto")
assistant = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)
# Prompt (Chinese): "You are an intelligent sales assistant. The user is looking for
# budget-friendly smart home devices suited to a small apartment. Recommend three
# products with short, persuasive sales pitches."
prompt = (
    "你是一名智能销售助手。用户正在寻找价格友好、适合小户型房子的智能家居设备。"
    "请推荐三个产品,并给出简短且有说服力的销售话术。"
)
out = assistant(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(out[0]["generated_text"])
```
Recommended settings:
- `max_new_tokens=128–512` depending on context length
- `temperature=0.6–0.8`, `top_p=0.85–0.95` for more diversity
- `repetition_penalty=1.05–1.15` to reduce redundancy
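The ranges above can be collected into reusable presets and passed straight to the pipeline as keyword arguments. The preset names below are an illustrative convention, not part of the model:

```python
# Illustrative generation presets within the recommended ranges above.
GEN_PRESETS = {
    "concise":  dict(max_new_tokens=128, temperature=0.6, top_p=0.85,
                     repetition_penalty=1.10, do_sample=True),
    "balanced": dict(max_new_tokens=256, temperature=0.7, top_p=0.90,
                     repetition_penalty=1.10, do_sample=True),
    "creative": dict(max_new_tokens=512, temperature=0.8, top_p=0.95,
                     repetition_penalty=1.05, do_sample=True),
}

# Usage with the pipeline from the quickstart:
# out = assistant(prompt, **GEN_PRESETS["balanced"])
print(GEN_PRESETS["balanced"]["temperature"])  # 0.7
```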
## Architecture & Files
This repository includes a ready-to-run model directory containing tokenizer and weights for direct loading:
- `config.json`, `model.safetensors`, `tokenizer.json`, `vocab.json`, `merges.txt`, `special_tokens_map.json`, `added_tokens.json`, `tokenizer_config.json`, `chat_template.jinja`
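When loading from a local copy rather than the Hub, it can help to verify that the directory actually contains these files before calling `from_pretrained`. A minimal sketch (the directory path is a placeholder):

```python
from pathlib import Path

# The file list mirrors the repository contents described above.
REQUIRED_FILES = [
    "config.json", "model.safetensors", "tokenizer.json", "vocab.json",
    "merges.txt", "special_tokens_map.json", "added_tokens.json",
    "tokenizer_config.json", "chat_template.jinja",
]

def missing_files(model_dir: str) -> list[str]:
    """Return the required files that are absent from model_dir."""
    root = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

# Example: an empty or nonexistent directory reports everything as missing.
print(len(missing_files("/tmp/empty-model-dir")))  # 9 if the directory lacks all files
```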
## Safety & Responsible Use
Use the model in accordance with the Apache 2.0 License. Ensure generated content respects applicable laws, privacy, and platform policies. Apply domain-specific guardrails for compliance and brand tone. Always verify critical recommendations (e.g., pricing, legal terms) before use.
## Limitations
- Trained primarily on Chinese data; performance in non-Chinese contexts may vary.
- Conversion predictions are probabilistic and should be combined with business rules.
- Complex multi-turn dialogues may require retrieval augmentation and session memory.
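One lightweight way to supply session memory, short of full retrieval augmentation, is a rolling window of recent turns re-serialized into each prompt. The buffer below is a sketch under that assumption, not a component of the model:

```python
from collections import deque

class SessionMemory:
    """Keep the last `max_turns` (role, text) pairs for prompt construction."""

    def __init__(self, max_turns: int = 6):
        # deque with maxlen silently evicts the oldest turn when full.
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def render(self) -> str:
        """Serialize the retained history into a plain-text transcript."""
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

memory = SessionMemory(max_turns=2)
memory.add("user", "Do you have budget smart locks?")
memory.add("assistant", "Yes, three models under $50.")
memory.add("user", "Which fits a rental apartment?")  # oldest turn is evicted
print(memory.render())
```

The rendered transcript would be prepended to the next prompt; a production system would likely also summarize evicted turns rather than drop them.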
## Roadmap
- Multi-lingual fine-tuning and domain adapters
- Retrieval-augmented generation for product catalogs and FAQs
- Advanced QA scoring and compliance templates
## License
Apache 2.0. You retain rights to generated content, subject to compliance and responsible-use guidelines.
## Acknowledgements
Built with Hugging Face Transformers and community datasets. Inspired by best practices in intent detection, sentiment modeling, and sales conversation design.